Ep. 223: The Road Ahead: AI’s Role in Shaping Wealth Management with Vinay Nair, TIFIN

Come on in and sit back and relax. You’re listening to Episode 223 of the WealthTech Today podcast. I’m your host, Craig Iskowitz, founder of Ezra Group Consulting. This podcast features interviews, news and analysis on the trends and best practices, all about wealth management technology.

We love talking about wealth management technology here, and my guest on this episode is Vinay Nair, the founder and CEO of TIFIN. A quick background on Vinay: he’s got a PhD in financial economics from the Stern School of Business at NYU. Vinay started his career in academia as a finance faculty member at the Wharton School, and he has also taught at other universities such as MIT and Columbia. In between, he launched and managed a quantitative hedge fund, and from that experience he founded a tech firm, also considered a TAMP, called 55ip, which focused on outsourced tax transition services and direct indexing. 55ip was successfully sold to J.P. Morgan and is still operating there today.

TIFIN operates as a collection of fully owned subsidiaries in wealth management and investing. I never really understood how it worked until talking with Vinay before and during this episode, so I now have a much better understanding of how TIFIN works. As you’ll hear in the episode, Vinay doesn’t like to call it an incubator or an accelerator, but to me it seems more like that, or a collection of subsidiaries they are trying to build up and sell off, which is great for any industry, but specifically wealth management.

We don’t have enough of these innovative ideas, and you have to be willing to try new things, maybe fail, and pivot. Certainly TIFIN, if you follow the news on some of their different subsidiaries, has pivoted quite a bit. That’s not a bad thing. You’ve got to keep trying, see what works, and then move away from the things that don’t work and toward the things that do. I’m glad I got Vinay on the program to learn more about TIFIN and how they’re operating. And you’re going to hear all about it right after this.

But before we get started, let’s talk about tech stacks. At Ezra Group, we’ve seen the tech stacks of hundreds of RIAs, and let me tell you, most of them are loaded down with tech debt. So you shouldn’t feel too bad about yours. But let’s face it, tech debt is like a giant anchor holding back your business growth. If you want to free your firm for exponential growth, you should run, not walk, to our website EzraGroup.com and fill out the Contact Us form. Our experienced team can evaluate your current tech ecosystem, deliver targeted recommendations, optimize your existing systems and operations, or run an RFP and help you implement new software to take your firm to the next level. You can take advantage of our free consultation offer by going to EzraGroup.com.

Topics Mentioned

  • The AI Platform for Wealth
  • The Mechanics of AI Behind SAGE and Helix
  • Actionable Insights and Compliance in AI-Driven Wealth Management

Episode Transcript

Craig: I’m excited to introduce my next guest on the program. It is Vinay Nair, founder and CEO of TIFIN. Hello, Vinay! Thanks for being here.

Vinay: Hey Craig! Thank you for having me here. It’s great to be here.

Craig: I’m glad you could make it. Where are you calling from—your home in Boulder?

Vinay: Yes, I’m in Boulder. It’s a perfect time to be here with ski season getting started.

Craig: I’m jealous. I’m in New Jersey. As everyone knows, I’m in New Jersey, but I’m looking forward to getting out to do some snowboarding in Colorado this season. We’ll see if we can meet on the slopes somewhere.

Craig: But let’s jump into this. I’m glad you’re here. You and I have been talking a bit offline, [having] a lot of AI discussions, and I thought it’d be great to have a podcast about some of the great AI products that TIFIN has been coming out with. We’re going to do a deep dive into two of them. But before we do that, could you give everyone listening a 30-second elevator pitch for TIFIN?

The AI Platform for Wealth

Vinay: Sure. TIFIN is essentially an AI platform for wealth. The T and I in TIFIN stand for technology and innovation for finance. What we do is apply technology, typically data science and AI, and start new companies (that’s the innovation), all focused on changing wealth outcomes. We look at where people have their money, which is typically a self-directed account, an advised account, or a workplace account. We look at what people do with their money and try to change [their] experiences in these accounts. All our companies effectively try to bring innovation to the wealth ecosystem.

Vinay: We’ve built 12 companies to date, 10 [of which we] still control and operate under TIFIN. All have been done with the backing of several strategic investors, some of whom include J.P. Morgan, Morningstar, Franklin Templeton, etc. Think of us as a center of excellence for applying data science and AI in the wealth and asset management industry.

Craig: That was a great overview. You have so much going on. It’s hard to believe you can keep it all in your head at one time. Twelve companies with two exits—that’s a great track record that any other firm would be looking for. Let’s dive into two different products that are AI-based products. One is called SAGE and one is called Helix. SAGE is an AI-powered investment system that you’re calling a CIO for every client. Can you talk about the underlying mechanics of the AI behind that? And how does it work?

Vinay: Absolutely. Craig, as you know, early this year we started three companies to apply generative AI to the world of wealth. SAGE and Helix fall into that category. TIFIN Work is the third, which coincidentally recently announced a partnership with Franklin Templeton to take it to the workplace setting.

Vinay: I’ll focus on the first two, as you pointed out. Essentially, SAGE is a personalized CIO for every client, delivered to the advisor. What happens in many firms? Think of large enterprises where there’s a CIO office and advisors get on a call to get the CIO’s views. Often, the advisor then asks, “What does this mean for Craig, my client?”

Vinay: Let’s say the house view is that oil goes up in the next six months. The question for the advisor who’s going to meet their client is: How does it affect my client’s investments and portfolio? What is most affected? What should I change, if anything? What happens if I change it and the prediction is not true? There might be many other questions that the client may have that the advisor needs to answer. With this, SAGE is now an AI assistant for the advisor, which can [provide] 24/7 personalized real-time answers to these questions. It’s the bridge between a very centralized—call it an investment view, a macro view, or a market view—from the house to a very client-specific implication delivered to the advisor.

Craig: I think it’s an interesting concept and it’s certainly the next step if you think about how most firms have a centralized CIO or at least someone who’s making the investment choices. If you want to ask them why they did something, you’ve got to send an email and have a meeting with them. As you mentioned, the AI, the chatbot for SAGE, is always on. But how does the AI work to deliver this information to the advisors? What are you training it on?

The Mechanics of AI Behind SAGE and Helix

Vinay: Let’s take both AIs together and then we can talk about it collectively. Helix is an AI assistant for alternative and private markets. SAGE is focused on taking a CIO view across public markets. We are building SAGE with J.P. Morgan Private Bank as our first client. It’s getting built with JPM’s data applied for private bank clients. Helix—we are building it in collaboration with Hamilton Lane.

Vinay: As your listeners might know, Hamilton Lane is the largest publicly listed allocator to alternatives. More importantly, for over 30 years they have built and digitized a pretty phenomenal data set around private markets. We have exclusive access to that data set to train and build an AI assistant around private markets and alternatives. In both of these cases, the AI assistant is built from multiple components. One is data, as we spoke about with the JPM and Hamilton Lane data, but there’s also third-party data, open data or vendor data, if you will.

Vinay: In addition to that, there is also what we like to call an ontology: a taxonomy, or a set of interrelated taxonomies, that allows the application of LLMs (think OpenAI, Llama 2, or any foundation LLM) within the context of a particular vertical. We can talk more about how we’ve built it and how we have an edge there. One of our companies, Magnifi, got a patent on the ontology when it first launched an AI assistant.

Vinay: How this is all built is a combination of training plus data lake creation plus an ontology guardrail. All of that is then used to understand intent. The user asks a question; once you understand the intent, the system goes and services that intent with various modules. Some of these modules may or may not be AI. For example, you might ask your assistant: what is 2 + 2? You don’t need an AI module to answer that, but you do need to understand that it’s arithmetic and hand it off to an arithmetic module. It’s a combination of many modules, some of which are not AI, all talking to a translation layer, which is the AI interacting with the real-life questions people need answered.
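To make that routing idea concrete, here is a minimal Python sketch of an intent router that sends arithmetic to a deterministic, non-AI module and everything else to AI-backed modules. The classifier, module names, and logic are illustrative assumptions, not TIFIN’s actual implementation.

```python
# Illustrative sketch only: route a user question to the right module.
# The classifier, module names, and responses below are hypothetical.

def classify_intent(question: str) -> str:
    """Toy intent detection; a production system would use an LLM or trained classifier."""
    q = question.lower()
    if any(op in q for op in "+-*/") and any(ch.isdigit() for ch in q):
        return "arithmetic"
    if any(word in q for word in ("portfolio", "position", "exposure")):
        return "portfolio_analysis"
    return "general_llm"

def arithmetic_module(question: str) -> str:
    """Deterministic, non-AI module: evaluate a simple arithmetic expression."""
    expr = "".join(ch for ch in question if ch in "0123456789+-*/(). ")
    return str(eval(expr))  # fine for a toy demo; never eval untrusted input in production

def route(question: str) -> str:
    intent = classify_intent(question)
    if intent == "arithmetic":
        return arithmetic_module(question)
    if intent == "portfolio_analysis":
        return "[handled by a portfolio analytics module: data + quantitative models]"
    return "[handled by the LLM translation layer]"

print(route("What is 2 + 2?"))                          # -> 4
print(route("Which position is most exposed to oil?"))  # -> portfolio analytics module
```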

Craig: You mentioned patents on ontology. Can you explain that for people who are listening? The generic definition of ontology is the nature of being, which is a metaphysical term, but here we’re using the computer science sense of ontology, meaning formal namings, categories, and representations. How does that work in the AI context?

Vinay: Yes. When we built Magnifi, which is an AI assistant for consumers that we launched before OpenAI released GPT last summer, what allowed us to do it was that we had created a significant word cloud, for lack of a better word. When the words all have relationships with each other, the word cloud becomes an ontology; that’s one way to think about it. That word cloud was built over many searches and search results from people using Magnifi. It helped us understand how financial terms are used, how they interact with each other, and what interrelationships these words have. That improves a lot of the LLM responses, because many of those models are trained on a general dataset, not a vertical, domain-specific dataset.
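As a toy illustration of that “word cloud with relationships” idea, the sketch below expands a query with related financial terms before it reaches a general-purpose model or search engine. The ontology entries are invented for the example; a real one would be far larger and built from observed user searches.

```python
# Toy version of a domain ontology: each term maps to related financial terms.
# Entries are made up for illustration only.

FINANCE_ONTOLOGY: dict[str, set[str]] = {
    "income": {"dividend yield", "bond funds", "coupon"},
    "growth": {"capital appreciation", "equity funds"},
    "oil": {"energy sector", "crude futures"},
}

def expand_query(query: str) -> set[str]:
    """Attach domain-specific related terms so a general-purpose model or search
    engine interprets the question in a financial context."""
    terms = set(query.lower().split())
    expanded = set(terms)
    for term in terms:
        expanded |= FINANCE_ONTOLOGY.get(term, set())
    return expanded

print(expand_query("income funds"))
# -> {'income', 'funds', 'dividend yield', 'bond funds', 'coupon'}
```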

Vinay: One way to think about this, for folks listening, is to go back 20 years to when search first came out. A general search was not good enough for a firm like Amazon. You would go to Amazon and type in ‘blue dress shirt’, and often you would get a blue dress, not a dress shirt. So Amazon built a very vertical, e-commerce-based search, which outperformed Google’s e-commerce search at that time. Similarly, there are many examples of verticalization in SaaS. Think of this as verticalization in AI, which helps you build higher quality, and sometimes compliance, in the kinds of areas that you and I are in.

Craig: That is a term most people may not have heard of either: verticalization of AI. You’re taking this and making it an AI that’s trained on public and private markets so it understands what it’s doing. When it’s generating content, it’s generating with the base knowledge of the Llama 2 LLM, which you said is very generic, and then you’re customizing it on top of that. That leads me to the next question: how does this platform that’s verticalized in AI, customized with this set of interrelated taxonomies, fit into the advisor’s toolkit and the interfaces that they work with?

Vinay: Both of these AI systems that we are taking out to the market are essentially APIs. Our view is that advisors, wherever they are and whichever workstation platform they’re using, these AI assistants should just sit there. It is no different from an old-school bot. What chatbots did was robotic automation; they did something very specific, something very repetitive and they helped answer those things. What AI systems are doing is more like mimicking intelligence; they’re mimicking how we make decisions, not exactly the outcome of the decision. But as far as delivery goes, it should sit on a workplace platform or workstation as a simple API, and then advisors should be able to ask questions and get answers around it, which is how we’re delivering this as well.
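To show what “delivered as a simple API” might look like from a workstation’s point of view, here is a hedged sketch. The endpoint URL, payload fields, and authentication scheme are placeholders invented for illustration; they are not a documented SAGE or Helix interface.

```python
# Hypothetical sketch of the "assistant as an API" delivery model.
# The endpoint, payload fields, and auth scheme are placeholders, not a real interface.
import requests

def ask_assistant(question: str, client_id: str, api_key: str) -> str:
    """Send an advisor's question to a hosted AI assistant and return the answer text."""
    response = requests.post(
        "https://assistant.example.com/v1/ask",   # placeholder URL
        headers={"Authorization": f"Bearer {api_key}"},
        json={"question": question, "client_id": client_id},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]

# A workstation widget would call this and render the answer inline, e.g.:
# ask_assistant("How does a rise in oil affect this client's portfolio?", "client-123", api_key)
```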

Craig: If I can paraphrase what you’re saying, SAGE will be integrated into whatever platforms the advisors are using and it will appear as a native tool inside of what they’re already doing.

Vinay: That’s right. For enterprise clients, that’s the case. For RIAs right now, they just go to a website, log in and use SAGE. But if you’re a J.P. Morgan advisor, it’ll be within the platform that they currently use or any such enterprise platform.

Craig: We’re talking about this built-in chatbot that an advisor can log in to and say: “We put my client into this. How do I explain it to them?”—if they’re asking questions like that. Or if the market has changed, how does this change in the market impact our portfolios? What kind of [inaudible] are they going to be getting back? Is it just text? Is it also graphs and charts? Is it going to be video or audio? Will it be multimedia? What are we going to be getting back here as an advisor?

Vinay: It’s not multimodal right now in the sense of audio and video, but it does cover text, arithmetic, quantitative computations, charts, graphs, etc.—anything that is still screen-based across different forms.

Craig: Anything screen-based. Okay. Interesting. On your website, it says that SAGE will generate actionable insights. Can you give me some examples of success stories or data that show the effectiveness and concrete outcomes from SAGE?

Vinay: Yes. One of our foundational philosophies in everything we have built is combining personalized intelligence with immediate actionability. As you know, Craig, 55ip was a firm that I founded, and actionable intelligence was its foundation. What does actionable mean here? An advisor can go through several questions, as we discussed. Let’s say the advisor concludes that this particular position is the one that creates the most exposure for my client to oil dropping in the next six months. The advisor could say, “Give me a substitute position” and then say, “Swap it out.” That “swap it out” is the actionability.

Vinay: The AI assistant, in the case of the enterprise clients we are working with, will be linked to the backend execution side and it will execute that trade. That’s actionability. Another example of actionability could simply be that the advisor might want a report generated for the client after all the back and forth that the advisor does with the co-pilot. And finally, the action might be: “Write me one page to explain all this using simple words such as income and growth for my client.” That’s actionability. But it’s something that is either deliverable to the client or the prospect or something that changes the portfolio then and there.
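As a rough sketch of how a confirmed “swap it out” could become structured orders for a backend execution system, consider the following. The data shapes, tickers, and function are assumptions for illustration only; they are not how SAGE is actually integrated at J.P. Morgan.

```python
# Illustrative only: turn an approved swap suggestion into orders for an
# execution backend. Shapes and values are invented for the example.
from dataclasses import dataclass

@dataclass
class SwapSuggestion:
    client_id: str
    sell_ticker: str
    buy_ticker: str
    dollar_amount: float
    rationale: str  # kept for the client-facing report and audit trail

def to_orders(suggestion: SwapSuggestion) -> list[dict]:
    """Translate one approved suggestion into a matched sell/buy pair of orders."""
    return [
        {"client_id": suggestion.client_id, "side": "SELL",
         "ticker": suggestion.sell_ticker, "amount": suggestion.dollar_amount},
        {"client_id": suggestion.client_id, "side": "BUY",
         "ticker": suggestion.buy_ticker, "amount": suggestion.dollar_amount},
    ]

swap = SwapSuggestion("client-123", "XLE", "VTI", 25_000.0,
                      "Reduce oil exposure per house view; replace with broad equity.")
print(to_orders(swap))
```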

Craig: That will be revolutionary. Is that working now? Is SAGE connected to a backend system? The advisor can type in exactly what you said, “Swap it out,” and it’s going back and making trades? Is your first client, J.P. Morgan Private Bank, allowing that?

Vinay: Yes. It’s getting built at JPM with that intent. As you can imagine, all the integrations that are needed take time, but that’s what’s happening there right now. We know we can do it because Magnifi, which was our starting experiment with AI assistants, can already do it. You can go to Magnifi as an individual and execute investments and trades there.

Craig: That’s a little different. That’s an individual executing trades on his own. If you’re talking about an advisor taking advice from the AI, what does compliance say to that? How do you deal with compliance issues? Who’s making the trade here? Is it the AI making the trade or is it the advisor making the trade?

Actionable Insights and Compliance in AI-Driven Wealth Management

Vinay: It’s a great question. This is why the AI assistant can only offer suggestions from a centralized system, which is an enterprise-approved list, or the CIO’s views. In other words, the AI is not coming up with the decision; it’s a decision support system for the advisor. The AI is not coming up with the investment to make; it’s translating the CIO’s views to the advisor with a client-specific implication.

Vinay: As the industry has scaled, it has had to centralize. Think model portfolios. That’s a great example of something that is more and more centralized. Advisors use model portfolios and make them client-specific. But how do you build something personal for each client with a centralized process? The AI assistants fill that gap; they are that bridge. It’s not that the AI assistant is making the decision; it’s communicating and customizing the decision with guardrails.

Craig: Would it also block it if, for example, there was a restriction on that? Say there’s a “Do not sell” on this. It would link into that as well and say, “You asked me to swap this out; you can’t do it with this client because they don’t want you to sell that”?

Vinay: Correct. For each client, the restricted lists are inputs into what suggestions to make and what’s allowed.
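A minimal sketch of that guardrail logic might look like the following, where suggestions can only come from the firm’s approved list and are then filtered by each client’s restrictions before anything reaches the advisor. The lists and tickers are invented for the example.

```python
# Minimal guardrail sketch: suggestions must be on the enterprise-approved list
# and must not appear on the client's restricted list. Values are illustrative.

APPROVED_LIST = {"VTI", "AGG", "VEA", "XLE"}  # hypothetical enterprise-approved products

def allowed_suggestions(candidates: list[str], client_restricted: set[str]) -> list[str]:
    """Keep only candidates that are firm-approved and not restricted for this client."""
    return [t for t in candidates if t in APPROVED_LIST and t not in client_restricted]

# Client has a "do not sell / do not buy" restriction on XLE:
print(allowed_suggestions(["XLE", "VTI", "SPY"], client_restricted={"XLE"}))
# -> ['VTI']   (XLE blocked by the client restriction, SPY not on the approved list)
```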

Craig: It seems like it would be very useful for an advisor as a portfolio manager who wants to build his own portfolios but [also] take advice from what the CIO is saying. “If I built this portfolio… ” Would you talk to the AI that way? “Here’s the portfolio I like. Does this align with what our company is talking about for this kind of person?”

Vinay: It doesn’t do that today. It’s a great idea. But you’re right. I think it works well for advisors who want to act as PMs or active tweakers of model portfolios or investments. I think it also works well for advisors who sometimes get asked questions by investment-savvy clients and don’t know how to answer them quickly. This helps them get smarter and look smarter in front of clients who might be asking simple questions: “With things going on in the Middle East, what is your view on oil?” The advisor can ask, “What is our house view on oil?” Or they can ask, “What is the market’s view on oil?” Or specifically, “What is [inaudible] view on oil?”, and so on. It can help them converse more easily about some esoteric topics and investments.

Craig: One thing I think will be interesting is if it linked into… Many years ago, when I was working at a market data vendor on a consulting project, they built a system that would generate secondary investment ideas. Going through 10-Ks and other documents, it would say: “Here’s a publicly traded company. Here are all their vendors. Here are all their suppliers. Here are all their clients. Here are all their partners.” And it would have [inaudible] that were publicly traded. Then you could say, “If this stock is up sharply today, their suppliers might be up, their vendors might be up, and their clients might go up as well.” It would be interesting if I could ask the AI: “I’ve had a big move on Microsoft today; who are Microsoft’s biggest vendors?”

Vinay: Yes. I am familiar with that work from my academic days. And I think it’s a great piece of work—the economic linkages between suppliers and vendors. But that’s not the intent for SAGE because we are trying to walk this line between not doing investment research outside of what the CIO’s office already does. In other words, if JPM’s investment team has several views, I think SAGE is not designed to come up with views outside of that. If they feed us, “Here are the economic linkages,” sure, we can surface it, but not the other way around.

Craig: We’re always talking about the future, Vinay, I’ll tell you. There’s always something around the corner that we can build. And I see the clock on the wall; I know you have a hard stop. Where can people find out more information about TIFIN?

Vinay: The easiest is to head to TIFIN.com. You’ll see all our holdings and all our subsidiaries. Then, depending on which one is most interesting, they can click in there. SAGE and Helix are both there. They’re both new companies, so the information there is light, but that’s what it is.

Craig: Awesome. Vinay, thanks so much for being here! I know you’re super busy. I appreciate your time!

Vinay: Thank you, Craig! I appreciate it.
