“If you were to ask, what is AI going to do? It’s actually catering to your needs, specifically. It’s like creating Jarvis, it’s the ultimate digital assistant for an advisor so they can be much more efficient when they do their job. But going back to the conversation, we know who you are, we know what you want, and we know how to get you there.”
–Raj Madan, Head of Common Services, Architecture & Innovation, BNY Mellon | Pershing
Raj is a technology executive with many years of experience in the strategy, development, management and architecture of complex multi-tiered systems. He is currently Head of Common Services, Architecture & Innovation, a Technology Executive Committee member, and Digital Platform Officer at Pershing, responsible for all aspects of running, growing and transforming the business.
As Director of Technology Solution Consulting and Client Engagement, Michelle is responsible for managing a team of specialists that consult broker-dealer and advisory firms on how Pershing solutions can help them grow, scale and compete. The consulting team works with top-tier clients, helping them design solutions and execute strategic initiatives and integration efforts leveraging Pershing’s platform. The client engagement team acts as the experts on Pershing technology for advisors and investors, and engages strategic clients and prospective clients with a focus on automation, integration, collaboration and efficiency.
Now hit the Play button!
Companies & People Mentioned
- BNY Mellon | Albridge [11:00]
- Rasa Enterprise [21:50]
Topics Covered in this Episode
- Advanced Technology Lab Background
- Enhanced Search with Machine Learning
- Natural Language Generation
- Service Chat Bots
- AI for Operations
- #ItzOnWealthTech Ep. 48: The Triumph of the Fiduciary Model with Ben Harrison
- #ItzOnWealthTech Ep. 63: How to Avoid “The Race to Average” with AdviceTech.LIVE
- 8 Updates on the AI Digital Marathon in Wealth Management
Complete Episode Transcript:
Craig: What a fantastic day it is today in the wonderful world of wealthtech, and you are listening to episode 66 of the Wealth Management Today podcast. I’m your host, Craig Iskowitz, and I run a consulting firm called Ezra Group. We’re experts in everything related to wealthtech; we deliver growth-oriented solutions to banks, broker-dealers, asset managers, and RIA consolidators, as well as their wealthtech providers and vendors, through our premium advice and targeted market research. On this podcast, I speak with some of the smartest people in the industry who are on the leading edge of technology and innovation.
I’m happy to introduce my guests on this episode of the Wealth Management Today podcast. We have two guests today, both from Pershing. First, Raj Madan, Managing Director of Technology. Hello Raj.
Raj: Hello Craig, how are you?
Craig: Fantastic! And we have Michelle Feinstein, Director of Technology, Client Engagement.
Michelle: Hey Craig, thanks for having me.
Craig: All right, this is gonna be good! This is a technology podcast, we’re always looking to do deep dives into technology and Pershing has a lot of great tech, and we managed to convince Raj and Michelle to come on the podcast and talk to us a bit. What we’re going to focus on in this episode is the Pershing Advanced Technology Lab, something a lot of people may not know about, but I want to let them introduce themselves first. Can you guys each give us a 30-second intro on yourselves, and then we’ll talk about what the Technology Lab is. So Michelle, why don’t you go first?
Michelle: I’m Michelle Feinstein, I’m the Director of Technology, Client Engagement here at Pershing, and I run a team that really focuses on working with advisors and our firms to help them take advantage of all the Pershing technology that’s out there to the best of their capabilities so they can gain efficiency and have better interactions with clients. So it’s really important that we pay attention to how advanced technologies can play a part in that, and that’s why today we’re going to be talking to you about some of the things we tell our clients are coming soon, to help them with money movement transactions, reporting and analytics, and different ways they can have conversations with their clients in a digital way.
Craig: Great. Raj, give us a 30-second overview of yourself and how you came to Pershing.
Raj: My name is Raj Madan, I’m the Managing Director of BNY Mellon | Pershing Technology. I manage a division that has several teams in it called Common Services Architecture and Innovation. One of the hats I wear is I actually manage what’s called the Advanced Technology Labs. I’ve been at Pershing for about 7 years and BNY Mellon for quite some time, 20 years.
Advanced Technology Lab Background
Craig: Can you guys tell us about the Advanced Technology Lab, what is it and how it was formed?
Raj: So Craig, there are multiple forms of innovation within Pershing technology, and within Pershing in general. A lot of innovation actually occurs within our typical product roadmap. So when you think about our product roadmap, it has a time horizon of, let’s say, 1-3 years, and that roadmap is affected by client feedback, by competition, and by the industry. And that has innovation in itself. Advanced Technology Labs was actually introduced around 7 years ago as a place where we go outside the box of our typical product roadmap and try to think in an untethered way about our proofs of concept.
The goal of our Advanced Technology Labs is to look at technology trends, see which of those trends are the most relevant to our products, and for those that have the highest correlation, execute proofs of concept. Now this is an innovation lab, so when we execute these proofs of concept, not all of them go into production. The things we’re talking about today won’t necessarily all hit production, and some of them are at different stages of the incubation lifecycle. But whenever you think about innovation, you have to consider the ability to fail. I know it’s a tough word, no one likes to say it, but in the case of Advanced Technology Labs, it is possible. That said, since all ideas go through a filtration process before reaching the lab, we do have a high success rate.
Enhanced Search with Machine Learning
Craig: Terrific. One thing that I write about a lot on my blog and what I think is one of the most interesting technologies that’s really going to change wealth management advisor technology in general is artificial intelligence. And I know the Advanced Technology Lab at Pershing has a lot of AI-based initiatives and proof of concepts which we’re going to talk about. So the first one that we’re going to talk about is Enhanced Search with machine learning, Michelle, can you give us a quick overview of what that is?
Michelle: In the NetX360 platform, which is what our advisors are living in day in and day out, they have a capability today called Search. They can put in a client name, look for information, look for a particular transaction to trade. And the idea behind Enhanced Search is, how can we bring more information to the advisor all at once, even information they weren’t thinking about to have better client conversations. So one of the ways we’re doing that is we’re leveraging a machine learning algorithm to associate reports, operational alerts, product information, items for attention that could be associated to an account or a client, and present that to the advisor in what we call a Spotlight Panel. And that Spotlight Panel will give them their initial search results, but the machine learning is prioritizing this information and serving up all these other things at once. The reason that that’s powerful is it helps the advisor not have to hunt and search through the platform for the reports, for alerts, or have to think of these things on their own. We’re serving it up to them and meeting them where they are on the platform at any time.
We’re pretty excited about it, and our plan in the first phase is to focus on operational alerts and tell them, Hey, is there an incoming or outgoing transfer, or are there behavioral changes happening with these accounts where maybe the investor is doing something they haven’t done before. Maybe they’re doing more trades, maybe they’re withdrawing more funds, etc.
Craig: How does that work? How does it know – I’m assuming there’s some sort of pattern matching, behavioral matching related to big data. How are you doing this and how does your big data setup run so that it enables this to be delivered?
Raj: So the search facility that we currently have within Pershing does use a big data solution, it’s called Elasticsearch, if anybody’s interested in the technical aspects. And that does enable this capability of indexing very quickly and responding, as Michelle was saying. It also allows us, via the indexing capability, to create facets like that Spotlight feature she was referring to. On the AI aspect, what we’re thinking of doing is using a feedback loop, if you will, on the information that people are clicking into, and then producing algorithms that ask, Okay, what are the preferred elements that people are gravitating towards? And since those are preferred within the algorithm, we basically put them at a higher priority, so when we do show the results, the high-priority results come first.
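The click-feedback loop Raj describes can be sketched in a few lines. This is purely illustrative, not Pershing’s implementation: result types, the weighting scheme, and all names are invented for the example.

```python
from collections import defaultdict

class FeedbackRanker:
    """Re-rank search results by how often users click each result type."""

    def __init__(self):
        self.clicks = defaultdict(int)  # result type -> observed click count

    def record_click(self, result_type):
        self.clicks[result_type] += 1

    def rank(self, results):
        # results: list of (result_type, payload) pairs.
        # Most-clicked types float to the top; the sort is stable, so the
        # original order is preserved within each type.
        return sorted(results, key=lambda r: -self.clicks[r[0]])

ranker = FeedbackRanker()
for _ in range(3):
    ranker.record_click("operational_alert")  # users keep opening alerts
ranker.record_click("report")

results = [("report", "Q3 performance"),
           ("operational_alert", "Incoming transfer"),
           ("account", "Smith family trust")]
ranked = ranker.rank(results)  # alerts first, then reports, then accounts
```

In a real Elasticsearch deployment this kind of signal would typically feed into query-time boosting rather than a post-hoc sort, but the idea is the same: observed preference raises priority.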
Craig: Similar to the way a Google search would work.
Raj: Very, very similar to that.
Craig: So it sees what you’ve searched and tries to think ahead and give you the things you’re most likely to talk about. Does it also, like Google, know where you are in the country, know what other people in your region have been searching, know the things you’ve bought or other things you search for and kind of cross reference them that way?
Raj: It can. Again, this is one of those things we probably should have started with: these are things within the Advanced Technology Lab, so some of these items are still just proofs of concept. This one in particular is making progress, so it looks like it will happen, but we should point out that these items don’t always make it to production. But that is the whole innovation lifecycle we’re in.
But understanding the persona of the user and making that pertinent to the search criteria, such as seeing what client firm they’re a part of and making that part of the overall search algorithm, can occur, and it does occur currently within the whole indexing process.
Craig: It’s interesting that you mention that. I do a presentation for conferences where I talk about a website called the Google Graveyard, if you’ve ever heard of that. It’s all the products that Google has killed, and I think it’s up to 170, 180 products now, I haven’t checked in a couple months. That’s just the ones they’ve killed, not the ones that were successful. And what I mentioned was, we don’t have enough of that in wealth management; people are too afraid to try new things for fear of failing, and failure isn’t necessarily a bad thing if you’re trying something that’s out of the box and you didn’t know if it would work. You might learn a lot from it, and we don’t have enough of that in our industry, so I’m happy to hear you guys are doing that.
Raj: If you fail fast and you fail cheap, at the end of the day you’ve learned a lot and you’re extensively adding to your intellectual capital of understanding the next challenge.
Craig: Exactly. Alright, so the next use case we wanted to talk about, we just hit Enhanced Search, can we talk about Natural Language Generation? Can you give us a quick overview for people who don’t know what Natural Language Generation is or Natural Language Processing, and what’s the use case for it inside the lab?
Natural Language Generation
Raj: At its simplest, Natural Language Generation, NLG, is part of the artificial intelligence family. Ultimately it’s the process of taking data and converting it to plain text, plain English. As for the use case we’re applying this to: the advisor, let’s say, is creating a performance report, which they can do within Pershing technology under Albridge, where they use Wealth Reporting and Insights to produce an aggregated view of the client’s portfolio and overall performance. Now, within Wealth Reporting and Insights there is this thing called a Report Creator. Incidentally, they’re actually redoing the Report Creator to allow the user to break the report down into components, basically creating their own customized experience for that particular report. And these components are autonomous, flexible, standard components that create a rich experience for the final report. Now, this is where NLG comes in. If you think about, let’s say, the client’s investment experience, not all clients can easily take a particular graph or report and distill it down into correctly focused data points; it all depends on their overall experience. What we’re looking to do here is make their lives easier by taking that graph and those tables and creating a snippet of text related to the graph, and that snippet would point out the salient points from the graph or tables they’ve chosen to show on that performance report. Very interesting stuff.
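The data-to-text step Raj describes can be illustrated with a simple template-based approach. This is a sketch only: the field names, thresholds, and phrasing are assumptions for the example, not how the Albridge Report Creator actually works (real NLG systems typically use far richer content-selection and realization logic).

```python
def performance_snippet(portfolio):
    """Turn the numbers behind a performance chart into a plain-English line."""
    start, end = portfolio["start_value"], portfolio["end_value"]
    change = (end - start) / start * 100
    direction = "gained" if change >= 0 else "lost"
    # Content selection: pick the salient point (best-performing holding).
    best = max(portfolio["holdings"], key=lambda h: h["return_pct"])
    return (f"Your portfolio {direction} {abs(change):.1f}% this period, "
            f"ending at ${end:,.0f}. The strongest contributor was "
            f"{best['name']} ({best['return_pct']:+.1f}%).")

snippet = performance_snippet({
    "start_value": 100_000,
    "end_value": 104_500,
    "holdings": [
        {"name": "Total Stock Market Fund", "return_pct": 6.2},
        {"name": "Aggregate Bond Fund", "return_pct": 1.1},
    ],
})
# snippet -> "Your portfolio gained 4.5% this period, ending at $104,500. ..."
```

Because the output is plain text, the same snippet could be handed to a smart speaker or a screen reader, which is exactly the Alexa and ADA angle discussed below in the episode.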
Craig: That’s cool. And why did you pick that, was there a customer request for it, or were there certain efficiencies you saw that would benefit clients more than others?
Raj: So this is one of those things where you start looking at the market and what your competitors are doing. We noticed there are a lot of spaces where people are looking at market data and producing this text, and we actually did it in another case where someone started to produce an order, we would show the text equivalents of their order, so we saw this natural progression occurring where people want to see more information on the NLG space, and maturity in the space. By the way, NLG is again, very early in the incubation cycle of the Advanced Technology Labs, so it’s one of the things we’re still working on. But what we can also do with this content, because we’ve created this snippet of text from the graph, is we can actually use it in the future for let’s say smart home speakers. So a smart home speaker is like Alexa, Google Home, and since we have this text now, we can theoretically in the future look at providing that to the investor as well.
Craig: And that’s where an investor might say, I want to buy some Apple stock, or I need $10,000 because I’m going on vacation, or something.
Raj: That’s right. And another thing we’re also thinking of doing to add onto that, is in the ADA space, the Americans with Disabilities Act. So now you have text that is specific to a particular component – and again, this is all exploratory so it’s very fresh, very new – if I can have the computer orally speak that text to you in whatever fashion, then it is meeting the needs of someone with let’s say a visual impairment. So there are a variety of things that can come out of this strategy.
Craig: I could say, Alexa, what’s my portfolio performance this month?
Raj: That’s exactly right.
Craig: And hopefully people aren’t listening to this podcast on a speaker and their devices are now starting up. I apologize to anyone that happened to.
Service Chat Bots
Craig: Let’s move onto the next use case which I really am interested in, chat bots and how they impact the service experience. I saw the proof of concept chat bot at the Pershing Insight Conference last year, back when we used to be able to go to conferences, and I thought it was very cool. Pershing’s been around awhile and NetX360 has been around awhile, and any system that’s got as many customers as you guys have tends to grow in complexity, and it becomes harder and harder to find stuff. And if I can just say, Hey, I need the form for a client who lost their debit card, and the form just pops up right away, that’s a huge time saver in operations. So can you talk a little about how the chat bot works, and why you did it, and what’s going to be the benefit to clients besides what I just mentioned?
Michelle: I’ll take this one. You’re exactly right, the platform has a lot of information, and today, usually when an advisor is looking for a status, they’re going to our Service Center dashboard, that’s where they’d begin, and it would be more of an email exchange. So with the chat bots, what’s exciting is now they can have an interaction, they’re not searching through a dashboard, and they can do a simple service inquiry and the bot gives them an instant response. An example would be, What’s the status of this ACAT transfer, or, What’s the wire number associated with this wire transfer, my bank didn’t receive it.
Right now we’re in the early stages with the chat bots, training them on multiple use cases; we have about 25-30 use cases documented, starting out with simple inquiries, but they’re going to get more and more complex as we keep training the bot. Another thing is getting a little bit smarter about routing calls to the right service agents and giving advisors the ability to set up callbacks right through NetX360. So if they are using our Service Center path and they don’t want to wait, they want to make sure it gets to Michelle, the expert in a particular area, they can go and see the estimated wait time and schedule a callback right through the platform, knowing when that’s going to occur. So we’re excited about that as well. There’s great stuff happening in Service. Our goal is, how can we offer a multi-channel experience? All advisors are different: some want the dashboard, some want to talk to a representative and they want to get the right one the first time, and others are going to just love the bot. Raj, maybe you want to talk a little bit about the technology we’re using for the chat bot, just to get a little techie for a second.
Raj: So the basic part of routing and so forth is something we’re going to do via the CRM we have, which is a vendor product and is one of those things that’s in flux right now because we’re changing our CRM. But when the user clicks to talk, it does that routing within the CRM process, because it looks at which user is making the request, then it looks at the queue of requests and says, Who is the appropriate person to send that information to? That’s part of the peer-to-peer communication that’s going to happen going forward. The advanced stuff, which again is still in the proof of concept phase, is what Michelle was talking about, which is the conversational interface. You know, you think about Pershing, which is a very robust system. We do many different things, from A to Z. So how do you navigate through the Pershing portal? Well, we arrange everything as well as we can, but at the end of the day, if a user, a user that speaks English, can type in a question about what they’re looking for, and the chat bot can help them navigate through those questions, that is really the ultimate goal of these chat bots, sort of the endgame.
Now, how does that happen technology-wise? Well, think about this: someone comes up to you on the street, and they’re going to ask you a question. What’s the first thing you’re going to think about?
Craig: Well that depends on the question.
Raj: You’re going to say, “What do you want?”
Craig: And who are you, and why are you asking me questions?
Raj: The first step in a chat bot conversation is to understand the intent of the user. What’s your objective? What is it you want to do? Once I figure out your intent, then I can go into this, for lack of a better term, a dialogue workflow, going back and forth between you and me, just like we’re doing now. If you think about it in terms of a protocol where we’re asking and answering questions, it’s a very similar concept. So now the computer is saying, I got you, I know what you want to do. Let me take you through my typical dialogue workflow that allows you to get to the information you’re looking for.
That’s what I would boil down all this technology to. And we aren’t necessarily using a vendor to do this; we are using an open source package, so it will be proprietary stuff going forward.
Craig: Can you talk about the open source package you’re using?
Raj: Yeah. So the current package we’re looking at is called Rasa, and it is one of those things that speaks to intent and dialogue workflow, and looking at the objective of what you want to do and try to accomplish that. Again, this is still proof of concept, I want to repeat that.
Craig: Of course. But if we could just talk a little bit deeper – intent in a chat bot is a little bit different than human intent.
Raj: Yes, I mean if you’re talking to a chat bot, you could be asking a few things. One is, can you get me the following piece of information? So that’s a “get me” use case, get me the following. In that case, what we’re doing with the overall flow is, okay, this guy wants information, so what happens is that the chat bot will interface with our APIs, acquire the information they’re looking for, and present it in some logical format. The other intent is, I need to do something. And again, Pershing is a robust system and we have a lot of technology. Get me to this form, if you will, that will help me fill out the form.
Then the chat bot has to figure out, Okay, so you want to know how to get somewhere. What I’m going to do is answer that question by bringing you to the appropriate location within our portal so you can fill in the appropriate form.
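The two intents Raj describes, “get me information” and “take me somewhere,” can be sketched as follows. Note the keyword matching here is a stand-in for the real NLU model (the proof of concept uses Rasa, mentioned later); every keyword, intent name, and response string is invented for illustration.

```python
def classify_intent(utterance):
    """Very rough intent classifier: keywords stand in for a trained model."""
    text = utterance.lower()
    if any(kw in text for kw in ("status", "what is", "what's", "show me")):
        return "get_info"      # "get me" use case: fetch data via an API
    if any(kw in text for kw in ("form", "how do i", "where do i")):
        return "navigate"      # "take me" use case: route to a portal page
    return "unknown"

def handle(utterance):
    """Once intent is known, enter the matching dialogue workflow."""
    intent = classify_intent(utterance)
    if intent == "get_info":
        return "Fetching the requested information via the API..."
    if intent == "navigate":
        return "Taking you to the right form in the portal..."
    # Fallback: hand the conversation to a human service agent.
    return "Let me connect you with a service agent."

reply = handle("What's the status of this ACAT transfer?")
```

A production NLU layer would classify intents statistically (and extract entities like account numbers), but the control flow, intent first, then a dialogue workflow, matches what Raj outlines above.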
Craig: I’m not a chat bot expert, but the difference there is there are flow-based chat bots, which are the ones you get frustrated with today online, where if you ask anything more than a couple basic questions it gets confused and says “I’ll get you an agent”. It can’t do anything more because it only follows a set conversational path. So how much better are the intent-based chat bots?
Raj: Well, first of all, there’s always the case where, if a person is getting frustrated, and going forward we can hopefully understand their sentiment, we can actually shift over to a human. So that is an important aspect that is done in a ubiquitous fashion these days. In this case, the difference is that we are trying to understand what it is you’re trying to do and then orient you in the right location, so we hopefully have a better hit ratio of understanding your objective and getting you there.
Craig: Will your chat bots just be a chat? I’ve seen some artificial intelligence faces where it’s not a real person, but it’s an AI face, so you sort of get eye contact. It’s not real eye contact, but people tend to interact differently when they’re talking to a face, even when they know it’s just a computer, than they do with text. Is that something you’re looking at?
Raj: At this point we’re not. Two years ago we did something in the Advanced Technology Labs where, like I said before, sometimes we shoot really far out on the time horizon: we actually did facial recognition and facial sentiment recognition.
Craig: Right, right.
Raj: We were actually looking to see, is this person annoyed today or not? What’s happening on their face? But that was, again, very exploratory and that’s very far in the future.
Craig: Because I’ve written about this in my blog, there is a broker dealer I know using facial recognition technology for onboarding and risk tolerance where it has the user watch little vignettes about different life experiences and it measures their facial responses. They don’t actually type anything, they just watch like 15 second vignettes and then it responds. Is that something you guys would be interested in? Not that I’m selling it.
Raj: We’re definitely not doing that right now, but like I said, two or three years ago we did look at software that did that sort of thing where it would look at a face and actually determine whether someone was unhappy, or aggravated if you will. And what we did in the proof of concept was we actually asked people to create different experiences on their faces and see how successful the bot was in figuring out their emotion.
AI for Operations
Craig: I think that’s the future. I can tell you’re really happy right now, so I’m going to ask you some more questions. So another use case we were discussing was leveraging AI for operations, and that’s a huge issue. I know my firm, we work with a lot of broker dealers, we help them optimize, improve their operational processes, or they’re RIA aggregators, consolidators, and they all have very different ways of approaching operations. So what are you guys, as a custodian, one of the biggest custodians both on the broker dealer side and the RIA side, what are some things you’re seeing where AI can help and how are you leveraging AI in operations?
Raj: So this one is interesting because in many ways it has benefits to two use cases and two personas. One is the request that Michelle was speaking about, where someone submitted a request. The other is the persona of the operations manager who’s dealing with the queue of these requests and the worklist of items. Now going back to the first persona. You have a user, he submits a request. In the past, depending on what request it is, they’re given this text saying, We’re going to take the following number of days to complete this request. It’s a very standardized response which, by the way, doesn’t always hold. But depending on the situation and scenario, some users can be a little bit anxious about the request, right? They could be very interested in making sure that, for instance, a wire gets completed in a particular time so that they can execute their investment, whatever it may be.
So what we’re looking at here is introducing something Michelle mentioned before, and the thought is that when that person does submit a request, they get something that specifically says when we think their request will be completed. And the way this is being done, and the reason this is related to the AI Ops space, by the way, is that there are two sides to this formula, if you will: we are looking at past experience, past use cases, past performance of the request that is specific to that user, and the overall request. And depending on our digestion of that data, we can give that user better information about when that request will be completed. Very similar to how, let’s say, Amazon will tell you, Your package will arrive in two days, but in this case we’re going to get much more precise because it’s very focused on that user and their particular request.
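The completion-time prediction Raj outlines can be sketched with a simple baseline: look at historical turnaround times for the same request type and report a conservative percentile. A real system would condition on the user and many more features; the data shape and numbers here are invented.

```python
def estimate_completion_hours(history, request_type, percentile=0.9):
    """Predict completion time from past requests of the same type.

    Returns the duration at the given percentile of historical turnaround
    times, i.e. "90% of similar requests finished within N hours".
    """
    durations = sorted(h["hours"] for h in history if h["type"] == request_type)
    if not durations:
        return None  # no history to learn from for this request type
    idx = min(int(len(durations) * percentile), len(durations) - 1)
    return durations[idx]

history = [
    {"type": "wire", "hours": 2}, {"type": "wire", "hours": 3},
    {"type": "wire", "hours": 2}, {"type": "wire", "hours": 5},
    {"type": "acat", "hours": 48},
]
eta = estimate_completion_hours(history, "wire")  # conservative estimate: 5 hours
```

The same historical data feeds the other side of the formula, the operations manager’s predictive dashboard described next.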
The other side of this transaction is the operations manager. Again, what is the operations manager interested in? He’s interested in looking at his queue of worklist items and making sure they’re going to get done before the end of the day, to make sure he hits his target. They don’t currently have the transparency to understand those queues, what’s going on, and how they’re going to meet those targets. The goal of the AI Ops work is to actually create a dashboard of historical trends, provide the current workload, and also predictive data for that user, the operational manager, to see. Along with that, since we have that information, we can actually have a digital assistant provide information to the operational manager, let’s say an alert if the digital assistant sees something wrong that the operational manager would be interested in. So for example, if the digital assistant says, Hey, there’s a 55% chance you’re not going to hit your targets today before the market closes, that’s a very important thing for the user to know.
Michelle: Can you also explain how this digital assistant might recommend splitting work in order to meet their service levels and process that workload?
Raj: That’s right. So one of the thoughts here, and again this is one of those items that’s in proof of concept, so it’s in very early stages, is that if the system has data regarding the particular users, and it will, it can actually predict, Okay, this user was very good at providing quick response times for this particular use case. So you can use some sort of “what if” analysis to say to the operational manager, If you engage Worker A into the queue, this is the performance to expect. So it’s giving a forecast for the different users that will be part of that overall queue.
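A toy version of that “what if” analysis: given each worker’s historical throughput per task type, forecast how long the remaining queue would take with and without a given worker. The task types, rates, and counts are made up for the sketch.

```python
def forecast_hours(queue, workers):
    """Estimate hours to clear the queue given the team's combined rates.

    queue:   {task_type: remaining count}
    workers: list of {task_type: tasks_per_hour} throughput profiles
    """
    total = 0.0
    for task_type, count in queue.items():
        rate = sum(w.get(task_type, 0) for w in workers)
        total += count / rate if rate else float("inf")
    return total

queue = {"wires": 12, "transfers": 6}
team = [{"wires": 2, "transfers": 1}]            # current staffing
with_worker_a = team + [{"wires": 4, "transfers": 1}]  # "what if" scenario

baseline = forecast_hours(queue, team)           # 12/2 + 6/1 = 12 hours
what_if = forecast_hours(queue, with_worker_a)   # 12/6 + 6/2 = 5 hours
```

Comparing `baseline` against `what_if` is exactly the kind of forecast the digital assistant could surface: “If you engage Worker A, expect the queue to clear in 5 hours instead of 12.”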
Craig: I’m always of two minds with this type of thing. One is, I think it’s super cool to be able to say, Hey, we monitored this person’s work and they’re very efficient at this type of task, let’s give them this type of task and it will improve our operational efficiency; this other person is very good at this other type of task. But then that means you have to monitor every single thing a person does, every keystroke, every action, and it’s very Big Brother-ish. So how do you balance the creepy privacy issues versus “we just want to be operationally efficient”?
Raj: Yeah. So this is all sort of the user activity within the portal, and it’s all things that we currently have. When you, let’s say, click through the different screens and so forth, there is something called End User Monitoring that sees your overall usage and how you’re leveraging the platform. So that is part of the overall product set that we use now. And we do use that in many ways, both for product development and even on the development side, where we use it to debug. Like if someone says, Hey, my performance seems a little slow today, we actually look at their end user monitoring information to provide a better user experience for them.
Craig: Oh yeah, I agree. All those things make sense. My question is, how do we balance the “we need to be more operationally efficient” against “why are you watching everything I do”? Where is the balance there?
Raj: I think that’s an interesting question; we kind of run into that in a lot of AI spaces, right, Craig? People often say, “I’m a user on your platform, I’m working for a client firm”, and that client firm is really the entity which owns all of your data. Another person could say, “Well, I’m an investor, can you use the investor’s data?” And the answer is no. You cannot use the investor’s data, that’s against all the regulations that are out there, and those regulations continue to tighten.
Let’s say you’re on the train and someone comes by with that little clicker, counting the number of people. They’re trying to figure out what the numbers are within the train so they can improve the overall schedule. Is that inappropriate? You’ve just been observed, you’ve been clicked. So now the question is, when an investor logs in, is it the same? We anonymize them, we don’t know who the investor is, but they logged in at a particular time. Is it inappropriate to observe that click?
Craig: Well nothing is necessarily inappropriate on its own. Any one piece of data is not inappropriate; it’s all the data put together, then it becomes inappropriate. My favorite story is the one about Target, it’s already a 10-year-old story, where they found that women, when they become pregnant, tend to buy certain things, and that’s when they’re most likely to change stores. But once they find a store, they won’t change. And they realized, through a tremendous amount of data, that women who buy multivitamins, rugs, and unscented talc, certain types of things that don’t seem related, are most likely pregnant. And they were so accurate they were sending out coupons to women who bought those things, and they upset a lot of people, like, How do you know I’m pregnant? You’re sending me coupons for diapers, I haven’t told anyone yet! And that’s a true story, it was in the Journal. So the thing is, you can have lots of data, and any one data point may seem completely innocuous. It’s when you put it all together that it could create privacy issues. And I’m not saying you shouldn’t do that, I’m asking how do we balance it, and how do we get the message to consumers, investors, and advisors that, here’s what we’re doing, here’s the benefits, and here’s how we’re going to avoid any of the negatives, the downside. Is that something you guys are thinking about?
Raj: So there are different pieces to this whole thing: there’s providing the better user experience, and then there’s having information about, let’s say, the investors. I do believe there are really two different personas that we need to consider here, right? One thing we did a few years ago on the professional side — again, these are people who are using the professional platform, not the retail platform — was a clustering algorithm where we looked at their overall usage of the system and tried to figure out what persona they were without them telling us. Sort of how Netflix figures out I’m a sci-fi geek because I watch sci-fi all the time, right? So we went through that whole process, and then when we talked about it, we actually asked: is it inappropriate to be clustering this use case into this persona? That particular use case didn’t move forward, probably because we couldn’t see the value of the clustering, but we never do things like that on the investor side because, again, investors are not professionals and their data is specifically theirs in that space.
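The persona-clustering approach Raj describes can be sketched in a few lines. This is a hypothetical illustration, not Pershing's implementation: the feature names (trades placed, reports run, searches) and the plain k-means routine are assumptions made for the example.

```python
import random

def kmeans(points, k, iters=20, seed=42):
    """Plain k-means clustering: returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each user goes to the nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids, labels

# Hypothetical usage vectors per user: [trades_placed, reports_run, searches]
usage = [
    [50, 2, 5], [48, 3, 4],      # heavy-trading persona
    [2, 40, 30], [3, 38, 35],    # research/reporting persona
]
_, personas = kmeans(usage, k=2)
# Users with similar usage patterns land in the same cluster,
# without anyone self-identifying their role.
```

The point of the sketch is that the persona emerges from behavior alone — which is exactly why the ethics question Raj raises (is it appropriate to infer this?) comes up before any such model ships.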
It is sort of a fine line. If you look at, let’s say, GDPR, there’s a reason why whenever you go to these websites you see that little cookie banner that says, hey, we’re tracking your information, can you click OK in order to continue? That little button came from these regulations, and it’s asking: are you okay with someone tracking your usage? So it is a debatable item, I would say.
Michelle: Yeah. And every use case that we do, by the way, has to get vetted by the teams that manage and pay attention to privacy laws — Legal and Compliance and all that. Sometimes some of our ideas do get halted based upon their opinions and direction.
Craig: Indeed. So we’re out of time; this was a great conversation. I wanted to do a quick last round of questions, one for each of you. Michelle, in 30 seconds, what do you see as being the biggest impact of AI in the client engagement area over the next 5 years?
Michelle: I think it’s going to be advisors being willing to try it and trust it, and not fear it. We’re seeing a big shift in the advisor community toward using all kinds of technology, and they’re seeing the benefits fast. So I think here it’s just making sure they understand and learn how it’s going to benefit their business and their interactions with their clients.
Craig: And Raj, from a technology point of view, over the next 5 years in terms of custodian technology, with the Advanced Tech Lab being part of your purview, what do you see as the biggest area AI can impact?
Raj: There’s so much that can be done here. Understanding the user and what it is they want to do — even in the chat bot example — is a very important aspect of this, because it’s really about understanding that specific individual and catering to their needs. So if you were to ask, in the global sense, what is AI going to do? It’s actually catering to your needs, specifically. It goes back to the conversation we had: we know who you are, we know what you want, and we know how to get you there. That’s one of the big-ticket items here.
Michelle: So it’s like the ultimate way to personalize the experience for the user.
Raj: It’s like creating Jarvis, it’s the ultimate digital assistant for an advisor so they can be much more efficient when they do their job.
Craig: Now I’m going to be writing about how Pershing is building an Iron Man suit and it’ll be out soon. I really wanna see that. Guys, thanks so much, I really love this kind of conversation, I’m glad we could find the time, and I look forward to our next conversation of all the things we didn’t get a chance to talk about. There was just too much. Thank you so much Raj and Michelle.