“The promise of artificial intelligence vastly outweighs the impact it will have on jobs. In the same way that the invention of the airplane negatively affected the railroad industry, it also opened a much wider door to human progress.” — Paul Allen, Co-Founder of Microsoft
According to an Accenture survey, 80% of wealth management firms have reported that they’re either deploying or scaling both client- and advisor-facing AI-powered technology. But are they deploying that technology efficiently?
While some may laud the seemingly endless potential of AI, practitioners know that it is not equally powerful in every application. The same system that makes one task instant can make a slightly different one much more complicated.
As part of our ongoing educational series on artificial intelligence in wealth management, Ezra Group hosted a webinar called ‘From Data to Dollars’, with industry experts discussing how AI and machine learning are transforming the space. The session focused on leveraging these technologies to enhance investment decision-making, optimize portfolio strategies, and deliver personalized services to clients.
The panel was moderated by Craig Iskowitz, CEO of Ezra Group, and featured Andy Lientz, Chief Technology Officer of Apex Fintech Solutions; Lee Davidson, Chief Analytics Officer of Morningstar; Ted Denbow, VP at RightCapital; Henry Zelikovsky, CEO of Softlab360; and Dani Fava, Group Head of Product Innovation at Envestnet.
This article covers the contributions of Lientz, who described the nuances of incorporating AI processes from a fintech vendor’s perspective. He laid out the strengths and weaknesses of current AI applications, where they are best applied, and how they can support financial education and improve client outcomes.
In recent months, FINRA published two Regulatory Notices regarding the fraudulent transfer of customer accounts using the ACATS (Automated Customer Account Transfer Service) system.
One of the AI use cases Apex applies is fraud detection, specifically around ACATS. ACATS fraud occurs when a bad actor opens an account using stolen information and attempts to initiate securities transfers from a legitimate customer’s account.
Lientz explained that Apex holds large amounts of customer data, which they analyze to identify patterns of expected transfers, times, and days of use. They then apply predictive analytics to generate a score based on the types of trades and behaviors each client demographic engages in. Apex also evaluates the source of each trade and combines that information with the trend data. (See Heavy Lifting: Leveraging AI to Drive Actionable Insights in Wealth Management)
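As a rough illustration of this kind of pattern-based scoring (not Apex’s actual model — every feature name, threshold, and weight below is invented for the sketch), a transfer request can be compared against a client’s historical profile and assigned a risk score:

```python
# Hypothetical sketch of pattern-based transfer scoring.
# Features, thresholds, and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class TransferRequest:
    hour: int           # hour of day the request arrived (0-23)
    weekday: int        # 0 = Monday ... 6 = Sunday
    amount: float       # requested transfer amount
    source_known: bool  # whether the request came from a familiar channel

def risk_score(req: TransferRequest, profile: dict) -> float:
    """Score a transfer against a client's profile (0 = normal, higher = riskier)."""
    score = 0.0
    if req.hour not in profile["usual_hours"]:
        score += 1.0   # unusual time of day
    if req.weekday not in profile["usual_days"]:
        score += 1.0   # unusual day of week
    if req.amount > 3 * profile["median_amount"]:
        score += 2.0   # amount far above this client's norm
    if not req.source_known:
        score += 2.0   # unfamiliar request channel
    return score

# A profile built from historical behavior, and a suspicious request
profile = {"usual_hours": range(9, 17), "usual_days": {0, 1, 2, 3, 4},
           "median_amount": 5_000.0}
req = TransferRequest(hour=3, weekday=6, amount=40_000.0, source_known=False)
print(risk_score(req, profile))  # 6.0 — every rule fires
```

A production system would learn these weights from labeled fraud cases rather than hand-setting them, but the shape of the comparison — request versus expected pattern — is the same.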
Apex is also investigating large language models (LLMs) to assess requests and other textual data in the fraud detection process. To do this, they need to parse their CRMs through the LLM and then combine that data with predictive analytics. By comparing the expected requests to what was actually received, the systems can identify and flag suspicious transfers.
In Regulatory Notice 23-06, published in March, FINRA listed some potential indicators of ACATS fraud that they observed through their regulatory programs:
- Repeated Rejections of TIFs (Transfer Initiation Forms) – A carrying member rejects the receiving member’s account transfer request for the same customer on multiple occasions due to incomplete or inaccurate information, such as errors in account type.
- Request for Asset Transfer Soon After a New Account is Opened – Soon after assets have been moved into a new brokerage account, a bad actor sends instructions to quickly move the assets to an external account.
- Changes in Customer Communication – Changes in a customer’s communication patterns, such as switching from telephone to email or transfer requests in email that contain grammatical or spelling errors.
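The indicators above lend themselves to simple rule checks. The sketch below encodes them with hypothetical field names and thresholds; a real system would tune both against labeled fraud cases rather than use these made-up values:

```python
# Illustrative rule checks loosely based on FINRA's listed ACATS fraud
# indicators. Field names and thresholds are assumptions for this sketch.
from datetime import date

def acats_red_flags(account: dict) -> list[str]:
    flags = []
    # Repeated TIF rejections for the same customer
    if account["tif_rejections"] >= 2:
        flags.append("repeated TIF rejections")
    # Transfer requested very soon after the account was opened
    account_age_days = (account["transfer_date"] - account["opened"]).days
    if account_age_days <= 7:
        flags.append("transfer requested soon after account opening")
    # Shift in communication channel, or error-ridden written requests
    if account["channel_changed"] or account["message_has_errors"]:
        flags.append("change in customer communication patterns")
    return flags

acct = {"tif_rejections": 3,
        "opened": date(2023, 5, 1),
        "transfer_date": date(2023, 5, 4),
        "channel_changed": True,
        "message_has_errors": False}
print(acats_red_flags(acct))  # all three indicators fire
```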
TLC for Your LLM
While LLM systems offer great potential, putting them into use requires an enormous amount of training and setup, and Lientz noted that Apex doesn’t yet have enough confidence in its LLM to incorporate it into standard workflows. They are, however, training the model for “request and detection projects” on their extensive corpus of customer requests, inquiries, and support questions from all their correspondents going back some 15 years.
The dataset is unique and full of useful insights, but also quite messy, Lientz disclosed. In fact, part of the effort is to clean those datasets up so that they are practical to integrate. According to Lientz, managing the historical data used for predictive analytics is a much simpler task. If you aren’t careful with LLM systems, you could end up with false positives, and “you don’t want someone getting 500 requests of fraud detection and then ignoring them,” Lientz contended.
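One common way to avoid that flood-of-alerts problem is to calibrate the alert threshold against a review budget, so the system only surfaces as many cases as an analyst can actually examine. The sketch below does this with a synthetic score distribution; the scores and the budget are made up for illustration:

```python
# Toy calibration: choose a score cutoff so expected alert volume stays
# within an analyst's review capacity. Distribution and budget are synthetic.
import numpy as np

rng = np.random.default_rng(0)
daily_scores = rng.exponential(scale=1.0, size=10_000)  # historical risk scores

alert_budget = 50  # alerts per 10,000 requests a reviewer can handle
threshold = np.quantile(daily_scores, 1 - alert_budget / daily_scores.size)

alerts = daily_scores[daily_scores > threshold]
print(len(alerts))  # roughly the alert budget, not hundreds
```

Precision still matters — a tight budget with a poorly ranked score just buries real fraud below the cutoff — but rate-limiting the queue is what keeps reviewers from tuning the alerts out entirely.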
Lientz used the 2022 Uber hack via ‘multi-factor authentication fatigue’ as an example, in which an attacker flooded an employee with two-factor authentication requests until one was eventually approved. “There are interesting questions in how you present this information to someone in a way that’s effective for helping the decision process,” he concluded. (See AI’s Judgment Day: How ChatGPT-4 is Reshaping Wealth Management)
Bulk Analysis
Apex also invests heavily in user benchmarking, Lientz said, which they split into correspondents (broker-dealers) and end investors. The benchmarking of investors is useful for marketing, offering high-level assessments based on machine learning techniques and linear regression (an analysis used to predict the value of one variable based on the value of another). These analyses can show correspondents how to get customers to increase their overall investment in the platform based on the approaches of other firms in their area, and how they stack up against their competitors. (See Running Up the Score: How Predictive Analytics Gives You an Advantage Over Your Competitors)
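A minimal sketch of this kind of linear-regression benchmarking, using synthetic peer data — the feature choice (outreach emails per client) and all the numbers are assumptions for illustration, not Apex’s actual inputs:

```python
# Fit a linear trend across peer firms, then compare one correspondent
# against it. Data is synthetic; only the technique is real.
import numpy as np

# Peer firms: (outreach emails per client, avg. platform investment in $k)
emails = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
investment = np.array([10.0, 14.0, 18.0, 22.0, 26.0])

# Ordinary least squares: investment ≈ slope * emails + intercept
slope, intercept = np.polyfit(emails, investment, deg=1)

# Benchmark one firm against the peer trend
firm_emails, firm_investment = 3.0, 15.0
expected = slope * firm_emails + intercept
print(f"peer trend predicts {expected:.1f}k; firm is at {firm_investment}k")
```

The gap between the fitted trend and a firm’s actual figure is what turns into a “practical recommendation”: firms below the line can see which peer behaviors correlate with higher investment.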
“It’s about getting practical recommendations to correspondents so they can improve their outreach to their own customers,” Lientz explained. Rather than simply following standard advice based on average risk tolerance levels, the advanced analysis can look at clients in similar demographics with matching trade patterns to provide different, tailored options. Standard models don’t necessarily make sense for each individual client, so it’s very valuable for advisors to be able to personalize their recommendations and meet those investors where they are.
Advisor and Client Education
Beyond direct financial advice, Apex can also analyze a client’s investment behavior and determine what they would be interested in learning more about, Lientz said. Using predictive analytics, they can identify when a client may be ready to change their investing patterns, and provide both the client and their advisor with the education and materials they need to take the next step. (See Unlocking Advisor Insights Using Predictive Analytics)
This client education approach has provided “a boost in self-directed activity, but also a real boost in advisor-led activity when you educate the investor and empower them to understand the process better and make better decisions with the advisor’s help,” Lientz noted. Answering client questions and providing relevant educational materials is another practical application for LLMs.
To further this commitment to investor empowerment and learning, Apex acquired Zogo Finance, a financial education app with a Gen-Z focus, in 2022. Since its founding in 2018, Zogo has given over 16 million lessons to young investors on a wide range of topics including healthcare, investing strategies, insurance, and even e-sports. Apex-Zogo users have access to over 450 modules of relevant topics with the ability to earn rewards for achieving high levels of financial literacy.
Process vs. Outcome Systems
As the use of LLMs expands across industries and use cases, questions come up as to how to optimize their output, especially around reasoning tasks. Is it better to employ outcome-based approaches that supervise the final answer, or process-based approaches that supervise the reasoning process itself?
Recent research has shown improvements in mathematical reasoning when using process-based supervision of AI instead of outcome-based supervision. Process-based supervision can help avoid logical errors that are inherent in large language models. Lientz noted, however, that the entire thesis of modern AI, as laid out in the original Google transformer paper, is that training is unsupervised and judged on outcomes only, and he would therefore be skeptical of how much power you could get out of a process-based approach.
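The distinction can be made concrete with a toy example: outcome-based checking looks only at the final answer, while process-based checking examines every intermediate step. (A real process-supervision setup scores steps with a learned reward model; this sketch checks them deterministically just to show where the two approaches differ.)

```python
# Toy contrast between outcome- and process-based supervision on an
# arithmetic reasoning chain. The second step contains the error.
steps = [("3 * 4", 12), ("12 + 5", 18), ("18 - 2", 16)]

def outcome_check(steps, expected_final):
    # Outcome supervision: inspect only the final answer
    return steps[-1][1] == expected_final

def process_check(steps):
    # Process supervision: verify each claimed step individually
    return [eval(expr) == claimed for expr, claimed in steps]

print(outcome_check(steps, 15))  # False – the final answer is wrong
print(process_check(steps))      # [True, False, True] – pinpoints step 2
```

Note that the third step is locally consistent with the earlier mistake, which is exactly why step-level checking gives richer feedback: outcome checking only says the answer is wrong, while process checking identifies where the chain went off the rails.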
Transformer models, Lientz explained, all rely on pouring increasing amounts of compute into a system and then checking the outcomes afterward. Even in his past experience working at search engines, which used a large human judgment process, the evaluation was still outcome-based.
“While you could probably learn to supervise through the process, it would be an art in itself,” Lientz emphasized, and he doesn’t think a company like Apex would be prepared to invest in that type of intensive change. However, if a larger AI specialist firm were to offer a pre-built system, it could perhaps be incorporated or tested.
Efficient Applications
The best way to apply any new technology is to assess what it’s already designed to do well, rather than spending resources to adapt it to a different function. Apex has put this approach into efficient practice, with AI seamlessly integrating into fraud detection and benchmarking, and LLMs answering client questions and cataloging their vast corpus of textual data.
AI is exciting to consider and work with, but it can also lead to costly mistakes if incorporated blindly. Be sure to set yourself up for success and take the time to understand how the tools you’re using work best, and why.