Heavy Lifting: Leveraging AI to Drive Actionable Insights in Wealth Management

Working out isn’t just for us humans trying to get the perfect beach bod — the robots are hitting the gym, too. And if you know how AI training and machine learning work, you can ensure your firm is getting the most out of its technology.

Ezra Group hosted the ‘From Data to Dollars’ webinar, investigating how AI and machine learning are transforming wealth management. The session focused on leveraging these technologies to enhance investment decision-making, optimize portfolio strategies, and deliver personalized services to clients. Craig Iskowitz, CEO of Ezra Group, moderated the panel, which featured Andy Lientz, Chief Technology Officer of Apex Fintech Solutions; Lee Davidson, Chief Analytics Officer of Morningstar; Ted Denbow, VP at RightCapital; Henry Zelikovsky, CEO of Softlab360; and Dani Fava, Group Head of Product Innovation at Envestnet.

This article highlights the contributions of Zelikovsky, who shared his expertise in helping wealth management firms build AI-powered data analytics solutions. According to a recent survey from Accenture, 84% of technology executives agree that AI will transform the wealth management industry within the next five years — but more than 80% of firms are stuck in the proof-of-concept stage, unable to progress. The first step to embracing AI technology is learning how it works and how it can best fit into the structure of your business.

The Heavy Lifting

One way that AI is propelling wealth management companies ahead is through advanced, specialized prediction tools. Zelikovsky described the process by which Softlab360 was brought on to develop a predictive analytics platform for Intergen Data, an AI-based machine learning company building proprietary algorithms that generate actionable insights around the probability of life events occurring. (See Running Up the Score: How Predictive Analytics Gives You an Advantage Over Your Competitors)

For the project, Softlab360 had to build a system that could process extremely large volumes of data from a wide variety of sources, such as government census data, health data, and IRS reports. The system leverages this data to identify trends, such as how individuals from certain groups were expected to use their income versus how they actually did, often in relation to health costs and other life events.

A cancer diagnosis, a divorce, or the birth of a child may seem difficult to predict. But with the right data, Intergen Data (powered by Softlab360) is able to define the statistical probability that any individual client may encounter a specific health or life event. 

When Intergen Data CEO Robert Kirk started the company, he had a lot of questions but no way to answer them. The first was “can you tell me when somebody makes the most amount of money in their life?” This is when Zelikovsky’s team went to work, collecting the right data set and then developing the predictive analytics needed to determine the answer.

Surprisingly, the machine learning system generated two answers! The first was 47, and the second was 51. The reason for two answers was that 47 is the age at which 80% of Americans make the most money in their lives, around $62,000, while 51 is where the remaining 20% of people peak, at around $157,000.

A non-AI algorithm limited to a single answer might have simply averaged the two ages and values, yielding 49 years and $109,500. But that blended figure would be far less valuable than the two distinct answers, which describe real groups in the data and provide more granular results.
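To see why this matters, here’s a minimal sketch (in Python with scikit-learn, not Softlab360’s actual code) of how a two-component mixture model recovers both peaks from synthetic data built around the figures above, while a simple average collapses them into one misleading number:

```python
# A minimal sketch, not Softlab360's actual system: a mixture model
# recovers two distinct peak-earning groups where an average hides them.
# The synthetic data assumes the 80/20 split and ages quoted above.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Simulated peak-earning ages: 80% of people peak around 47, 20% around 51
ages = np.concatenate([
    rng.normal(47, 1.5, 8000),   # majority group, peak income ~$62,000
    rng.normal(51, 1.5, 2000),   # minority group, peak income ~$157,000
]).reshape(-1, 1)

# A single average collapses the two groups into one misleading number
print(f"Naive mean age: {ages.mean():.1f}")

# A two-component Gaussian mixture finds both modes
gmm = GaussianMixture(n_components=2, random_state=42).fit(ages)
for mean, weight in zip(gmm.means_.ravel(), gmm.weights_):
    print(f"Peak at age {mean:.1f} (share of population: {weight:.0%})")
```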

Other questions Kirk wanted answered included, “what is the income utilization versus expected expenses in specific age groups?” (See Big Data as a Service is the Next Big Thing in Artificial Intelligence)

Using the Right Equipment

Luckily, the best tools for building AI and machine learning systems are already in your possession. “Your unique set of historical data is like a jewel for your company once you can set it in an internal AI or ChatGPT system,” Zelikovsky emphasized. In many cases, this historical data is perfectly suited for predictive analytics. 

IBM defines predictive analytics as a branch of advanced analytics that makes predictions about future outcomes using historical data combined with statistical modeling, data mining techniques and machine learning. Companies employ predictive analytics to find patterns in this data to identify risks and opportunities. Predictive analytics is often associated with big data and data science.

The data used for learning, training, and validation of predictive analytics for wealth management clients is selected by internal experts at Softlab360 together with the compliance department, with help from specialists in the relevant subject areas.

For their project with Intergen Data, Softlab360 structured their data analytics system using the Bayesian method. Based on Bayes’ theorem, this approach combines prior knowledge encoded in a statistical model with new information from observed data.
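As a toy illustration of the idea (not the Softlab360 platform itself), a beta-binomial model shows how Bayes’ theorem blends a prior belief about the probability of a life event with newly observed data:

```python
# A toy illustration of the Bayesian idea, not Softlab360's platform:
# a Beta prior encodes what we already believe about the probability of
# a life event, and observed data updates that belief via Bayes' theorem.
from scipy import stats

# Prior belief: the event happens to roughly 10% of clients
prior_alpha, prior_beta = 2, 18

# New observed data: 30 of 200 clients in our records experienced the event
events, clients = 30, 200

# Beta-binomial conjugacy: the posterior is also a Beta distribution
post_alpha = prior_alpha + events
post_beta = prior_beta + (clients - events)
posterior = stats.beta(post_alpha, post_beta)

print(f"Prior mean:     {prior_alpha / (prior_alpha + prior_beta):.3f}")
print(f"Posterior mean: {posterior.mean():.3f}")  # belief shifted toward the data
```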

Hitting Those AI Reps

The underlying technology was built entirely by Softlab360 and evolved over a number of years from a pre-cloud computing infrastructure. Even though they now run their platform on Amazon Web Services (AWS), they don’t use any of the built-in AWS machine learning features. Zelikovsky’s team believes that their internally developed tools deliver better results for wealth management clients than the generic AWS ones.

Older algorithms like the Bayesian methods are built in C++ and Java, and the platform also has a mechanism that lets Zelikovsky’s team drop in a new algorithm either in a native language or in WEKA, an open-source machine learning workbench written in Java. WEKA contains tools for data preparation, classification, regression, clustering, association rule mining, and visualization.

The technology is a multi-node distributed architecture that runs on dedicated servers and virtual machines in AWS. The platform scales itself to the amount of horsepower a particular algorithm or method needs to run a particular dataset, scaling up and down through Amazon’s APIs.
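A hedged sketch of what that pattern can look like in practice, using the standard boto3 Auto Scaling API; the group name and capacity numbers are hypothetical, and this is not Softlab360’s actual orchestration code:

```python
# A hedged sketch of scaling compute up before a heavy training run and
# back down afterward through Amazon's APIs. The Auto Scaling group name
# and capacities are hypothetical, not Softlab360's actual setup.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

def set_training_capacity(desired: int) -> None:
    """Resize the worker fleet to match the dataset/algorithm workload."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="ml-training-workers",  # hypothetical group
        DesiredCapacity=desired,
        HonorCooldown=False,  # resize immediately for batch training jobs
    )

set_training_capacity(12)   # scale up for a large dataset
# ... run the training job ...
set_training_capacity(2)    # scale back down to control costs
```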

To run the system, they have the support of a few databases. Intergen Data relies on Microsoft SQL Server, with its solution built in Azure; the machine-learned data is deposited into SQL and stored in Parquet files. Parquet is an open-source columnar file format used in big data for fast analytical querying.
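A minimal example of that Parquet workflow, with illustrative file and column names, might look like this in Python with pandas:

```python
# A minimal sketch of the Parquet workflow described above: model output
# lands in a DataFrame, is written to Parquet for fast analytical querying,
# and can be read back with only the columns a query needs. File and
# column names are illustrative.
import pandas as pd

scores = pd.DataFrame({
    "client_id": [101, 102, 103],
    "event": ["peak_income", "peak_income", "divorce"],
    "probability": [0.81, 0.64, 0.12],
})

# Columnar storage with compression; requires pyarrow or fastparquet
scores.to_parquet("event_scores.parquet", index=False)

# Analytical reads can pull just the columns they need
probs = pd.read_parquet("event_scores.parquet", columns=["client_id", "probability"])
print(probs.head())
```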

To train with the Bayesian structure, data analysts take a large set of data and cut it into pieces: one is chosen to be the training set, and the other is the validation set. The training data acts as the practice questions, and the validation set acts as the answer key.

The algorithms are then trained on the first set, running through it over and over in different ways until they can consistently calculate results that align with the values in the validation set. Because all of the algorithms are trained on one shared data set, the team can clearly see which ones are the most consistent relative to the others.
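A simplified sketch of that workflow (the model and dataset are illustrative stand-ins, not Softlab360’s proprietary algorithms) might look like this, including the 94% accuracy gate described in the next section:

```python
# A simplified sketch of the train/validate workflow described above.
# The model choice and synthetic dataset are illustrative stand-ins,
# not Softlab360's proprietary algorithms.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for a labeled historical dataset
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Cut the data into pieces: one for training, one held out for validation
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Score against the "answer key" (the validation set)
accuracy = accuracy_score(y_val, model.predict(X_val))
print(f"Validation accuracy: {accuracy:.1%}")

# Only promote the model once it clears the accuracy bar (94% in the
# process described below) on data it has never seen
if accuracy >= 0.94:
    print("Model cleared for prediction on new, open-ended data")
```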

Keeping Perfect Form

Once an algorithm consistently produces correct results, it can be used to make predictions on new data with reasonable accuracy. Softlab360 targets 94% accuracy in predicting historical data before feeding an algorithm new, open-ended data. When the algorithms are hitting that level of accuracy on sections of the training data, they then run the entire unsliced set as a final check. Softlab360 recently trained a system on 7 million trades from two years of TD Ameritrade’s records.

To keep the algorithms accurate over long periods of time, analysts regularly clear the data and retrain the program. The retraining cadence depends on how often new data arrives and how much of it there is, with analysts estimating whether the new data will have enough influence to warrant a reset.

From The Ultimate Guide to Model Retraining:

Since we expect the world to change over time, model deployment should be treated as a continuous process. Rather than deploying a model once and moving on to another project, machine learning practitioners need to retrain their models if they find that the data distributions have deviated significantly from those of the original training set. This concept, known as model drift, can be mitigated but involves additional overhead in the forms of monitoring infrastructure, oversight, and process.
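One common way to operationalize this, sketched below under the assumption that a simple two-sample distribution test is signal enough, is to compare incoming data against the original training distribution and flag drift:

```python
# A hedged sketch of one common retraining trigger: compare the
# distribution of incoming data against the original training data with
# a two-sample Kolmogorov-Smirnov test. The feature and threshold are
# illustrative, not Softlab360's actual monitoring setup.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(train_feature: np.ndarray,
                     live_feature: np.ndarray,
                     p_threshold: float = 0.01) -> bool:
    """Flag retraining when a feature's live distribution has drifted."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < p_threshold

# Example: account balances drifted upward since the model was trained
rng = np.random.default_rng(7)
train_balances = rng.normal(50_000, 10_000, 5_000)
live_balances = rng.normal(58_000, 10_000, 5_000)

if needs_retraining(train_balances, live_balances):
    print("Distribution drift detected: clear the data and retrain")
```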

When running this type of system, it’s essential to keep an eye on costs, Zelikovsky cautioned. Growth should be planned incrementally, taking individual datasets and strategically choosing what you want to train your platform on. The initial dataset doesn’t have to include millions of data points, though it does need at least a few thousand to contain enough distribution to be useful for training. By incrementally increasing the number and size of the datasets you use and the number of algorithms you’re training, you can build sustainably and maintain a good understanding of how much it costs to run that much computing power.

The average organization is over budget on cloud spend by 23 percent, partially due to inefficient allocation of resources and spending. Cloud waste averaged 30 percent of companies’ cloud budgets in 2021, with that figure jumping to 32 percent by the end of 2022. This was one of the factors behind Amazon AWS introducing a new monitoring service called Cost Anomaly Detection, which is itself powered by machine learning, with the goal of notifying cloud admins of “unexpected or unusual spending”.

The industry term for this is “bill shock”: when an extremely (and relatively) large bill appears from a cloud computing provider. There is usually one of two factors behind bill shock: services were turned on and left running when they weren’t required, or services were scaled up without a full understanding of the cost implications.
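For teams that want those anomaly alerts surfaced in their own tooling, a hedged sketch of polling the service through boto3’s Cost Explorer client might look like this (it assumes an anomaly monitor is already configured in the account; the dates are illustrative):

```python
# A hedged sketch of polling AWS Cost Anomaly Detection so unusual spend
# surfaces in your own alerts rather than in a surprise bill. Assumes an
# anomaly monitor already exists in the account; field handling is
# simplified and dates are illustrative.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer API

response = ce.get_anomalies(
    DateInterval={"StartDate": "2023-06-01", "EndDate": "2023-06-30"},
)

for anomaly in response.get("Anomalies", []):
    impact = anomaly["Impact"]["TotalImpact"]
    print(f"Unusual spend detected: ~${impact:,.2f} (id: {anomaly['AnomalyId']})")
```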

Long-Term Fitness

With only basic data about the starting point of a trade account, Softlab360’s system can derive insights about the types of trades that have occurred by size, frequency, and type of security. It can also calculate a range of contingency traits associated with a type of account, and what demographic of people represents those traits. From there, the system can predict what those clients are most likely to do next in terms of trading, when they can be expected to amass a certain amount of assets in their account, what they’re likely to do with that cash flow, and what products they’re most likely to be interested in.

Zelikovsky noted that these capabilities are particularly useful for companies that allow self-trading, since they’ll need to convert those customers to a more engaged status and retain them on the platform long-term.

Along those same lines, AI can be useful for identifying which clients are most likely to churn. Zelikovsky broke down the way that Softlab360 would analyze behavioral data such as investment plans and customer goals, with a focus on how the initial plan created with the advisor differs from the actions taken or events encountered. 

“We’re trying to find where these features can correlate to our training data set, and how we can gain perspective on where the diminished capacity is,” he explained. If a decision or situation is different from the projection, they aim to clarify what the difference is specifically and to what degree. A generative AI system can then look at those differences and make a tailored recommendation or ‘monetization function’ of alternative products and strategies that are most likely to keep that customer engaged. 
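A simplified sketch of that churn analysis, with hypothetical behavioral features standing in for Softlab360’s production signals:

```python
# A simplified sketch of the churn analysis described above: behavioral
# features capture how far a client's actions have deviated from the plan
# built with their advisor, and a classifier scores churn risk. The
# features, labels, and model are illustrative, not the production system.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioral features per client
features = pd.DataFrame({
    "plan_deviation_pct": [0.05, 0.40, 0.10, 0.65],  # drift from initial plan
    "missed_contributions": [0, 3, 1, 5],
    "logins_last_90d": [22, 2, 15, 1],
})
churned = [0, 1, 0, 1]  # historical labels from the training data set

model = LogisticRegression().fit(features, churned)

# Score a current client; a high probability would trigger a tailored
# retention recommendation from the generative layer described above
new_client = pd.DataFrame({
    "plan_deviation_pct": [0.55],
    "missed_contributions": [4],
    "logins_last_90d": [3],
})
print(f"Churn probability: {model.predict_proba(new_client)[0, 1]:.0%}")
```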

Stretching It Out

While it can be tempting to treat complicated digital technology like AI as a magic black box, understanding how these systems work is the first step to riding the machine learning wave. Existing data assets lying dormant at most wealth management firms can be transformed by powerful AI tools to drive efficiency, boost advice capabilities, and reduce client churn. 

By incrementally growing your analysis systems and diligently applying them as broadly as possible across your company, you can take advantage of their full capabilities to push ahead in the AI arms race. 
