Unlocking Advisor Insights Using Predictive Analytics

Every day we create billions of gigabytes of data, just by making decisions. We publish content on social media, complete intake forms at the doctor’s office and invest money based on news or a conversation with a friend. Those actions generate information about us—tiny clues about who we are and how we think—which is then captured and stored by corporations.

Whether data is the new oil (as British mathematician Clive Humby first wrote back in 2006), the new gold or the new global currency, we know that it’s incredibly valuable and still a mostly untapped asset at many companies. In fact, a significant portion of data’s potency comes from our own uncertainty about the future: we don’t know exactly how useful any given dataset might be. Data scientists can clean and analyze raw information today to answer one question. Tomorrow, they may reuse the same information to answer other questions and reveal completely new insights.

In the wealth management industry, raw data is plentiful, while insights are scarce. 

Buried within many firms’ CRM, portfolio management and other systems are copious amounts of data. Relatively mundane information, such as addresses and birthdates, is cross-linked to more nuanced data, like the number of interactions a client might have with an automated marketing campaign or which methods they choose for interacting with their advisors.

But without advanced data management tools, particularly those that can guide advisors towards the next best action, industry information is as useful as oil in the ground, gold under a mountaintop or funds held in an inaccessible account. In other words, completely useless.

Predictive analytics can help firms to unlock the powerful insights buried deep within their own assets, but those systems need to be fed enough consistent, reliable and high-quality data to ensure success.

What is Predictive Analytics?

A recent article in CIO Magazine had a terrific description of the differences between artificial intelligence, machine learning and predictive analytics:

AI isn’t a single technology. Instead, it’s an umbrella term for various approaches that can train machines to solve problems in ways that mimic human intelligence.

Machine learning (ML) is one of these approaches, and predictive analytics relies on machine learning algorithms. Businesses use these algorithms to predict outcomes based on historical data and make smarter decisions. Both machine learning and predictive analytics involve collecting and analyzing data from past events to make better decisions about the future.
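
To make that relationship concrete, here is a minimal sketch of the idea, not anything drawn from the CIO article: a model is fit on historical records whose outcomes are already known, then used to score records it hasn’t seen. The column names and the pandas/scikit-learn tooling are illustrative assumptions.

```python
# Minimal sketch: learn from past outcomes, predict future ones.
# Column names (tenure_years, trades_per_month, churned) are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

history = pd.DataFrame({
    "tenure_years":     [1, 4, 7, 2, 10, 3, 8, 5],
    "trades_per_month": [12, 3, 1, 9, 2, 15, 1, 4],
    "churned":          [1, 0, 0, 1, 0, 1, 0, 0],   # known outcome from past events
})

X = history[["tenure_years", "trades_per_month"]]
y = history["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # learn from historical data
print(model.predict_proba(X_test)[:, 1])             # predicted churn risk for unseen clients
```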

Trademarks of Reliable Data: Credibility and Quality

Not all information is created equal, and much of that has to do with the characteristics of the organization that’s collecting, cleaning and providing it, according to Henry Zelikovsky, founder and CEO of outsourced software development shop, SoftLab360. Knowing where to find the right measurements, and how to gauge their usefulness, is the first step in creating predictive insights, he said.

Publicly-sourced statistics from the federal or state government are considered reliable, because they come from organizations that are accountable to the public and have deep, institutional authority. This includes sources like the IRS or the Department of Labor.

Some privately-sourced information may also be considered reliable, based on the reputation of the organization providing it, such as banks, insurance companies, custodians or other regulated entities. Even though they’re not publicly funded or member-funded, their brands are associated with the quality of their data gathering.

On the other hand, data from specific websites might or might not be credible. If its origin is unknown, or untested, there’s a significant risk that the information it provides could be useless from a predictive analytics point of view. Insights gathered from an internet forum might be interesting, but their veracity is often impossible to ascertain, leaving any conclusions standing on shaky ground.

The second test point is the quality of the data, which must be at a high level to be used in any AI application, Zelikovsky noted. There are a number of aspects to data quality, including consistency, integrity, accuracy and completeness. According to Wikipedia, data is generally considered high quality if it is “fit for its intended uses in operations, decision making and planning,” and if it correctly represents the real-world construct to which it refers.
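
As a rough illustration of what checking those dimensions can look like in practice, the sketch below runs a few basic completeness, integrity, accuracy and consistency checks. The table and its columns are hypothetical and not taken from any particular firm’s pipeline.

```python
# Hedged sketch of basic data-quality checks; the "clients" table is hypothetical.
import pandas as pd

clients = pd.DataFrame({
    "client_id":  [101, 102, 102, 104],
    "birth_date": ["1975-03-02", None, "1980-13-45", "1962-07-19"],
    "state":      ["NJ", "ny", "NY", "CA"],
})

# Completeness: share of missing values per column
print(clients.isna().mean())

# Integrity: client_id should be unique
print("duplicate ids:", clients["client_id"].duplicated().sum())

# Accuracy: birth_date should parse as a real calendar date
parsed = pd.to_datetime(clients["birth_date"], errors="coerce")
print("unparseable dates:", (parsed.isna() & clients["birth_date"].notna()).sum())

# Consistency: state codes should follow one canonical form
print("non-uppercase state codes:", (~clients["state"].str.isupper()).sum())
```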

Even the best business intelligence, data warehousing or similar initiatives have failed due to low data quality, which breeds a lack of trust in the results the system produces. If users cannot trust the data, they will gradually abandon the system, undermining its major KPIs and success criteria.

Making Data Meaningful

While developers are making advances in analytics and algorithms that can process unstructured data, even the most advanced artificial intelligence systems need their inputs arranged in a manner that computers can read and learn from. In other words, data needs to be classified for a computer to learn from it, and often only the computer itself can tell whether it can read the inputs it’s fed.

Take a conversation thread, for example. The topics addressed in the dialogue could be meaningful to a person reading it, but the thread might not be classified in a manner that allows an algorithm to gather any insight. This is referred to as “statistical noise,” Zelikovsky explained.

Data points that are too spread out across an outcome, or not repeated often enough to be significant, are considered statistical noise. Noise can be plentiful or rare, but in either case an algorithm won’t be able to make sense of a “noisy” dataset, and people looking at the data with the naked eye will rarely even spot it.

Unusable data is often only uncovered after it’s fed into a computer system and classification and clustering are unsuccessful. The results aren’t meaningful. The raw material can’t be grouped. And no predictions can be made from the information at hand.
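
One way to see this effect, purely as a generic illustration unrelated to SoftLab360’s own tooling, is to run the same clustering step on structured data and on random noise and compare how well the groups hold together. The sketch below assumes scikit-learn and synthetic data.

```python
# Generic sketch: the same clustering step on structured data vs. pure noise.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

structured, _ = make_blobs(n_samples=600, centers=3, random_state=0)  # real grouping exists
noise = rng.uniform(-10, 10, size=(600, 2))                           # statistical noise

for name, data in [("structured", structured), ("noise", noise)]:
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
    # A silhouette score near 1.0 means tight, well-separated clusters;
    # a much lower score suggests the "clusters" are largely arbitrary
    # and no reliable prediction can follow from them.
    print(name, round(silhouette_score(data, labels), 2))
```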

Deriving Meaningful Business Decisions Using Artificial Intelligence 

High-quality data that is credible and learnable from an algorithm’s perspective can produce powerful results, Zelikovsky noted. In isolation, a single trade provides limited information about an investor’s mindset, motivation or capacity for risk. But taken in aggregate, when analyzing millions of trades, patterns begin to emerge, he explained.

Equipped with predictive analytics, wealth management firms can leverage their data to provide better customer service or research the effects of geography and environment when selling financial products, like life insurance.

The first step in finding data patterns comes from “machine data slicing,” Zelikovsky said. An initial blob of data, like 7 million trades, is segmented into slices, such as trades made in each month or in each time zone. Each slice is then analyzed on its own and produces its own results.
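
As a simplified sketch of what that slicing step might look like (the table and its columns are hypothetical), the same blob of trades can be grouped along one dimension, summarized, and then re-sliced along another:

```python
# Hedged sketch of "data slicing": segment a table of trades and summarize each slice.
import pandas as pd

trades = pd.DataFrame({
    "trade_time": pd.to_datetime(["2024-01-05 09:31", "2024-01-20 15:02",
                                  "2024-02-03 10:15", "2024-02-28 13:44"]),
    "time_zone":  ["US/Eastern", "US/Pacific", "US/Eastern", "US/Central"],
    "notional":   [12_000, 3_500, 48_000, 7_200],
})

# Slice by calendar month and analyze each slice separately
by_month = trades.groupby(trades["trade_time"].dt.to_period("M"))["notional"]
print(by_month.agg(["count", "mean"]))

# The same raw blob can be re-sliced along a different dimension
print(trades.groupby("time_zone")["notional"].agg(["count", "mean"]))
```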

Certain social media platforms or news outlets might be associated with trading patterns. Education levels might also weigh into decision-making. As insights come into focus, they can be projected into future behavior, providing predictive models for asset managers, broker-dealers and RIAs, Zelikovsky stated. 

Another example Zelikovsky provided comes from leveraging data from electronic medical records. By slicing the data into segments like zip codes and age groups, patterns around disease begin to emerge. Not only that, but the costs of treating those ailments also come into focus.

Equipped with predictive analytics insights, advisors might suggest that a client relocate to a different state or change jobs to statistically shift themselves out of a cluster associated with a high risk of certain diseases and into a more moderate or lower-risk cluster, Zelikovsky said.

More Data Isn’t Always Better

There’s a common misperception that more data is always better, Zelikovsky noted. That’s simply not true. Better data is better, and better data might present itself as a small collection of information that’s particularly insightful.

Collections of just a few thousand data points have revealed insights obscured by collections with millions of records. In response to specific questions, a tight grouping of information that’s been filtered and analyzed can still inform home offices and C-suite executives, according to Zelikovsky.

Data that is clearly clustered is already providing clues and answers to questions. Sometimes excessively large data sets are simply unnecessary.

It’s Not Bad Data, It’s Just Incomplete 

While bad data exists, most data sets are just incomplete, or the data has been captured in a manner unique to its function, Zelikovsky said.

Going back to electronic medical records: they are an example of data that’s often good but incomplete, Zelikovsky observed. A patient might visit the doctor or be admitted to a hospital, but not all vital signs are recorded in the patient’s medical record. A data field could be blank or filled with incomplete information, he said.

Transforming medical records into the likelihood of contracting a disease or trade records into a predictor of customer churn is where predictive analytics shines. Using artificial intelligence to slice data and find patterns is what makes the field so exciting, Zelikovsky said.

Data Science Is Accessible Science 

A common misperception about predictive analytics is that it’s unattainable for most firms, Zelikovsky noted. With the right tools, the knowledge of how to use them, criteria for identifying and narrowing the problems to be solved, and the computing power to automate the process, any firm can conduct its own predictive analytics:

  • Firms conducting their own data science need to assess and understand the reliability and quality of the data they’ll be analyzing. 
  • If the data a firm has is unreliable or missing certain aspects, it needs to be able to identify the missing pieces and purchase proxies or re-segment in a way that accounts for the gaps. Data from a CRM system, for example, might need to be supplemented with data purchased from a privately-gathered source to unlock the full potential of a firm’s data pool (see the sketch after this list).
  • Firms executing their own predictive analytics need to settle on the problem they’re trying to solve. Best practice dictates developing a solutions workflow highlighting which predictions are most helpful.
  • Finally, firms may begin creating their own datasets, so they will need to design their data collection to facilitate future predictive efforts at scale.
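
As a sketch of the gap-filling idea in the second bullet, internal CRM records can be joined to a purchased or proxy dataset on a shared attribute before any modeling begins. Both tables, their columns and the zip-code join key are hypothetical.

```python
# Hedged sketch: enrich an incomplete CRM extract with purchased/proxy data.
import pandas as pd

crm = pd.DataFrame({
    "client_id": [1, 2, 3],
    "zip_code":  ["07030", "10003", "94105"],
    "aum":       [250_000, 1_200_000, None],   # incomplete internal data
})

purchased = pd.DataFrame({
    "zip_code":       ["07030", "10003", "94105"],
    "median_income":  [92_000, 88_000, 135_000],
    "homeowner_rate": [0.41, 0.22, 0.35],
})

# Supplement the CRM extract with the proxy attributes before modeling
enriched = crm.merge(purchased, on="zip_code", how="left")
print(enriched)
```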

Ultimately, successful predictive analytics is a decision-making activity. Results from one stage of analysis are used to inform another, turning raw material into proprietary insights and powerful strategies. 
