The New Table Stakes: Enterprise LLMs in Wealth Management

AI remains top of mind for many wealth management industry leaders, and the sector shows no signs of slowing down

This year, we’ve reached a tipping point for generative AI: it has finally become accurate, useful, and accessible enough to be deployed across the wealth management industry. Oliver Wyman predicts that generative AI can produce productivity gains of approximately 30-40% in sales and customer service, 25-35% in product development, 30% in investment and research, and anywhere from 20-50% in middle- and back-office functions.

Generative AI models have become table stakes almost overnight, yet open-access versions remain insecure for private data, inconsistent in source and design, and prone to producing inaccurate information. Craig Iskowitz, CEO of Ezra Group, recently spoke at the Tiburon CEO Summit about the expanding powers of generative AI and how it can be most effectively deployed by firms across the wealth management industry.

Building a (Safe) Super-Employee

Large Language Models (LLMs) have gained significant attention in recent years for their remarkable ability to generate human-like responses and assist in various tasks. However, concerns about privacy and data security have also emerged, causing many firms to block usage.

JPMorgan, Citigroup, Bank of America, Deutsche Bank, and Wells Fargo are among the financial services giants who have reportedly barred employees from using ChatGPT.

Enter enterprise LLMs. 

Enterprise LLMs are designed to address concerns about misuse or mishandling of sensitive information by prioritizing user privacy and data security. They use encryption and advanced privacy techniques to ensure that user data remains confidential and inaccessible to unauthorized parties. These models minimize the need to store sensitive data on external servers, reducing the risk of data breaches and unauthorized access.

Simply put, an enterprise LLM is a company’s private version of ChatGPT. Companies are already beginning to build out their own large language models. Unlike ChatGPT, which is trained on whatever it can access off the Web, an enterprise LLM is built on the organization’s own proprietary data.

An enterprise LLM can be summarized by a few key tenets:

  • It is not open to the public.
  • It is hosted inside of the company’s infrastructure alongside other business workloads.
  • It is trained only on data owned by or specific to the company and/or the products it makes. 
  • It provides contextual information only to parties that have authorized access.

An enterprise LLM will be a comprehensive repository, trained on the entirety of a firm’s data, including, but not limited to, internal policies and procedures, market research, economic analyses, and even internal communications like emails and documents. In essence, the enterprise LLM becomes the smartest employee, able to respond accurately to any question about the company. 
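To make those tenets concrete, here is a rough sketch of how an internally hosted assistant might ground its answers in the firm’s own documents while enforcing authorized access. Everything in it (the toy document store, the call_private_llm stand-in) is a hypothetical illustration, not any particular vendor’s product.

```python
# Hypothetical sketch of an enterprise LLM workflow: a privately hosted model that
# answers only from the firm's own documents, and only for authorized users.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: frozenset  # roles permitted to read this document

# Toy internal corpus; in practice this would be the firm's full document index.
INTERNAL_DOCS = [
    Document("policy-017", "Debit card replacements require form DC-201.", frozenset({"advisor", "ops"})),
    Document("comp-plan", "2024 advisor compensation grid ...", frozenset({"management"})),
]

def call_private_llm(prompt: str) -> str:
    # Stand-in for a model hosted inside the firm's own infrastructure.
    return f"[answer grounded only in the supplied internal context; prompt was {len(prompt)} chars]"

def answer_question(question: str, user_role: str) -> str:
    # 1. Retrieve only documents this user is authorized to see; never the public web.
    permitted = [d for d in INTERNAL_DOCS if user_role in d.allowed_roles]
    if not permitted:
        return "No authorized internal sources are available for this question."

    # 2. Ground the private model in those documents only.
    context = "\n\n".join(d.text for d in permitted)
    prompt = (
        "Answer using only the internal context below. If it is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_private_llm(prompt)

print(answer_question("How do I replace a client's debit card?", user_role="advisor"))
```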

A recent survey by MIT found that white-collar employees can see productivity increases of up to 40% when using generative AI tools. No company will be able to stay in business without these tools. Therefore, we believe that in a few years, every large company will have to build out its own enterprise LLM in order to remain competitive.  

One of the most interesting findings from the study was that employees reported that using ChatGPT increased job satisfaction and self-efficacy. The study raises important questions about the impact of these technologies on workers’ happiness in their jobs.

Businesses across industries are seeing strong increases in worker efficiency after deploying generative AI technology like ChatGPT, which can directly improve their bottom line. The MIT survey also reported that chat-based generative AI can compress the productivity distribution, increasing the output of lower-skilled workers more than that of higher-skilled ones. This narrows disparities in the workplace and promotes a more equal and fair environment.

Novel AI Applications

Another use case for enterprise LLMs is as a customer service agent chatbot. 

Most broker-dealers and large RIAs have a help desk to provide technology support to advisors and clients. This can be a large expense to maintain, especially since, as deployed software increases in sophistication, the number of calls for assistance increases with it.

Many of the most common questions asked by advisors could be easily answered by a private, ChatGPT-style assistant built on a wealth management firm’s enterprise LLM. These include how to access basic functions, complete operational tasks, and learn about new products, their processes, and sustainability requirements. 

I saw a proof-of-concept demo of a chatbot for a custodian in which the advisor typed, “My client (ID: XXXXXX) lost her debit card. How can I order a replacement?” The chatbot opened a window displaying the correct form, pre-filled with the client’s information. After the advisor approved the form, the system would initiate a DocuSign to the client. Very efficient!
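Here is roughly what that workflow might look like under the hood. Every function name below is a hypothetical stand-in; this is not the custodian’s actual system or a real e-signature API.

```python
# Illustrative sketch of a help-desk chatbot workflow: parse the request,
# pre-fill the right form, and queue it for e-signature after advisor approval.
import re

def lookup_client(client_id: str) -> dict:
    # Stand-in for a CRM or custodial record lookup.
    return {"id": client_id, "name": "Jane Sample", "account": "****1234"}

def handle_request(message: str) -> str:
    # 1. Pull the client ID out of the advisor's message.
    match = re.search(r"ID:\s*(\w+)", message)
    if not match:
        return "Please include the client ID so I can locate the account."
    client = lookup_client(match.group(1))

    # 2. Classify the intent (a production system would use the LLM for this step).
    if "debit card" in message.lower():
        form = {"form": "Debit Card Replacement", "client": client["name"], "account": client["account"]}
        # 3. Show the pre-filled form; on approval, send it to the client for e-signature.
        print(f"Pre-filled form ready for advisor review: {form}")
        return "Approve the form to send it to the client for e-signature."
    return "I couldn't match that request to a known workflow."

print(handle_request("My client (ID: XXXXXX) lost her debit card. How can I order a replacement?"))
```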

Two (or Three or Four) Heads Are Better Than One

Beyond privacy concerns, LLMs are also known to produce false information, or ‘hallucinations’: incorrect or completely fabricated results that sound reasonable. The AI will even defend its fake answers! Using multiple LLMs in conjunction is one option that could be a big step forward in reducing inaccuracy in AI responses. 

Recently, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) introduced a strategy that has multiple AI systems discuss and argue with each other to converge on a best-possible answer. The method pushes large models to heighten their adherence to factual data and refine their decision-making, teaching them to recognize and resolve their own errors. 
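A simplified sketch of that debate loop is below. It is not the CSAIL implementation; ask_model is a placeholder for whatever LLM API each agent actually calls.

```python
# Sketch of multi-model debate: each agent answers, reads the other agents'
# answers, and revises over a few rounds. ask_model() is a toy stand-in for a real LLM call.

def ask_model(agent_id: int, prompt: str) -> str:
    # Placeholder for an API call to this agent's underlying model.
    return f"[agent {agent_id}'s answer to: {prompt[:60]}...]"

def debate(question: str, num_agents: int = 3, rounds: int = 2) -> list:
    # Round 0: every agent answers independently.
    answers = [ask_model(i, question) for i in range(num_agents)]

    # Later rounds: each agent sees the others' answers, critiques them, and revises its own.
    for _ in range(rounds):
        revised = []
        for i in range(num_agents):
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (
                f"Question: {question}\n"
                f"Other agents answered:\n{others}\n"
                "Point out any errors, then give your updated answer."
            )
            revised.append(ask_model(i, prompt))
        answers = revised
    return answers  # in practice, a final vote or judge model picks the consensus answer

print(debate("What is the penalty for a missed required minimum distribution?"))
```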

The AI Coordination Layer

As many vendors in the fintech space rush to deliver AI features, they are shipping new products very quickly, which, while a good thing on the whole, allows some structural elements to slip through the cracks. In most cases, companies have no control over the large language models they are using, including their core functionality and where or how the AI was trained. Each vendor is different, with a unique approach to training, core functionality, and model architecture. As companies grow and incorporate more products, they run the risk of having too many different sources for their generative AI. 

The key to the next phase of AI prowess is control. Firms want a system they can tailor to their specific needs, feeding information into other systems provided by vendors without compromising the security of their data. Instead of sending that data to another company’s cloud and LLM and having results fed back, they want to keep it all internal. One solution is an AI Control Layer (AICL): a framework that allows firms to maintain control over their generative AI chatbots and private LLMs, dictating how information is shared with external vendors.
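As a rough illustration of what such a control layer could do, the sketch below routes prompts containing client data to an internal model and redacts anything sensitive before a prompt is allowed out to a vendor model. The function names and redaction rules are assumptions for illustration, not a description of any specific AICL product.

```python
# Hypothetical AI control layer: the firm decides what leaves its walls before
# any prompt reaches an external vendor's model.
import re

SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),      # US Social Security numbers
    (re.compile(r"\b\d{8,12}\b"), "[REDACTED-ACCOUNT]"),           # account-number-like digit runs
]

def redact(text: str) -> str:
    # Strip sensitive identifiers before anything is sent outside the firm.
    for pattern, replacement in SENSITIVE_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

def call_internal_llm(prompt: str) -> str:
    return f"[internal model response to: {prompt[:50]}...]"   # stays inside the firm

def call_vendor_llm(prompt: str) -> str:
    return f"[vendor model response to: {prompt[:50]}...]"     # leaves the firm's infrastructure

def route(prompt: str, contains_client_data: bool) -> str:
    if contains_client_data:
        # Client data never leaves the firm: answer with the internal model.
        return call_internal_llm(prompt)
    # Otherwise, redact defensively and let a vendor model handle it.
    return call_vendor_llm(redact(prompt))

print(route("Summarize performance for account 123456789, SSN 123-45-6789.", contains_client_data=True))
print(route("Draft a generic market commentary for Q3.", contains_client_data=False))
```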

AI on AI

In the end, the next phase of AI prowess is not just about embracing innovation but about wielding it strategically. As the industry pivots towards a future filled with thousands of language models, firms that master the art of control alongside innovation will be the ones leading the charge into a new era of wealth management technology.
