Overview
Traditional AI and Generative AI Today
Artificial intelligence has been in the public consciousness for decades. But what once seemed an abstract concept became concrete with the introduction of Generative AI (GenAI) through ChatGPT, released in November 2022 by OpenAI. ChatGPT democratized AI for consumers by making its capabilities more accessible than ever before and showing instant results that people could directly apply for individual benefit. The attitude of many, including those in the financial services industry, is that AI isn't just for data scientists now, but for anyone with the curiosity to use it.
Generative AI is different from traditional AI and machine learning (ML), which use algorithms to parse large amounts of labelled data, iteratively improving over time to make informed predictions for the type of problem presented. GenAI builds on this foundation and goes a step further, turning data inputs into new, original content and even new training data.
Firms are now exploring ways to leverage the technology, but many are still worried about the possible risks. Let’s explore how organizations can reap the benefits of Generative AI for financial services while avoiding its pitfalls.
3 Financial Services Use Cases
While its future uses could be virtually endless, the financial services industry has primarily used GenAI in three ways up to this point.
1. Q&A Engine on Internal Knowledge Base
Companies are combining their own internal knowledge bases with GenAI to construct Q&A engines that quickly analyze data sets (structured or unstructured). Organizations are using AI as a sort of internal chatbot (think of a proprietary Siri), asking questions and getting calculations, analysis, and answers in a fraction of the time it previously took.
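To make the mechanics concrete, here is a minimal, runnable sketch of the retrieval step behind such a Q&A engine. The knowledge-base entries and the keyword-overlap scoring are purely illustrative; a production system would use an embedding model and a vector store instead:

```python
# Toy retrieval-augmented Q&A flow: find the most relevant knowledge-base
# chunks, then assemble the prompt an LLM would receive.

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k chunks most relevant to the question (keyword overlap)."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the prompt: retrieved context first, then the question."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\nContext:\n{ctx}\nQuestion: {question}"

kb = [
    "The fund settlement cycle for US equities is T+1.",
    "Client onboarding requires two forms of identification.",
    "Quarterly reports are distributed within 10 business days of quarter end.",
]
question = "What is the settlement cycle for US equities?"
prompt = build_prompt(question, retrieve(question, kb))
```

The same skeleton works for any internal data set: only the chunking and scoring change as the corpus grows.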
2. Summarization of Investment Portfolios
Firms are also using AI to ingest multiple pieces of information and summarize them. For example, a firm can upload a client or investment portfolio and quickly create a client report based on the data (think Know Your Customer, or KYC). These reports don't need to rely solely on internal data: firms can feed in external links (e.g., news articles) and use that information to create a more robust summary. Companies that want to combine tabular data and text to summarize a client or investment in business terms now have access to tools that can perform that task.
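The key preparatory step is flattening tabular holdings and external text into one prompt. The sketch below shows that step with hypothetical field names (`ticker`, `weight`, `sector`); the actual summarization would then be done by the LLM:

```python
# Combine tabular portfolio data and external news into one summarization prompt.

def portfolio_to_text(holdings: list[dict]) -> str:
    """Render tabular holdings as plain-text lines an LLM can summarize."""
    return "\n".join(
        f"{h['ticker']}: {h['weight']:.1%} weight, {h['sector']}" for h in holdings
    )

def summarization_prompt(holdings: list[dict], news: list[str]) -> str:
    """Build the full prompt: internal data first, then external context."""
    return (
        "Summarize this client portfolio in business terms.\n"
        f"Holdings:\n{portfolio_to_text(holdings)}\n"
        "Recent news:\n" + "\n".join(f"- {n}" for n in news)
    )

holdings = [
    {"ticker": "AAPL", "weight": 0.35, "sector": "Technology"},
    {"ticker": "JPM", "weight": 0.25, "sector": "Financials"},
]
prompt = summarization_prompt(holdings, ["AAPL announced a new product line."])
```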
3. Enhanced Compliance and Due Diligence
Certain types of investment portfolios adhere to an investment mandate that determines which investments are allowed and which are prohibited. Generative AI models, when tuned with client agreements, regulatory mandates, and investment controls, offer financial firms an instant ability to query potential investment portfolios for compliance. This eliminates waiting for manual checks, streamlining the question: “What rules or clients are impacted by this change?”
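As an illustration of the kind of check such a system automates, here is a hand-coded sketch. The mandate fields and rule logic are hypothetical; in practice the rules would be derived from client agreements, regulatory mandates, and investment controls:

```python
# Toy mandate-compliance check: return violation messages for a portfolio.

def check_compliance(portfolio: list[dict], mandate: dict) -> list[str]:
    """Return a list of violation messages; an empty list means compliant."""
    violations = []
    for holding in portfolio:
        if holding["sector"] in mandate.get("prohibited_sectors", []):
            violations.append(
                f"{holding['ticker']}: sector {holding['sector']} is prohibited"
            )
        if holding["weight"] > mandate.get("max_single_weight", 1.0):
            violations.append(
                f"{holding['ticker']}: weight {holding['weight']:.0%} exceeds cap"
            )
    return violations

mandate = {"prohibited_sectors": ["Tobacco"], "max_single_weight": 0.10}
portfolio = [
    {"ticker": "MO", "sector": "Tobacco", "weight": 0.05},
    {"ticker": "MSFT", "sector": "Technology", "weight": 0.15},
]
issues = check_compliance(portfolio, mandate)
```

The GenAI layer's value is in translating free-text agreements into rules like these and answering "what rules or clients are impacted?" conversationally.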
Using Large Language Models (LLMs) for software development is also being explored widely with tools such as GitHub Copilot. While these use cases show clear GenAI benefits for organizations, implementing the technology comes with its fair share of challenges.
What Are the Challenges to Implementing GenAI?
While some use cases have already been implemented, firms fully intend to broaden the scope of how they leverage this latest technology. However, utilizing and benefitting from Generative AI for financial services poses certain challenges that companies are still grappling with.
Use case identification
Generative AI for financial services is exciting, but most firms have very limited experience with it (if any). The temptation is to explore the technology first, then investigate what problems it could help solve, as was the case with machine learning a few years ago. In reality, it is much more effective when a company identifies a problem first, then determines how to utilize AI to solve that specific problem.
At this stage (and for this industry), GenAI has proven effective for accelerating information retrieval. So, identifying the information you want to retrieve in a faster and more organized fashion is a great start. This type of use case naturally extends to others such as internal knowledge management, investment compliance, research, and Due Diligence Questionnaire (DDQ) analysis.
Privacy and Data Security
One of the most prominent challenges is keeping data private and secure. Obviously, data privacy is one of the highest priorities for financial services companies, and they go to great lengths and expense to ensure the security of their data and that of their clients. With many public AI solutions, the data that companies and their employees feed into a model is no longer within their locus of control, as a recent Samsung case showed. In the example of ChatGPT, the data goes to OpenAI, a third-party vendor.
Here are potential ways to address the issue of security:
- Leveraging an open-source LLM and fine-tuning it so that the data utilized by the model is not shared with external third-party environments.
- Configuring a private link if using a public cloud environment to connect resources and disabling all public access to the data.
- Building an Extract, Transform, Load (ETL) pipeline and automating all the data feed jobs in the production environment to avoid data exposure to unauthorized persons.
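The ETL idea in the last point can be sketched in a few lines. This is a toy pipeline with hypothetical record shapes; a real feed would add scheduling, auditing, and access controls around each stage:

```python
# Minimal ETL sketch: each stage is a pure function, so the automated feed
# can be tested and audited without exposing raw data broadly.

def extract(raw_records: list[str]) -> list[dict]:
    """Parse raw 'ticker,price' lines into records."""
    return [dict(zip(("ticker", "price"), line.split(","))) for line in raw_records]

def transform(records: list[dict]) -> list[dict]:
    """Type-cast and normalize fields before they reach the model environment."""
    return [{"ticker": r["ticker"].upper(), "price": float(r["price"])} for r in records]

def load(records: list[dict], store: dict) -> None:
    """Write into the private store the LLM is allowed to read from."""
    for r in records:
        store[r["ticker"]] = r["price"]

store: dict = {}
load(transform(extract(["aapl,189.5", "jpm,201.1"])), store)
```

Keeping the stages separate is what lets the production jobs run unattended while limiting which people and systems ever touch the raw data.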
As with other new technologies, firms must figure out the infrastructure they need to deploy Generative AI. Cloud providers such as Amazon Web Services (AWS) and Microsoft Azure now provide easy access to GPU computational power, which can be deployed at the click of a button. However, there are other aspects of the infrastructure to consider, such as:
- Identifying the right size and GPU version, from both a computational-requirement and a cost perspective. For example, a simple Q&A engine that only responds Yes or No does not usually require an NVIDIA A100 with 80 GB of GPU memory.
- If using open-source LLMs, there may be CUDA version dependencies on the GPU drivers or Python libraries, which need to be addressed at configuration time. (CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements.)
- Designing ETL modules (e.g., in Azure Data Factory, or ADF) based on the data characteristics (structured, semi-structured, or unstructured), the frequency of updates, and the size of the data.
- Building API connectivity, through either a REST or GraphQL architecture, that accepts input requests and provides a relevant response in a timely manner.
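Framework aside, the API connectivity in the last point boils down to a request/response contract. The sketch below shows one hypothetical shape for it (field names and the model placeholder are illustrative); a REST layer such as FastAPI would wrap the handler:

```python
import json

# Sketch of an inference API contract, independent of the web framework:
# accept a JSON request with a 'question' field, return a JSON response.

def handle(request_body: str) -> str:
    """Validate the request and return a JSON-encoded response."""
    try:
        payload = json.loads(request_body)
        question = payload["question"]
    except (json.JSONDecodeError, KeyError):
        return json.dumps({"error": "request must be JSON with a 'question' field"})
    # Placeholder for the actual LLM call.
    answer = f"(model answer to: {question})"
    return json.dumps({"answer": answer, "model": "internal-llm-v1"})

resp = json.loads(handle('{"question": "Is this portfolio compliant?"}'))
```

Keeping the handler separate from the web framework makes it easy to unit-test the contract and to swap REST for GraphQL later.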
Operationalizing your GenAI solution
The release of ChatGPT has prompted many financial organizations to increase their AI spend, specifically on Generative AI, as highlighted in a recent Gartner poll.
However, they still must operationalize their GenAI pilots to achieve the desired ROI. Automating data feeds and enabling secure storage for ingestion and inference by LLMs are no easy feats. In addition, maintaining a reasonable level of performance when multiple users access the application requires design considerations such as load balancing and prompt engineering. These deployments involve building pipelines for model installation, data processing, and inference, while ensuring the scalability and extensibility of the solution.
While these challenges can complicate AI adoption, the potential benefits of modernizing operations with AI are too tempting to pass up. Fortunately, companies can take steps to mitigate the downside risks.
How Can Companies Mitigate Risk and Implement Generative AI Effectively?
Each of these challenges can be overcome with the careful and strategic deployment of trusted solutions. The key is understanding where GenAI is a more relevant and useful solution than traditional AI. Even if firms have developed ML applications, creating and implementing newer Generative AI solutions using LLMs is quite different. So, identifying the right use case for GenAI deployment is critical, or else there is a risk of destroying value.
That’s where Linedata can step in. Our team has been working on AI solutions, both traditional and now generative, and has been developing private models for the financial services industry since 2019. With 25 years of experience working to improve the investment operations of global asset managers, we’re intimately familiar with the unique challenges of the industry and have developed secure solutions that solve real business problems for clients without additional infrastructure investment.
Start with a Proof of Concept
We start by engaging our clients on a proof of concept, evaluating their business needs, and understanding their desired outcome. After the scope is identified, we work with them to source relevant data and agree on the performance evaluation criteria.
Clients are fully engaged during this process, to ensure we are applying the right approach alongside operational and AI expertise, to solve the problem and deliver timely PoC results. Through this process, we are able to demonstrate measurable benefits and ROI in a limited engagement. Once clients see the initial results of the PoC, they can decide to experience similar benefits on a broader scale.
Ask the Right Questions
A crucial part of finding the ideal solution is asking the right business and technical questions from the outset:
- Starting with the business need, in which areas or tasks would you like your knowledge worker to be more efficient?
- What are the key metrics you’re tracking?
- What type of Large Language Model(s) should you be using?
- How do you structure the input data being fed into your GenAI model?
- How do you find the right embedding model that is contextually relevant?
- Should you apply prompt engineering, zero-shot, or few-shot learning or jump to fine-tuning?
- Most importantly, how do you build guardrails to ensure accurate data retrieval (i.e., no hallucination)?
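To make the prompt-engineering question concrete, here is a hypothetical few-shot prompt for the compliance use case. The worked examples (invented for illustration) steer the model toward short, justified yes/no answers, often a cheaper first step than fine-tuning:

```python
# Few-shot prompting sketch: prepend worked Q&A examples so the model
# mimics their format and reasoning style.

FEW_SHOT = """\
Q: A fund with a 5% tobacco holding under a no-tobacco mandate. Compliant?
A: No - tobacco holdings are prohibited by the mandate.
Q: A fund holding 8% in a single issuer under a 10% single-issuer cap. Compliant?
A: Yes - 8% is within the 10% cap.
"""

def few_shot_prompt(question: str) -> str:
    """Append the new question after the worked examples."""
    return FEW_SHOT + f"Q: {question}\nA:"

prompt = few_shot_prompt("A fund with 12% in one issuer under a 10% cap. Compliant?")
```

If few-shot answers are still unreliable, that is the signal to consider fine-tuning or retrieval guardrails.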
These and many more questions need to be answered, including those about the right type of GPU capacity to ensure the performance and accuracy of your LLMs. With new tools and APIs being released in the market almost every week, it can be daunting for teams to figure out the best ways to get started.
Work with an experienced partner
At Linedata, we help you find the answers to these questions, providing secure, effective Generative AI solutions that produce a rapid time to value. And because we are financial services experts, our solutions are specifically tailored to meet your industry-specific needs.
About the authors
Selvan Gnanakumaran is a senior analytics consultant with Linedata Analytics Service. He has a keen eye for researching and applying the latest AI and machine learning technologies to develop practical operational use cases and provide data-driven insights to financial services clients. Outside the world of AI and data, he enjoys writing about ideas and reading fantasy.
Aditya Khaire holds a master’s degree in Artificial Intelligence (AI) from the Australian National University and works as Lead Data Scientist with the Linedata Analytics Service team. He manages a team of data science engineers working on machine learning and GenAI projects. He is responsible for building ML architecture to train and evaluate ML algorithms and for implementing model inference in the cloud infrastructure.