While some use cases have already been implemented, firms clearly intend to broaden how they leverage this technology. However, adopting and benefiting from Generative AI in financial services poses challenges that companies are still grappling with.
Use case identification
Generative AI for financial services is exciting, but most firms have very limited experience with it (if any). The temptation is to explore the technology first, then investigate what problems it could help solve, as was the case with machine learning a few years ago. In reality, it is much more effective when a company identifies a problem first, then determines how to utilize AI to solve that specific problem.
At this stage (and for this industry), GenAI has proven effective at accelerating information retrieval. So, identifying the information you want to retrieve in a faster and more organized fashion is a great start. This type of use case naturally extends to others such as Internal Knowledge Management, Investment Compliance, Research, and Due Diligence Questionnaire (DDQ) analysis.
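The retrieval pattern behind these use cases can be sketched in a few lines: score internal documents against a user's query and surface the best matches (which would then typically be passed to an LLM as context). The document names, contents, and word-overlap scoring below are illustrative assumptions, not a production retrieval scheme.

```python
# Toy sketch of the information-retrieval pattern behind use cases such as
# internal knowledge management or DDQ analysis: rank internal documents
# against a query, then hand the top matches to an LLM as context.

def score(query: str, document: str) -> int:
    """Count how many query terms appear in the document (case-insensitive)."""
    doc_words = document.lower().split()
    return sum(1 for term in query.lower().split() if term in doc_words)

def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the names of the top_k highest-scoring documents."""
    ranked = sorted(documents, key=lambda name: score(query, documents[name]),
                    reverse=True)
    return ranked[:top_k]

docs = {
    "compliance_manual": "investment compliance rules and restricted securities list",
    "hr_handbook": "vacation policy and benefits enrollment",
    "ddq_2023": "due diligence questionnaire responses for fund operations",
}
print(retrieve("investment compliance restrictions", docs))
```

A real deployment would replace the word-overlap score with embedding similarity, but the shape of the pipeline — query in, ranked internal documents out — stays the same.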
Privacy and Data Security
One of the most prominent challenges is keeping data private and secure. Data privacy is among the highest priorities for financial services companies, and they go to great lengths and expense to secure their own data and that of their clients. With many public AI solutions, the data companies and their employees feed into a model is no longer within their locus of control, as in this recent Samsung case. With ChatGPT, for example, the data goes to OpenAI, a third-party vendor.
Here are potential ways to address the issue of security:
- Leveraging an open-source LLM and fine-tuning it so that the data utilized by the model is not shared with external third-party environments.
- Configuring a private link if using a public cloud environment to connect resources and disabling all public access to the data.
- Building an Extract, Transform, Load (ETL) pipeline and automating all the data-feed jobs in the production environment to avoid exposing data to unauthorized persons.
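One concrete safeguard a firm might build into such an ETL pipeline is a redaction step that masks client identifiers before any text leaves the firm's environment. The patterns below (an email matcher and an assumed 8–12 digit account-number format) are simplified illustrations, not a complete PII catalogue.

```python
import re

# Illustrative ETL safeguard: mask obvious client identifiers before text
# is sent to any external model. The patterns are simplified assumptions,
# not an exhaustive PII catalogue.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ACCOUNT": re.compile(r"\b\d{8,12}\b"),  # assumed account-number format
}

def redact(text: str) -> str:
    """Replace each matched identifier with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Client jane.doe@example.com moved funds from account 123456789."))
```

Running the redaction inside the automated pipeline, rather than relying on individual users, is what keeps the control systematic.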
Infrastructure
As with other new technologies, firms must figure out the infrastructure they need to deploy Generative AI. Cloud providers such as AWS (Amazon Web Services) and Microsoft Azure now provide easy access to GPU computational power, which can be deployed with the click of a button. However, there are other aspects of the infrastructure to consider, such as:
- Identifying the right sizing and GPU version, from both a computational-requirement and a cost perspective. For example, a simple Q&A engine that only responds Yes or No does not usually require the power of an Nvidia A100 with 80 GB of GPU memory.
- If using open-source LLMs, there can be CUDA version dependencies that must match the GPU drivers or Python libraries, which need to be addressed at configuration time. CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements.
- Designing ETL modules like Azure Data Factory (ADF) based on the data characteristics (structured, semi-structured or unstructured), frequency of updates, and size of the data.
- Building API connectivity, through either a REST or GraphQL architecture, that accepts the input request and returns a relevant response in a timely manner.
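For the sizing question above, a useful first filter is a back-of-envelope estimate of how much GPU memory the model weights alone require: roughly parameter count times bytes per parameter. This is a lower bound under stated assumptions — it ignores activations and the KV cache — but it quickly shows when an 80 GB A100 is overkill.

```python
# Back-of-envelope GPU memory sizing for LLM inference: model weights need
# roughly (parameter count x bytes per parameter). This excludes activations
# and KV cache, so treat it as a lower bound, not a full sizing exercise.

def weight_memory_gb(params_billions: float, bytes_per_param: int) -> float:
    """Memory for model weights in GB (decimal), given precision in bytes."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for params, precision, nbytes in [(7, "fp16", 2), (13, "fp16", 2), (7, "int8", 1)]:
    print(f"{params}B @ {precision}: ~{weight_memory_gb(params, nbytes):.0f} GB")
```

By this estimate, a 7B-parameter model quantized to int8 fits comfortably on far cheaper hardware than an 80 GB A100, which is the kind of right-sizing the bullet above calls for.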
Operationalizing your GenAI solution
The release of ChatGPT has prompted many financial organizations to increase their AI spend, specifically on Generative AI, as highlighted in this Gartner poll.
However, they still must operationalize their GenAI pilots to achieve the desired ROI. Automating data feeds and enabling secure storage for ingestion and inference by LLMs is no easy feat. In addition, maintaining reasonable performance when multiple users are accessing the application requires design consideration of load balancing and prompt engineering. These deployments require building pipelines for model installation, data processing, and inference, while ensuring the solution's scalability and extensibility.
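The load-balancing concern mentioned above can be sketched with the simplest possible policy: rotating incoming requests across several model replicas so concurrent users do not all queue on one instance. The replica names are hypothetical, and real deployments would typically let a gateway or service mesh do this rather than application code.

```python
import itertools

# Minimal sketch of round-robin load balancing across hypothetical LLM
# replicas, so concurrent requests are spread rather than queued on one
# instance. Real systems usually delegate this to a gateway or service mesh.

class RoundRobinRouter:
    def __init__(self, replicas: list[str]):
        self._cycle = itertools.cycle(replicas)

    def route(self) -> str:
        """Assign the next replica in rotation to the incoming request."""
        return next(self._cycle)

router = RoundRobinRouter(["llm-replica-1", "llm-replica-2", "llm-replica-3"])
assignments = [router.route() for _ in range(4)]
print(assignments)
```

Even this toy version makes the design trade-off visible: round-robin is trivial to operate but ignores per-request cost, which varies widely with prompt length — one reason prompt engineering and load balancing are listed together.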
While these challenges can complicate AI adoption, the potential benefits of modernizing operations with AI are too tempting to pass up. Fortunately, companies can take steps to mitigate the downside risks.