SAS’s Stephen Greer on navigating the challenges of generative AI in financial services


The SAS team exhibited at Money20/20 USA and focused on how generative AI is reshaping the financial services industry

The firm’s banking industry advisor presented a session at the Microsoft Lounge at Money20/20 USA on approaches to problem solving with generative AI

Alice Chambers


There are several key differences between using traditional and generative artificial intelligence in the financial services industry, which lead to potential problems for banks. Stephen Greer, banking industry advisor at SAS, explained some of these challenges and how to solve them in a session called ‘Approaches to Problem Solving with Generative AI in Financial Services’ in the Microsoft Lounge at Money20/20 USA.

First, Greer explained how SAS is a Microsoft partner that aims to help everyone from data scientists and developers to business users put data to work for informed business decision-making.

“We do that through three layers of our application stack,” said Greer. “The platform, which is essentially the chassis for analytics, a lot of infrastructure sits here. Then the analytics capabilities themselves provide the functionality, automation and various building blocks to solve different problems. On top of both of those we have the solutions for complex business issues.”

Greer also highlighted that decisions in financial services are the output of a combination of predictive models, decision rules, and human reasoning and judgement.

“For us AI is essentially a system to support and accelerate human decisions and actions,” he said. “AI won’t replace humans but humans with AI will replace humans without AI.”

And with generative AI, financial services providers are working with a type of creativity they aren’t used to.

“Traditional AI and machine learning learn from input data and map specific inputs to outputs,” said Greer. “They are generally trained to solve specific tasks with well-defined inputs and outputs. Generative AI learns the distribution of the data, or how it’s structured, so that it can reproduce an output that looks identical to the data it’s trained on but doesn’t follow directly from it. As a result, you often get some low-level judgements being made about the output itself, or what appears to be creativity. And this moves us from a world of deterministic outcomes to more probabilistic outcomes.”
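Purely as an illustration of the shift Greer describes (this is not SAS or Microsoft code; the function names traditional_score and generative_reply are invented for the example), a traditional model can be thought of as a fixed mapping from inputs to a decision, while a generative model samples from a learned distribution, so the same prompt can return different outputs:

```python
import random

# Illustrative sketch only: contrasting a deterministic, rule-style scoring
# model with a probabilistic, generative-style sampler.

def traditional_score(income, debt):
    """Traditional model: a well-defined input always maps to the same output."""
    ratio = debt / income if income else float("inf")
    return "approve" if ratio < 0.4 else "refer"

def generative_reply(prompt, seed=None):
    """Generative-style model: samples from a distribution of plausible
    responses, so the same prompt can yield different outputs."""
    rng = random.Random(seed)
    templates = [
        "Based on your debt-to-income ratio, you may qualify for this product.",
        "Your profile suggests a referral to an advisor before approval.",
        "Additional documentation could strengthen this application.",
    ]
    return rng.choice(templates)

if __name__ == "__main__":
    # Deterministic: identical inputs produce an identical decision every run.
    print(traditional_score(income=60000, debt=18000))  # always "approve"

    # Probabilistic: the same prompt can produce different outputs, which is
    # why the outputs themselves need monitoring and governance.
    for _ in range(3):
        print(generative_reply("Summarise this applicant's position."))
```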

SAS has observed that discussions about AI in the financial services industry often become narrowly focused quite quickly.

“Generative AI is treated as the solution to any problem,” said Greer. “We see this a lot with clients, asking ‘why can’t I throw AI at it?’. But within our client base we are seeing limitations begin to form, such as hallucinations that require significant monitoring, governance and reporting, pre-processing data for better performance, and bias impacting results.”

These are just a few of the complexities caused by AI when used for decision-making.

“AI to some raises serious red flags, depending on the organisational readiness, risk tolerance, or definition of ‘AI’,” said Greer. “There are still some associated legal and reputation risks or compliance concerns. Some organisations have naturally taken a more cautious approach to this due to regulatory scrutiny, for example during an audit or safety and soundness exam.”

Before implementing generative AI tools, then, firms should assess the required level of trust and the impact on business processes, with a strong emphasis on managing risk.

“At the end of the day, we are in the trust business,” said Greer. “And we need to trust that our AI models are not inadvertently opening our organisations up to significant risk.”

Discover more news from Money20/20 on our dedicated landing page.


