Treasury and FSOC Sharpen Focus on Risks of AI in the Financial Sector

On June 6-7, 2024, the Financial Stability Oversight Council (FSOC or the Council) cosponsored a conference on AI and financial stability with the Brookings Institution (the FSOC Conference).  The conference was billed as “an opportunity for the public and private sectors to convene to discuss potential systemic risks posed by AI in financial services, to explore the balance between encouraging innovation and mitigating risks, and to share insights on effective oversight of AI-related risks to financial stability.”  The FSOC Conference featured noteworthy speeches by Secretary of the Treasury Janet Yellen (who chairs the Council) and Acting Comptroller of the Currency Michael Hsu.  And in a further sign of increased regulatory focus on AI in the financial industry, the Treasury Department released a request for information on the Uses, Opportunities, and Risks of Artificial Intelligence (AI) in the Financial Services Sector (the AI RFI) during the conference – its most recent, and most comprehensive, effort to understand how AI is being used in the financial industry.

In this blog post, we first summarize the key questions raised and topics addressed in the AI RFI.  We then summarize the key takeaways from FSOC’s conference on AI and discuss how these developments fit within the broader context of actions taken by the federal financial regulators in the AI space. Lastly, we lay out takeaways and the path ahead for financial institutions as they continue to navigate the rapid development of AI technology.

The AI RFI

The AI RFI includes 19 multi-part questions on a broad range of topics related to AI and the financial services sector. In issuing the request, Treasury explained in its AI RFI press release that it seeks “a broad range of perspectives” on the topics and questions broached in the AI RFI, and is “particularly interested in understanding how AI innovations can help promote a financial system that delivers inclusive and equitable access to financial services.” 

Specifically, Treasury seeks comments on “the latest developments in AI technologies and applications, including but not limited to advancements in existing AI (e.g., machine learning models that learn from data and automatically adapt and improve with minimal human interference, rather than relying on explicit programming) and emerging AI technologies, including deep learning neural networks such as generative AI and large language models (LLMs).”  Treasury also seeks to understand how AI is being used in the financial services sector, including in the provision of products and services, risk management, internal operations, and customer service, as well as the potential types of opportunities and risks posed by the technologies.  Treasury specifically identified “bias, discrimination, monoculture, concentration, fraud, herding, hallucinations, lack of explainability, conflicts, reputational risk, and data privacy risks, among others” as potential risks associated with AI and, more generally, flagged the potential ways in which AI may be used to perpetrate cybercrimes or contribute to job displacement.

The questions in the AI RFI are divided into Parts A, B, and C.  Questions in Part A solicit commenters’ responses on the uses of AI, including use cases, types of models being employed, and variability in use and access to AI across financial institutions.  Questions in Part B then focus on opportunities and risks associated with institutions’ use of AI, how financial institutions are exploring AI’s potential benefits and managing its risks, and entities impacted by AI.  Finally, questions in Part C solicit input on potential further actions Treasury could take to advance responsible innovation and competition within the financial sector with respect to the usage of AI.

Written comments on the AI RFI are due on or before August 12, 2024. 

Takeaways from the FSOC Conference

AI Opportunities and Risks

In her keynote address, Treasury Secretary Yellen noted that AI offers “tremendous opportunities for the financial system” but warned of the significant risks it also poses.  On the positive impacts and possibilities of AI, Secretary Yellen noted that,

For many years, the predictive capabilities of AI have supported forecasting and portfolio management. AI’s ability to detect anomalies has contributed to efforts to combat fraud and illicit finance.  Many customer support services have been automated.  Across these and many other use cases, we’ve seen that AI, when used appropriately, can improve efficiency, accuracy, and access to financial products.

She also noted that, “[m]ore recently, AI’s rapid evolution could mean additional use cases.  Advances in natural language processing, image recognition, and generative AI, for example, create new opportunities to make financial services less costly and easier to access.”

Acting Comptroller Hsu covered similar ground in his remarks, addressing both the opportunities and risks posed by AI and highlighting the risks its use may present to financial stability.  He cautioned banks to be wary of the potentially rapid escalation of AI-related risks.  Specifically, he noted that banks should establish safeguards, or “gates,” between each AI developmental stage as an essential means of ensuring that firms appropriately pause and consider the role and usage of AI before that role is expanded.  The gates proposed by Hsu recall the machine learning concept of “putting the human in the loop” (such that humans interact with, and iterate on, a machine learning model’s development).

Frameworks for AI

Treasury Secretary Yellen identified FSOC’s new Analytic Framework, published in November 2023, as providing critical insights into the range of risks that AI may pose to the financial system.  These include specific vulnerabilities due to the complexity and opacity of AI models, inadequate risk management frameworks, and interconnections that emerge as many market participants rely on the same data sets and models.

Going beyond the existing FSOC Framework, Acting Comptroller Hsu used his speech to propose a potential “shared responsibilities model” framework for AI to address fraud, scams, and ransomware attacks, similar to the one currently used in the cloud computing sector.  The cloud computing “shared responsibility model” framework allocates operations, maintenance, and security responsibilities to customers and cloud service providers, depending on the particular service a customer selects.  Hsu suggested instituting a similar framework for AI, with an infrastructure layer, a model layer, and an application layer making up the three components of the “AI stack.”  He also suggested that the recently established US Artificial Intelligence Safety Institute may be well positioned to coordinate the creation of such an AI framework.

Hsu admitted that questions remained regarding how a “shared responsibilities model” framework could be enforced.  He also noted that his proposed framework is not the only way forward, suggesting that other models warrant consideration, such as self-regulatory organizations (like the Financial Industry Regulatory Authority (FINRA)), networking membership organizations (like Nacha or The Clearing House), and split reimbursement liability (a model in use in the UK for authorized push payment fraud).  Regardless of the form such a framework takes, Hsu noted that a framework of some kind—as well as clear gates—can help mitigate risks posed by AI.

The AI RFI and FSOC Conference Signal Growing Interest in AI Risks Among Federal Financial Regulators 

Treasury issued its AI RFI and organized the FSOC Conference against the backdrop of a government-wide effort to understand the use and potential effects of AI in all sectors of the American economy, and amid increasing focus among financial regulators on the risks AI may pose to financial institutions, consumers of financial services, and financial stability itself.  The Biden Administration has repeatedly emphasized its commitment to “fostering innovation” while ensuring protection of consumers, investors, and the financial system from risks that AI applications may pose.  To that end, the White House Office of Management and Budget (OMB) issued its first government-wide policy to mitigate the risks of AI and harness its benefits on March 28, 2024 (the OMB Memo), building on President Biden’s landmark October 30, 2023 AI Executive Order on the safe, secure, and trustworthy development and use of AI (the AI Executive Order).

Likewise, FSOC for the first time highlighted the use of AI in the financial services industry as a vulnerability in the US financial system in its latest annual report, published in December 2023.  In her address at the FSOC Conference, Treasury Secretary Yellen noted that “the Council and member agencies have been working to deepen our collective understanding of financial stability risks associated with AI, while also recognizing that AI can improve financial services.” 

Although the AI RFI is Treasury’s latest and most comprehensive effort of its kind, it is not the agency’s first attempt to understand the continuously evolving developments and uses of AI in the financial sector.  In November 2022, the agency explored the use of AI in a report examining the impact of non-bank firms on competition in consumer finance markets, and more recently issued an RFI including questions on the use of AI in connection with consumer financial services.  In March 2024, it also published a report detailing the opportunities and challenges that AI poses to the security and resiliency of the financial services sector, based on the Department’s extensive outreach on AI-related cybersecurity risks in the financial sector.  That report outlined a series of next steps that Treasury determined should be taken to address AI-related operational risk, cybersecurity issues, and fraud challenges, in response to the President’s AI Executive Order and consistent with the OMB Memo.

Even more recently, in May 2024 Treasury released its 2024 National Strategy for Combating Terrorist and Other Illicit Financing.  As part of its 2024 strategy, Treasury noted that it will continue to expand the use of data analytics and AI in the “government’s efforts to detect and disrupt illicit finance,” as these technologies “play an increasingly important role in informing policymakers of illicit finance threats and vulnerabilities.”

Treasury is not alone in taking steps to better understand the risks and benefits posed by AI in the financial sector.  Notably, in March 2021, the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and the National Credit Union Administration jointly issued an interagency request for information on financial institutions’ use of AI.  Earlier, in 2018, Treasury’s Financial Crimes Enforcement Network (FinCEN) and the federal banking agencies jointly issued a statement on combating money laundering and terrorist financing, encouraging the adoption of technologies like AI to effectively combat such activities.  Other agencies have also taken actions to limit risks associated with new technologies in recent years; for example, in 2023, the Securities and Exchange Commission issued a proposed rulemaking on conflicts associated with broker-dealers’ and registered investment advisers’ use of predictive data analytics and similar technologies, like AI.

Takeaways and the Path Ahead

The AI RFI creates no new regulatory obligations, and bank regulators have not indicated in recent speeches or statements that a comprehensive framework governing the use of AI in financial services is forthcoming in the foreseeable future.  That said, in an area where regulatory interest and scrutiny are increasing so rapidly, it is important that providers of financial products and services be proactive.

A consensus seems to be building among federal financial regulators regarding the risks AI poses to financial institutions, their customers, and the financial system itself.  Financial services providers—whether banks, consumer finance companies, fintechs, or others—would do well to start considering now whether and how their current uses of AI account for these risks and whether current compliance and risk management practices need to be revised.  Above all, it is critical for financial institutions and companies to remain vigilant about evolving regulatory concerns and expectations, and to stay ahead of the curve even before there are clear, comprehensive rules of the road. 

*   *   *

We will continue to monitor developments in this space and are available to discuss in greater detail.  We will provide updates periodically as warranted.

*   *   *