Artificial Intelligence in Financial Services
Senate Committee on Banking, Housing, & Urban Affairs
Wednesday, September 20, 2023
Topline
- Members from both parties expressed concerns related to the potential impact that artificial intelligence (AI) could have on consumers, citing fraud and weak compliance with existing consumer protection laws.
- Republicans advocated for more tailored regulations and a “pro-innovation” approach to AI.
Witnesses
- Melissa Koide, Director & CEO, FinRegLab & Former Deputy Assistant Secretary for Consumer Policy, U.S. Department of the Treasury
- Daniel Gorfine, Founder & CEO, Gattaca Horizons, LLC, Adjunct Professor of Law, Georgetown University, Former Chief Innovation Officer, Commodity Futures Trading Commission
- Michael P. Wellman, Professor and Division Chair of Computer Science & Engineering, University of Michigan, and Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery
Opening Statements
Chairman Sherrod Brown (D-Ohio)
In his opening statement, Brown discussed how AI could cause wide-reaching changes in our financial system, warning that we can’t sleepwalk into a major economic transformation. He noted banks, brokers, and insurance companies are allowing AI technologies to decide who can get a loan and tailor financial products to customers. Brown emphasized that AI can’t become another way for Wall Street and Silicon Valley to supercharge existing technologies to further rig the system for their benefit. He noted that big corporations stand to earn significant profits from the efficiency this new technology brings to businesses, adding that Wall Street’s version of efficiency usually results in lower wages and fewer jobs. Brown affirmed that Congress has a responsibility to ensure that AI is used to protect consumers while promoting a fair and transparent economy.
Brown explained that, at a minimum, the rules that apply to the rest of our financial system should apply to emerging technologies. He said that without guardrails, AI would be a new tool for Wall Street and Silicon Valley to swindle Americans out of their savings and trap them in debt. Brown warned that technological advances make it harder to determine who is accountable when things go wrong. He said AI data models bake the worst ills of our past into the cake and then disguise them as impartiality. He emphasized that discrimination is still discrimination, even if it comes from a machine. Brown concluded that Americans have every right to be skeptical of AI and that this innovation’s benefits must flow to all Americans.
Senator Mike Rounds (R-S.D.)
In his opening statement, Rounds noted that recent advances have shown how capable AI technology has become. He explained that the financial services industry has been effectively utilizing AI for decades, just under a different name. Rounds said that machine learning and AI have opened the door to accurate forecasting and prediction, making it possible for AI to revolutionize fraud detection by allowing for a more proactive approach. He discussed how the financial services industry is uniquely positioned to adapt to emerging technology, as financial regulation is already technology-neutral and outcome-based.
Rounds explained that AI is only as useful as the quality of data that goes into its models, which means investing in cyber infrastructure to protect our data should be a priority. He called for a pro-innovation stance, warning that halting progress can be dangerous. Rounds said financial regulators should allow Congress to act and resist the urge to overregulate new technology, as they run the risk of unintended consequences. He cited the proposed Predictive Data Analytics rule from the SEC as an example. Rounds concluded that the U.S. could shape AI in a way that reflects the values that are important to us.
Testimony
Melissa Koide, Director & CEO, FinRegLab & Former Deputy Assistant Secretary for Consumer Policy, U.S. Department of the Treasury
In her testimony, Koide discussed how machine learning has the potential to improve fairness and inclusion. She noted that generative AI is attracting considerable interest and investment, but financial services providers have been approaching the use of AI with caution. Koide explained that regulatory compliance demands a level of transparency and explainability that many providers are not confident they can currently attain with Gen-AI applications.
She noted that financial firms are using federal regulatory frameworks and laws as their foundation in the testing, development, and use of AI models. At this early stage in development, there are several actions that could help the financial services ecosystem move toward more rapid identification and implementation of best practices and regulatory safeguards. Koide recommended increasing resources to support the production of public research, engagement by historically underrepresented and under-resourced actors, and broad intra- and cross-sector dialogue. She also recommended the careful consideration of data governance practices and standards by conducting a review of other risk management and customer protection frameworks that apply to automated decision-making. She noted that broader efforts to increase opportunity, fairness, and economic participation should also be considered.
Daniel Gorfine, Founder & CEO, Gattaca Horizons, LLC, Adjunct Professor of Law, Georgetown University, Former Chief Innovation Officer, Commodity Futures Trading Commission
In his testimony, Gorfine explained that AI in financial services is not new and should be thought of as part of a steady progression of using computers and advanced analytics systems to increase automation in the business sector. He noted that some AI technologies offer predictive insights and analytics that allow for more accurate, efficient, and low-cost decision-making, including in the context of determining creditworthiness when a traditional credit score may preclude access. Gorfine explained that AI risks include the potential for embedding and perpetuating bias, processing and training based on poor quality data, failing to operate as expected, helping bad actors engage in fraudulent and illegal conduct, and driving herd behaviors. He emphasized that the speculative potential or fear of future harm should not broadly block or disincentivize development, nor stymie adoption of AI and emerging technologies in financial services, including by those small firms and community banks seeking to remain competitive in an increasingly digital economy.
Gorfine explained that it is necessary to evolve how governing frameworks are applied to AI, as with any area of technological advancement, and to remain vigilant in identifying novel risks that will require tailored and specific interventions. Gorfine recommended encouraging innovation, while monitoring its use for novel risks. He also called for greater clarity within existing model risk management (MRM), third-party risk management, as well as activity-specific guidance and advance standards and best practices. Gorfine also recommended the modernization of the federal data privacy framework, and for regulators to avoid hasty and speculative regulation that can chill innovation.
Michael Wellman, Computer Science & Engineering, University of Michigan
In his testimony, Wellman noted that AI’s promises and risks pervade every area of our economy and society. He acknowledged that the future path of advanced AI is highly uncertain. Wellman said the opacity of state-of-the-art trading technology is one source of risk, and that the latest AI developments present new risks, including market manipulation. He addressed concerns about the potential for malicious parties to use AI intentionally to manipulate markets by explaining that AI-developed trading algorithms could produce strategies that employ manipulation or other harmful tactics, even if such manipulation was not the specified objective.
Wellman explained that our existing laws are written based on the assumption that it is people who make decisions. He questioned whether our laws can adequately ensure that those using AI will be accountable. Wellman concluded that AI's training on massive datasets naturally raises questions about how trading on information aggregated at massive scale could affect the fairness and efficiency of our financial markets.
Question & Answer
Regulation
Sen. Tina Smith (D-Minn.) asked how financial regulators can effectively oversee and evaluate something that is as fast changing as AI. Koide explained that existing laws require lenders to tell consumers why a certain decision was rendered related to their application as well as the factors used by models to make that decision. She emphasized that governance expectations, including fair lending and explainability, are critical.
Rounds asked how to avoid screwing up the regulation of AI. Gorfine said regulators need to be principled and recognize that AI technologies are developing within the present regulatory framework for financial services. He called for the monitoring of emerging risk and tailored interventions. Gorfine emphasized that overly broad rulemakings will have unintended consequences. Wellman said responding before the risk materializes is essential, explaining that we can reasonably understand which areas pose risks and shape the environment before it's too late. Koide said regulations need to focus on consumer data privacy, adding that the data piece is critical.
Brown asked which actions Congress and regulators should take to protect Americans’ right to privacy. Koide said regulators need to understand what type of consumer data is being used, and cited data privacy protections for small businesses as an area for further consideration. She called for a holistic look at data privacy laws.
Sen. Katie Britt (R-Ala.) noted her colleagues have called for what she believes amounts to overregulation of AI. She said Congress needs to take a strategic look at AI regulation instead.
Sen. Mark Warner (D-Va.) said Congress needs to write guardrails in a way that they can actually legislate. He noted AI can have an immediate devastating impact on public trust in the markets. Warner asked if we need a new law because of how absolutely decimating the effects of an AI attack would be on the markets. Wellman said the fundamental issue here is trust, adding that AI can supercharge market manipulation and evade the current regulatory schemes. Koide agreed that the risk is there.
Consumer Protections
Sen. Bob Menendez (D-N.J.) noted that increasingly accurate deepfakes are targeting consumers. He asked what financial institutions can do to minimize the risk of AI-powered scams targeting them and their customers. Gorfine said law enforcement agencies should be taking a lead on understanding the broad nature of the scams that are taking place. He explained that information can then be circulated among financial regulators and the private sector. Gorfine added that financial institutions need to invest in abilities to detect these scams.
Sen. John Kennedy (R-La.) asked if consumers have the right to know when they are interacting with AI technology. He provided the example of a consumer talking to a robot. Wellman said yes, explaining there will be new kinds of necessary disclosures. Gorfine said generally, yes, noting that it would depend on the particular function. Kennedy asked if the consumer is entitled to know who owns the robot and who is generating the content. Wellman said yes.
Brown asked how to encourage responsible development of public AI tools while ensuring consumers are protected from bad actors. Wellman said accomplishing those objectives boils down to accountability, and recommended subjecting AI tools to rigorous testing.
Sen. Elizabeth Warren (D-Mass.) asked if the CFPB identified potential violations of consumer protection laws involving AI. Koide said the consumer protection laws are agnostic as to whether decisions are made by a human or by complex technologies. Warren said if big banks, like Wells Fargo, use AI to cut costs and mislead consumers, they will be held responsible. She emphasized that there is no "AI exception" to our consumer protection laws.
Fraud and Market Manipulation
Menendez expressed concerns that AI could compound the existing probability of fraud in the financial system. He asked if Koide agreed. Koide said there are ways to leverage complex analytics to catch fraudulent actors.
Britt asked how to balance the strong capabilities of AI to improve fraud detection and cybersecurity while managing the fact that these technologies can and have been used by bad actors. Gorfine said there is incredible potential for market and trade surveillance, and an opportunity for government and law enforcement to use it. He noted the first line of defense will be exchanges and banks and called for information sharing.
Sen. Chris Van Hollen (D-Md.) noted that many of our fraud and manipulation statutes are built on human intent. He asked whether an individual who deployed an AI system could be held liable for market manipulation. Wellman said at best, it's unclear what the law would say about that.
For more information on this meeting, please click here.
For an archive of past SIFMA hearing coverage, please click here.