SEC IAC Meeting

Securities and Exchange Commission Investor Advisory Committee

Open Meeting

Thursday, March 10, 2022

Topline

  • The IAC held two panel discussions on (1) ethics in AI and robo-adviser fiduciary responsibilities and (2) cybersecurity.
  • The Commissioners and the AI panel focused on governance, auditing, conflicts of interest in AI algorithm development, racial and gender bias, and the standard of care for tech platforms using AI.
  • The cybersecurity panel focused on corporate governance and accountability and touched on cyber incident materiality.

Opening Statements

In his opening statement, Commission Chair Gensler addressed digital engagement practices and digital finance platforms and how they raise bias and systemic risk issues. He explained that digital engagement practices are integrated into robo-advising and that the firms behind those platforms must decide which factors drive their practices and what they are optimizing for, whether investor benefit or other factors like the revenues and performance of the platform. He also said these platforms must comply with certain standards and duties of care, but that optimizing for revenues can create conflicts with their duties to investors. Gensler then raised the question of when nudges by brokers require investor protections and stated that developments in artificial intelligence (AI) must not create gender and racial inequities. He described cybersecurity as an increasing risk and stated that investors want to know more about how issuers and funds are managing cybersecurity risk, also mentioning the cybersecurity rule the Commission proposed the previous day. He concluded by saying he has asked Commission staff for recommendations regarding broker-dealers and customer notices under Regulation S-P.

In her opening statement, Commissioner Hester Peirce said AI and robo-advisory services can make advice more affordable but that the implications for investor protection cannot be ignored. She also said the panel on cybersecurity is timely, given the previous day's proposed rule.

In her opening statement, Commissioner Caroline Crenshaw said the increased role of technology provides benefits like convenient, accessible, and lower-cost services and operational efficiency, but that failure to comply with regulation leads to poor outcomes for investors. She then raised the question of whether technology platforms may be influencing investors in ways that could be considered recommendations or that blur the line between solicited and unsolicited transactions. She added that AI is pervasive but that reliance on it can present risks, like entrenching or exacerbating racial and gender bias. Crenshaw closed by mentioning Commission action in the cybersecurity space.

Panel Discussion Regarding Ethical Artificial Intelligence and “Roboadviser” Fiduciary Responsibilities

The panel focused on the ethical issues and fiduciary responsibilities related to the use of artificial intelligence in the development and application of robo-advising techniques. The panelists provided an overview of the current state of robo-advising with a focus on algorithms, analyzed the tradeoffs between AI-powered advice and personal recommendations, explained the jargon, potential bias, and blind spots around robo-advice, and reviewed developments in the larger related space.

Moderator

  • Paul Sommerstad, Partner, Cerity Partners

Panelists

  • Tamra Moore, Partner, King & Spalding
  • Melissa Nysewander, PhD, Workplace Investing Artificial Intelligence Center of Excellence Leader, Fidelity Investments
  • Julie Varga, VP, Investment and Product Specialist, Morningstar
  • Miriam Vogel, President, EqualAI

Panel Discussion

In her opening statement, Vogel explained that clear guardrails and standards for AI are not yet in place to ensure that decades of progress toward equality are not undone in a few lines of code. She said firms building financial products try to remove bias from AI but may inadvertently have the opposite effect. She then described responsible AI governance by citing frameworks from the World Economic Forum, Business Roundtable, and Business Software Alliance. She also encouraged participation in the National Institute of Standards and Technology's (NIST) AI framework effort. Vogel went on to explain the pillars of responsible AI governance, including investing in the pipeline, hiring and promoting with your values, evaluating a firm's data, testing a firm's AI, and redefining a firm's team. She discussed federal government and international efforts to address AI bias, including work by the White House and the European Union. She then outlined five steps corporate leadership should take to reduce their liability and enhance the benefit of the AI systems they are using: establishing an AI governance framework, designating a point of contact in the C-suite responsible for AI governance, communicating the stages of the AI lifecycle where testing will be conducted, documenting relevant findings at the completion of each stage, and implementing routine auditing.

In her opening statement, Moore described three fundamental principles that undergird efforts by states and the federal government to govern AI and its use: human rights, democracy, and the rule of law. She then discussed international efforts to govern AI use, the issue of fairness, and the need to mitigate bias. She discussed companies' existing fiduciary duties, laws barring discrimination based on gender, race, and ethnicity, and algorithm proxies (variables, data points, etc.) that do not specifically mention race but are used to predict a person's race. She also said that historical data incorporates systemic discrimination, that there is no comprehensive federal AI legislation, and that efforts are underway to require impact assessments and algorithm explanations, alongside agency efforts toward rulemaking in that area. She concluded that getting ahead of the law is important for companies to build consumer trust and mitigate liability.

In her opening statement, Nysewander discussed how AI has evolved, emphasizing the need for ethical use and consideration of clients' interests. She explained that there is no standard definition of AI and touched on the challenges of AI, emphasizing the need to ensure that AI models recommend the right products to customers and are not trained on bad data. She stated that practitioners should architect AI to eliminate bias in models so they are less biased than a typical human decision maker, and that this can be done by understanding the data the model is trained on, testing results after the fact, and using explainable AI. She also discussed how model governance changes over time, stating that Human Resources is an important area where companies should be careful and that the model governance function should be independent of the team that created the model. She also stated that a governance review board should have knowledge of both AI and the business applications and that there needs to be one clear mandate.
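
As a rough illustration of the after-the-fact outcome testing Nysewander described, the sketch below compares a model's selection rates across demographic groups. Everything here is hypothetical: the predictions, the group labels, and the review threshold, which is a policy choice rather than a regulatory standard.

    import numpy as np

    def selection_rate_by_group(y_pred: np.ndarray, group: np.ndarray) -> dict:
        # Share of positive recommendations for each demographic group.
        return {str(g): float(y_pred[group == g].mean()) for g in np.unique(group)}

    def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
        # Largest difference in selection rates across groups (0 = parity).
        rates = selection_rate_by_group(y_pred, group)
        return max(rates.values()) - min(rates.values())

    # Hypothetical audit inputs: model outputs plus a demographic label
    # retained solely for back-end testing.
    y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
    group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

    rates = selection_rate_by_group(y_pred, group)
    gap = demographic_parity_gap(y_pred, group)
    print(rates)                                 # {'A': 0.8, 'B': 0.4}
    print(f"demographic parity gap: {gap:.2f}")  # 0.40
    if gap > 0.2:  # illustrative threshold
        print("flag for review: selection rates diverge across groups")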

In her opening statement, Varga discussed robo-advice and the impact of AI on the robo-advising industry. She said that getting to true AI requires massive data sets to train a system, and that if the majority of the data comes from one record keeper with its own particular characteristics, that concentration creates bias. She also explained the difficulty of using unbiased AI in robo-advising but said it is possible and requires boundaries to be put in place, adding that there is a fine balance among what is prudent, what is realistic, and what takes the human out of the equation. She said it is important to consider whether a given rate of accuracy is acceptable when giving robo-advice, depending on an investor's profile, and that firms must try to create systems with a lower likelihood of bias. She concluded by stating the need for diversity of thought when designing and coding a system and the need for a diverse data set and a diverse coding team.

Question and Answer

Sommerstad asked for thoughts on how businesses are discussing the prevention of AI bias and how to talk with regulators about building on existing frameworks to provide rules and guidelines for robo-advisers assembling these systems. Vogel said requiring disclosures could be vital and could be as limited and focused as asking whether a firm has taken steps to prevent bias against protected classes.

Brian Hellmer asked whether companies should be required to disclose some description of the data sets used to build models or be required to test them, and how onerous it is to test models. Nysewander said it is not always clear from the input data whether a model is biased and that it would be difficult to give a list of the data sets involved and then say whether a firm expects each one to be biased. Vogel said firms can demonstrate who may be underrepresented in certain outcomes based on age, geography, and similar characteristics. She added that a basic audit of AI systems can and should be done to prevent atypical outcomes. She then highlighted legislation in this space, including the Algorithmic Accountability Act.

Alice Stinebaugh asked Nysewander whether there is some reluctance to go against an AI recommendation or whether the opposite is true. Nysewander said that is a psychological question that requires testing, and Moore said that, depending on the context, it may be both, citing race-based algorithms in the medical field and in social benefits programs.

Christopher Mirabile asked whether it is possible to build systems for audit and lower the cost of auditing by putting hooks into the development of code and standardizing procedures and tests, and whether this technology can be used ethically without building for audit. Nysewander said firms have put out open-source packages that automatically audit algorithms and that it is not a heavy lift to have such packages audit algorithms on the back end. Vogel agreed that audits can be built in, along with models from other firms for internal AI testing.
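
One way to picture the back-end auditing Nysewander mentioned: the sketch below uses fairlearn, one such open-source package, to break model accuracy and selection rates out by a sensitive feature. The library calls are real, but the audit data, column names, and groups are hypothetical.

    # pip install fairlearn scikit-learn
    import pandas as pd
    from fairlearn.metrics import (MetricFrame, demographic_parity_difference,
                                   selection_rate)
    from sklearn.metrics import accuracy_score

    # Hypothetical back-end audit set: true labels, model predictions, and a
    # sensitive feature retained only for testing.
    audit = pd.DataFrame({
        "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
        "y_pred": [1, 0, 1, 1, 0, 0, 0, 0],
        "group":  ["F", "F", "F", "F", "M", "M", "M", "M"],
    })

    frame = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=audit["y_true"],
        y_pred=audit["y_pred"],
        sensitive_features=audit["group"],
    )
    print(frame.by_group)      # per-group accuracy and selection rate
    print(frame.difference())  # largest between-group gap for each metric

    dpd = demographic_parity_difference(
        audit["y_true"], audit["y_pred"], sensitive_features=audit["group"]
    )
    print(f"demographic parity difference: {dpd:.2f}")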

Leslie VanBuskirk asked whether robo-platforms are being monitored for conflicts of interest and for compliance with the Best Interest standard. Varga discussed fee structures, and Sommerstad said a lot of fees go to the asset or sub-asset level. Nysewander said that, from an AI perspective, firms need to sit down with stakeholders and ask what is being optimized for and whether that is the customer's benefit.
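
Nysewander's question about optimization targets can be made concrete. In the hypothetical sketch below, a recommendation score blends the customer's net benefit with the platform's revenue; every number and weight is invented, and the point is only that a nonzero revenue weight in the objective is exactly where a conflict of interest can hide.

    def recommendation_score(expected_client_return: float,
                             client_fee: float,
                             platform_revenue: float,
                             revenue_weight: float = 0.0) -> float:
        # With revenue_weight = 0 the platform optimizes purely for the
        # customer's net outcome; any positive weight mixes the firm's own
        # revenue into the objective.
        return (expected_client_return - client_fee) + revenue_weight * platform_revenue

    # Two hypothetical funds: B pays the platform more but nets the client less.
    fund_a = {"expected_client_return": 0.07, "client_fee": 0.002, "platform_revenue": 0.001}
    fund_b = {"expected_client_return": 0.06, "client_fee": 0.010, "platform_revenue": 0.008}

    for weight in (0.0, 5.0):  # arbitrary weights; 5.0 is enough to flip the ranking
        score_a = recommendation_score(**fund_a, revenue_weight=weight)
        score_b = recommendation_score(**fund_b, revenue_weight=weight)
        best = "A" if score_a > score_b else "B"
        print(f"revenue_weight={weight}: fund {best} ranked first "
              f"(A={score_a:.4f}, B={score_b:.4f})")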

Elissa Germaine asked what the SEC should consider when overseeing firms developing algorithms and making recommendations. Nysewander said there should be basic audit checks, and firms should be held accountable for models in a certain risk class.

Panel Discussion Regarding Cybersecurity

The panel focused on the growing importance of cybersecurity and the role it plays as investors try to understand a company's risk profile.

Moderators

  • Cambria Allen-Ratzlaff, Corporate Governance Director, UAW Retiree Medical Benefits Trust
  • Brian Hellmer, Chief Investment Officer, Global Public Market Strategies, State of Wisconsin Investment Board

Panelists

  • Keith Cassidy, Associate Director, Office of Technology Controls Program, Division of Examinations
  • Athanasia Karananou, Director of Governance and Research, Principles for Responsible Investment (United Kingdom)
  • Joshua Mitts, Associate Professor of Law and Milton Handler Fellow, Columbia Law School
  • Jeffrey Tricoli, Managing Director for Technology Risk Management, Charles Schwab

Panel Discussion

In his opening statement, Cassidy gave an overview of the Technology Controls Program within the Commission's Division of Examinations and the program's role in addressing cybersecurity. Cassidy mentioned that the Division has recently been working on a cybersecurity hygiene rule but did not go into detail.

In his opening statement, Tricoli stated it is important for companies to establish a governance committee to quickly identify cybersecurity risks and determine how to remedy them, and that a company's board must be made aware of potential cyber-attacks and set boundaries on what will be allowed. He also elaborated on the importance of board members having expertise in cybersecurity and mentioned growing concerns with third-party cybersecurity incidents and the impacts they could have on a company.

In his opening statement, Mitts stated that profits obtained via trading opportunities may enhance hackers' incentives to exploit security vulnerabilities, leading to greater dissemination of stolen personal information, impersonation, and identity theft. He also discussed how third parties often escape accountability for their fiduciary duties. In addition, he stated that mandating current reporting of cybersecurity incidents on Form 8-K would protect investors by reducing information asymmetry and enhancing share-price accuracy in the market.

In her opening statement, Karananou focused on investor expectations and said cybersecurity governance is a relevant factor for investors determining whether to invest. She also discussed the problem of third parties not being held accountable for cybersecurity risk and how dependency on technology goes hand in hand with increased risk of cyber-attacks.

Question and Answer

Allen-Ratzlaff asked why registrants must become more vigilant about cybersecurity incidents. Cassidy said the rate of cybersecurity incidents is growing, and all industries must understand the types of threats they are susceptible to. He added that hacking via email is one of the growing, less sophisticated forms of cyber-attack.

Allen-Ratzlaff also asked how the Commission should balance the need of regulating companies with the need to give companies the ability to self-remedy situations. Cassidy said it is important for a company to understand that any third party they may be doing business with is a potential threat.

Allen-Ratzlaff asked about the importance of board members being educated on cybersecurity. Tricoli said board members must understand cybersecurity vulnerabilities so the company can know the amount of resources to invest in cybersecurity protections.

Hellmer asked how tighter regulations and increased disclosure requirements affect companies. Tricoli stated that, in the past, disclosure has been very inconsistent and fractured across industries and that companies must conduct internal reviews and figure out how to adequately adhere to the requirements of new rules.

Allen-Ratzlaff and Hellmer asked whether the Commission should establish a definition of an immaterial cybersecurity breach. Mitts said many Form 8-K requirements call for subjective judgments about whether a cybersecurity incident is material and that gaps currently exist in the proposed rules on what is and is not a material cybersecurity incident.

Hellmer asked whether it is more difficult for small companies to adhere to disclosure requirements and how much more investors are considering cybersecurity prior to investing. Karananou said all companies should conduct cybersecurity training because cybersecurity is one of the top risks companies face and because investors are paying more attention to it.

Allen-Ratzlaff asked what investors should be doing to better understand cybersecurity risk. Karananou and Tricoli said it should be at the top of investors' agendas, along with the level of sophistication a company has regarding cybersecurity.

For more information on this hearing, please click here.

For an archive of past SIFMA hearing coverage, please click here.