Bipartisan Policy Center Event on the Future of AI
Bipartisan Policy Center
“The Future of AI Featuring Reps. Foster and Hurd”
Wednesday, October 15, 2019
Fireside Discussion
Presentation
Mark Walsh, Managing Director, Ruxton Ventures LLC, and Bipartisan Policy Center Board Member, noted that artificial intelligence (AI) presents a transition similar to that of atomic energy, requiring a balance to be struck between opportunity and threat.
Lynne Parker, Ph.D., Assistant Director of Artificial Intelligence, White House Office of Science and Technology Policy, posed the question, “what do we need to do to ensure American leadership in AI?” as the basis for her remarks. She noted that President Trump signed an executive order in February 2019 outlining the administration’s strategy for American leadership in AI. Parker highlighted several key areas of the executive order that lay out the strategy’s directives, including: 1) driving U.S. research and development (R&D) across the federal government, academia, and industry to promote scientific discovery, economic competitiveness, and national security; 2) identifying federal data sets to further AI; 3) engaging in workforce re-skilling and advancing educational curricula; 4) adopting AI technical standards for the safe implementation and adoption of AI; 5) implementing practices that build trust and confidence and protect civil liberties, privacy, and American values; and 6) working with international partners, such as the Organization for Economic Cooperation and Development (OECD), to set high-level international principles for AI use. She said the administration has made AI a key priority across all agencies. Parker added that the administration continues to work to allocate funding for agencies, implement metrics to measure the performance, robustness, and accuracy of AI systems, adopt a risk-based and interoperable approach, and address data availability to further AI leadership.
Question & Answer
Walsh asked Parker to address key points regarding trust and confidence and the potential concern about monitoring citizens. Parker stated that, for the most part, people do not believe AI will be misused, but there are some areas that are critical to get “right.” She suggested an evidence-based approach that considers the particular use of AI in each case. Parker added that regulatory sandboxes allow agencies to test AI in a safe environment and learn to “perfect” the technology. She said she would never advocate for AI use in a manner that conflicts with American values.
Walsh asked Parker about policy distinctions between the state and federal levels. Parker stated that the federal government would need to act neither too early nor too late in order to sustain industry innovation. She added that there is a need to give states flexibility while avoiding a patchwork of legislation and regulation.
A member of the audience asked whether the U.S. supports the OECD AI Policy Observatory. Parker stated that she supports the OECD’s establishment of the observatory as a venue for nations to share their experiences implementing AI. She added that the U.S. supports information sharing and global collaboration.
A member of the audience asked about algorithmic bias mitigation. Parker said it is critically important to address bias at the R&D stage, to establish technical standards, and to compare systems against “reasonable” standards with a cross-sectional view of all systems. Parker stated that there is collective agreement across agencies that consistent standards are important.
Panel Discussion
Panel Presentation
Suzette Kent, Federal Chief Information Officer, Office of Management and Budget, stated that her focus is on federal agency development and oversight to meet the administration’s objectives. She said oversight is two-fold: 1) measuring how agencies accomplish their missions and improve services; and 2) assessing how agencies respond to industry requests to make data available and enable external execution. Kent noted that she works to help agencies with resources, understand administration priorities, develop policies and guidance, and implement measurement tools for internal and external advancement. She added that all of these objectives are intended to align with American values. Kent stated that the U.S. could learn from other nations, such as Estonia or Singapore, to improve the use of technology and the availability of data. Kent identified trust, technology use, data collection and sharing, and job “fear-mongering” as areas of concern to address in AI.
Dr. Emad Rizk, CEO, Cotiviti, said that AI could be leveraged to improve the U.S. healthcare system by cutting expenditures. He emphasized the need for an interoperable ecosystem, a single patient identifier, updated Health Insurance Portability and Accountability Act (HIPAA) provisions, and public-private partnerships. Rizk noted that “good” data sets could reduce algorithmic bias and that data transparency could increase trust.
John Soroushian, Associate Director, Corporate Governance and Finance, Bipartisan Policy Center, discussed the Bipartisan Policy Center’s AI main street financing task force and the effect of AI on the financial industry. He said the task force is focused on how AI affects the financial industry every day, in areas such as credit card fraud. Soroushian said that policymakers have taken steps to address AI in financial services, most notably the House Financial Services Committee’s creation of its Task Force on AI. He noted that the Bipartisan Policy Center task force wants to develop ideas for policymakers to react to with “measure” and a “level head.” Soroushian highlighted areas of concern, including how data is collected and used, the right mix of public and private sector involvement, the importance of trust, and the potential effect on the job market. He added that algorithmic bias cannot be addressed solely through hard coding and recommended measuring it against human bias as a point of comparison.
Question & Answer
Members of the audience asked questions about AI misuse risk assessment, transparency, policy priorities, and algorithmic bias. Soroushian directed the audience to the House Financial Services Committee Task Force hearing on the future of AI. Kent stated that simple artificial data “silhouettes” could help pinpoint areas to address, including time, resources, and effort. She added that the words and channels that are chosen, along with the explanatory information provided, affect AI transparency. Rizk responded that interoperable systems are important.
Closing Remarks
Rep. Bill Foster (D-Ill.) emphasized the problem of unexplainable AI, calling it unacceptable, and said that the House Financial Services Committee Task Force is wrestling with the impacts of AI. He stated that the potential impact on jobs and communities is an area of concern, comparing the adoption of AI to the transition to automobiles. He added concerns about AI’s impact on the flow of wealth, as well as the potential transformation of military uses of AI. Foster highlighted the House Financial Services Committee Task Force hearing on digital identification and the importance of authentication.
Rep. Will Hurd (R-Texas) stated that U.S. technological and military dominance is no longer guaranteed and recommended that the U.S. take a leadership role in determining the rules of the road for AI. He referenced China’s development of technology to control and oppress its citizens and Russia’s advancement of military applications of AI. Hurd expressed support for the administration’s AI initiatives and the OECD’s AI principles. He suggested increasing resources for AI development and having the government adopt standards that make policies more efficient and less onerous. Hurd added that there is a need for workforce re-skilling, public-private partnerships, educational training beginning in early schooling, and streamlined legal immigration to counter the global “brain drain.”
Question & Answer
Members of the audience asked about agency engagement in advancing AI development, as well as AI governance. Foster said that he believes the regulatory agencies are “savvy” enough to address AI concerns and that AI governance will be a “tough” issue to address. He added that the availability of large data sets will be an area of concern, as startups would struggle to compete without access to them.
A member of the audience asked what the most important discrete harms of AI would be. Foster answered that algorithmic bias is not a new issue but is one that has caused incredible harm in the past. He added that there is a need for “deep” consideration of what it means to be fair and a need to re-evaluate all operations.