Ranking Member Connolly’s Opening Statement at Subcommittee Hearing on Artificial Intelligence

March 8, 2023

Washington, D.C. (March 8, 2023)—Below is Ranking Member Gerald E. Connolly's opening statement, as prepared for delivery, at today's Subcommittee on Cybersecurity, Information Technology, and Government Innovation hearing entitled "Advances in AI: Are We Ready for a Tech Revolution?"


Opening Statement
Ranking Member Gerald E. Connolly
Subcommittee on Cybersecurity, Information Technology, and Government Innovation
Hearing on "Advances in AI: Are We Ready for a Tech Revolution?"
March 8, 2023

The Cybersecurity, Information Technology, and Government Innovation Subcommittee has dedicated its first hearing to examining advances in Artificial Intelligence (AI) and its revolutionary impacts on society. This decision reflects our membership's interest and commitment to exploring, understanding, and implementing emerging technologies.

Last Congress, Chairwoman Nancy Mace, Rep. Ro Khanna and I introduced the Quantum Computing Cybersecurity Preparedness Act, which encouraged federal agencies to adopt post-quantum cryptography. I am pleased that the bill was signed into law just a few months ago. I look forward to future bipartisan collaboration as we define the problem sets associated with AI, design solutions, and promote innovation while simultaneously mitigating the dangers and risks inherent to AI technology.

The federal government has a historical, necessary, and appropriate role guiding and investing in research and development (R&D) for new and emerging technologies. The Defense Advanced Research Projects Agency (DARPA), the well-known research and development agency of the United States Department of Defense, is responsible for the development of myriad emerging technologies.

One of its most famous successes is the Advanced Research Projects Agency Network (ARPANET), which eventually evolved into the internet we know today. Other innovations include microelectronics, the Global Positioning System (GPS), infrared night imaging, unmanned vehicles, and what eventually became cloud technology. AI will require similar federal investment and engagement.

As stated in the January 2023 final report from the National Artificial Intelligence Research Resource Task Force, "The recent CHIPS and Science Act of 2022 reinforces the importance of democratizing access to a national AI research cyberinfrastructure, via investments that will accelerate development of advanced computing—from next-generation graphics processing units to high-density memory chips—as well as steps to actively engage broad and diverse U.S. talent in frontier science and engineering, including AI." The report calls for $2.6 billion over the next six years to fund national AI research infrastructure.

While government certainly plays a role in R&D, it also has a regulatory role. Congress has the responsibility to foster careful and thoughtful discussions to balance the benefits of innovation with the potential risks of emerging technology.

A recent National Bureau of Economic Research report found that AI could save the U.S. healthcare industry more than $360 billion a year, and be used as a powerful tool to detect health risks. A GAO report also predicts AI could help identify and patch vulnerabilities and defend against cyberattacks, automate arduous tasks, and expand jobs within the tech industry.

As with all technologies, however, AI in the wrong hands can be abused to hack financial data, steal national intelligence, or create deepfakes that blur people's ability to verify reality and sow further distrust within our democracy.

AI can also cause unintentional harms. GAO found that certain groups, such as workers with no college education, tended to hold jobs susceptible to automation and eventually unemployment.

Another concern relates to machine learning (ML) and data. ML uses data samples to learn and recognize patterns, such as scanning hundreds or thousands of pictures of lungs to better understand pulmonary fibrosis and revolutionize medical care. But what happens if those lung samples only come from a homogenous portion of the population, and that medical breakthrough is inaccurately applied?

When it comes to data, equity is accuracy, and we must ensure data sets include as broad and comprehensive a universe of data as possible.

It is paramount that during this hearing we begin to create a flexible and robust framework, particularly for government's use of AI, to protect democratic values and preemptively address the social, ethical and moral dilemmas that AI raises.

During the 117th Congress, this Committee also voted to pass the AI Training Act (H.R. 7683) and the AI in Counterterrorism Oversight Enhancement Act (H.R. 4469) with bipartisan support. This Committee is not entirely new to the AI space, and we look forward to continuing efforts to support transformative research.

We also look forward to building on the Biden Administration's efforts such as the National Artificial Intelligence Research Resource Task Force. Just over a month ago, the Task Force released its report to provide a roadmap to stand up a national research infrastructure that would broaden access to the resources essential to AI research and development. I look forward to digging into its suggestions. I am also encouraged by the White House's Blueprint for an AI Bill of Rights, which helps guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.

AI is already integrated within the world around us, and its growing use throughout society will continue to drive technological advancements. America must implement an aggressive, research-forward federal AI policy to compete with countries, such as China, that have already established nationwide strategies and investment plans. Additional supporting policy strategies might include promoting open data policies or outcome-based strategies when assessing algorithms.

Finally, and most importantly, our country needs the workforce to properly develop, test, understand, and use AI. This workforce of the future will include technologists who will help govern AI responsibly. I look forward to hearing from our witnesses how best to balance all these priorities and prepare for the benefits and risks of AI.

###