Ranking Member Connolly’s Opening Statement at 2nd Subcommittee Hearing Examining Deepfakes

Mar 12, 2024
Press Release

Washington, D.C. (March 12, 2024)—Below is Ranking Member Gerry E. Connolly’s opening statement, as prepared for delivery, at today’s Subcommittee on Cybersecurity, Information Technology, and Government Innovation hearing examining deepfake technology.

Opening Statement
Ranking Member Gerry E. Connolly
Subcommittee on Cybersecurity, Information Technology, and Government Innovation
Hearing on “Addressing Real Harm Done by Deepfakes”
March 12, 2024


Acknowledging the importance of Women’s History Month, I thank the Chair for convening today’s hearing to discuss the pressing issue of nonconsensual deepfake pornography and its disproportionate harm to women.


A 2023 study found that while 98 percent of all online deepfake videos were pornographic, women were the subjects in 99 percent of them.  My hope is that today’s discussion will underscore the need for policy solutions that end the production, proliferation, and distribution of malicious deepfakes.


Earlier this year, artificial intelligence-generated pornographic images of American pop star Taylor Swift rapidly spread on social media platform X, formerly known as Twitter.  X proved slow to act, and the images received more than 47 million views in a matter of hours before X removed them.  Despite the images’ removal, the explicit deepfakes of the singer remain elsewhere online, and no laws exist to stop other malicious actors from reposting the material again. 


The fact that Ms. Swift, a globally recognized icon who built a $1 billion empire, cannot remove all nonconsensual deepfakes of herself emphasizes that no one is safe.


Deplorably, children have also been victims of deepfake pornography. Last December, the Stanford Internet Observatory published an investigation that identified hundreds of images of “child sexual abuse material,” also known as “C-SAM,” in an open dataset that AI developers used to train popular AI text-to-image generation models.


While methods exist to minimize C-SAM in such datasets, it remains challenging to completely clean or stop the distribution of open datasets, as the data are gathered by automated systems from a broad cross-section of the web and lack a central authority or host. Therefore, tech company leaders, victims, advocates, and policymakers must come together to build a solution and address this issue head on.


Mrs. Dorota Mani, thank you for coming today and bravely sharing your family’s story.  You and your daughter, Francesca, have proven to be fierce advocates against the creation and proliferation of nonconsensual deepfake pornography.  You are providing a stalwart voice for countless others victimized by AI-generated deepfakes.  I know President Biden has heard your heartfelt request for help, because during his State of the Union speech, he explicitly called upon Congress to better protect our children online in the new age of AI.  And I am happy to rise to that challenge.


I also want to thank my multiple Democratic colleagues who requested to waive onto this Subcommittee hearing to speak out against harmful deepfakes.  One of those Members, Rep. Morelle, introduced the Preventing Deepfakes of Intimate Images Act, which prohibits the creation and dissemination of nonconsensual deepfakes of intimate images.  As a cosponsor of this bill, I see his legislation as a great first step to preventing future wrongs that echo the plight of the Mani Family.


Recent technological advancements in artificial intelligence have opened the door for bad actors with very little technical knowledge to create deepfakes cheaply and easily. 


Deepfake perpetrators can simply download apps that “undress” a person or swap their face onto nude images. 


That is why, if we want to keep up with the rapid proliferation of deepfakes, we must support federal research and development (R&D) of new tools for the detection and deletion of deepfake content.


In addition, digital media literacy programs, which educate the public about deepfakes, have demonstrated effectiveness in equipping individuals with skills to critically evaluate content they consume online.


But we cannot have a full discussion today without acknowledging that some House Republicans—including members of this committee—have actively worked against rooting out the creation and dissemination of deepfakes.


This Congress, the House Judiciary Committee’s Select Subcommittee on the Weaponization of the Federal Government has relentlessly targeted government agencies, non-profits, and academic researchers who are on the front lines of this work. 


These Republican Members have stifled efforts of individuals and advocacy organizations actively combatting deepfakes and disinformation.


For example, the Select Subcommittee accused the federal Cybersecurity and Infrastructure Security Agency of “colluding with Big Tech…to censor certain viewpoints…”


These Republicans argue that CISA’s work to ensure election integrity—which, in part, includes defending against deepfake threats—is censorship.  They have also attempted to undermine the National Science Foundation’s—or NSF’s—efforts to research manipulated and synthesized media and develop new technologies to detect deepfakes.


Most recently, on February 6, 2024, Chairman Jordan subpoenaed NSF for documents and information regarding its research projects to prevent and detect deepfakes and other inauthentic information.  He issued this subpoena even though the directive originated from a 2019 Republican-championed law! Chairman Jordan has also targeted many academic researchers across the country who provide valuable research findings to the public and policymakers, such as the Stanford Internet Observatory, which led the investigation into the questionable inclusion of C-SAM in AI training datasets.


I am proud that the Biden-Harris Administration secured voluntary commitments from seven major tech companies promising to work together and with government to ensure AI technologies are developed responsibly, but our work is not done.


I urge my colleagues on the other side of the aisle to set aside their partisan fishing expeditions and redirect their focus toward crafting bipartisan solutions to stop the creation and dissemination of harmful deepfakes, which includes nonconsensual pornography targeting women and children.



118th Congress