Deepfake of Kamala Harris Reups Questions on Tech’s Self-Regulation
from Net Politics, Digital and Cyberspace Policy Program, Women Around the World, and Women and Foreign Policy Program

The use of deepfakes in the presidential campaign makes clear the risks of continuing to allow technology companies to self-regulate.
Elon Musk's account on X is displayed next to Kamala Harris' account on X in this illustration photo taken in Poland, on July 24, 2024. Jakub Porzycki/NurPhoto via Getty Images

Last week, Elon Musk posted on X (formerly Twitter) a digitally altered “deepfake” video that insidiously manipulated a campaign ad for the presumptive Democratic presidential nominee, Vice President Kamala Harris. Beyond the derogatory nature of the video’s content, which includes a generated spoof voice of Harris describing herself as “the ultimate diversity hire,” the video has resurfaced the ever-present debate over whether technology companies can reliably “self-regulate.” In this case, the irony is especially stark: Musk ostensibly violated his own platform’s policy on synthetic and manipulated media by failing to disclose that the audio was AI-generated and failing to label the video as parody, as the original creator had done. While Musk has remained defiant in the face of criticism of his post, the video has drawn the ire of several lawmakers, including Senator Amy Klobuchar (D-MN), Representative Barbara Lee (D-CA), and Governor Gavin Newsom, all of whom have seized on this moment to push for tougher regulation of artificially generated content. Because AI disclosure rules for political ads remain under development, the federal government’s approach continues to rely on tech platforms regulating themselves.

This is not the first use of artificially generated video or audio this campaign season. In January of this year, voters in New Hampshire received robocalls spoofing the voice of President Joe Biden and urging them not to vote in their state’s primary, and in 2023, I wrote a piece on an ad produced by the Republican National Committee (RNC) that depicted an imagined four more years of the Biden presidency using artificially generated images. As University of Virginia law professor Danielle Citron notes, women of color and women from marginalized communities are disproportionately affected by online harassment and abuse. George Washington University law professor Spencer Overton points out, “While the United States is becoming more racially diverse, generative artificial intelligence and related technologies threaten to undermine truly representative democracy.” Whoever the next president is, the nation must find a way to safeguard the tools of democracy, including in the digital town square.

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.