Cyber Week in Review: March 15, 2024
from Net Politics and Digital and Cyberspace Policy Program

U.S. intelligence officials look on as Senator Mark Warner (D-VA) speaks at a Senate Intelligence Committee hearing on worldwide threats to American security, on Capitol Hill in Washington, D.C. on March 11, 2024. Julia Nikhinson/Reuters

EU passes AI Act; ODNI releases 2024 threat assessment; Google restricts chatbot responses to election queries; Trump authorized information operation against China in 2019; House passes bill to ban TikTok

March 15, 2024 4:22 pm (EST)

Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.

European Union passes Artificial Intelligence Act

The European Parliament approved the landmark Artificial Intelligence Act, which aims to protect fundamental rights from the dangers posed by high-risk AI. The act classifies AI systems into four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. Several applications are banned as unacceptably risky, including the use of AI in law enforcement biometric surveillance systems in public spaces (except for finding a missing person or preventing a terrorist attack), social scoring applications, emotion recognition in the workplace and schools, predictive policing, and untargeted scraping of facial images. AI systems in certain critical sectors, including healthcare, transportation, and law enforcement, will be classified as high risk and will be required to undergo testing for accuracy, robustness, and cybersecurity during development, and organizations will be required to mitigate potential risks when deploying those systems. Developers and operators of limited-risk systems will be required to explain how a given system functions and what data it uses. The act contains special provisions addressing generative AI, requiring developers to provide a detailed summary of the text, pictures, videos, and other data used to train generative systems, to report how much energy a given model uses and any malfunctions that cause serious harm, and to put cybersecurity measures in place for each model. Companies that fail to comply risk fines of anywhere from 1.5 percent to 7 percent of global turnover, with minimum fines ranging from €7.5 million to €35 million. The act will gradually enter into force this year, with prohibitions on practices such as AI-based emotion recognition tools taking effect first, and will be fully implemented by 2027.

ODNI releases 2024 Annual Threat Assessment

The Office of the Director of National Intelligence released its 2024 Annual Threat Assessment of the U.S. Intelligence Community, which reflects the major priorities and focus areas of the U.S. intelligence community and the evolution and emergence of threats. The report deals with a number of major subjects, including China’s growing influence campaigns in the United States and its efforts to become a leader in technologies like synthetic biology and artificial intelligence, Russia’s efforts to use influence operations and cyberspace to shape other countries’ decisions, and Iran’s willingness to use cyberattacks to disrupt U.S. critical infrastructure as well as its influence campaigns targeting the 2024 U.S. presidential election. The report also discusses the potential for new technologies, such as AI and synthetic biology, to converge and rapidly remake the geopolitical landscape. According to the ODNI, “digital technologies have become a core component of many governments’ repressive toolkits,” deployed both against domestic dissidents and those living abroad; the office also predicted that authoritarian regimes will continue to exploit technological developments to accelerate their ability to sway opinion globally and to target people and organizations they consider a threat to regime security.

Google restricts chatbot Gemini from answering election-related queries

Google announced it will restrict its Gemini chatbot from answering election-related queries. When Gemini flags a query as election-related, it is trained to respond, "I'm still learning how to answer this question. In the meantime, try Google Search." It is currently unclear how Google will define political content in relation to Gemini. Google’s decision will affect the nearly four billion people who live in countries with major upcoming elections. Google recently landed in hot water over Gemini’s responses to political prompts ahead of India’s general election, following an incident in which Gemini stated that Prime Minister Modi had been “accused of implementing policies some experts have characterized as fascist.” Google’s decision to curb some election-related queries also follows its suspension of Gemini’s image generation capabilities last month, after the tool generated historically inaccurate images. The new policy coincides with Google’s Jigsaw unit preparing to launch a prebunking campaign designed to inoculate people against misinformation ahead of the EU elections in June.

Trump authorized information operation against China in 2019

Donald Trump authorized an information operation on Chinese social media in 2019, with the goal of fomenting paranoia among top Chinese leaders and forcing Chinese officials to divert resources toward finding and closing vulnerabilities in the country’s Great Firewall, according to a report from Reuters. A team from the CIA led the effort, which used fake accounts and strategically leaked information to news organizations to spread allegations that Chinese Communist Party leaders were hiding money overseas and attacked the Belt and Road Initiative. The operation primarily targeted audiences in China, but also operated in Southeast Asia, Africa, and the South Pacific. The operation was designed by Matt Pottinger, at the time the Deputy National Security Advisor, and was personally approved by Trump. Previous forays by U.S. military and intelligence agencies into clandestine influence campaigns have resulted in blowback, including in 2022, when Graphika and the Stanford Internet Observatory released a report revealing a wide-ranging U.S. military-backed clandestine information operation that promoted pro-Western narratives in the Middle East and Central Asia. The revelations led to a review of the Pentagon’s policy toward information operations, and in December 2023, the Department of Defense reportedly implemented a new policy to curtail clandestine information operations and require approval by senior officials before such operations could be launched.

The U.S. House of Representatives passed a bill that could lead to a TikTok ban

The U.S. House of Representatives passed the Protecting Americans from Foreign Adversary Controlled Applications Act in a bipartisan vote. The bill would give ByteDance, TikTok’s Chinese parent company, 165 days to divest from TikTok; if ByteDance fails to do so, the law would require app stores to stop offering TikTok for download and to stop distributing updates for the app. Lawmakers behind the bill argue that TikTok poses a national security threat, warning that Chinese officials could use the app to access Americans’ data or manipulate its algorithms. TikTok stated that the bill would “damage millions of businesses, destroy the livelihoods of countless creators across the country, and deny artists an audience,” and launched a campaign prompting users to call their representatives and urge them to vote against the bill. The campaign may have backfired, however: the Energy and Commerce Committee still advanced the legislation unanimously, and some members said the calls hardened their resolve to restrict the app. There have been other attempts to ban TikTok, including Montana’s statewide ban, which was blocked by a U.S. district court judge who said it “oversteps state power.” Critics of the House bill argue that banning TikTok would hamper youth voter outreach ahead of the November presidential election, infringe on free speech, and limit access to information for the 150 million TikTok users in the United States. The National Security Council has called the bill “an important and welcome step,” and President Biden stated, “If they pass [the bill], I’ll sign it.” Biden’s relationship with TikTok is complicated by his re-election campaign, however, which recently joined the app, and the president invited several TikTok influencers to his State of the Union address.

Cecilia Marrinan is the intern for the Digital and Cyberspace Policy Program.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.