Cyber Week in Review: June 7, 2024
from Net Politics and Digital and Cyberspace Policy Program

DOJ and FTC reach deal on AI investigation; Hackers send zero-click through TikTok; Study on 2020 election misinformation released; Microsoft says Russia targeting Paris Olympics; YouTube tightens controls on gun videos.
The Olympic rings are displayed on the first floor of the Eiffel Tower ahead of the Paris 2024 Olympic games in Paris, France on June 7, 2024 Sarah Meyssonnier/Reuters

U.S. regulators reach deal that allows for investigations of AI firms

The U.S. Department of Justice (DOJ) and the Federal Trade Commission (FTC) have reached a deal that will allow each agency to launch investigations into prominent AI companies. Under the agreement, the DOJ will investigate whether AI chipmaker Nvidia has violated antitrust laws in its business practices, while the FTC will take the lead in investigating the relationship between OpenAI and Microsoft, which has invested more than $13 billion in OpenAI. It is unclear when the DOJ or FTC will announce formal investigations into the companies, or whether those investigations will center on specific instances of alleged misconduct. AI companies have faced intense scrutiny since OpenAI launched ChatGPT in November 2022, but the U.S. government has so far opened few investigations into AI developers; one notable exception came in July 2023, when the FTC announced it was investigating OpenAI over its data security practices.

Hackers deploy zero-click exploit through TikTok direct messages

Hackers exploited a zero-day vulnerability in TikTok's Android app earlier this week to take over accounts through a so-called zero-click exploit, which does not require victims to click a malicious link. The exploit leverages the app's deep linking functionality, which allows specially crafted URLs to trigger specific actions within the app. The flaw has reportedly been used to compromise prominent accounts on the platform, including accounts belonging to CNN and Sony, which TikTok took offline over the weekend to prevent malicious actors from hijacking them. The vulnerability potentially exposed TikTok's more than 1.5 billion users to account takeover. TikTok has since patched the vulnerability and said it is working with affected users to help them regain access to their accounts. Users are advised to update the app to the latest version to protect themselves from similar exploits and to avoid clicking on suspicious links.
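The underlying risk in deep linking is that an app routes an attacker-controlled URL into an internal action without checking where the URL points. As a general illustration only (this is not TikTok's actual code, and the allowlisted hosts are assumptions), a minimal Python sketch of the standard mitigation: validate a deep link's scheme and host against an allowlist before acting on it.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts an app would accept in deep links.
ALLOWED_HOSTS = {"www.tiktok.com", "m.tiktok.com"}

def is_safe_deep_link(url: str) -> bool:
    """Return True only if the deep link targets a trusted host over HTTPS.

    Apps that pass unvalidated deep-link URLs straight into in-app
    actions (for example, loading them in an internal WebView) let
    attackers trigger behavior on the victim's behalf; rejecting
    untrusted schemes and hosts up front is a common mitigation.
    """
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

print(is_safe_deep_link("https://www.tiktok.com/@user/video/123"))  # True: trusted host
print(is_safe_deep_link("https://attacker.example/steal-session"))  # False: unknown host
print(is_safe_deep_link("http://www.tiktok.com/@user"))             # False: not HTTPS
```

On Android, the same check would live in the activity that receives the deep-link intent, before the URL is handed to any in-app handler.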

Study finds that vast majority of disinformation on Twitter around 2020 election came from 1 percent of accounts

More on:

Digital Policy

Cybersecurity

Artificial Intelligence (AI)

A recent study published in the journal Science found that a small group of accounts, dubbed "supersharers," was responsible for spreading the vast majority of content from fake news websites on Twitter in the United States around the 2020 election. The study analyzed over 600,000 U.S. voters on Twitter, now known as X, and found that approximately 2,000 users disseminated 80 percent of all content from untrustworthy websites. These supersharers tended to be older, with an average age of fifty-eight, female, and politically conservative. The researchers suggested that platform interventions such as limiting retweets or suspending supersharer accounts could meaningfully reduce the spread of misinformation, though they acknowledged such measures would not eliminate fake news. The study's authors also estimated that if social media platforms had suspended the supersharers in August 2020, ahead of the November election, the reach of content from untrustworthy websites spreading rumors and misinformation about the election could have fallen by almost 60 percent.

Microsoft says Russia is preparing to target Paris Olympics with information campaigns

Microsoft's Threat Analysis Center says it has identified several Russian disinformation campaigns targeting public opinion around the upcoming 2024 Paris Olympic Games, as well as the reputation of the International Olympic Committee (IOC). Microsoft identified two groups running these operations, which it tracks as Storm-1679 and Storm-1099. The threat actors have used AI to generate images and text spreading false information and rumors about the Games and about alleged corruption in the IOC. The groups have also employed a common Russian disinformation tactic: impersonating existing news outlets and credible sources to spread falsehoods and stoke fears of violence and terrorism around the Games. The IOC barred Russian and Belarusian athletes from competing in the Games in October 2023, citing Russia's ongoing invasion of Ukraine. Russia has previously leveraged its digital capabilities to wreak havoc at other Olympic Games; in 2018, Russian government-sponsored hackers attempted to disrupt the opening ceremony of the Pyeongchang Winter Olympics in South Korea, using a cyberattack to shut down a significant portion of the digital infrastructure used to hold and broadcast the event.

YouTube tightens age limits on gun videos

YouTube updated its content policy on Tuesday to completely block content "showing the use of homemade firearms, automatic firearms, and certain firearm accessories" for users under age 18; the changes will take effect on June 18. YouTube framed the update as a response to changing dynamics, and a spokesman cited the proliferation of 3D printed guns as a reason for the change in policy. In 2021, news outlets reported on how 3D printing could be used to make so-called "ghost guns," which can be manufactured illicitly and are difficult for law enforcement to trace; the coverage prompted Google, YouTube's parent company, to introduce new controls a year later to limit the number of ghost gun videos appearing on the platform. Child safety online has been a prominent point of discussion in recent years, as policymakers have introduced bills seeking to make platforms more accountable for the well-being of children who use their services, while navigating widespread concerns that such measures could violate constitutional rights.


This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.