Could Europe's New Data Protection Regulation Curb Online Disinformation?
from Net Politics and Digital and Cyberspace Policy Program


European leaders are rushing to implement new laws to curb disinformation on social media. However, existing European data protection laws might actually make it harder for bad actors to spread fake news online. 
German Chancellor Angela Merkel and French President Emmanuel Macron arrive to a news conference at the Chancellery in Berlin on May 15, 2017. Fabrizio Bensch/Reuters

Special Counsel Robert Mueller’s bombshell indictment on Friday, with its new details about Russian efforts in the 2016 election, may spur U.S. political leaders to finally take action to make it more difficult for bad actors to exploit social media to undermine future elections. They may look to Europe, where political leaders have taken steps to curb online disinformation.

Two weeks ago, UK Prime Minister Theresa May called on internet companies to curb online abuse. Her speech follows French President Emmanuel Macron’s more muscular announcement, at his New Year’s address to journalists, that he would reform French media law to combat the spread of fake news on social media, including by empowering judges to remove fake news content in the period preceding elections. German leaders have already implemented a new law imposing fines up to €50 million if internet firms do not remove “obviously illegal” hate speech within twenty-four hours and other illegal content within seven days.

More on:

European Union

Digital Policy

Privacy

Influence Campaigns and Disinformation

Russia

However, existing EU law may help limit the potency of disinformation without requiring a judge or platform to adjudicate what is or is not hate speech or fake news. The sweeping new EU General Data Protection Regulation (GDPR) contains provisions that could restrict how bad actors tailor disinformation, by limiting their use of the personal data needed to target susceptible individuals effectively. Although Facebook’s deputy head of privacy has described GDPR as the biggest change for the company since it was founded, few have seen it as a tool against disinformation.

Disinformation campaigns exploit the same targeted internet advertising system used by familiar brands. Internet advertisers of all kinds gather personal data—including past purchases, petitions signed, sites visited, news sources or advertisements clicked on—from a host of sources across devices. They segment people into categories and then target audiences of people online like those in the segments whose preferences they already know. Targets are encouraged to like pages, follow accounts, and share information. Then they are observed to see how they interact with material they are shown online; the content and the audiences are tweaked based on the responses.
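The segment-and-target cycle described above can be sketched in miniature. This is an illustrative toy, not any platform's actual algorithm: the user names, topics, and the 0.9 similarity threshold are all hypothetical. It builds a "seed" segment from known users' engagement signals, then flags candidates whose interest profiles resemble the segment's average, the basic idea behind lookalike targeting.

```python
from math import sqrt

# Hypothetical interest profiles: each user is a vector of engagement
# signals (pages liked, petitions signed, ads clicked) per topic.
seed_audience = {
    "user_a": {"topic_x": 5, "topic_y": 1},
    "user_b": {"topic_x": 4, "topic_y": 0},
}

candidates = {
    "user_c": {"topic_x": 6, "topic_y": 1},   # interests resemble the seed segment
    "user_d": {"topic_x": 0, "topic_y": 7},   # different interests entirely
}

TOPICS = ["topic_x", "topic_y"]

def vec(profile):
    # Fixed-order vector of engagement counts, 0 for missing topics.
    return [profile.get(t, 0) for t in TOPICS]

def cosine(u, v):
    # Cosine similarity between two interest vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Average the seed vectors into a segment "centroid"...
centroid = [sum(col) / len(seed_audience)
            for col in zip(*(vec(p) for p in seed_audience.values()))]

# ...then target only candidates whose profiles closely match it.
targets = [name for name, profile in candidates.items()
           if cosine(vec(profile), centroid) > 0.9]
print(targets)  # ['user_c']
```

GDPR's consent requirements bite at the first step of this pipeline: without an opt-in, the engagement signals that reveal political views could not lawfully feed the profiles at all.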

In the case of disinformation campaigns, studies show that content is calibrated to maximize outrage among the specific audiences targeted. Because the same messages would be refuted, or even produce a backlash, if disseminated more broadly, micro-targeting is critical. In the 2016 election, Russian-linked trolls used internet advertising tools to entice Americans to like and follow fictitious accounts and to spread disinformation to them. They could then observe what users responded to and sow further division through demonstrably false conspiracy theories. Facebook revealed recently that during the 2016 presidential election more than 62,000 users committed to attend 129 events organized by Russian trolls, including simultaneous rallies by the opposing groups Heart of Texas and United Muslims of America that drew their separate audiences to the same place at the same time.

The new GDPR rules are likely to curb use of this model. While no silver bullet against the determined efforts of bad actors, GDPR sharply limits the ability to micro-target based on an online audience’s political views. GDPR requires companies processing data that reveals political opinions or philosophical beliefs (well-established special categories in EU privacy law) to obtain explicit user consent separately for each specific use and for each entity that will receive the personal data, and companies cannot make use of their service contingent on the user opting in. There is an exception for political data used by political parties, but not by others. It is safe to assume that few people will agree to be targeted based on their political and philosophical views. Bad actors will therefore find it more difficult to shop incendiary messages to the specific audiences who will respond favorably.

GDPR also could open up the black box of online political advertising. Users will be provided clear notice of how their data is being used and can revoke their opt-in consent at any time. GDPR also creates a right of “data portability,” allowing users to take their data with them to another provider. This provision may be valuable for civil society and academics to gain access to data from cooperating users to understand what data internet platforms compile and how it is used, thereby providing further accountability.


GDPR rules apply to the collection of any personal data from a device in Europe, meaning they reach the major internet platforms and other entities outside the EU that obtain personal data from Europe. Furthermore, online platforms risk being jointly liable for the actions of third-party companies on whose behalf they furnish data if those third parties do not use the data in full compliance with the stringent GDPR requirements. Potential sanctions for violations may be severe—up to 4 percent of global annual revenue or €20 million, whichever is greater, for a serious offence.

Certainly, GDPR rules will be very costly to comply with and create bureaucratic and operational headaches for companies. They also may shift power from third-party data brokers and ad-tech intermediaries to the major platforms which will be collecting the consent of users. And of course bad actors may evade the rules entirely. Whether they will also produce benefits—by creating more discipline in the approach to internet advertising data and constraining the ability of propagandists to target individuals based on their beliefs—will depend on enforcement.

The European approaches aren’t the only—or even the best—ways to tackle disinformation. However, the United States may be able to benefit from Europe’s reforms, identifying what works and what doesn’t in curbing disinformation. And internet platforms could decide, as an experiment, to extend their EU data practices for political and philosophical views to the United States—requiring opt-in consent to use or disclose political views for anyone other than a political party—in order to blunt the foreign interference that is almost sure to come later this year. Facebook already appears to be planning to roll out some of its GDPR changes worldwide. The United States may thus benefit indirectly from European data protection rules.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.