Why Celebrations of Google’s Privacy Announcement Are Misplaced
from Net Politics and Digital and Cyberspace Policy Program


Google's announcement, which follows years of scrutiny from privacy advocates and regulators, is not as benevolent as it appears.
The logo of Google is seen. REUTERS/Charles Platiau/File Photo

Maya Villasenor is a former Digital and Cyberspace Policy program intern at the Council on Foreign Relations and an engineering student at Columbia University.

After committing last year to remove third-party cookies from its Chrome browser, Google announced in March that it will no longer use individual browsing histories and behaviors to sell targeted advertisements. The search giant, which captures the largest share of global digital advertising spend, is poised to revolutionize the online advertising industry. However, its announcement, which follows years of scrutiny from privacy advocates and regulators, is not as benevolent as it appears.


Although Google will no longer use third-party data to serve personalized ads, it will continue to use its own "first-party" data. Because Google operates an extensive suite of high-traffic products and services, including Chrome (roughly 65 percent of the global browser market) and Android (roughly 85 percent of the global mobile operating system market), its first-party data is remarkably comprehensive, precise, and, from a privacy perspective, potentially invasive. Google no longer needs or wants to depend on data derived from tracking users outside its own ecosystem, and thus its announcement is better viewed as a reflection of the extraordinary scale of the data it has already collected than as an altruistic, pro-privacy decision.

In lieu of individualized tracking, Google products (e.g., Chrome) will use an ostensibly "privacy-preserving" approach called Federated Learning of Cohorts (FLoC), a technology that reportedly targets ads without the use of personally identifiable information (PII). Unlike cookies and other tracking tools, FLoC sorts users into clusters ("cohorts") that share similar browsing and online behavioral patterns. Users will be presented to third parties, including advertisers, as members of their broader cluster, which is defined by inferred traits such as interests, occupation, location, and activities. Thus, although FLoC could better protect a user's individual identity, it will still share information about a group of users that could qualify as private or sensitive (e.g., a group composed of people whose online behavior suggests they share a particular medical condition).
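
To make the mechanism concrete, the sketch below is a minimal, hypothetical version of cohort assignment in the spirit of FLoC's SimHash-based proof of concept. The bit width, hashing scheme, and domain names are illustrative assumptions, not Google's actual implementation.

```python
import hashlib

# A minimal SimHash-style cohort assignment, for illustration only.
# Real FLoC used different features, bit widths, and anonymity checks.

NUM_BITS = 16  # width of the cohort ID (an assumption for this sketch)

def domain_hash(domain: str) -> int:
    """Map a visited domain to a stable 64-bit integer via SHA-256."""
    return int.from_bytes(hashlib.sha256(domain.encode()).digest()[:8], "big")

def simhash_cohort(visited_domains: list[str], num_bits: int = NUM_BITS) -> int:
    """Collapse a browsing history into a short cohort ID.

    Each bit of the ID is a majority vote over the corresponding bit of
    every visited domain's hash, so users with similar histories tend to
    land in the same cohort. The ID reveals no single site, yet still
    encodes behavioral similarity -- which is exactly the concern above.
    """
    counts = [0] * num_bits
    for domain in visited_domains:
        h = domain_hash(domain)
        for bit in range(num_bits):
            counts[bit] += 1 if (h >> bit) & 1 else -1
    return sum(1 << bit for bit in range(num_bits) if counts[bit] > 0)

# Two users with overlapping histories will often share a cohort ID; the
# ID, not the raw history, is what advertisers would observe.
alice = simhash_cohort(["news.example", "health-forum.example", "maps.example"])
bob = simhash_cohort(["news.example", "health-forum.example", "shop.example"])
print(f"alice: {alice:04x}, bob: {bob:04x}")
```

If many members of a cohort visit, say, sites about a particular medical condition, the cohort ID itself becomes a sensitive signal, even though no individual browsing history is ever disclosed.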

Furthermore, myriad online service providers and advertising incumbents have already spent years accumulating individualized tracking information, which can be combined with FLoC to determine which clusters are more profitable and reliable, in turn creating risks of bias. Google, equipped with its extensive first-party data platform, will be among the data behemoths able to complement FLoC with individualized user information (location history, travel habits, knowledge about family and friends, and more) to better understand and discriminate between clusters. Facebook, despite its frequent grumbles about upcoming "privacy" changes in the advertising industry, will also be able to derive similar insights from its own extensive troves of data.
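
To illustrate that dynamic, here is a hypothetical sketch of how an incumbent could fold legacy, individual-level profiles into cohort-level scores. The user records, cohort IDs, and "lifetime spend" field are invented for the example; no real dataset or API is implied.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical legacy profiles accumulated through pre-FLoC individualized
# tracking. Field names and values are invented for illustration.
legacy_profiles = {
    "user_a": {"cohort": 0x3F2A, "lifetime_spend": 410.0},
    "user_b": {"cohort": 0x3F2A, "lifetime_spend": 380.0},
    "user_c": {"cohort": 0x91C4, "lifetime_spend": 35.0},
}

# Roll individual histories up to the cohort level. Once a cohort has a
# score, every *future* member inherits it -- targeted or excluded without
# any individual data about them ever being collected.
spend_by_cohort = defaultdict(list)
for profile in legacy_profiles.values():
    spend_by_cohort[profile["cohort"]].append(profile["lifetime_spend"])

cohort_scores = {cohort: mean(s) for cohort, s in spend_by_cohort.items()}
print(cohort_scores)  # {16170: 395.0, 37316: 35.0}
```

This is why pre-FLoC data confers a durable advantage: the scores are built from individualized histories that newer entrants, discussed next, never had the chance to collect.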

The combination of FLoC with traditional tracking data creates its own set of concerns. For instance, smaller, newer market entrants without access to pre-FLoC data will be at a competitive disadvantage compared to large incumbents that have access to information derived from years of individualized tracking. In addition, the biases embedded within individualized tracking data will continue to propagate for as long as that data is used, regardless of whether its collection has ceased.

FLoC does little to satisfy the spirit of privacy, and regulators and lawmakers are poorly equipped to respond. The privacy frameworks in the United States and Europe are predicated almost entirely on protecting PII, an approach that, while necessary and laudable, fails to account for newer, more sophisticated data analysis techniques that sidestep PII rules. FLoC shares personal information that is not explicitly individualized, weakening the impact of Europe's General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and Virginia's recent data privacy law. Newly proposed regulation in the European Union also does not squarely address the complex ways in which approaches such as FLoC can extract privacy-relevant information from aggregated data, and thus risks being outdated before it is even adopted.


It is not unusual for technological innovation to outpace regulation, but the disconnect is particularly acute when our definition of privacy itself is increasingly insufficient to address new methods used to extract (and profit from) personal information. Silicon Valley’s leaders are well aware of the current bipartisan skepticism about big tech, and have realized that they are better served by positioning themselves as allies rather than adversaries. Although Google is hoping to garner good press and goodwill from its announcement, it is too early, and likely wrong, to celebrate.

Creative Commons: Some rights reserved.
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.