Cambridge Analytica and Facebook Fallout: The Renforce/UGlobe Seminar

On 11 April 2018, Facebook founder and CEO Mark Zuckerberg testified before the US Congress. At the heart of the testimony was the Cambridge Analytica fallout over the misuse of Facebook users’ data, which continues to expose the vulnerabilities of social media companies and their impact on politics. The business model of social media companies is based on the sale of advertisements and the provision of apps which allow the platforms to make the most of users’ data. Their unique strength resides in the “targeted advertising” of potential consumers and voters alike. While Facebook and similar social media platforms generate enormous benefits by enabling the sharing of information, the companies’ reliance on users’ data creates an unprecedented risk of information misuse, not only in a commercial sense, but also in political campaigns.

On 9 April, two days before the Zuckerberg hearings, Utrecht University’s RENFORCE and UGlobe held a seminar on the Cambridge Analytica fallout. The seminar was organized by Lucky Belder and Machiko Kanetake, coordinators of the UGlobe project on “Disrupting Technological Innovation? Towards an Ethical and Legal Framework”. Here is the gist of the seminar, together with our own observations.

New and old phenomena

In the midst of the ostensibly unprecedented data misuse, we ought to ask whether the Cambridge Analytica fallout represents an entirely new phenomenon. Sybe de Vries set the tone of the Utrecht seminar by posing this overall question. In many respects, the Cambridge Analytica fallout is a familiar story. Data breaches and the mishandling of private data happen on a daily basis. One of the biggest data breaches, disclosed in 2016, ultimately compromised 3 billion Yahoo accounts. At the same time, the fact that the Cambridge Analytica fallout was not a standard case of “data breach” embodies an element of novelty. The business model of social media companies cultivated the conditions for Facebook’s previously lax treatment of its users’ data, which allowed Cambridge Analytica to gain access to the data of up to 87 million people. In other words, there is an inherent tension in expecting social media companies to be the guardians of personal data. This tension is compounded by the pervasive use of “Like” and “Share” buttons, each click on which reveals users’ personalities and preferences. Overall, the Cambridge Analytica fallout made the most of Facebook’s business model, its scale, and its users’ personality traits.

Data protection under EU law

Given that the data of up to 2.7 million EU citizens were among those misused, it is clear that the EU and its member states ought to strengthen the privacy protection laws applicable to IT giants. As Sybe de Vries highlighted during the seminar in Utrecht, the horizontal effect of fundamental rights and freedoms should provide a useful framework for the development of regulatory measures in a multi-level regulatory society. In this regard, the EU’s General Data Protection Regulation (GDPR), which takes effect on 25 May 2018, will strengthen the regulation of the processing of personal data, as Stefan Kulk remarked during the seminar. It requires social media companies such as Facebook to ensure that users can make an informed decision about the use of their data and about whether their data are shared with external developers. In particular, the GDPR’s duty to protect personal data “by design” could require social media companies to implement measures that prevent data from being shared with external developers that cannot be trusted.

Regulating unfair competition

The misuse of Facebook users’ data is also embedded in the company’s sheer market dominance. Despite the Cambridge Analytica saga, many people continue to use Facebook, due at least in part to the lack of any viable alternatives. In this regard, competition law should fully catch up with the business model of social media companies, as Rogier de Vrey observed during the seminar in Utrecht. In fact, in December 2017, Germany’s competition authority reached the preliminary conclusion that Facebook had abused its dominant market position. The stringent protection of personal data should thus be accompanied by the provision of alternative social media platforms to consumers. Protection against the misuse of Facebook users’ data may also be available under consumer protection law. More specifically, a lack of transparency in online platforms’ practices (e.g. as a result of a misleading privacy policy or of non-compliance with information requirements) might amount to an infringement of unfair trading practices law.

Building societal resilience to disinformation

In the long run, the political or commercial misuse of personal data can only be mitigated by building societal resilience to disinformation and strengthening the diversity of media and of societies in general. This is one of the overall messages of the European Commission’s High Level Expert Group on Fake News and Online Disinformation, which released its final report in March 2018. During the seminar in Utrecht, Madeleine de Cock Buning, who chaired the High Level Expert Group, introduced some of its key findings, including the urgent need to promote media and information literacy to help users navigate the digital media environment. As Machiko Kanetake presented during the seminar, Facebook’s apparent failure to detect transfers of personal data to third parties via an app disguised as a psychological test highlights the possible roles of explicit and implicit biases on the part of those who design Facebook’s privacy-related algorithms. The algorithms were apparently not designed in a manner that would allow Facebook to properly identify the risks of the misuse of personal data. (Think, for instance, of the algorithms used by banks: they are designed to identify as many fraudulent transactions as possible.)
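The bank analogy above can be made concrete with a minimal sketch of rule-based risk scoring, the simplest form of the fraud-detection logic banks apply to payments. Everything here — the function names, risk signals, weights, and threshold — is invented purely for illustration, not drawn from any actual banking or Facebook system; the point is only that such systems are deliberately designed to surface as many risky events as possible for review.

```python
# Hypothetical sketch of rule-based transaction risk scoring.
# All risk signals, weights, and thresholds are invented for illustration.

def risk_score(transaction):
    """Return a score in [0, 1]; higher means more suspicious."""
    score = 0.0
    if transaction["amount"] > 10_000:  # unusually large transfer
        score += 0.4
    if transaction["country"] not in transaction["usual_countries"]:
        score += 0.3                    # unfamiliar jurisdiction
    if transaction["per_hour"] > 5:     # burst of rapid transactions
        score += 0.3
    return min(score, 1.0)

def flag_suspicious(transactions, threshold=0.5):
    """Flag every transaction whose score meets the threshold for review."""
    return [t for t in transactions if risk_score(t) >= threshold]

if __name__ == "__main__":
    txs = [
        {"amount": 50, "country": "NL",
         "usual_countries": {"NL"}, "per_hour": 1},
        {"amount": 25_000, "country": "XX",
         "usual_countries": {"NL"}, "per_hour": 8},
    ]
    for t in flag_suspicious(txs):
        print("flagged:", t["amount"], t["country"])
```

A comparable design for a social media platform might score unusual patterns of data access by third-party apps (volume, speed, breadth of profiles reached) rather than payments; the contrast drawn in the seminar is that no such risk-surfacing logic appears to have caught the Cambridge Analytica transfers.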

Overall, to prevent the future misuse of private data involving social media, we must sharpen our critical insight into the information and systems that surround us and that shape our thinking and behavior. In this sense, the Cambridge Analytica fallout was a wake-up call, not only for social media companies and regulators, but also for all of us who live in the information age.

Machiko Kanetake and Lucky Belder


About Machiko Kanetake

Machiko Kanetake is Associate Professor of Public International Law at Utrecht University. She is a member of the Management Board of the Utrecht Centre for Regulation and Enforcement in Europe. Previously, she was a postdoctoral researcher at the University of Amsterdam and held visiting appointments at NYU School of Law, Harvard Law School, and King's College London. She is a senior editor of the Leiden Journal of International Law.