GDPR Compliance in Light of Heavier Sanctions to Come: at Least in Theory

[su_pullquote align=”right”]By W. Gregory Voss & Hugues Bouthinon-Dumas[/su_pullquote]

Ridiculously low ceilings on administrative fines hindered the effectiveness of EU data protection law for over twenty years. US tech giants may have seen these fines as a cost of doing business. Now, over two years after the commencement of the European Union’s widely heralded General Data Protection Regulation (GDPR), the anticipated billion-euro sanctions of EU Data Protection Authorities, or ‘DPAs’, which were to have changed the paradigm, have yet to be issued.

Newspaper op-eds and Twitter posts by activists, policymakers and consumers evidence a sense of unfulfilled expectations. DPA action has not supported the theoretical basis for GDPR sanctions, namely deterrence. However, the experience to date and reactions to it inspire recommendations for DPAs and companies alike.

In our working paper, EU General Data Protection Regulation Sanctions in Theory and in Practice, forthcoming in Volume 37 of the Santa Clara High Technology Law Journal later in 2020, we explore the theoretical bases for GDPR sanctions and test the reality of DPA action against those bases. We use an analysis of the various functions of sanctions (confiscation, retribution, incapacitation, etc) to determine that their main objective in the GDPR context is to act as a deterrent, inciting compliance.

To achieve deterrence, sanctions must be severe enough to dissuade. This has not been the case under the GDPR, as an examination of the actual amounts of the sanctions shows, which is paradoxical given the substantial increase in the potential maximum fines. Sanctions prior to the GDPR, with certain exceptions, were generally capped at amounts under €1 million (eg £500,000 in the UK, €100,000 in Ireland, €300,000 in Germany and €105,000 in Sweden).

Since the GDPR became applicable, sanctions have ranged from €28 for Google Ireland Limited in Hungary to €50 million for Google Inc in France, far below the potential maximum fine of 4% of turnover, or approximately €5.74 billion for Google Inc based on 2019 turnover. While the highest sanctions under the GDPR have been substantially greater than those assessed under the prior legislation, they have remained far from the maximum fines the GDPR allows.

Nonetheless, this failure of DPAs, especially the Irish DPA responsible for overseeing most of the US tech giants, has not gone unnoticed, as shown by EU institutional reports on the GDPR’s first two years. Indeed, increased funding of DPAs and greater use of the cooperation and consistency mechanisms have been called for, highlighting the DPAs’ current lack of means. Here, we underscore the fact that, in the area of data protection, there has perhaps been too much reliance on national regulators, whereas in other fields (banking regulation, credit rating agencies, etc), the European Union has tended to move toward centralization of enforcement.

Despite these shortcomings, the GDPR’s beefing-up of the enforcement toolbox has allowed for actions by non-profit organizations mandated by individuals (such as La Quadrature du Net, which took action against tech giants after the GDPR came into force), making it easier for individuals to bring legal proceedings against violators in the future. In addition, an EU Directive on representative actions for the protection of consumers’ collective interests is in the legislative pipeline.

On the side of businesses, there has been a lack of understanding of certain key provisions of the GDPR and, as compliance theorists tell us, certain firms may be overly conservative and tend to over-comply out of too great a fear of sanctions. This seems to be the case with the GDPR’s provisions on data breach notifications, where unnecessary notifications have overtaxed DPAs. The one-stop-shop mechanism, which is admittedly complex, has also created misunderstanding.

This mechanism allows the DPA of a non-EU company’s main establishment in the European Union to become the lead supervisory authority in procedures involving that company, which could potentially lead companies to forum-shop on this basis. However, there is also a requirement that the main establishment have decision-making power with respect to the data processing to which the procedure relates. Failure to consider the latter requirement could result in companies selecting main establishments in countries where there is no such decision-making power, thereby defeating their attempts at forum-shopping for a lead supervisory authority for certain processing. One example of this culminated in the French DPA (CNIL)’s largest fine so far, imposed on Google, even though Google argued that the Irish DPA was its lead supervisory authority.

As we explain in our paper, a lack of GDPR enforcement carries risks. Not only does it undercut the deterrent effect of the GDPR, but it also provides a tenuous basis for risk assessment by companies. While the GDPR’s first two years involved a sort of grace period, when DPAs focused on educating companies and spent time painstakingly investigating complaints to litigation-proof their cases, some companies model their regulatory risk assessment on enforcement histories. If there is a push for greater enforcement, which EU institutional reports would tend to foreshadow, the basis for those companies’ models will prove inaccurate. Furthermore, such dependence on risk evaluation ignores the potential benefits to firms, in increased trust and efficiency, of expanding compliance to apply a higher data protection standard to customers worldwide.

Thus, we argue, not only should DPAs sanction offenders, but they should sanction them severely when justified, establishing the deterrent effect necessary for EU data protection law. Moreover, DPAs’ communication should in many cases be modified to stop downplaying sanctions: such communication is counterproductive to the desired effect of sanctions. Companies, on the other hand, should make efforts to understand the GDPR fully and embrace compliance, leaving behind data protection forum-shopping as a potentially ineffective tactic. Furthermore, the typical securities lawyer’s warning that ‘past performance is no guarantee of future results’ may be a forewarning to companies using past sanctions to build their compliance risk-assessment models: the results may not hold for the future.

Gregory Voss is an Associate Professor in the Human Resources Management & Business Law Department at TBS Business School.

Hugues Bouthinon-Dumas is an Associate Professor in the Public and Private Policy Department at ESSEC Business School.

This article originally appeared on the Oxford Business Law Blog (OBLB) and is reproduced with permission and thanks.

[su_pullquote align=”right”]By Gregory Voss and Kimberly Houser[/su_pullquote]

What the Cambridge Analytica debacle and the resulting U.S. Senate hearing revealed in no uncertain terms is that the U.S. does not have adequate data privacy laws. For all their grandstanding, the Senators demonstrated a lack of understanding not only of the workings of the data economy, but also of the laws of their own country.

When the EU General Data Protection Regulation (GDPR) became applicable on May 25, 2018, the disparity between the laws in the U.S. and those in the EU became very apparent. In our working paper, GDPR: The End of Google and Facebook or a New Paradigm in Data Privacy?, slated for the fall edition of the Richmond Journal of Law and Technology, we explore these differences in terms of ideology, enforcement actions, and the laws themselves.

The American tech business model is to provide services free of charge in exchange for a user’s personal data. This comports with data protection law in the U.S., which is sector-specific, meaning only certain types of data, such as medical and financial data, are protected, and only to the extent provided in the applicable statute. There is no omnibus U.S. federal data privacy law relating to the private sector. While the Federal Trade Commission (FTC) is the de facto privacy authority in the U.S., its history of enforcement actions against U.S. tech companies is quite limited. Historically, it has only been when a company provides a privacy policy and then fails to comply with it that the FTC has taken action, under Section 5 of the FTC Act, which prohibits ‘unfair or deceptive acts or practices’.

The European model of data privacy is based on a human rights foundation, with both privacy and data protection being fundamental rights. Under the predecessor to the GDPR (the 1995 Directive), numerous actions were brought against U.S. technology companies for violations of EU member state laws. Despite this long history of successful enforcement actions, these U.S. tech companies did not significantly change their business model with respect to data obtained from the EU, due to the low maximum fines under the member states’ laws (eg, a €150,000 fine in France for a company valued at €500 billion).

The American ideology behind data privacy balances an entity’s ability to monetize the data it collects (thus encouraging innovation) against a user’s expectation of privacy (with those expectations apparently being quite low in the U.S.). In the EU, the focus is on protecting users’ privacy. A telling example of this dichotomy is the Google Spain case. A Spanish citizen sought to have certain information removed from Google search results, as permitted under EU law. Google objected to this in court. On the one hand stood freedom of speech (paramount in the U.S.) and the public’s right to know, asserted by Google; on the other, the rights to privacy and to be forgotten, argued by the European plaintiff. The European Court of Justice ruled that the balancing of interests tipped in favor of privacy for the Spaniard.

As we explain in our paper, U.S. federal laws are sector-specific, with the primary areas covered in the Health Insurance Portability and Accountability Act (health care information), the Gramm-Leach-Bliley Act (financial information), the Fair Credit Reporting Act (credit information) and the Children’s Online Privacy Protection Act (children’s information). In addition, states have enacted varying data security laws aimed at requiring data breach notifications.

The European approach, on the other hand, has always been more overarching. The 1995 Directive, for example, required each EU member state to adopt comprehensive privacy protection laws meeting the objectives of the Directive. While the adoption of a directive allowed flexibility in each member state’s creation of its own privacy laws, in 2012, the European Commission determined that the law needed to be updated. The GDPR was enacted to: provide harmonization of the member states’ laws, incorporate advances in technology, eliminate administrative filing burdens for companies, and, as we posit in our paper, level the playing field for technology companies using the personal data of those located in Europe.

Because U.S. companies have been able to monetize the data they collect with very few restrictions or consequences, they were able to become behemoths in the tech field, with an 80% market share for Facebook and a 90% market share for Google. The rules, however, have now been updated with respect to EU data. The GDPR requires, among other things, verifiable consent prior to using a user’s data and consent for each secondary use. There is no corresponding requirement in the U.S.; companies operating under U.S. law primarily rely on an opt-out mechanism and are not required to disclose secondary uses of that data. The GDPR also provides a right to be forgotten, a right to data portability, the ability to opt out of automated decision-making (profiling), and requires a lawful basis for processing data. None of these rights is afforded to U.S. citizens under U.S. federal law.

Because the GDPR is extraterritorial in scope, the law will apply regardless of where a company is located if it collects or processes the personal data of those located in Europe, where processing relates to the offering of goods or services (either for pay or “free”) to such “data subjects,” or to the monitoring of their behaviour, to the extent such behaviour takes place in the EU. This leaves us with the question: will the GDPR be the end of Google and Facebook or present a new paradigm in privacy protection? This remains to be seen. However, given that fines may now be assessed in the billion-euro range under the GDPR rather than the thousand-euro range of the past, it does seem likely that the U.S. business model (data for service) will need to adapt, at least with respect to data from the EU.

This article originally appeared on the Oxford Business Law Blog.

[su_pullquote align=”right”]By Gregory Voss[/su_pullquote]

How likely is it that the reforms launched in 2012 by the European Union (EU), with the aim of ensuring a high level of personal data protection for the citizens of its 28 member states, will become applicable in 2017? It is possible, but the European Parliament, the Council of Ministers and the European Commission have yet to reach an agreement: informal three-way discussions are taking place.

Since June 2015, these three EU institutions have been jointly drafting a text for the General Data Protection Regulation (GDPR). There are still a few points on which the Parliament and the Council disagree, in particular with regard to obtaining an individual’s consent for the processing of personal data, the rights and responsibilities of those collecting data, and the amounts of fines for non-compliance.

A Commission proposal for new legislation on personal data protection was made back in 2012, but the draft regulation, passed by the Parliament on March 12, 2014, is still awaiting validation by the Council. These reforms will help protect European citizens and their personal data even with respect to international companies whose headquarters are outside the EU but which nevertheless process data online. While the degree of personal data protection in Europe is generally quite high, the financial penalties are too low, in contrast to those enforced in the United States.

When the three EU bodies have agreed on the final draft text, it can be adopted only after two consecutive readings of the same text by the Parliament, whose members are directly elected by EU citizens, and after approval by the Council, which represents the governments of the 28 member states. Once adopted (most likely in 2016, though some were pushing for adoption at the end of 2015), the regulation will become applicable two years later.

The GDPR will harmonize European law and may deliver an additional benefit by triggering a broader process leading to the standardization of international legislation on the protection of personal data. Moreover, the reduction of the administrative burden arising from this single piece of legislation will enable savings of €2.3 billion per year, according to the Commission’s calculations.

The process may seem to be taking a long time, but it should be borne in mind that it took five years to finalize the 1995 European directive on personal data protection. The GDPR is essentially at the three-and-a-half-year mark, so there is still time.

The GDPR has been subject to intense lobbying efforts by the representatives of those who process data. While they may slow down the legislative process, these actors can play a legitimate role in informing legislators about the practical realities faced by the companies who collect data.

Following the Snowden revelations, efforts to reform the legislation have experienced numerous upheavals. In June 2013, Edward Snowden, a former CIA employee and National Security Agency (NSA) contractor, revealed that the US government had collected personal data concerning individuals living outside the US from nine of the biggest American technology companies, particularly as part of an electronic monitoring program known as PRISM. On October 21, 2013, the European Parliament proposed a text stipulating that the company responsible for data processing, or its subcontractor, would have to inform the data subject of any communication of their personal data to the public authorities in the previous twelve months. This provision was clearly influenced by the PRISM case.

In general, revelations such as this one, relating to data protection, help stimulate the debate about privacy in Europe, even if they have weakened trust between the EU and the United States. On October 6, 2015, as a result of the transfer of data concerning an Austrian citizen to the United States by the European subsidiary of Facebook, the Court of Justice of the European Union (CJEU) ruled against the validity of the Safe Harbor Privacy Principles, which had been used to justify the transfer and which include a clause allowing the US authorities to access the personal data of European citizens in the event of threats to US security. The CJEU’s decision, which followed the conclusions of the Advocate General, invalidated the Safe Harbor, which, according to Voss, “is a problem for more than 4,000 US and European companies that depend on the Safe Harbour Privacy Principles for the transfer of personal data to the United States.” It remains to be seen what actions the European institutions and European and US companies will take following this decision.

On the other hand, even in the absence of a GDPR, the Google Privacy Policy case shows that EU member states already have tools to oblige the operator of a search engine to respect privacy and personal data protection laws. In this vein, a number of cases have led data protection authorities in Germany, Spain, France, Italy, the Netherlands and the United Kingdom to impose penalties on Google, including fines amounting to hundreds of thousands of euros. While these fines are relatively small compared with Google’s annual turnover (€59 billion in 2014), they prefigure the more severe, turnover-based enforcement actions foreseen in the European legislative proposals.

In France, the Commission Nationale de l’Informatique et des Libertés (CNIL – the French data protection authority) disagrees with Google about de-listing following the Google Spain decision by the CJEU. Since the court recognized this right in 2014, any person may request that the operator of a search engine erase the search results that appear in relation to their name. As a result, Google has received tens of thousands of requests from French citizens. It then proceeded to de-list results on its European search engine domains (.fr, .es, .co.uk, etc.). But it did not extend the de-listing to other geographic domains or to google.com, which any user can search. In May 2015, the CNIL requested that Google proceed with de-listing from all its geographic domains. Google, however, argues that this decision constitutes an infringement of the public’s right to information and is, therefore, a form of censorship. A CNIL rapporteur (the official who manages the case) will no doubt be appointed to resolve this issue.

While the EU is working hard to hammer out a jointly agreed regulation on the protection of personal data, its member states, such as France, continue to strengthen their legislative arsenals. On September 26, 2015, the French government presented for public consultation a draft bill on a “digital republic”, comprising some thirty articles on the confidentiality of electronic correspondence, portability of files and open access to public data. Public consultation on the development of such a text is an interesting approach, the effects of which deserve to be monitored.

[su_note note_color=”#f8f8f8″]This article was written by Gregory Voss, along with the articles “European Union Data Privacy Law Developments”, published in The Business Lawyer (Volume 70, Number 1, Winter 2014-2015); “Looking at European Union Data Protection Law Reform Through a Different Prism: the Proposed EU General Data Protection Regulation Two Years Later”, published in the Journal of Internet Law (Volume 17, Number 9, March 2014); and “Privacy, E-Commerce and Data Security”, published in “The Year in Review”, an annual publication of the ABA Section of International Law (Spring 2014) and co-authored with Katherine Woodock, Don Corbet, Chris Bollard, Jennifer L. Mozwecz, and João Luis Traça.[/su_note]

[su_box title=”Practical applications” style=”soft” box_color=”#f8f8f8″ title_color=”#111111″]The effect of the GDPR on businesses will depend on the final text adopted by the EU. It is certain that greater accountability will be imposed on companies that manage personal data. Some companies will probably have to create new data protection officer (DPO) posts, defined on a model similar to that of the “correspondant informatique et libertés” (CIL) in France. Companies specializing in conducting privacy impact assessments will also emerge. The author therefore advises business leaders to monitor developments in personal data protection legislation closely, in order to be able to comply with new legislation as soon as it comes into force. He proposes raising employees’ awareness through training on data protection. Finally, companies will have to implement adequate procedures to comply with personal data protection legislation, including procedures enabling the data breach notifications that will be required by the GDPR.[/su_box]

[su_spoiler title=”Methodology”]To produce these articles about data-protection legislation, the author has analysed many legal documents and “hundreds of pages of proposals, amendments and opinions”, especially those resulting from the work carried out by WP29, the independent EU working group on the handling of personal data. In his articles, he puts the proposals of European authorities to adopt a GDPR into perspective and offers practical advice for businesses. He has also examined the changes in opinion of various European bodies, the European Commission, Parliament and Council, and has studied the reactions of legislators to Edward Snowden’s revelations on electronic surveillance.[/su_spoiler]