By Gregory Voss and Kimberly Houser

What the Cambridge Analytica debacle and the resulting U.S. Senate hearing revealed in no uncertain terms is that the U.S. does not have adequate data privacy laws. Despite their grandstanding, the Senators demonstrated a lack of understanding not only of the workings of the data economy, but also of the laws of their own country.

When the EU General Data Protection Regulation (GDPR) became applicable on May 25, 2018, the disparity between the laws in the U.S. and those in the EU became very apparent. In our working paper, GDPR: The End of Google and Facebook or a New Paradigm in Data Privacy?, slated for the fall edition of the Richmond Journal of Law and Technology, we explore these differences in terms of ideology, enforcement actions, and the laws themselves.

The American tech business model is to provide services free of charge in exchange for a user’s personal data. This comports with data protection law in the U.S., which is sector-specific, meaning only certain types of data, such as medical and financial data, are protected, and only to the extent provided in the applicable statute. There is no omnibus U.S. federal data privacy law governing the private sector. While the Federal Trade Commission (FTC) is the de facto privacy authority in the U.S., its history of enforcement actions against U.S. tech companies is quite limited. Historically, it has only been when a company publishes a privacy policy and then fails to comply with it that the FTC has taken action, under Section 5 of the FTC Act, which prohibits ‘unfair or deceptive acts or practices’.

The European model of data privacy is based on a human rights foundation, with both privacy and data protection recognized as fundamental rights. Under the predecessor to the GDPR (the 1995 Directive), numerous actions were brought against U.S. technology companies for violations of EU member state laws. Despite this long history of successful enforcement actions, these U.S. tech companies did not significantly change their business model with respect to data obtained from the EU, because the maximum fines under the member states’ laws were so low (e.g., a €150,000 fine in France against a company valued at €500 billion).

The American ideology behind data privacy is the balancing of an entity’s ability to monetize the data it collects (thus encouraging innovation) against a user’s expectation of privacy (with those expectations apparently being quite low in the U.S.). In the EU, the focus is on protecting the user’s privacy. A telling example of this dichotomy is the Google Spain case. A Spanish citizen sought to have certain information removed from Google search results, as permitted under EU law, and Google objected in court. On one side stood freedom of speech (paramount in the U.S.) and the public’s right to know, asserted by Google; on the other, the right to privacy and to be forgotten, argued by the European plaintiff. The European Court of Justice ruled that the balancing of interests tipped in favor of privacy for the Spaniard.

As we explain in our paper, U.S. federal laws are sector-specific, with the primary areas covered by the Health Insurance Portability and Accountability Act (health care information), the Gramm-Leach-Bliley Act (financial information), the Fair Credit Reporting Act (credit information), and the Children’s Online Privacy Protection Act (children’s information). In addition, states have enacted varying data security laws requiring data breach notifications.

The European approach, on the other hand, has always been more overarching. The 1995 Directive, for example, required each EU member state to adopt comprehensive privacy protection laws meeting the objectives of the Directive. While the use of a directive gave each member state flexibility in crafting its own privacy laws, by 2012 the European Commission had determined that the law needed to be updated. The GDPR was enacted to harmonize the member states’ laws, incorporate advances in technology, eliminate administrative filing burdens for companies, and, as we posit in our paper, level the playing field for technology companies using the personal data of those located in Europe.

Because U.S. companies have been able to monetize their data with very few restrictions or consequences, they were able to become behemoths in the tech field, with Facebook commanding roughly an 80% market share and Google 90% in their respective markets. The rules, however, have now changed with respect to EU data. The GDPR requires, among other things, verifiable consent prior to using a user’s data and separate consent for each secondary use. There is no corresponding requirement in the U.S.: companies operating under U.S. law primarily rely on an opt-out mechanism and are not required to disclose secondary uses of a user’s data. The GDPR also provides a right to be forgotten, a right to data portability, and the ability to opt out of automated decision-making (profiling), and it requires a lawful basis for processing data. None of these rights are afforded to U.S. citizens under U.S. federal law.

Because the GDPR is extraterritorial in scope, it applies regardless of where a company is located if the company collects or processes the personal data of those located in Europe, where the processing relates to the offering of goods or services (whether for pay or “free”) to such “data subjects,” or to the monitoring of their behavior, to the extent that behavior takes place in the EU. This leaves us with the question: will the GDPR be the end of Google and Facebook, or will it usher in a new paradigm in privacy protection? That remains to be seen. However, given that fines under the GDPR may now reach the billion-euro range, rather than the hundred-thousand-euro range of the past, it seems likely that the U.S. business model (data for service) will need to adapt, at least with respect to data from the EU.

This article originally appeared on the Oxford Business Law Blog.