Ridiculously low ceilings on administrative fines hindered the effectiveness of EU data protection law for over twenty years. US tech giants may have seen these fines as a cost of doing business. Now, over two years after the commencement of the European Union’s widely heralded General Data Protection Regulation (GDPR), the anticipated billion-euro sanctions of EU Data Protection Authorities, or ‘DPAs’, which were to have changed the paradigm, have yet to be issued.
Newspaper op-eds and Twitter posts by activists, policymakers and consumers evidence a sense of unfulfilled expectations. DPA action has not supported the theoretical basis for GDPR sanctions: deterrence. However, the experience to date, and reactions to it, inspire recommendations for DPAs and companies alike. In our working paper, EU General Data Protection Regulation Sanctions in Theory and in Practice, forthcoming in Volume 37 of the Santa Clara High Technology Law Journal later in 2020, we explore the theoretical bases for GDPR sanctions and test the reality of DPA action against those bases. We use an analysis of the various functions of sanctions (confiscation, retribution, incapacitation, etc) to determine that their main objective in the GDPR context is to act as a deterrent, encouraging compliance.
To achieve deterrence, sanctions must be severe enough to dissuade. An examination of the actual amounts of sanctions imposed shows that this has not been the case under the GDPR, which is paradoxical given the substantial increase in the potential maximum fines the regulation allows. Sanctions prior to the GDPR, with certain exceptions, were generally capped at amounts under €1 million (eg £500,000 in the UK, €100,000 in Ireland, €300,000 in Germany and €105,000 in Sweden).
Since the GDPR became applicable, sanctions have ranged from €28 for Google Ireland Limited in Hungary to €50 million for Google Inc in France, far below the potential maximum fine of 4% of turnover, or approximately €5.74 billion for Google Inc based on 2019 turnover. While the highest sanctions under the GDPR have been substantially greater than those assessed under the prior legislation, they have been far from the maximum fines the GDPR allows.
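The arithmetic behind these ceilings is simple. Article 83(5) GDPR caps fines at the higher of €20 million or 4% of total worldwide annual turnover; the sketch below applies that rule to illustrative figures (the €143.5 billion turnover is an approximation of Alphabet's reported 2019 revenue converted to euros, used here purely as an assumption):

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound under Art. 83(5) GDPR: the higher of EUR 20 million
    or 4% of total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Illustrative turnover only: rough euro conversion of Alphabet's 2019 revenue.
turnover = 143.5e9
ceiling = gdpr_max_fine(turnover)   # approximately EUR 5.74 billion
actual = 50_000_000.0               # the CNIL fine against Google
print(f"ceiling: {ceiling / 1e9:.2f} bn EUR")
print(f"actual fine as share of ceiling: {actual / ceiling:.2%}")
```

On these assumed figures, the largest fine issued so far amounts to well under 1% of the theoretical maximum, which is the gap the article describes.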
Nonetheless, this failure of DPAs, especially the Irish DPA responsible for overseeing most of the US tech giants, has not gone unnoticed, as shown by EU institutional reports on the GDPR’s first two years. Indeed, these reports call for increased funding of DPAs and greater use of the cooperation and consistency mechanisms, highlighting the DPAs’ current lack of means. Here, we underscore the fact that, in the area of data protection, there has perhaps been too much reliance on national regulators, whereas in other fields (banking regulation, credit rating agencies, etc) the European Union has tended to move toward centralization of enforcement.
Despite these shortcomings, the GDPR’s beefing-up of the enforcement toolbox has enabled actions by non-profit organizations mandated by individuals (such as La Quadrature du Net, which took action against tech giants after the GDPR came into force), making it easier for individuals to bring legal proceedings against violators in the future; moreover, an EU Directive on representative actions for the protection of consumers’ collective interests is in the legislative pipeline.
On the side of businesses, there has been a lack of understanding of certain key provisions of the GDPR and, as compliance theorists tell us, certain firms may be overly conservative and tend to over-comply out of too great a fear of sanction. This seems to be the case with the GDPR’s provisions on data breach notifications, where unnecessary notifications have overtaxed DPAs. The one-stop-shop mechanism, which is admittedly complex, has also created misunderstanding.
This mechanism allows the DPA of a non-EU company’s main establishment in the European Union to become the lead supervisory authority in procedures involving that company, which could potentially lead companies to forum-shop on this basis. However, there is also a requirement that the main establishment have decision-making power with respect to the data processing to which the procedure relates. Overlooking the latter requirement could lead companies to select main establishments in countries where such decision-making power is absent, thereby defeating their attempts at forum-shopping for a lead supervisory authority for certain processing. One example culminated in the French DPA (CNIL)’s largest fine so far, imposed on Google, even though the latter argued that the Irish DPA was its lead supervisory authority.
As we explain in our paper, a lack of GDPR enforcement carries risks. Not only does it undercut the deterrent effect of the GDPR, but it also provides a tenuous basis for risk assessment by companies. While the GDPR’s first two years involved a sort of grace period when DPAs focused on educating companies and spent time painfully investigating complaints to litigation-proof their cases, some companies model their risk assessment of regulation based on enforcement histories. If there is a push for greater enforcement, which EU institutional reports would tend to foreshadow, the basis for companies’ models will be inaccurate. Furthermore, such dependence on risk evaluation ignores potential benefits to firms of increased trust and efficiency involved with expanding compliance to adopt a higher data protection compliance standard applied to customers worldwide.
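The deterrence logic at stake can be made concrete with a toy expected-value model drawn from standard deterrence theory. All figures below are hypothetical, chosen only to illustrate why sparse enforcement invites a "cost of doing business" calculation; they are not data from our paper:

```python
def expected_sanction(prob_enforcement: float, fine_eur: float) -> float:
    """Expected cost of non-compliance in a simple deterrence model:
    probability of being sanctioned times the fine if sanctioned."""
    return prob_enforcement * fine_eur

# Hypothetical compliance cost for a firm weighing its options.
compliance_cost = 5_000_000.0

for p in (0.01, 0.10, 0.50):
    # The firm is deterred only when the expected sanction exceeds
    # what full compliance would cost.
    deterred = expected_sanction(p, 20_000_000.0) > compliance_cost
    print(f"enforcement probability {p:.0%}: deterred={deterred}")
```

On these assumed numbers, even a €20 million fine fails to deter until enforcement becomes reasonably likely, which is the core of the argument for more frequent and severe sanctions.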
Thus, we argue, not only should DPAs sanction offenders, but they should sanction them severely when justified, establishing the necessary deterrent effect for EU data protection law. Moreover, DPAs’ communication should in many cases be modified to stop downplaying sanctions: such communication is counterproductive to the desired effect of sanctions. Companies, on the other hand, should make efforts to understand the GDPR fully and embrace compliance, leaving behind data protection forum-shopping as a potentially ineffective tactic. Furthermore, the typical securities lawyer’s warning that ‘past performance is no guarantee of future results’ may be a forewarning to companies using past sanctions to build their compliance risk-assessment models: the results may not hold for the future.
Gregory Voss is an Associate Professor in the Human Resources Management & Business Law Department at TBS Education.
Hugues Bouthinon-Dumas is an Associate Professor in the Public and Private Policy Department at ESSEC Business School.
This article originally appeared on the Oxford Business Law Blog (OBLB) and is reproduced with permission and thanks.
[su_pullquote align=”right”]By Louise Curran[/su_pullquote]
The British vote to leave the EU has enormous implications for both parties, many of which are only beginning to be explored. One of the policy areas that will be most affected is trade policy. For the last forty years, the UK has essentially had no independent policy on international trade relations. Trade policy making was undertaken by qualified majority in the Council of the EU, and the resulting consensus became the UK’s effective policy.
Much of the discussion on Brexit has focused on the future trade relations between the EU and the UK. However, Brexit will also have important impacts on the rest of the world, which are often ignored in the public debate.
In a recent conference paper, I explored the potential impact of Brexit on Global Value Chains (GVCs) through an analysis of its likely impacts on suppliers that rely on access to the UK market to integrate into GVCs. The Brexit White Paper rejects membership of either the European Economic Area or a Customs Union. In this scenario, Brexit will lead to an independent UK trade policy. Thus, the UK must create a new trade policy to govern its relations with suppliers (and customers) around the world. Indeed, the wish to regain independence on trade policy was a key reason behind the rejection of a Customs Union.
The UK government has made grand statements about their intention to negotiate Free Trade Agreements (FTAs) with a variety of emerging (China and India for example) and developed country (US, Australia…) partners as part of their ‘Global Britain’ vision. They have said precious little about what the shift from the EU trade regime will mean in terms of trade relations with less economically interesting partners. This situation creates huge uncertainty for developing country suppliers which rely on existing trade agreements with the EU for access to the UK market. In this paper I sought to highlight which countries were most vulnerable to policy change.
In order to understand why certain suppliers are vulnerable, it is important to understand that trade policy is not just about FTAs. It is also about a whole structure of EU unilateral trade regimes which have evolved over decades. These provide special market access preferences to developing countries and very high levels of access to the poorest of them. What this means, in real terms, is that if you are an exporter from Bangladesh (classed as a Least Developed Country (LDC) by the UN) you pay no tariffs on your exports of shirts to the EU (and thus to the UK), whereas a Chinese shirt exporter will pay 12%. Similarly, if you are a Pakistani exporter of bedlinen, you will also pay no tariffs on your exports, while India also pays 12%. This is because Pakistan benefits from a special EU access regime for countries which have ratified and applied a long list of international agreements on everything from labour rights to environmental protection (called GSP+).
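The cost consequences of these preference regimes are easy to sketch. The snippet below compares the duty payable on the same consignment under the tariff rates quoted above; the shipment value is hypothetical, and the rates are the approximate ones from the article's examples rather than an official tariff schedule:

```python
# Approximate EU tariff rates from the examples above: zero under the
# LDC and GSP+ regimes, ~12% at the standard rate for shirts/bedlinen.
TARIFF_RATE = {"Bangladesh": 0.00, "Pakistan": 0.00, "China": 0.12, "India": 0.12}

def duty_payable(origin: str, customs_value_eur: float) -> float:
    """Duty due on import into the EU for a given country of origin."""
    return TARIFF_RATE[origin] * customs_value_eur

shipment_value = 100_000.0  # hypothetical customs value of one consignment
for origin in ("Bangladesh", "China"):
    print(f"{origin}: {duty_payable(origin, shipment_value):,.0f} EUR duty")
```

On a consignment of this assumed size, the preference is worth €12,000, which is exactly the kind of margin that leads GVC production to "land" in preference-holding countries.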
Over the last twenty years there has been extensive research exploring the evolution of Global Value Chains (GVCs) which seeks to understand why certain GVCs are structured in the way they are. Trade regimes have emerged as being an important factor in deciding where production ‘lands’ in the global economy. They are particularly important in sectors where special access regimes provide high levels of tariff advantages, like textiles and clothing for Bangladesh and Pakistan in the above example. Other sectors where trade regimes have been highlighted as important to the geography of GVCs are fish processing, especially tuna (where Papua New Guinea pays no tariff and Thailand pays 25%) and cut flowers (where Kenya pays zero compared to 8% for Australia).
In addition, these kinds of special access regimes are contingent on the goods exported from a given country being considered by the EU to be ‘made in’ that country. The definition of these ‘rules of origin’ is complex and the result of long hours of debate and consultation. Research has consistently found these rules to have an important influence on the geography of GVCs. For example, US rules stipulate that for a shirt to count as ‘made in’ a country, it must be sewn from fabric woven in that country, from yarn that is also spun domestically. A country which theoretically has free market access nevertheless needs a functioning, competitive weaving and spinning industry in order to avoid paying tariffs. The EU has a more liberal approach to these rules, especially for LDCs like Bangladesh. My own research has confirmed that these rules have had an important stimulating effect on EU imports from both Bangladesh and Cambodia.
In order to identify which countries are most vulnerable to changes in the UK’s trade regime, I analysed non-oil exports. I focused on those countries which, on the one hand, rely heavily on the EU for their exports and, on the other, send a large share of their EU exports to the UK. The countries subject to unilateral market access that emerge as most dependent on the UK market are Kenya, Bangladesh, Cambodia and Pakistan. The biggest trade flows are from Bangladesh, which exports over $3.5bn to the UK, much of it clothing.
The continued integration of these developing countries into UK-oriented GVCs post-Brexit requires continued and consistent market access. There is no guarantee that the UK will provide this, although it would be, to say the least, surprising if they abandoned their long-standing support for developing countries’ integration into the world economy. There will almost certainly be a UK special access regime for developing countries after Brexit; however, it may not be as generous as the EU’s, and at the very least it is likely to differ, especially over time. A key question will be the extent to which the UK retains the very generous access regime for LDCs like Bangladesh and Cambodia, and whether it retains something like the current GSP+ regime, which is vital for Pakistan. This uncertainty is unhelpful. Current GVCs have been constructed over time in response to existing trade regimes and their framing rules. The quicker future policy is clarified, the easier it will be for GVC actors to integrate any changes into their strategies and adapt to the new reality. The UK’s Department for International Trade (DIT) is exploring possibilities, but with so many issues to consider in the post-Brexit landscape, developing countries are concerned that they will not be a top priority for UK policy makers. Academic research indicates that they are right to be worried.
[su_spoiler title=”Methodology”]The impact of Brexit on trade regimes and Global Value Chains, Paper for the GIFTA seminar: Implications of Brexit: Navigating the Evolving Free Trade Agreement Landscape. Commonwealth House, London, February 6-7 2017 [/su_spoiler]
[su_pullquote align=”right”]By Yuliya Snihur[/su_pullquote]
In constructing a corporate identity for their business, creators of innovative start-ups have to simultaneously highlight their distinctiveness and show that they belong to a pre-existing category of similar businesses. The objective is to reach “optimal distinction”, which means finding a balance between an identity that is distinct from other businesses and a “group” identity through which they can show they belong to a well-established business category. This balance is important if start-ups are to grow their reputation and legitimacy.
To be unique but not too unique, that is the dilemma. A business’s first few years of existence are critical for the construction of its identity. It’s a period when creators make strategic choices which they must implement rapidly so that the business project survives and develops, but whose consequences are difficult to modify over the long term. The aim is to highlight the distinctiveness of the business while reassuring potential customers and partners about its normality. This balance is what’s known as “optimal distinction”. To succeed, a midway point has to be found between being unique, which contributes to the reputation of the firm, and the need to be like others, belonging to a pre-existing and recognised group or category, which delivers legitimacy.
In search of optimal distinction
The challenge of building a corporate identity is something all new businesses have to face, but it’s even more intense for innovative companies with new business models, ie, a way of running their business which breaks away from existing practices in their sector. By definition, start-ups have no history or track record and are unknown to the general public, who have no frame of reference or benchmarks to rely on when it comes to trust.
What this study seeks to identify is the means by which innovative start-ups build their reputation and legitimacy in the eyes of the public. To answer this question, we analysed the way in which four young businesses built their identity. All four had introduced new business models, but each belonged to a different market sector: health, restaurants, digital services and hotels. The results reveal four specific actions that were present in every case: storytelling, the use of analogy, the seeking of accreditation or reviews, and the establishment of alliances or partnerships. On the basis of these results, we have developed a theoretical model which shows the link between each action taken and its consequences for the business’s corporate identity as perceived by the public, each action tending to influence both the reputation and the legitimacy of the firm.
Self-affirmation and external recognition
The first two actions are the sole responsibility of the creator and are linked to the way the business proclaims or declares itself from the start. Storytelling describes the genesis of the enterprise and gives it meaning. If it highlights individual experience or the personality of the creator, it will influence the reputation of the firm; if it highlights a social issue, like sustainable development, it will be more likely to establish its legitimacy. Analogies, on the other hand, allow the firm to explain its contribution by comparing itself to other players, in sectors close to or distant from the firm’s own activity. When the players are from the same sector, we speak of a local analogy, whose aim is to build up the firm’s legitimacy. If they are from different sectors, this more distant analogy will result in a strengthening of its reputation.
The two other types of action involve a broader cross-section of collaborators. These actions tend to come later because they require more time to put in place and call for a more objective assessment of the firm’s competency compared with other businesses or organisations. Third-party evaluation can take multiple forms, from rankings and prizes to processes of certification or accreditation. In the first case, the evaluation should enhance the firm’s reputation; in the second, it will strengthen its legitimacy. Finally, establishing partnerships, with the regular meetings that entails, leads to stronger relationships with third parties. It also leads to image enhancement through association, which fosters the firm’s reputation, or justifies its membership of a group or category, thus conferring legitimacy.
Consequences to be confirmed in new research phase
The size of our sample and the short period over which the study was undertaken do not allow us to draw any general conclusions about the effects of these four actions. Nonetheless, the replication of similar results in a sample of four businesses belonging to four different sectors does make it possible to offer hypotheses that make a fresh contribution to the theory of business identity, especially in the particular instance of businesses operating an innovative business model in their sector. These hypotheses could be tested in future studies on a larger sample and at a more advanced stage in the development of the business. On a practical note, new businesses engaged in innovation could use them to find pointers on the timing and the actions to implement to construct their firm’s corporate identity.
[su_spoiler title=”Methodology”]The approach chosen for this qualitative study draws on the multiple case study method. Yuliya Snihur selected the four most innovative businesses in terms of their business models in four different sectors, from a representative line-up of 165 firms identified at the start. The results were obtained by studying 620 pages of documentary sources (both internal and external) supplied by the firms and 29 interviews with inside sources (founders, employees) and external ones (investors, clients). The study was published in February 2016 in the journal Entrepreneurship and Regional Development, under the title “Developing optimal distinctiveness: organizational identity processes in new ventures engaged in business model innovation.” [/su_spoiler]
[su_pullquote align=”right”]By Uche Okongwu[/su_pullquote]
Supply chain optimization essentially involves finding a compromise between customer satisfaction and profitability. By adjusting the different supply-chain planning parameters, each company can achieve a performance level in line with its strategy and objectives.
The concept of the supply chain is as old as economics: from the supply of materials to production and delivery, the successive players involved in any given market represent the links in a chain, acting as customers and suppliers to each other respectively. However, increased competition and globalization have made companies realize that all the different players in their supply chain share a common goal, namely customer satisfaction. Consequently, how the supply chain is organized and how it performs are of crucial strategic importance for companies, and increasingly so. In this regard, the example of the aerospace industry is regularly covered in business news and highlights this strategic role perfectly; indeed, the industry has had to increase production rates to meet the growing demand and this has created tension throughout the chain. However, this problem actually concerns all sectors of the economy, whether in industry or in services. Over the last twenty years, researchers and managers have been looking at ways of optimizing supply-chain management to improve companies’ performances, based on the ideas of collaboration, integration and information sharing.
The difficulty in resolving this issue lies in the complexity of the supply chain itself: in addition to the number of links in the chain, we need to take into account the number of performance indicators and, above all, the number of parameters that a company can adjust in order to meet its performance targets, which is virtually infinite. Until now, research has focused on one parameter or another, sometimes combining them, but in a limited way. For the first time, our study aims to go further by combining several parameters positioned at different stages and functions in the chain (planning, procurement, production, delivery), in order to establish which combination of key factors produces the best performance.
Performance: always a question of compromise
The first issue that needs to be addressed concerns the supply-chain performance indicators. Many indicators are used, some of which are contradictory, since certain indicators are linked to profitability and others to customer satisfaction. For our study, we selected three: profit margin, on-time delivery and delivering the quantities requested. Ideally, an optimized supply chain should make it possible to achieve maximum scores on all these parameters, but in reality, no company can claim to be the best in every area. As such, you have to reach a compromise at some point, according to your market and objectives, by agreeing to “sacrifice” part of your performance on a given indicator. With this in mind, the idea of optimum supply-chain performance depends on the objectives the company sets in terms of profitability and customer satisfaction, but also on its position in the market. Consequently, the main challenge in supply chain planning is finding this compromise.
The case on which we worked was inspired by a real situation. It concerns a supply chain in the furniture industry, for the production of tables and shelves. Out of the 12 general supply-chain planning parameters we identified, we decided to vary six and to observe the result of the simulation on our performance criteria: the planning time-frame (short or long), the production capacity in terms of human resources (constant or adapted to demand), the production sequencing (priority given to the oldest or the most recent orders), the duration of the cycle, the reliability of the forecasts, and the availability or otherwise of stocks.
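The scale of such a simulation exercise is easy to see once the levers are enumerated. With six parameters, each varied between two settings, there are 2^6 = 64 scenarios to simulate; the sketch below builds that scenario grid (the parameter names follow the article, but the two-level encoding is our simplifying assumption, since some parameters, such as cycle duration, could take more values):

```python
from itertools import product

# The six planning levers varied in the study, each reduced here to
# two settings for illustration.
levers = {
    "planning_horizon": ("short", "long"),
    "capacity": ("constant", "demand-driven"),
    "sequencing": ("oldest-first", "newest-first"),
    "cycle_duration": ("short", "long"),
    "forecast": ("reliable", "unreliable"),
    "stock": ("available", "unavailable"),
}

# Every combination of settings is one scenario to run through the
# simulation and score against the three performance indicators.
scenarios = [dict(zip(levers, combo)) for combo in product(*levers.values())]
print(f"{len(scenarios)} parameter combinations to simulate")
```

Each resulting scenario would then be scored on margin, on-time delivery and quantity fulfilment, which is why the authors note that richer parameter sets quickly strain the available simulation tools.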
In the case of this supply chain, the production capacity appears to have a strong impact on margins and the ability to meet demand, whereas sequencing has a greater impact on the promptness of deliveries and the extent to which the response meets the demand.
Addressing the company’s priorities
These results confirm the initial hypothesis, namely that different combinations of planning parameters will have different impacts on the performance indicators. The different planning parameters cannot be considered independently of the performance criteria, hence the need to make choices. The ideal combination of parameters depends on the performance sought by the company.
Using the model developed in this study, managers responsible for supply-chain planning have a theoretical and practical tool to help them in their decision-making, allowing them to determine the best combination based on the company’s priorities. The framework and methodology developed, as well as the results obtained, are a genuine breakthrough in terms of research. To take things further, it would be interesting to combine even more parameters, as long as the available computer-simulation tools make this possible, and to test the model on different supply chain structures and in other market environments.
[su_spoiler title=”Methodology”]To conduct this study, Uche Okongwu (TBS), Matthieu Lauras (TBS, Ecole des Mines Albi), Julien François and Jean-Christophe Deschamps (Bordeaux University) reviewed the available literature on the topic of supply-chain performance. Based on the following research question: “What combination of key factors in supply chain planning make it possible to optimize the performance of the supply chain?”, the authors developed equation models that they tested on a real supply-chain case in the furniture industry. The study was published in January 2016 in the Journal of Manufacturing Systems, in an article entitled: “Impact of the integration of tactical supply chain planning determinants on performance”.[/su_spoiler]
[su_note note_color=”#f8f8f8″]Uche Okongwu has been a Lecturer in Operations Management and Supply Chain Management at Toulouse Business School since 1991. He has combined his career as a researcher with that of an engineer and consultant in industrial organization. In 1990, he obtained a doctorate in Industrial Engineering at the Institut National Polytechnique de Lorraine (Nancy, France). He is currently Director of Educational Development and Innovation at TBS, having already set up the School’s industrial organization division [/su_note]
[su_pullquote align=”right”]By Kévin Carillo[/su_pullquote]
The rapid development of collaborative communication technology as an alternative to e-mail offers companies the possibility of fundamental transformation, but supporting measures will be required to usher in a genuine culture of knowledge sharing.
The upsurge in businesses of collaborative communication technology derived from web 2.0 has been both rapid and widespread. Internal social networks, video-conferencing, blogs, micro-blogs, wikis, document sharing: the number of companies adopting these connected tools never ceases to increase, in the hope of improving productivity and performance, for these tools open up vistas of profound change within companies and in the working habits of their staff. Little by little, the traditional ‘silo’ model, in which departments, roles and hierarchies are compartmentalized in a kind of internal competition, is being replaced by the new, more open Enterprise 2.0 model, based on increased staff collaboration that breaks down this rigid structure, and on sharing information through a kind of forum which itself creates knowledge.
Alongside this organizational revolution, collaborative tools may also be an efficient solution to the growing problem of e-mail proliferation. E-mail was revolutionary when it first appeared and was unanimously adopted in the workplace, but it is now a victim of its own success, to the point where overuse has become a serious obstacle to productivity: staff members receive scores of e-mails each day, spend hours reading them, fail to open some, lose others, and watch their in-boxes fill up. In the end, communication is hindered and collaboration handicapped. Certain types of interaction currently conducted by e-mail would be much more efficient with collaborative communication tools; this is certainly the case, for example, for conversations, sharing of expertise or brainstorming within a group or community.
This said, cooperation and knowledge sharing cannot simply be imposed by decree. Although it is extremely important to give staff access to alternative tools and systems, it is equally important to ensure they adopt them in a productive way. All the more so in that these are disruptive technologies that radically modify work habits and ways of relating.
The essential role of habit
Our research has focused precisely on determining just how far the habitual use of collaborative tools, meaning their day-to-day, automatic, routine use, influences the inclination of staff to share their knowledge when they no longer have access to e-mail. The theoretical model we developed identified three perceived advantages to using collaborative communication systems: the relative advantage they offer (it’s useful for my job), compatibility (it corresponds to my needs, the tasks I have to accomplish at work and the nature of my job) and ease of use. We hypothesize that these advantages have a direct effect on user habits and on knowledge sharing. We also postulate that user habit has a catalyzing effect on each of the perceived advantages in relation to knowledge sharing.
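This "catalyzing effect" is what statisticians call moderation: habit strengthens the link between a perceived advantage and knowledge sharing, which shows up as an interaction term in a regression. The sketch below illustrates the idea on simulated data; all the variables, coefficients and the 0.5 interaction strength are invented for illustration and have nothing to do with the study's actual measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
advantage = rng.normal(size=n)  # perceived relative advantage (simulated)
habit = rng.normal(size=n)      # habitual use of the tools (simulated)

# Simulated moderation: the advantage -> sharing effect grows with habit.
sharing = (0.4 * advantage + 0.3 * habit + 0.5 * advantage * habit
           + rng.normal(scale=0.5, size=n))

# Ordinary least squares with an interaction term recovers the effect:
# columns are intercept, advantage, habit, advantage x habit.
X = np.column_stack([np.ones(n), advantage, habit, advantage * habit])
beta, *_ = np.linalg.lstsq(X, sharing, rcond=None)
print(beta)  # the last coefficient estimates the moderation effect
```

A significantly positive interaction coefficient is the statistical signature of the hypothesis that habit amplifies the impact of perceived advantages on knowledge sharing.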
To measure the validity of these hypotheses, we undertook a field study in an information technology (IT) services and consulting firm and obtained the following results: if users see an advantage in using collaborative tools, they are more likely to make a habit of it and to share knowledge; likewise, user friendliness also leads to habit-forming. On the other hand, we were unable to establish a direct link between user friendliness and knowledge sharing. Nor was the study able to establish an immediate effect of compatibility on knowledge sharing. Concerning the central focus of the study, the part played by habit, the results show that it is extremely important, in that it strengthens the impact of the relative advantage and of compatibility on knowledge sharing.
Technological evolution and the human factor
The study confirms that access to these technologies, no matter how efficient they are, is not enough to change behavior. Their use must become a habit. The more at ease staff are with collaborative tools, the more naturally they will share knowledge and the more easily they will adopt the codes and methods of Enterprise 2.0.
Consequently, what management has to do is to encourage these habits, and the study shows that there are two important arguments that can help bring this about: lead staff to understand that using a collaborative system is not only extremely useful but also easy. This implies introducing a number of measures, some of which are very simple: communication, incentives, games and competitions, sharing the experiences of advanced users, targeted pedagogical programs, and so on.
At the end of the day, this study underlines a classic issue in the study of information systems: the importance of the human factor. Simply deploying a collaborative system is not sufficient for an enterprise to become 2.0. A collaborative culture must be created before the tools are implemented.
[su_note note_color=”#f8f8f8″]Kévin Carillo is co-author of the article “Email-free collaboration: An exploratory study on the formation of new work habits among knowledge workers”, Jean-Charles Pillet and Kévin Carillo, International Journal of Information Management, November 2015.[/su_note]
[su_spoiler title=”Methodology”]In their research, Jean-Charles Pillet and Kévin Carillo carried out a quantitative case study. Starting with a review of the research literature at the time they constructed a theoretical model based on the idea that ingrained habits diminish the relationship between the perceived advantage of using a collaborative system and the ability of staff to share knowledge. To measure the validity of the 9 hypotheses they drew up a questionnaire with 21 items, each one having a 5-point response scale from “totally disagree” to “fully agree”. The study was carried out in August 2014 in an IT services and consulting firm with a workforce of more than 80,000 people spread over forty-odd countries. Several years beforehand, its executives had launched a global policy of dropping e-mails in favor of a collaborative system comprising three main tools: videoconferencing, an internal social network and a system for document sharing. The study focused on a single particular department in the company, the one responsible for handling the suspension of client IT services as soon as possible. Sixty-six valid responses (55%) were collected from 120 people divided into 5 teams in France and in Poland and an analysis of these confirmed some of the hypotheses made.[/su_spoiler]