By Victor Dos Santos Paulino
Innovation is one of the major themes in management, and the capacity to innovate is considered critical for business success. However, the space industry shows that innovation must sometimes be tempered with caution if a strategy is to succeed.
Conventional wisdom holds that the rapid adoption of new technologies improves the performance and survival of companies. As early as the beginning of the 20th century, Joseph Schumpeter demonstrated the link between innovation and industrial success. In the 1990s, other scholars, such as Joel Mokyr, followed suit, explaining inertia (the slow adoption of new technology) as the product of managers' phobic and irrational attitudes. Against this backdrop, the space industry provides an interesting, even paradoxical, example: this highly technological sector is a symbol of innovation, yet it deems a cautious approach necessary. This is a requirement for telecommunications satellite operators, for whom reliability matters more than novelty, a factor that entails risk.
Innovation is a complex phenomenon that does not automatically guarantee success, progress and profits. For example, it has been shown that over 60% of innovations end in failure. In addition, many companies legitimately postpone the adoption of an innovation in several cases: for example, when it would cannibalize an existing product or make it obsolete, or when the costs turn out to be too high compared with the expected profits. Do these factors explain the inertia-based strategy observed in the space industry?
By its very nature, the use of new technology in the space industry entails risk: ground testing of a component, even under conditions that simulate space, may not accurately predict its behaviour in flight. It may perform perfectly, or prove faulty, and no one can be sure ahead of time. As a result, satellite manufacturers tend to favour an inertia-based strategy whereby technological changes are adopted extremely cautiously: only tried and tested innovations are implemented. The cost of failure makes both manufacturers and their customers behave cautiously.
Caution features strongly in the space telecommunications sector, because the reliability of satellites is a major competitive advantage. To ensure the greatest reliability, manufacturers have set up finely tuned organisations and processes. This is why the cycle of design, development and manufacture of satellites is broken down, and must continue to be broken down, into successive phases: Phase 0 > Mission analysis; Phase A > Feasibility study; Phase B > Preliminary design; Phase C > Detailed design; Phase D > Manufacturing and testing; Phase E > Exploitation; Phase F > Decommissioning. While this approach helps ensure high levels of reliability, it also brings with it considerable structural inertia.
This need for reliability and stability leads space manufacturers to adopt information and communications technologies that have the least impact on the organization. However, it also leads them not to question technological choices for space telecommunications, choices that increase reliability but do not allow any savings in production costs. Serge Potteck, a specialist in space project management, notes, for example, that to transmit a signal, engineers prefer to design antennas with a diameter of 60 cm in order to guard against possible malfunctions, whereas a less costly 55 cm antenna would suffice.
This analysis, however, needs to be refined for each of the different segments that make up the space sector. They can be classified into three groups. The first consists of telecommunications satellites and rockets (launchers). In this segment, the cost of failure would be very high. It would penalize the manufacturer, who would have produced a non-functioning satellite, as well as the company that operates the launchers and markets launch services, and indeed all the players involved in the business plan. A failure can cause a delay of several years in the marketing of new telecommunications services to be delivered by satellite.
The second group consists of spacecraft built for scientific or demonstration purposes, together with the rockets used to launch them. The governments or space agencies that commission them are not subject to the usual profitability requirements. Here, disruptive technology and its associated risks are part and parcel of a project.
The last group overlaps the space industry and other industries. It encompasses, for example, the tools to operate the geolocation capabilities of the Galileo constellation or the distribution of digital content. In this segment, stability is seen as detrimental to the development of new markets.
While the particular environment in which the space sector operates tends to dampen its ability to experiment, it does not entirely prevent innovation. Inertia-based strategies are largely a matter of appearance: what we refer to as "inertia" is in fact a genuine innovation dynamic, in which any new technology is studied carefully before being tested, or not, on a new spacecraft, and before its possible subsequent integration. Could such a strategy, then, ensure the survival of a market in certain cases? To treat it as a failing to be countered would be a mistake.
The space industry would probably not innovate much if its only clients were commercial satellite operators. However, space agencies are willing to finance experimental spacecraft, thus accepting the financial risk associated with possible failure. It is thanks to them that the manufacturers of commercial satellites are able to validate the technological choices available to them, once those choices have proven their reliability.
References: based on my publications "Innovation : quand la prudence est la bonne stratégie" [Innovation: when caution is the right strategy], published in TBSearch magazine, No. 6, July 2014, and "Le paradoxe du retard de l'industrie spatiale dans ses formes organisationnelles et dans l'usage des TIC" [The paradox of the delay of the space industry in its organizational forms and in the use of ICT], published in Gérer et comprendre [Managing and understanding], No. 86, December 2006.
The analysis of the organizational and technological paradox that characterizes the space industry is based on several types of information: the theoretical literature available (Hannan and Freeman, 1984; Jeantet, Tiger, Vinck and Tichkiewitch, 1996); the work done by engineers in the sector (Potteck, 1999); and field observations made between 2003 and 2007 at one of the leading European prime contractors manufacturing satellites and space probes.
By Lourdes Perez
Contrary to conventional wisdom, small businesses are not condemned to be perpetually at a disadvantage in their relations with large ones. They may have much to gain, provided they find a suitable mode of operation that keeps the two partners from competing over the share of the value created.
Under what conditions can both companies in a commercial partnership benefit fully and fairly? Until now, there has been something of a consensus on this issue: above all, the two companies in the partnership needed to be of equivalent size. As the profits from the partnership (development of new products, winning new markets, creation of additional income) are seen as a cake to be shared, they should be distributed according to the respective size and contributions of each partner. Seen from this point of view, an asymmetric partnership between a large company and an SME would most likely be less beneficial for the latter. Moreover, the literature on the subject generally stresses the risks for SMEs, which lack the weapons to defend themselves in this ‘coopetition’ relationship (cooperation-competition).
Contrary to this commonly held idea, our study shows that asymmetric relationships offer small businesses opportunities to innovate. This kind of relationship is virtually inevitable in a globalized, ultra-competitive economy, where the most dangerous posture for a small company is to remain isolated. There are many examples of asymmetric partnerships, particularly in predominantly technological sectors, that have proved just as fruitful for the "small" partner as for the "large" one. In many such cases, partners have been able to build relationships based on complementarity, which, in the end, is just as important as similarity.
This does not mean denying the risk of failure. The risk remains, but is far from insurmountable, as long as a strategy is devised to meet the challenges of this type of partnership: first, the difficulties of communication related to the differences in scale between the two structures (it’s rare for the head of an SME to have direct access to the Managing Director of a large company); and secondly, the differences in organizational structure and ways of working.
If the small business systematically approaches such a relationship with due respect for a number of basic rules, it increases the chances of a profitable outcome. To reach this conclusion, we analyzed a successful partnership between a small Spanish seafood company that wanted to extend the shelf life of its shellfish and a large Italian company in the energy sector. From this case study, we developed a model that summarizes, in three key steps, the approach a small business should follow to avoid the pitfalls generally associated with asymmetric partnerships.
The first step requires the selection of only a small number of partners. An SME does not have the means to commit itself seriously to multiple partnerships with large companies because it lacks time, logistical organisation and resources. It therefore has every interest in building a lasting alliance with a partner whose strategic objectives are complementary to its own. In our case, the two companies had very different motivations for forming a relationship: whereas the SME sought a technological solution, the large company saw an opportunity to enter the Spanish market, in a sector where it was not previously present. There was therefore no question of sharing the profits of the partnership, as they were not the same for each partner.
The second step is the construction of a strong and committed relationship that offsets the imbalance between the two structures. This requires a serious commitment on the part of the SME, which must nominate a “champion” within the company, i.e. a privileged contact person, with sufficient clout in the organization, someone who is respected throughout the company and who is capable of defending the project and driving it forward in spite of any resistance and obstacles that may arise.
The third step is to develop proposals of mutual value. At the beginning of the partnership, the SME and the large company each pursue specific objectives. But once the project gets under way, some appear unattainable and others incompatible, while new ones appear. The important thing here is to find the appropriate balance between obstinacy and flexibility: to be able to hold firm to one’s positions while taking the partner and unforeseen events into account, and being prepared to rethink the initial objectives. This requires an ability to listen, an open mind, and knowing the partner, its objectives and its motivations.
The success of this strategy clearly shows that there is no reason why an asymmetrical partnership should inevitably be less beneficial to the smaller partner. In our case study, each partner obtained 100% of what it was seeking, because they had expectations that were in no way mutually exclusive and because they were able to build their compatibility together. This new perception of asymmetry in a cooperative spirit, rather than as an unequal balance of power, opens new perspectives for the understanding and the management of relations between unequal partners.
References: based on an interview with Lourdes Pérez and the article "Uneven Partners: managing the power balance", Lourdes Pérez and Jesús J. Cambra-Fierro, Journal of Business Strategy, 2015.
Lourdes Pérez and Jesús J. Cambra-Fierro undertook a qualitative case study of two companies, a Spanish SME and a large Italian company, engaged in an asymmetric partnership. The information collected was from a documentary survey (public information, sectoral information, databases) and a review of the scientific literature. Interviews with several qualified people in each company, based on open-ended questionnaires, helped the researchers determine the major themes of the study and build a matrix. The conclusions of the study are a synthesis of these different sources.
By Kévin Carillo
The rapid development of collaborative communication technology as an alternative to e-mail offers companies the possibility of fundamental transformation, but it will require supporting measures to usher in a genuine culture of knowledge sharing.
The uptake by businesses of collaborative Web 2.0 communication technology has been both rapid and widespread. Internal social networks, video-conferencing, blogs, micro-blogs, wikis, document sharing: the number of companies adopting these connected tools never ceases to grow, in the hope of improving productivity and performance; such tools open up vistas of profound change within companies and in the working habits of their staff. Little by little, the traditional "silo" model, whereby departments, roles and hierarchies are compartmentalized in a kind of internal competition, is being replaced by the more open Enterprise 2.0 model, based on increased staff collaboration that breaks down this rigid structure and on sharing information through a kind of forum that itself creates knowledge.
Alongside this organizational revolution, collaborative tools may also offer an efficient solution to the growing problem of e-mail proliferation. E-mails were revolutionary when they first appeared and were unanimously adopted in the workplace, but they are now a victim of their own success, to the point where their overuse has become a serious obstacle to productivity: staff members receive scores of e-mails each day, spend hours reading them, fail to open some, lose others, and watch their in-boxes fill up. Ultimately, communication is hindered and collaboration handicapped. Certain types of interaction currently conducted by e-mail would be much more efficient with collaborative communication tools; this is certainly the case, for example, for conversations, the sharing of expertise, or brainstorming within a group or community.
This said, cooperation and knowledge sharing cannot simply be imposed by decree. Although it is extremely important to give staff access to alternative tools and systems, it is equally important to ensure they adopt them in a productive way, all the more so because these are disruptive technologies that radically modify work habits and ways of relating.
Our research has focussed precisely on determining just how far the habitual use of collaborative tools—their day-to-day and automatic, routine use—influences the inclination of staff to share their knowledge when they no longer have access to e-mail. The theoretical model we developed identified three perceived advantages to using collaborative communication systems: the relative advantage they offer (it’s useful for my job), compatibility (it corresponds to my needs, the tasks I have to accomplish at work and to the nature of my job) and ease of use. We hypothesize that these advantages have a direct effect on user habits and on knowledge sharing. We also postulate that user habit has a catalyzing effect on each of the perceived advantages in relation to knowledge sharing.
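The moderation hypothesis above can be illustrated with a small, hypothetical simulation (the variable names and coefficients are ours, not the study's): habit enters a regression of knowledge sharing on perceived advantage as an interaction term, and a clearly positive interaction coefficient is the statistical signature of habit's catalyzing effect.

```python
import numpy as np

# Hedged sketch of the moderation idea: knowledge sharing depends on the
# perceived relative advantage of collaborative tools, and habit amplifies
# ("catalyzes") that effect through an interaction term. All numbers are
# simulated for illustration only.
rng = np.random.default_rng(0)
n = 500
advantage = rng.normal(size=n)  # perceived relative advantage
habit = rng.normal(size=n)      # habitual use of collaborative tools
noise = rng.normal(scale=0.5, size=n)

# Simulated data: the advantage -> sharing effect grows with habit (0.5 term).
sharing = 0.4 * advantage + 0.3 * habit + 0.5 * advantage * habit + noise

# Fit sharing ~ 1 + advantage + habit + advantage*habit by least squares.
X = np.column_stack([np.ones(n), advantage, habit, advantage * habit])
coefs, *_ = np.linalg.lstsq(X, sharing, rcond=None)

# coefs[3] estimates the interaction: positive means habit strengthens the
# impact of perceived advantage on knowledge sharing, as hypothesized.
print(round(coefs[3], 2))
```

The design choice mirrors standard moderation analysis: the main effects and the product term are estimated jointly, and only the product term speaks to the catalyzing role of habit.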
To measure the validity of these hypotheses, we undertook a field study in an information technology (IT) services and consulting firm and obtained the following results: if users see an advantage in using collaborative tools, they are more likely to make a habit of it and to share knowledge; likewise, ease of use also leads to habit formation. On the other hand, we were unable to establish a direct link between ease of use and knowledge sharing, nor was the study able to establish an immediate effect of compatibility on knowledge-sharing habits. Concerning the central focus of the study, the part played by habit, the results show that habit is extremely important, in that it strengthens the impact of both relative advantage and compatibility on knowledge sharing.
The study confirms that access to these technologies, no matter how efficient they are, is not enough to change behavior. Their use must become a habit. The more at ease staff are with collaborative tools, the more naturally they will share knowledge and the more easily they will adopt the codes and methods of Enterprise 2.0.
Consequently, what management has to do is to encourage these habits, and the study shows that there are two important arguments that can help bring this about: lead staff to understand that using a collaborative system is not only extremely useful but also easy. This implies introducing a number of measures, some of which are very simple: communication, incentives, games and competitions, sharing the experiences of advanced users, targeted pedagogical programs, and so on.
At the end of the day, this study underlines a classic issue in the study of information systems: the importance of the human factor. Simply deploying a collaborative system is not sufficient for an enterprise to become 2.0. A collaborative culture must be created before the tools are implemented.
References: based on an interview with Kévin Carillo and the article "Email-free collaboration: An exploratory study on the formation of new work habits among knowledge workers", Jean-Charles Pillet and Kévin Carillo, International Journal of Information Management, November 2015.
In their research, Jean-Charles Pillet and Kévin Carillo carried out a quantitative case study. Starting with a review of the research literature at the time, they constructed a theoretical model based on the idea that ingrained habits shape the relationship between the perceived advantages of using a collaborative system and the ability of staff to share knowledge. To measure the validity of the 9 hypotheses, they drew up a questionnaire with 21 items, each with a 5-point response scale from "totally disagree" to "fully agree". The study was carried out in August 2014 in an IT services and consulting firm with a workforce of more than 80,000 people spread over forty-odd countries. Several years beforehand, its executives had launched a global policy of dropping e-mails in favor of a collaborative system comprising three main tools: videoconferencing, an internal social network and a system for document sharing. The study focused on a single department in the company, the one responsible for resolving interruptions to client IT services as quickly as possible. Sixty-six valid responses (a 55% response rate) were collected from 120 people divided into 5 teams in France and Poland, and an analysis of these responses confirmed some of the hypotheses.
By Gilles Lafforgue
Climate change issues are increasingly the focus of international negotiations these days. Could carbon capture and storage (CCS) be a more promising solution for reducing emissions without reducing the consumption of fossil fuels?
Today fossil fuels account for almost 80% of the world's consumption of primary energy, since their relatively low cost makes them more competitive than renewable forms such as solar, wind or biomass energy. Their massive use alone accounts for 65% of the greenhouse gases, mainly CO2, that accumulate in the atmosphere and contribute to global warming.
In the expectation of a transition to a more sustainable energy strategy, Carbon Capture and Storage (CCS) appears to be a viable medium-term alternative for limiting emissions without restricting the consumption of fossil fuels. Developed during the 1970s to improve extraction efficiency from oil wells, CCS involves capturing carbon emissions at source before their release into the atmosphere, then injecting them into natural reservoirs (e.g. saline aquifers, geological formations containing brine unfit for consumption), into former mines or even back into hydrocarbon deposits (still being exploited or else exhausted). CCS would appear to be effective since it can remove 80 to 90% of emissions from gas or coal power stations.
The cost of using such a process remains to be determined. Implementation of CCS becomes cost-effective if the rate of carbon taxation reaches between 30 and 45 dollars/metric ton for coal-fired thermal power stations and 60 to 65 dollars/metric ton for gas-fired power stations (given that this price ought to fall as a consequence of technological change). However, CCS can only be implemented at a reasonable cost in those sectors that produce the largest and most concentrated emissions: heavy industries such as cement or steel works, or conventional electrical power stations (coal-fired especially). This technology is, however, inappropriate for diffuse waste gases of low concentration, such as those emitted by transport or agriculture.
What strategy must therefore be adopted to optimize the sequestration of CO2?
To answer this question, and to determine a meaningful association between the exploitation of fossil resources and CO2 sequestration, we have developed a dynamic model. This model enables the optimum pace of CCS deployment to be defined, and takes three essential parameters into account: the availability of fossil resources, the accumulation of carbon in the atmosphere (and its partial absorption by the biosphere and oceans), and the limited capacity of storage sites. Using the model, we show that it is optimal to sequester the greatest possible percentage of the CO2 released by industrial activity when the CCS process starts; sequestration then gradually falls until the site is completely filled. Note that as long as CO2 can be sequestered, consumption of fossil fuels remains strong. Consumption slows down once the reservoir has become saturated and all the CO2 released is subject to payment of the carbon tax. This is where renewable energies come in.
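The qualitative shape of this result can be made concrete with a toy sketch. This is our own minimal illustration, not the authors' optimisation model, and all parameter values are arbitrary: sequestration starts at its maximum rate and declines over time, constrained by the remaining capacity of the storage site.

```python
# Toy dynamic (illustrative only): sequestration begins at its highest rate
# and gradually falls; injection can never exceed the room left in the site.
def simulate(capacity: float, initial_rate: float, decline: float, periods: int):
    stored, path = 0.0, []
    rate = initial_rate
    for _ in range(periods):
        injected = min(rate, capacity - stored)  # cap at remaining capacity
        stored += injected
        path.append(injected)
        rate *= (1 - decline)  # sequestration gradually falls over time
    return path, stored

# Arbitrary illustrative numbers: a 100-unit reservoir, starting injection of
# 20 units per period, declining 20% per period.
path, stored = simulate(capacity=100.0, initial_rate=20.0, decline=0.2, periods=10)
```

The `path` list starts at the maximum rate and decreases monotonically, echoing the paper's qualitative finding that the bulk of sequestration is front-loaded and tapers off as the site fills.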
In another research project we sought to determine the optimum policies for capturing CO2 emissions by comparing two sectors. Sector 1, comprising heavy industries such as steel and cement works and conventional thermal power stations with concentrated emissions, has access to CCS and can therefore reduce its emissions at a reasonable cost. Sector 2, the transport sector for example, whose emissions are more diffuse, only has access to a more costly CO2 capture technology (e.g. atmospheric capture, a technique that involves recovering CO2 from the atmosphere using a chemical process to isolate the polluting molecules). Considering these two "heterogeneous" sectors, we have been able to show that the optimal strategy is to start by capturing the emissions from Sector 1 before the permissible pollution ceiling is reached. The capture of emissions from Sector 2 starts once the pollution ceiling has been reached, and is only partial. As far as the carbon tax is concerned, our research shows that it has to increase during the pre-ceiling phase; once the ceiling has been reached, the tax must fall in stages to zero.
It seems clear in a market economy that the only way to persuade industry to capture and store CO2 is to put a price on carbon, by taxing it, for example. Reasoning in terms of "cost-effectiveness", industrial firms will compare the cost of sequestering a metric ton of carbon with the amount of tax they would need to pay if that same metric ton were released into the atmosphere. This tax must be unique and applied to all sectors, regardless of their number and nature. What level of tax would guarantee that CCS would be competitive and thus ensure its development? According to the IPCC (Intergovernmental Panel on Climate Change), if we are to limit the global temperature rise to 2°C, then the atmospheric pollution ceiling must not exceed 450 ppm (parts per million). This equates to a carbon tax of around 40 dollars/metric ton of CO2 in 2015, rising to 190 dollars/metric ton of CO2 in 2055 (the date at which the threshold is reached), which would strongly stimulate the development of CCS.
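This cost-effectiveness reasoning reduces to a simple decision rule, sketched below with the indicative figures quoted earlier in the article (the function and constant names are ours): a firm captures and stores a ton of CO2 whenever the carbon tax it would otherwise pay exceeds its sequestration cost.

```python
# Illustrative sketch, not the authors' model: a plant adopts CCS when the
# carbon tax per ton is at least its sequestration cost per ton.
def adopts_ccs(tax_per_ton: float, ccs_cost_per_ton: float) -> bool:
    """Cost-effectiveness rule: capture and store rather than pay the tax."""
    return tax_per_ton >= ccs_cost_per_ton

# Upper ends of the cost ranges cited in the article (dollars/metric ton).
COAL_PLANT_CCS_COST = 45  # coal-fired power stations: 30-45 dollars/ton
GAS_PLANT_CCS_COST = 65   # gas-fired power stations: 60-65 dollars/ton

# At a 50-dollar tax, CCS pays off for the coal plant but not the gas plant.
print(adopts_ccs(tax_per_ton=50, ccs_cost_per_ton=COAL_PLANT_CCS_COST))  # True
print(adopts_ccs(tax_per_ton=50, ccs_cost_per_ton=GAS_PLANT_CCS_COST))   # False
```

The rule also explains why the projected tax path matters: at 40 dollars/ton in 2015 only the cheapest sectors adopt, while at 190 dollars/ton in 2055 CCS becomes attractive far more widely.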
However, it is essential to note that carbon capture is merely a transitional solution for relieving the atmosphere of carbon emissions while continuing to benefit from energy that is relatively cheap compared with renewable sources. Between now and 2030, policy-makers should put in place strategies for a sustainable transition to sources of clean energy.
[1] Primary energy: energy available in nature before any transformation (natural gas, oil, etc.)
[2] Carbon tax: officially known in France as the Contribution Climat Energie [Climate Energy Contribution] (CCE), the carbon tax is added to the sale price of products or services according to the amount of greenhouse gases (e.g. CO2) emitted during their use. It came into being in January 2015 and has risen to 7 euros/metric ton of carbon. The recommended ceiling in the concentration of atmospheric CO2 was established with the objective of limiting the rise in temperature to some desired value (e.g. the famous +2°C).
References: based on an interview with Gilles Lafforgue and the articles "Lutte contre le réchauffement climatique : quelle stratégie de séquestration du CO2 ?" [Combating global warming: what CO2 sequestration strategy?], published in TBSearch magazine; "Optimal Carbon Capture and Storage Policies" (2013), published in Environmental Modelling and Assessment, co-authored by Alain Ayong le Kama (EconomiX, Université Paris Ouest, Nanterre), Mouez Fodha (Paris School of Economics) and Gilles Lafforgue; and "Optimal Timing of CCS Policies with Heterogeneous Energy Consumption Sectors" (2014), published in Environmental and Resource Economics, co-authored by Jean-Pierre Amigues (TSE), Gilles Lafforgue and Michel Moreaux (TSE).
The macroeconomic models that have been developed provide insight into how CO2 sequestration can be implemented so as to make an effective contribution to combating global warming while maximizing the advantages of exploiting fossil fuels. Expressed as a rate of CO2 emissions to be reduced, the theoretical results cast a pragmatic light, enabling governments to encourage industrialists to sequester CO2 rather than pay the carbon tax.
In the first study, a dynamic model was developed for the optimum management of energy resources, taking account of interactions between the economy and the climate; carbon was assigned a value that directly penalized economic activity. For the second model, we adopted a "cost-effectiveness" approach: assuming a maximum threshold of emissions that cannot be exceeded (from the Kyoto protocol), we determined the scale at which CCS had to be deployed and then ascribed a financial value to carbon.
By Laurent Germain and Anne Vanhems
We took a close look at the personality of traders to better understand how speculative bubbles work, and we noticed that some traders do not act rationally. They are not, however, the only ones who affect financial markets.
For each transaction, traders have to take into account many parameters: market trends, competitors’ strategies and the latest news. But sometimes the situation gets out of hand, and other less rational aspects influence their decisions.
In spite of their experience, individuals sometimes act in a biased way. We may thus observe disproportionate reactions, such as buying shares at a price far above their intrinsic value. These illogical decisions trigger speculative bubbles; stock markets then crash when everyone wants to sell assets simultaneously and their value collapses.
While it is now acknowledged that some traders do not act rationally at a given time, we still find it difficult to imagine that this might also be true of market-makers. Market-makers are organizations (mostly investment banks), or people, who set the buy and sell prices of assets: they are said to "quote" the buying and selling prices and thus set the value of the assets. However, it is possible to demonstrate that some market-makers also make the wrong decisions. Market-makers are considered more experienced and "battle-hardened": they are paid to stay one step ahead, and traders are trained to analyze their latest strategies. As they are key players, it was difficult to admit that they might act irrationally, raising prices until red lights start flashing or, conversely, lowering the price of shares in a context of rising demand.
In order to study this phenomenon, we separated the two main biases. The first, related to the degree of optimism, causes the market-maker to misread the market trend: when clues show that a share is about to lose value, he may think that it will soon go up again. The second, related to the level of self-confidence, leads him to over-estimate his own skill. He may then cause the price of the share to vary considerably, making the market more volatile; and the higher the volatility, the bigger the gains and losses.
We first observed that the biases of market-makers affect the depth and liquidity of the market. A deep market is one in which the price remains relatively stable. A liquid market is one in which prices are not set aggressively (there is then a lot of buying and re-selling).
For instance, an optimistic market maker may think that the information he received is more reliable than it actually is and consider that his judgment is less crucial for his decision. He will then tend to overestimate the price of the asset, and traders (who buy and re-sell) will decrease the number of their transactions. When this market maker is too confident of his assessment, he will quote the share less aggressively and thus increase the liquidity of the market.
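This mechanism can be captured in a toy illustration (our own sketch, not the published model; all names and numbers are hypothetical): an optimistic market-maker adds a bias to the signal he observes and quotes too high, and rational traders respond by selling or trading less rather than buying.

```python
# Toy sketch of quote-setting under optimism bias (illustrative only).
def quoted_price(true_value: float, signal_noise: float, optimism: float) -> float:
    # The market-maker quotes from a noisy signal, shifted by his optimism.
    return true_value + signal_noise + optimism

def rational_demand(quoted: float, estimate: float, sensitivity: float = 1.0) -> float:
    # Rational traders buy when the quote is below their own estimate of
    # value, and sell (negative demand) when it is above.
    return sensitivity * (estimate - quoted)

unbiased = quoted_price(true_value=100.0, signal_noise=0.0, optimism=0.0)
biased = quoted_price(true_value=100.0, signal_noise=0.0, optimism=5.0)

print(rational_demand(unbiased, estimate=100.0))  # 0.0: fair quote, no trade
print(rational_demand(biased, estimate=100.0))    # -5.0: traders sell the overpriced asset
```

In this stylized setting the optimistic quote systematically sits above the traders' valuation, so their buying volume shrinks, which is the pattern described above.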
The tulip bulb crisis was the first speculative bubble and remains an outstanding example of a frenzied financial market. In 1636-1637, some bulbs were sold at more than 15 times the annual salary of the horticulturist and the volumes exchanged on the markets were completely unrelated to the actual number of available bulbs.
The first conclusion we can draw from this study is that market-makers who are too confident or not confident enough can make either profits or losses. When the market maker is pessimistic, but still trusts his own judgment, then price variations are seen to be weaker.
Prices increase mechanically, and the volume exchanged by rational traders is then low. Nevertheless, one conclusion of the study is that market-makers are able to take advantage of this market: the rise in prices does not affect the overall demand.
The results of this research also show that while traders with biased behavior trigger situations of disequilibrium, market-makers who are over-confident increase the likelihood that this will happen. For instance, an optimistic market maker amplifies excessive trading, which means that there are too many transactions. We can compare this to the Internet bubble, when both traders and market-makers thought they were witnessing the birth of a new economy and hence the likelihood of extreme growth.
The prices of shares in technology start-ups went sky-high, uncorrelated with the companies' actual profits, and yet the number of transactions continued to increase. The March 2000 crash led to a recession in the sector, but also in the economy in general, with losses exceeding the profits previously made.
Moreover, we proved that there was an unexpected result: the fact that market-makers may behave in a biased way sometimes favors traders who are not very confident. In this case, a trader who lacks confidence may get better results than a trader who acts correctly. Consider for example a share whose value will not change. The optimistic market maker believes that it will increase and therefore sets a high price. A pessimistic trader believes that it will drop and therefore sells his shares, whereas a ‘standard’ trader will wait. In this case the pessimistic trader will make profits but not the ‘standard’ trader.
We may conclude from our research that the volatility observed may not be due to traders alone, but may also be amplified by the attitude of market-makers. In fact, the last conclusion of the study is that in extreme cases of over- or under-confidence, we observe excessive volatility and an excessive number of transactions. In a situation in which some traders lack confidence, market-makers who also lack confidence will cause rational traders to make too many transactions.
We are now working on a new more complex model which assumes that some market-makers act according to the way others do: in other words they no longer act as independent ‘black boxes’ but take into account the strategies of their counterparts.
Reference: This article was written by Laurent Germain and Anne Vanhems, based on the article entitled “Irrational Market-makers”, co-authored by Fabrice Rousseau and Anne Vanhems and published in Finance, vol. 35, no. 1, April 2014. The article won the Prize of the French Finance Association 2014.
These unpublished results pave the way for new strategies in which traders and market-makers should consider that, both among their peers and their competitors, some agents may take biased decisions.
Banks expressed a lot of interest in the study when the article was published, and we may assume that this theory has been integrated into their trading practices.
Our team of researchers constructed a mathematical model simulating the effects of psychological biases on markets. We defined two reference scenarios: the first in which all actors are rational and a second including rational and irrational traders dealing with rational market-makers. This enabled us first to illustrate the impact of trader irrationality. We then compared these results to a simulation in which all of the market-makers are irrational.
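The two-scenario setup described above can be sketched with a toy Monte Carlo simulation. This is not the authors' model: the parameters, decision threshold and distributions are all invented, purely to show how inflating traders' confidence in their private signals mechanically raises the number of transactions:

```python
import random

def simulate(n_rounds=10_000, trader_bias=0.0, mm_bias=0.0, seed=42):
    """Toy market: each round a trader sees a noisy signal of the asset's
    value and trades against a market-maker's quote whenever his (possibly
    overconfident) belief diverges far enough from that quote."""
    rng = random.Random(seed)
    volume = 0
    mispricings = []
    for _ in range(n_rounds):
        value = rng.gauss(0, 1)              # fundamental value of the asset
        signal = value + rng.gauss(0, 1)     # trader's noisy private signal
        belief = (1 + trader_bias) * signal  # overconfidence inflates the signal
        quote = mm_bias                      # a biased market-maker quotes off-centre
        if abs(belief - quote) > 0.5:        # trade only on a large divergence
            volume += 1
            mispricings.append(abs(value - quote))
    avg_mispricing = sum(mispricings) / len(mispricings) if mispricings else 0.0
    return avg_mispricing, volume

# Rational benchmark vs. overconfident traders: bias raises the trade count.
_, trades_rational = simulate()
_, trades_biased = simulate(trader_bias=0.5)
print(trades_biased > trades_rational)  # True
```

With the same random seed, any trade triggered by an unbiased belief is also triggered by the inflated one, so overconfidence can only add transactions — a crude analogue of the "excessive trading" described in the study.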
By Stéphanie Lavigne
Quite unexpectedly, the European companies that invest the most in Research & Development (R&D) are also those whose majority shareholders are institutional investors (particularly pension funds located in English-speaking countries). We had expected to find so-called ‘strategic’ investors (the State or families), who are generally believed to support a company’s growth policy and therefore also its innovation policy.
The advent of institutional investors in the 1990s led to a radical change in equity breakdown in European companies. Today, 50 to 60% of the capital of European groups listed on the stock exchange is held by pension funds and mutual funds (which manage other people’s money). Now, as leading shareholders, they have imposed their own governance principles and value creation strategies, demanding about 15% return on investment for the households whose savings they manage.
To achieve this level of return, companies are implementing financially-driven strategies with shorter investment periods, so that they can deliver ever-increasing dividends to their shareholders on a yearly or even a six-monthly basis. But is this short period of time compatible with a given company’s growth and with its R&D policy in particular?
In this study, we have tried to establish a relationship between the equity breakdown and innovation policies of major European companies.
A review of empirical studies undertaken so far reveals that they relate almost entirely to the North American market and yield contradictory results. Two opposing theories have emerged regarding the influence of institutional investors: one of the theories asserts that these investors believe in short-term profitability only and do not encourage high-risk innovation policies; the second theory, on the other hand, acknowledges the control exercised by these investors and their positive influence on innovation policy, which ensures the company’s long-term profitability.
Our study shows that in Europe, the more a company’s shares are held by institutional investors, the more it spends on R&D, whereas we were expecting to find strategic investors, such as governments or families, that are known to support companies with patient growth policies. It seems that the crucial factor is the investment period of these institutional investors: the longer the period, the greater the likelihood of the company committing to an innovation policy. This may seem insignificant, but the findings have never before been demonstrated in a multinational context (a sample of 324 European companies) over such an extended period of time (tests between 2002 and 2009).
One of the major conclusions of our study highlights the detrimental effect of short-term investor attitudes on the innovation strategies of companies, which actually need the support of long-term investors in order to carry out their R&D policies.
When analyzing how the investment period influences the innovation strategies of European companies, we compared companies having short-term or “impatient” investors (with an investment period of less than 18 months) as majority shareholders with companies where long-term or “patient” investors are the majority shareholders. Our findings show that R&D spending is higher when the majority shareholders are patient investors and lower when most of the company’s capital lies in the hands of impatient investors.
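The patient/impatient comparison described above can be sketched as follows. The company figures are invented for illustration and are not the study's data; only the 18-month threshold comes from the text:

```python
# Illustrative only: invented numbers, not the study's sample.
# Classifies each company by its majority shareholder's holding period and
# compares average R&D intensity across the two groups.

HOLDING_THRESHOLD_MONTHS = 18  # "impatient" investors hold for < 18 months

companies = [
    # (holding period of majority shareholder in months, R&D / sales in %)
    (36, 8.1), (48, 9.4), (12, 3.2), (9, 2.7), (24, 6.5), (6, 1.9),
]

patient = [rd for months, rd in companies if months >= HOLDING_THRESHOLD_MONTHS]
impatient = [rd for months, rd in companies if months < HOLDING_THRESHOLD_MONTHS]

mean = lambda xs: sum(xs) / len(xs)
print(round(mean(patient), 2), round(mean(impatient), 2))  # 8.0 2.6
```

In this toy sample, as in the study's findings, average R&D intensity is higher in the patient-majority group.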
This article was written by Stéphanie Lavigne, based on the article “Ownership structures and R&D in Europe: the good institutional investors, the bad, ugly and impatient shareholder”, co-authored with Olivier Brossard and Mustafa Erdem Sakinc, published in Industrial and Corporate Change (Volume 22, Number 4), Oxford University Press, 5 July 2013.
Our study of equity breakdown and the innovation strategies of European companies shows that we should not be disparaging about institutional investors but focus on the crucial issue of how long they leave their investments in companies. With this in mind, companies must learn to identify the investment period of any new institutional investors promptly in order to build a privileged relationship with them and attempt to offset any short-term investors.
In our research, we conducted an empirical study of the relationship between the equity breakdown and innovation policies of leading European companies. We analysed a sample consisting of the 324 most innovative European companies (as listed on the EU Industrial R&D Investment Scoreboard between 2002 and 2009) and compared their R&D expenditure against financial and shareholding data obtained from the Thomson Financial database.
By Akram Al Ariss
To win points in the global search for talents, companies would do well to create a human resources policy that is attractive to self-initiated expatriates. Akram Al Ariss, research professor at Toulouse Business School, has carried out a review of the scientific research on this important subject.
The scale of international migration has been steadily increasing for many years: from 214 million in 2010, the number of people living outside their country of origin has risen to 232 million, and may well increase by a further 96 million people between now and 2050, according to United Nations estimates. Until now, the potential use by organizations of highly skilled talents from this population has received little attention from researchers. The human resource management literature refers to these talents as ‘self-initiated expatriates’, the term we use in the rest of this article.
Highly skilled talents who undertake international mobility form a pool of human resources that could give host countries and companies a competitive edge in the global war for talents. This is especially the case for self-initiated expatriates (SIEs): individuals who have chosen to move of their own free will, and who are often highly qualified and experienced, with a rich linguistic and cultural background. But dipping into this pool first requires identifying, recruiting, developing and retaining them as staff while satisfying their ambitions. To do so, companies need to devise and implement a specially-tailored Human Resources strategy.
This is particularly important for companies that are expanding internationally. For cost reasons, the classic pattern of expatriating their employees, with concomitant salary bonuses and various other benefits, has been replaced in the past few years by a more economical “local plus” model, in which the employee resigns in order to be rehired under a local contract, with much less favorable conditions. But this system, which generates frustration and understandably dents staff motivation, often leads to a swift resignation and is counterproductive. In reality, it is the company rather than the employee that ends up losing in the long term: the saving is only illusory, since the “local plus” strategy creates a detrimental turnover of employees, leading to a brain drain in the company, and damages its image in the eyes of potential expatriate candidates. The recruitment of self-initiated expatriates is undoubtedly an interesting way out of this impasse. Since they are already expatriates for non-professional reasons, they will more readily accept working under local market conditions.
The question actually applies to every business: how can they target and reach these individuals with high added value? One answer is simply to take their specific needs into account. The situation varies according to their experience as well as their countries of origin and host countries. Nevertheless, studies have shown that SIEs face a number of barriers and obstacles that limit their opportunities for integration in their host organizations and societies. Among the most commonly cited are the immigration policies of states, particularly regarding visas and work permits; the recognition or not of qualifications and professional experience; barriers related to language proficiency and communication codes; and, more insidiously, discrimination and stereotypes of all kinds. These difficulties are exacerbated when it comes to women, who nowadays make up one out of every two self-initiated expatriates. A company’s first responsibility is to recognize these obstacles and then help self-initiated expatriates to find a way round or overcome them, in order to facilitate recruitment and enable them to find jobs matching their skills.
Human resource (HR) managers’ strategy plays an essential role in two specific ways: through adapting their organizational recruitment and selection procedures, on the one hand, and through providing cultural training and development opportunities to these self-initiated expatriates, on the other. In terms of recruitment, HR practices must adapt to this expatriate population, not only to avoid excluding it (for example by neglecting its preferred communication channels or requiring local professional experience that, by definition, it cannot have), but also to attract it (for example by not restricting the job offer to a technical description of the proposed job but giving in addition general information on life opportunities linked to the job). For the company, the main benefit of this proactive and differentiated approach is not to miss out on this highly skilled labor.
The second priority is to encourage them to stay with the company by facilitating their integration and cultural adaptation. Research cannot provide a comprehensive and definitive answer as to why an SIE remains in a job, especially as these reasons may vary from country to country. However, HR management should strive to understand the motivating factors in order to implement appropriate development and retention solutions.
These are only a few indicators from research results. The development of a relevant HR strategy tailored for self-initiated expatriates is essential in any case. Of course, whatever happens, it’s a win-win policy for expatriates themselves, for whom the choice of mobility is then crowned with success, but equally for companies who manage to attract the best candidates, thus giving them a decisive advantage in global competition. Indeed, the international workforce is a source of diversity, creativity and innovation. The winning companies will be those that are capable of looking beyond the various stereotypes, discrimination and obstacles, in order to tap into this worldwide flow of human resources.
This article was written by Akram Al Ariss, based on the articles “Self-initiated expatriation and migration in the management literature”, co-authored with Marian Crowley-Henry (Department of Management, National University of Ireland Maynooth) and published in Career Development International (2013), and “Human resource management of international migrants: current theories and future research”, co-authored with Chun Guo (Department of Management, Sacred Heart University, Fairfield, CT, USA) and published in The International Journal of Human Resource Management (2015).
In writing the two articles referenced above, Akram Al Ariss and his two co-authors conducted a systematic review of the scientific research conducted on the subject of self-initiated expatriation.
By Gregory Voss
How likely is it that the reforms launched in 2012 by the European Union (EU), with the aim of ensuring a high level of personal data protection for the citizens of its 28 member states, will become applicable in 2017? It is possible, but the European Parliament, the Council of Ministers and the European Commission have yet to reach an agreement: informal three-way discussions are taking place.
Since June 2015, these three EU institutions have been jointly drafting a text for the General Data Protection Regulation (GDPR). There are still a few points on which the parliament and the council disagree, in particular with regard to obtaining an individual’s consent for the processing of personal data, the rights and responsibilities of those collecting data, and the amounts of fines for non-compliance.
A commission proposal for new legislation on personal data protection was made back in 2012. But the draft regulation, passed by the parliament on March 12, 2014, is now awaiting validation by the Council. These reforms will help protect European citizens and their personal data even with respect to international companies whose headquarters are outside the EU, but who nevertheless process data online. While the degree of personal data protection in Europe is generally quite high, the financial penalties are too low, in contrast to those enforced in the United States.
When the three EU bodies have agreed on the final draft text, it can then be adopted only after two consecutive readings of the same text by the parliament, whose members are directly elected by EU citizens and after approval by the council, which represents the governments of the 28 member states. Once adopted (most likely in 2016, though some were pushing for adoption at the end of 2015), the regulation will become applicable in the two years that follow.
This GDPR will harmonize European law and may deliver an additional benefit by triggering a broader process that leads to the standardization of international legislation on protection of personal data. Moreover, the reduction of the administrative burden arising from this single piece of legislation will enable savings of €2.3 billion per year, according to the Commission’s calculations.
The process may seem to be taking a long time, but it has to be borne in mind that it took five years to finalize the 1995 European directive on personal data protection. The GDPR is essentially at the three-and-a-half-year mark, so it remains within a comparable timeframe.
The GDPR has been subject to intense lobbying efforts by the representatives of those who process data. While they may slow down the legislative process, these actors can play a legitimate role in informing legislators about the practical realities faced by the companies who collect data.
Following the Snowden revelations, efforts to reform the legislation have experienced numerous upheavals. In June 2013, Edward Snowden, a former CIA employee and National Security Agency (NSA) contractor, revealed that the US government had collected personal data concerning individuals living outside the US from nine of the biggest American technology companies, particularly as part of an electronic monitoring program known as PRISM. On October 21, 2013, the European Parliament proposed a text stipulating that the company responsible for data processing, or its subcontractor, would have to inform the data subject about any communication of their personal data to the public authorities in the previous twelve months. This provision is clearly influenced by the PRISM case.
In general, revelations such as this one, relating to data protection, help stimulate the debate about privacy in Europe, even if they have weakened trust between the EU and the United States. On October 6, 2015, as a result of the transfer of data on an Austrian citizen to the United States by the European subsidiary of Facebook, the Court of Justice of the European Union (CJEU) ruled against the validity of the Safe Harbor Privacy Principles, which had been used to justify the transfer and which stipulate that, in the event of threats to US security, a clause allows the US authorities to access the personal data of European citizens. The CJEU’s decision, which followed the conclusions of the Advocate General, invalidated the Safe Harbor, which according to Voss “is a problem for more than 4,000 US and European companies that depend on the Safe Harbor Privacy Principles for the transfer of personal data to the United States.” It remains to be seen what actions the European and US institutions and companies will take following this decision.
On the other hand, even in the absence of a GDPR, the Google Privacy Policy case shows that EU member states have the tools to oblige the operator of a search engine to respect privacy and personal data protection laws. In this vein, a number of cases have led to the data protection authorities in Germany, Spain, France, Italy, the Netherlands and the United Kingdom imposing penalties on Google, including fines amounting to hundreds of thousands of euros. While the size of these fines is relatively small compared with Google’s annual turnover (€59 billion in 2014), they are examples of the more severe enforcement actions, based on the turnover of the companies sanctioned, which are foreseen in the European legislative proposals.
In France, the Commission Nationale de l’Informatique et des Libertés (CNIL – the French data protection authority) disagrees with Google about de-listing following the Google Spain decision by the CJEU. Since the court recognized this right in 2014, any person may request that the operator of a search engine erase the search results that appear in relation to their name. As a result, Google has received tens of thousands of requests from French citizens. It then proceeded to de-list results on its European search engine domains (.fr, .es, .co.uk, etc.). But it did not extend the de-listing to other geographic domains or to google.com, which any user can search. In May 2015, the CNIL requested that Google proceed with de-listing from all its geographic domains. Google, however, argues that this decision constitutes an infringement of the public’s right to information and is, therefore, a form of censorship. A CNIL rapporteur (the official who manages the case) will no doubt be appointed to resolve this issue.
While the EU is working hard to hammer out a jointly-agreed regulation on protection of personal data, its member states, such as France, continue to strengthen their legislative arsenals. On September 26, 2015, the government presented a draft document on the subject of a “digital republic”, comprising some thirty articles on the confidentiality of electronic correspondence, portability of files and open access to public data, for public consultation. Public consultation on the development of this document is an interesting approach, the effects of which deserve to be monitored.
This article was written by Gregory Voss, along with the articles “European Union Data Privacy Law Developments”, published in The Business Lawyer (Volume 70, Number 1, Winter 2014-2015); “Looking at European Union Data Protection Law Reform Through a Different Prism: the Proposed EU General Data Protection Regulation Two Years Later”, published in Journal of Internet Law (Volume 17, Number 9, March 2014); and “Privacy, E-Commerce and Data Security”, published in “The Year in Review”, an annual publication of the ABA Section of International Law (Spring 2014), co-authored with Katherine Woodock, Don Corbet, Chris Bollard, Jennifer L. Mozwecz, and João Luis Traça.
The effect of the GDPR on businesses will depend on the final text adopted by the EU. What is certain is that greater accountability will be imposed on companies that manage personal data. Some companies will probably have to create new data protection officer (DPO) posts, defined on a model similar to the “correspondant informatique et libertés” (CIL) in France. Companies specializing in conducting privacy impact assessments will also emerge. The author therefore advises business leaders to monitor developments in personal data protection legislation closely, in order to be able to comply with new legislation as soon as it comes into force. He proposes raising employees’ awareness through training on data protection. Finally, companies will have to implement adequate procedures to comply with personal data protection legislation, including procedures enabling the data breach notifications that will be required by the GDPR.
To produce these articles about data-protection legislation, the author has analysed many legal documents and “hundreds of pages of proposals, amendments and opinions”, especially those resulting from the work carried out by WP29, the independent EU working group on the handling of personal data. In his articles, he puts the proposals of European authorities to adopt a GDPR into perspective and offers practical advice for businesses. He has also examined the changes in opinion of various European bodies, the European Commission, Parliament and Council, and has studied the reactions of legislators to Edward Snowden’s revelations on electronic surveillance.
By Alain Klarsfeld
Between 2010 and 2012, three key Acts were promulgated, requiring businesses and administrations to make significant progress in terms of gender equality. The Act of 2011 focused particularly on gender balance on the administrative and supervisory boards of companies and public administrations. Here is an overview of the impact of this Act by the research professor Alain Klarsfeld.
Despite the laws and affirmative action in favor of employment equality and the fight against discrimination, only minor advances have been made in many areas. Companies have certainly come close to the legal quota of 6% for the employment of disabled people, whereas the proportion was barely 4% ten years ago, but this is still far from satisfactory. Regarding older workers, companies have also made real efforts to prolong their employment (+8% in 7 years), with an employment rate of 44.5% for 55-64 year-olds at the end of 2012, after a long period of stagnation during the 2000s. Among other positive developments, we should mention the creation in 2011 of an ombudsman (Défenseur des Droits), who took over from the French High Authority for the Fight Against Discrimination and for Equality (HALDE) and whose role is to further the fight against discrimination by ensuring that access to one’s legal rights, now simplified and streamlined, is more effective.

However, the employment situation of young people remains a concern and has not significantly improved. As an example, a young graduate with five years of higher education can take, on average, a year to find a first position, despite the legislative incentive known as the “Contract between Generations”*. Similarly, the employment rate of people from immigrant backgrounds, whether first or second generation, has changed very little.
There is nonetheless a silver lining in this rather gloomy picture: the entry of women into the governing bodies of companies and institutions, namely the supervisory boards. It all started with an Act of 2011 concerning the balanced representation of women and men on supervisory boards and professional equality, which provided for the progressive introduction of quotas as a step towards the feminization of the governing bodies of large companies. The goal, made mandatory with financial and non-financial penalties in the event of non-compliance, was to reach 40% women by 2017. This is a real cultural shift that has opened the way for women to play a major role.
Quota-based policies are not universally approved; in fact, they are frequently contested, and their implementation may even be counter-productive, as evidenced by the tensions around ethnic or caste quotas in India or Malaysia. Gender quotas for supervisory boards did not exist before 2008, when Norway introduced them; eleven other countries have since followed, and the effect in favoring equality on supervisory boards has been encouraging. It might have been supposed that the women appointed to these positions would be seen as lacking legitimacy. However, studies show that the operation and work of boards of directors have improved: women who are genuinely recruited for their skills appear to be less conformist, ask more questions and make these bodies more dynamic.
This legal obligation has also had the effect of putting a stop to a damaging system of cooptation. Boards of directors tended to exist in isolation, coopting new members from among their acquaintances. The imposition of quotas has opened new horizons in professional recruitment processes: head-hunters, for example, are now compiling databases of highly qualified women and encouraging them to apply for these positions. In the future, it would be a good idea to verify that the creation of directories of qualified people has led to the recruitment of more professional administrators overall, and not only more women.
Thanks to this Act, which imposes a “positive obligation”, training courses for administrators have emerged, helping to make the role genuinely professional. In the years to come, it will be necessary to verify that the presence of women on boards of directors supports or provides leverage for parity in corporate executive positions, and that no further legislation is necessary to achieve parity. Already, outside the scope of the Act, major groups, including companies listed on the French CAC 40, have set themselves goals for the recruitment of female managers and top executives. This is less of a blunt instrument than an obligation to comply with quotas, but it will again be necessary to verify that its potential trickle-down effect in companies, at every hierarchical level, produces real progress.
* A scheme introduced in 2013 to help private companies create jobs, including permanent employment contracts for young people.
From an interview with Alain Klarsfeld and the chapter “Equality and Diversity in years of crisis in France”, co-authored with Anne-Françoise Bender and Jacqueline Laufer, published in the book “International Handbook on Diversity Management at Work – Country Perspectives on Diversity and Equal Treatment (second edition)”, May 2014.
Professional equality: This means the same rights and opportunities for both men and women, in particular as regards access to employment, working conditions, training, qualifications, mobility, promotion, work-life flexibility and remuneration (equal pay).
The chapter “Equality and diversity in years of crisis in France”, published in 2014, provides an up-to-date overview of developments in France concerning professional equality and diversity since the first edition of the book “International Handbook on Diversity Management at work – Country Perspectives on Diversity and Equal Treatment” was published in 2010. The book is the result of an analysis of changes to the European and French legislative framework and of the various reports and publications on the subject, as well as of paying attention to and monitoring the work of think-tanks, associations and businesses that deal with the fight against discrimination and the importance of diversity, in and outside of Europe.
By Pierre André Buigues
France has had a foreign trade deficit since 2003 and the country’s share of the world export market is continuing to drop. France’s share of the export market went from 6.1% in 1995 to 5.1% in 2000. It then fell to 4.2% in 2006 and stood at just 3.5% in 2013. The automotive sector provides a good example of this French industrial decline. In 2003, France’s automotive sector had a trade surplus of €12.6 billion, but this had turned into a €6.9 billion deficit by 2014!
Economists put the decline of French foreign trade down to a lack of competitiveness, due to both price and other reasons. In France, costs have tended to increase faster than productivity and the products are not perceived as giving sufficiently high ‘value for money’, particularly compared with products “Made in Germany”.
The French aeronautical sector is an exception to this trend; indeed, the sector has prevented the balance of trade deficit from plunging further. The aviation sector – both civil and military – and the space industry have posted a foreign trade surplus in excess of €23 billion over the last few years, representing the largest surpluses in the overall French balance of trade. France is the world’s second largest exporter in the aeronautical field, with 22% of the worldwide market, after the United States (35%). Germany is the third largest exporter with 14% of the worldwide market. France has seen its market share increase by 8% in ten years, unlike the agri-food and automotive sectors.
Airbus’ exports represent the lion’s share of French exports. Airbus accounts for roughly 50% of French exports in the aeronautical sector. Table 1 below shows direct sales of new French-built aircraft to foreign airline companies and the shipments of turnkey A380 aircraft from France to Germany for subsequent deliveries from the Hamburg site, as well as the value in euros (€M) of these exports.
The aeronautical sector is an oligopoly characterised by heavy capital investment and products with advanced technology. As such, the cost of entering the market is extremely high. In France, the aeronautical sector represents around 4,000 companies and employs 320,000 people directly. The success of the French aeronautical sector is the result of an industrial strategy built on strong technological assets, strategic European alliances and strong political support.
However, a certain number of challenges lie ahead for the French aeronautical industry.
1- Asia accounts for an increasingly large part of the global air-transport market and a new manufacturer could enter the market to compete with the two powerhouses, namely Airbus and Boeing. Airbus forecasts that passenger traffic in China will exceed that of the United States within 20 years and China aims to take a share of the aeronautical sector. To develop its sales in China, Airbus decided to increase its purchases of Chinese components and to set up an A320 assembly plant in the country.
2- France plays a pivotal assembly role in Europe. The country imports parts and aeronautical equipment, essentially from Europe (foreign trade deficit) and exports complete aircraft (large foreign trade surplus). Complete aircraft account for over two thirds of French aeronautical exports. Delocalising the assembly of Airbus aircraft therefore has a negative impact on France’s balance of trade. At the same time, Germany is taking an increasingly important position in the European aeronautical sector, with a growing number of A320s being assembled on the site in Hamburg. This is Airbus’s best-selling aircraft, already assembled on several sites, in Toulouse, Hamburg, Tianjin (China) and, since 2015, in Mobile (USA).
3- Aeronautical R&D accounts for over €3 billion of investment in France every year. However, within Airbus itself, the question is being asked as to whether R&D leadership has shifted from France to Germany. At the beginning of the 2000s, the R&D expenditure of Airbus France was one and a half times greater than that of Airbus Germany. Ten years on, the R&D expenditure in Germany was 10% more than in France. To be more precise, Airbus Germany is responsible for a significant section of the fuselage of Airbus planes and for the cabins. In addition, Germany is the leader in terms of materials R&D, although France is still the R&D leader for certain key components, such as the cockpit, flight controls, navigation and traffic management.
4- The aeronautical and space industry is also one of the rare industrial sectors in which jobs are being created, and in which skilled jobs are predominant. Engineers and managers account for approximately 41% of all the jobs in the sector. However, the French education system is not able to supply the aeronautical sector with all the technicians, welders, and metal workers that it requires. For instance, small-and-medium-sized aeronautical sub-contractors have much greater problems recruiting the staff they need than Airbus.
5- Finally, the industry also carries significant risks, considering the investment required to launch a new aircraft. Indeed, there was a fear the A380 would not be a commercial success. Each new aircraft brought onto the market can also run into serious problems, as in the case of the A400M. Consequently, there is no guarantee of success.
By Pierre André Buigues, based on research by Elie Cohen and Pierre-André Buigues (2014), “Le décrochage industriel”, Fayard, 439 pp., ISBN 978-2-213-68188-7; and Pierre-André Buigues and Denis Lacoste (2011), “Stratégies d’Internationalisation des Entreprises : Menaces et Opportunités”, De Boeck, 376 pp., ISBN 978-2804162917.