By David Stolin

On March 31, 2005, Lehman Brothers chairman and CEO Dick Fuld was re-elected to the company’s board with 87.3% investor support. Four years later Mr. Fuld was ranked as “the worst CEO of all time” by Portfolio magazine, and widely described as having professional and personal qualities that contributed to Lehman’s collapse – and, due to Lehman’s position at the heart of the financial sector, to the international financial crisis.

We do not know how every Lehman shareholder voted in that election, much less why they voted as they did. We do know that around two-thirds of Lehman’s stock was held by other prominent financial institutions, the top ten being Citigroup, State Street, Barclays, Morgan Stanley Dean Witter, Vanguard, AXA, Fisher Investments, MFS, Mellon Bank, and Merrill Lynch. Most of these firms and their managers would be expected to have repeated dealings with Lehman and its management. As a result, we would expect these firms to have been particularly well informed about Mr. Fuld’s shortcomings and to have voiced concerns about his ongoing concentration of power.
On the other hand, the combination of Mr. Fuld’s shortcomings and his power made him a formidable enemy. He is on record as saying “I want to reach in, rip out their heart, and eat it, before they die” about his professional adversaries.

It is a stimulating thought exercise to visualize Mr. Fuld’s reaction upon learning that, say, Citigroup or Merrill Lynch had voted against his re-election to Lehman’s board. We note that at Lehman, as at the vast majority of U.S. firms, voting was not confidential. This means that Lehman’s management could find out how each of the company’s shareholders voted. And this raises a problem for Lehman’s institutional investors: even if they disagree with management, is it worth incurring management’s wrath by voting against it?


Of course, it is natural for managers to be unhappy with shareholders who vote against them. But for at least three reasons, such feelings matter more when the investee company is in the financial sector.

• The first reason is the “old boys’ network”. Decision-makers at the investing firm are especially likely to be connected to their counterparts at the investee if both have finance backgrounds: they are more likely to have received the same education, to be active in the same professional organizations, to have worked at the same companies in their past careers, and to expect to do so in the future. This increases the potential for retaliation (or reciprocation) at the individual level.

• The second reason is firm-level interaction. Financial firms are more likely to have competitor or supplier/client relationships with their investors than do non-financial firms. This means that retaliation and reciprocation can be channeled through such relationships as well.

• The third reason is cross-holdings of shares. A financial firm may hold shares in its own institutional shareholder, which gives the firm another potential means of retaliating for any anti-management votes by that shareholder, namely, voting against the shareholder’s own management. Conversely, investor and investee may reciprocate by supporting each other through voting.

How can we examine whether our suspicions are well-founded? The only group of institutions systematically required to disclose their votes is U.S. mutual funds, and accordingly we focus our study on mutual fund companies. Our empirical tests suggest that all three types of conflicts of interest listed above do matter. Social ties between the voting and target firms increase the voting firm’s support for the target’s management. In addition, voting appears to be influenced by the fear of retaliation, both in the form of being voted against in the future and of being aggressively competed against in the future. Our results suggest that there is “clubbiness” in the way fund companies vote on each other. We then go on to examine the implications of this clubbiness. We show that directors elected at fund companies with greater own-industry support monitor senior management significantly less.

To generalize our findings, we then use aggregate voting outcomes to assess whether financial companies as a group vote more favorably on their financial-sector peers, and we find that this is the case as well.

In short, the financial sector’s inevitable and extensive investment in itself has a deleterious effect on its governance. What can be done about it? We believe that our work has at least two important policy implications.

First, the notion of conflicts of interest which institutional investors address in their voting policies should be explicitly defined to include not only client/supplier relationships, but also conflicts of interest through product market competition and reciprocal investments. Such recognition would help take voting out of the hands of individuals most inclined to vote in a conflicted manner, or at least constrain these individuals’ discretion.

Second, proxy voting should be required to be confidential at firms in the financial sector; i.e. investee firms should not be able to discover how different shareholders voted. This would mitigate a key reason for conflicted voting, which is potential retaliation/reciprocation by the investee’s management.

It would be naïve to think that decision-making in business can ever be rid of conflicts of interest. But in the case of proxy voting in financial firms, the problem is important enough to deserve a close look from regulators.

Methodology

To investigate conflicts of interest among asset managers, the authors studied mutual funds’ proxy votes on management proposals at asset-management companies. The study covers the period 2004-2013, and the explanatory variables relate to the fund, the company, and the fund-company relationship. The authors also analyzed the associated voting outcomes. The study was published in March 2017 in the print edition of Management Science.

By Sylvie Borau and Jean-François Bonnefon

The new mayor of London is planning to ban advertisements that depict female models who are too thin or whose bodies are unrealistic, but the question of how effective “natural” models are in advertising remains open. Even though a growing number of advertising campaigns show models with fuller, more realistic figures, these remain few and far between. Why would this be the case?

While commercials for cosmetic products intended for women traditionally feature models with ideal beauty, some brands, like Dove for instance, have started to adjust their communication strategy by presenting more realistic women with fuller figures and less artificial editing of the image.

Choosing a model for a commercial: an ethical and economic issue

Presenting models, whether ideal or natural, poses two significant problems: the first is ethical and the second economic. On the one hand, idealized images of female beauty impose an unreachable standard and can have negative effects on the psychological wellbeing of women, for example in terms of body image anxiety. On the other, selecting an ideal model or a natural one also poses the question of the commercial’s economic impact.

For the advertiser, as for the creative agency, the decision to rely less on stereotypical, edited images, or even to abandon them completely, will be based mainly on criteria of commercial effectiveness and probably less on concerns about social responsibility. As a result, it is essential to evaluate more precisely how women react to these natural models and how commercially effective they are. While many studies have looked into the ability of an idealized model to generate anxiety, less attention has been paid to the ability of a natural model to trigger negative emotions of its own. If the reference point for female consumers is models representing ideal beauty, natural-looking models may be considered out of place in the media environment and thus elicit repulsion, unpleasant surprise, or even disgust.

Body anxiety and repulsion

The aim of this study was to compare the reactions of women to magazine advertisements containing either an ideal model or a natural model, both in terms of affective reactions, such as body anxiety and repulsion, and in terms of commercial impact, including their impressions of the advertisement, attitude to the brand, and interest in buying. Half of the sample subjects were shown the traditional ideal model used in commercials for cosmetic products, while the other half was presented with a natural model: a woman with a more realistic body, non-stereotypical physical traits, and no editing of the image.
By focusing more specifically on two negative emotions, anxiety regarding the appearance of one’s body and the repulsion generated by the models, we put forward two hypotheses: first, that natural models reduce body anxiety among readers, particularly those with a high body mass index (BMI), and that this has a positive effect on the commercial’s impact, and second that natural models increase the feeling of repulsion that women feel, with a negative effect on the campaign’s effectiveness.

Surprising results

Concerning the effect of exposure to the different models in terms of negative emotions, it was found that the natural model did not decrease body anxiety among the women. This result could be explained by the fact that the respondents already reported a very high level of anxiety; it would be difficult for this level to be affected further by exposure to the images. However, the natural model generated repulsion, even more so among the women with high BMIs. These women, who are very dissatisfied with their appearance, probably project onto the natural model the feeling of repulsion they feel towards their own bodies.
Concerning the effect of these negative emotions, the results showed that body anxiety increased the effectiveness of the commercial; in other words, the more anxious a woman is about her appearance, the more she will tend to like the advertisement and the brand, and the more likely she will be to want to buy the product. This positive effect of anxiety on the commercial’s impact is rather counter-intuitive, since negative emotions generally harm advertising performance. The other result is more intuitive: repulsion had a negative effect on effectiveness.

Reconciling ethics and commercial impact

In short, these results are not too encouraging if we consider the divide that exists between public policies that aim to encourage the use of natural models and advertising professionals who are more concerned with the economic effectiveness of this type of strategy, and who are therefore interested in advertising with ideal models.
What would we need to do to resolve this contradiction and reconcile ethical considerations with economics? If the aim is to be effective without generating negative emotions that may either increase effectiveness through anxiety or decrease it through repulsion, an alternative could be to dispense with model images altogether, whether idealized or natural. A number of brands have adopted this third approach, particularly in the area of drugstore products. This type of strategy, which is more respectful of the consumer’s wellbeing, requires the development of more informative advertising discourse, shifting the message from emotion toward rational argument.
Further studies could help to refine these conclusions, for example by looking at categories of products other than cosmetics, by presenting other types of models, or by examining reactions other than emotions, such as the credibility that readers assign to the model and to the advertisement.

Methodology

A survey was carried out with 400 French women aged between 18 and 35, representative of the population of France in terms of BMI, education level, socio-professional category, and marital status. The respondents were asked to look at a women’s magazine online, in which there was an advertisement for a cosmetic product illustrated either with an ideal model or with a natural model. They were then asked to complete a questionnaire.

Sylvie Borau has been a professor in marketing at Toulouse Business School since 2013. Before that, she worked for 8 years in various research institutes, particularly in Canada. Her thesis, as part of her doctorate obtained from the IAE of Toulouse in 2013, led her to win the 2014 Sphinx thesis award and to be a finalist for the AFM-FNEGE prize of the French National Marketing Association and the Foundation for Management Education. Her research work focuses on consumer behavior and more specifically on physical attractiveness in advertising.

In 2016, she published an article in the International Journal of Advertising, entitled: “The advertising performance of non-ideal female models as a function of viewers’ body mass index: a moderated mediation analysis of two competing affective pathways” in collaboration with Jean-François Bonnefon, CNRS Director of Research at the Toulouse School of Economics.

By Victor Dos Santos Paulino


Any company faced with a radical innovation in its sector of activity will hesitate between indifference and reaction, because of the impossibility of foreseeing whether the innovation is a radical breakthrough or a product that is doomed to fail. To resolve this dilemma, the solution might be to identify potentially disruptive innovations and assess their risk for established stakeholders, as illustrated by the case of the satellite industry.

The miniaturisation of satellites has affected space industry markets over the last 20 years. On the supply side, new manufacturers have emerged, marketing small satellites at a lower cost; on the demand side, new clients see this innovation as an opportunity. Quite logically, the well-established manufacturers, positioned in the segment of traditional large satellites, are wondering whether they should consider these radically new technological choices a threat.

Disruptive innovation is difficult to identify until it has happened

Our research, conducted in the framework of the Sirius chair (http://chaire-sirius.eu), aims to answer this question, which first involves clarifying the concept of ‘disruptive innovation’. This is necessary because the expression, which is widely, and sometimes mistakenly, used, makes established players in the sector anxious, while fascinating and intriguing them, without it being entirely clear what exactly it refers to.
A disruptive innovation is a particular case of radical innovation which modifies the structure of an industrial sector and whose effects may lead to existing companies being replaced by new competitors. The difficulty is that it is only possible to be certain that it is a disruptive innovation in the long run, a posteriori, once it has staked out its place or even driven out the oldest technologies and the companies that marketed them. In the short term it appears rather to be a less efficient product or service, aimed at a marginal clientele, an immature technology proposed by small companies with limited resources, less know-how and less knowledge of the market.
Because of these characteristics, it is very difficult to distinguish between a genuine disruptive innovation that has just been introduced, and therefore requires a response from existing companies, and an innovation destined to fail, which they can comfortably ignore. This uncertainty about what to do is known as the innovator’s dilemma: to limit the consequences, existing companies must assess the danger promptly and possibly invest in the new market while the disruptive innovation is not yet a threat. If they wait too long, it might be too late.

A classification for anticipating the threat

What matters to company executives is to be able to anticipate trends and thus, if possible, to be able to use forecasting tools. Since it is not possible to affirm at an early stage that an innovation is disruptive, the solution is to try and determine in the short term whether it has the typical characteristics, in other words whether it is a potentially disruptive innovation and if so what type of threat it is likely to pose to well established stakeholders.

Not all disruptive innovations have the same consequences. Some lead to the complete substitution of the old technology by the new one and thus pose an extreme threat, the typical case being silver-emulsion film photography wiped out by digital photography; other innovations do not entirely replace the initial products. This is the case in air transport, where low-cost companies have captured only some of the clientele of traditional companies, and in telephony, where landline technology continues to coexist with mobile technology. These examples illustrate three types of disruptive innovation, of which only the first is associated with a high risk of the pre-existing market disappearing. In the other two cases, the threat to established companies appears to be lower.

Small satellites, a limited threat

What, then, is the situation for the space industry? Given this conceptual framework, how should well-established stakeholders react to the development of small satellites? According to the parameters chosen for our theoretical model, small satellites have most of the characteristics of a potentially disruptive innovation: they offer lower technological performance with respect to the requirements of the traditional main customers; they are less complex; they either cost less or, on the contrary, cost much more (for instance, constellations of small satellites); and they promise new performance criteria, such as the possibility of designing, building and launching a new satellite in a very short time, or the improvements offered by constellations in low Earth orbit.

However, an analysis of the demand for these new satellites shows that they are intended mainly for new customers, which means that we can exclude the hypothesis of a disruptive innovation affecting an existing market, which is really the main risk case for manufacturers. Those who buy them can be divided into institutional customers from emerging countries, which do not have sufficient resources to launch conventional satellites, and private top-of-the-market customers with new needs for low orbit constellations, which conventional satellites do not meet.

Thus, small satellites are indeed a potentially disruptive innovation but they only pose a slight threat to well established stakeholders. Despite the structural changes they might lead to for this industry, there is not much risk that they will entirely replace conventional satellites. This in no way determines either their ultimate success or failure.
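The screening logic described above can be sketched as a simple decision rule: first check whether an innovation bears the typical short-term markers of potential disruptiveness, then grade the threat by asking whether it targets existing customers and whether full substitution is expected. This is a hypothetical illustration of the framework, not the authors' actual classification table; the attribute names and the encoding of the small-satellite case are assumptions made for the example.

```python
# Hypothetical sketch of the screening framework; attribute names are
# assumptions for this example, not the study's actual variables.

def is_potentially_disruptive(innovation: dict) -> bool:
    """Short-term markers: lower performance on mainstream criteria,
    plus either a lower cost or new performance criteria."""
    return (
        innovation["performance_vs_mainstream"] == "lower"
        and (innovation["cost"] == "lower" or innovation["new_performance_criteria"])
    )

def threat_level(innovation: dict) -> str:
    """Grade the threat a potentially disruptive innovation poses
    to established players."""
    if not is_potentially_disruptive(innovation):
        return "not disruptive"
    # Full substitution of an existing market (film vs. digital photography)
    # is the extreme case; serving mainly new customers is a limited threat.
    if innovation["targets_existing_customers"] and innovation["full_substitution_expected"]:
        return "extreme"
    return "limited"

# Encoding of the small-satellite case as described in the article.
small_satellites = {
    "performance_vs_mainstream": "lower",
    "cost": "lower",
    "new_performance_criteria": True,
    "targets_existing_customers": False,  # buyers are mainly new customers
    "full_substitution_expected": False,
}
```

Under these assumptions, `threat_level(small_satellites)` returns `"limited"`, matching the article's conclusion, while an encoding of the film-photography case would return `"extreme"`.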

Methodology

This study was conducted by Victor dos Santos Paulino (TBS) and Gaël Le Hir (TBS) in the framework of the Sirius chair, on a topic proposed by the chair’s industrial partners. For the theoretical part, the authors reviewed the existing literature on the theory of disruptive innovation, which enabled them to draw up a table classifying the characteristics of potentially disruptive innovations. They then applied this model to the satellite industry while referring to several sources of information (information published by manufacturers, sectoral information, interviews with experts, databases). The study was published in the Journal of Innovation Economics & Management in February 2016 under the title “Industry structure and disruptive innovations: the satellite industry”.

By Uche Okongwu

Supply chain optimization essentially involves finding a compromise between customer satisfaction and profitability. By adjusting the various supply-chain planning parameters, each company can achieve a performance level in line with its strategy and objectives.

The concept of the supply chain is as old as economics: from the supply of materials to production and delivery, the successive players involved in any given market represent the links in a chain, acting as customers and suppliers to each other respectively. However, increased competition and globalization have made companies realize that all the different players in their supply chain share a common goal, namely customer satisfaction. Consequently, how the supply chain is organized and how it performs are of crucial strategic importance for companies, and increasingly so. In this regard, the example of the aerospace industry is regularly covered in business news and highlights this strategic role perfectly; indeed, the industry has had to increase production rates to meet the growing demand and this has created tension throughout the chain. However, this problem actually concerns all sectors of the economy, whether in industry or in services. Over the last twenty years, researchers and managers have been looking at ways of optimizing supply-chain management to improve companies’ performances, based on the ideas of collaboration, integration and information sharing.

The difficulty in resolving this issue lies in the complexity of the supply chain itself: in addition to the number of links in the chain, we need to take into account the number of performance indicators and, above all, the number of parameters that a company can adjust in order to meet its performance targets, which is virtually unlimited. Until now, research has focused on one parameter or another, sometimes combining them, but only in a limited way. Our study aims to go further by combining, for the first time, several parameters positioned at different stages and functions in the chain (planning, procurement, production, delivery), in order to establish which combination of key factors produces the best performance.

Performance: always a question of compromise

The first issue that needs to be addressed concerns the supply-chain performance indicators. Many indicators are used, some of which are contradictory, since certain indicators are linked to profitability and others to customer satisfaction. For our study, we selected three: profit margin, on-time delivery, and delivering the quantities requested. Ideally, an optimized supply chain should make it possible to achieve maximum scores on all these parameters, but in reality, no company can claim to be the best in every area. At some point, you have to reach a compromise, according to your market and objectives, by agreeing to “sacrifice” part of your performance on a given indicator. With this in mind, the idea of optimum supply-chain performance depends on the objectives the company sets in terms of profitability and customer satisfaction, but also on its position in the market. Consequently, the main challenge in supply-chain planning is finding this compromise.

The case on which we worked was inspired by a real situation. It concerns a supply chain in the furniture sector, for the production of tables and shelves. Out of the 12 general supply-chain planning parameters we identified, we decided to vary six and to observe the result of the simulation on our performance criteria: the planning time frame (short or long), the production capacity in terms of human resources (constant or adapted to demand), the production sequencing (priority given to the oldest or the most recent orders), the duration of the cycle, the reliability of the forecasts, and the availability or otherwise of stocks.
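The simulation design described above (several parameters, each varied over a small set of options, evaluated against several performance indicators) amounts to a full-factorial search. The sketch below shows that logic in miniature; the parameter encodings, the placeholder `simulate` function, and the compromise weights are all illustrative assumptions, not the authors' actual model.

```python
from itertools import product

# Six planning parameters, two settings each: 2**6 = 64 scenarios.
# (Illustrative encodings; the study's actual parameter values differ.)
parameters = {
    "planning_horizon": ["short", "long"],
    "capacity_policy": ["constant", "demand-adapted"],
    "sequencing": ["oldest-first", "newest-first"],
    "cycle_time": ["short", "long"],
    "forecast_reliability": ["low", "high"],
    "stock_availability": ["no", "yes"],
}

def simulate(scenario):
    """Placeholder for the supply-chain simulation: returns the three
    indicators used in the study, with made-up illustrative values."""
    return {
        "margin": 1.0 if scenario["capacity_policy"] == "demand-adapted" else 0.8,
        "on_time": 1.0 if scenario["sequencing"] == "oldest-first" else 0.7,
        "fill_rate": 1.0 if scenario["stock_availability"] == "yes" else 0.6,
    }

# Enumerate every combination of parameter settings.
names = list(parameters)
scenarios = [dict(zip(names, combo)) for combo in product(*parameters.values())]

# The "compromise" is expressed as weights on the indicators; a company
# prioritizing profitability would raise the weight on margin.
weights = {"margin": 0.5, "on_time": 0.25, "fill_rate": 0.25}

def score(scenario):
    kpis = simulate(scenario)
    return sum(weights[k] * kpis[k] for k in weights)

best = max(scenarios, key=score)
```

With these toy values, the best-scoring combination uses demand-adapted capacity, oldest-first sequencing and available stocks; changing the weights shifts which compromise wins, which is precisely the point of the framework.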

In the case of this supply chain, the production capacity appears to have a strong impact on margins and the ability to meet demand, whereas sequencing has a greater impact on the promptness of deliveries and the extent to which the response meets the demand.

Addressing the company’s priorities

These results confirm the initial hypothesis, namely that different combinations of planning parameters will have different impacts on the performance indicators. The different planning parameters cannot be considered independently of the performance criteria, hence the need to make choices. The ideal combination of parameters depends on the performance sought by the company.
Using the model developed in this study, managers responsible for supply-chain planning have a theoretical and practical tool to help them in their decision-making, allowing them to determine the best combination based on the company’s priorities. The framework and methodology developed, as well as the results obtained, are a genuine breakthrough in terms of research. To take things further, it would be interesting to combine even more parameters (as long as the available computer-simulation tools make this possible) and to test the model on different supply-chain structures and in other market environments.

Methodology

To conduct this study, Uche Okongwu (TBS), Matthieu Lauras (TBS, Ecole des Mines Albi), Julien François and Jean-Christophe Deschamps (Bordeaux University) reviewed the available literature on the topic of supply-chain performance. Based on the following research question: “What combination of key factors in supply chain planning make it possible to optimize the performance of the supply chain?”, the authors developed equation models that they tested on a real supply-chain case in the furniture industry. The study was published in January 2016 in the Journal of Manufacturing Systems, in an article entitled: “Impact of the integration of tactical supply chain planning determinants on performance”.

Uche Okongwu has been a Lecturer in Operations Management and Supply Chain Management at Toulouse Business School since 1991. He has combined his career as a researcher with that of an engineer and consultant in industrial organization. In 1990, he obtained a doctorate in Industrial Engineering at the Institut National Polytechnique de Lorraine (Nancy, France). He is currently Director of Educational Development and Innovation at TBS, having already set up the School’s industrial organization division.

By Pierre-André Buigues and Denis Lacoste

French car-makers exported fewer and fewer cars over the course of the first decade of the 2000s. At the start of the 2000s, PSA was exporting 54% of its French production and Renault 47%.

Ten years later, that percentage had dropped by over 20 points for PSA; Renault’s case is even more critical, since the company has even started importing vehicles into France. Renault now produces fewer vehicles in France than it registers there! And France now has a significant trade deficit in the car sector; the last surplus was in 2004!

Does this mean that French manufacturers have become less international in their reach?

Absolutely not. Indeed, during this same period, French manufacturers invested heavily in building assembly plants abroad. In the early 2000s, the number of cars manufactured by Renault and PSA abroad represented about 70% of domestic production. In 2010, the ratio of foreign production to domestic production was close to 170% for PSA and almost 300% for Renault.
One might think that these developments are related to macroeconomic and monetary conditions in the Eurozone. However, when you look at the development of German car-manufacturers’ strategies over the same period, it is clear this is not the case. Between 2000 and 2010, we can see that Volkswagen’s exports remained stable while Mercedes and BMW’s exports rose.

Why did production abroad replace exports?

Specialists in business strategy generally agree that the choices made for an international development strategy are determined by two key factors: the company’s competitive advantages and the economic conditions affecting production in the home country.

The competitive advantages of French manufacturers. Basically, industrial companies can choose between strategies based on low production costs and differentiation strategies based on technological innovation. A low-cost strategy drives companies to relocate a significant part of production to low-cost countries. A differentiation strategy, on the other hand, generally goes hand in hand with increased exports, because the competitive advantage is based on R&D and hence on high-level expertise that is only available in developed countries. Companies that opt for a low-cost strategy will look abroad for cheap labor, whereas those that base their strategy on differentiation are less affected by the higher costs of domestic production and can draw on the positive interaction between production and R&D.

In the case of the car industry, there are considerable differences between the innovation strategies of French companies – which seek to set up production abroad – and German companies, which maintain a high level of exports. At the start of the 2000s, Volkswagen was already investing more than twice as much as Renault and PSA in research, and in 2010, Volkswagen’s research budget was three times greater. If we specifically look at the R&D content of each vehicle sold, there is naturally a quite significant technology input with high-end manufacturers like Mercedes and BMW (more than €2,000 per vehicle), but this is the case even with mid-range manufacturers; the R&D content in a Volkswagen car is 20% higher than that of Renault and 45% greater than that of PSA. Again, the gap widened during the first decade of the 2000s; the increase in R&D expenditure per vehicle is significantly higher in German-made cars compared to French-made cars.

The economic conditions in France. The domestic business environment, more or less favorable particularly in terms of costs, also influences companies’ choices regarding international development. What about the French car industry? What are the differences between the French and German environments? At a very general level, hourly labor costs in manufacturing increased by 38% in France during the first decade of the 2000s, compared with only 17% in Germany. Looking more closely at the car sector, productivity per employee was lower in Germany than in France in 2000, but German productivity increased sharply over the decade in question, while French productivity decreased. In 2008, employee productivity was 25% higher in the German car industry than in the French one. This can be explained by the fact that French car manufacturers invested little in France, their priority being their overseas factories.

Even though we may bemoan the extremely negative consequences for employment and the creation of wealth in France, French car manufacturers made international development choices that are coherent in view of their low R&D expenditure, their mid- and low-range positioning, and unfavorable domestic production costs. However, it is not surprising that French manufacturers’ profit margins are lower than those of their German counterparts: over the period 2000-2010, for example, the operating profit per car was €635 for VW and around €250 for Renault and PSA.

Is this specific to the car industry in France?

Unfortunately for French international trade and the employment market in France, the car sector is not an isolated case. France has far fewer exporting companies than Germany, and the share of exports in French GDP is almost half the German figure. On the other hand, France has more large multinationals than Germany (14 companies in the world’s top 100, compared with 10 for Germany), and these French multinationals have a larger proportion of their workforce abroad than their German counterparts.

Consequently, for France to become an “export country” once again, it would take a radical change in the strategic positioning of companies located in France as well as more favorable production conditions in the country.

Written by P.A. Buigues and D. Lacoste. The information in this text is taken from the following articles: “Les déterminants des stratégies internationales des constructeurs automobiles européens : exportation ou investissements directs à l’étranger” (Determining factors in the international strategies of European car manufacturers: exportation or direct investment abroad?), published in 2015 in the magazine Gérer et Comprendre, written by the authors in collaboration with M. Saias, and “Les Stratégies d’internationalisation des entreprises françaises et allemandes : deux modèles d’entrée opposés” (International business development strategies of French and German companies: two opposite entry models), written by the authors and published in Gérer et Comprendre in 2016, as well as their book “Stratégies d’Internationalisation des entreprises” (International Business Development Strategies), published in 2011 by De Boeck.

Methodology

The database was essentially built using information published by the manufacturers in their annual reports, as well as data provided by the French Automobile Manufacturers’ Committee (CCFA), the International Organization of Motor Vehicle Manufacturers (OICA) and by Eurostat. The data relating to international business development, strategies and economic conditions were analyzed over the entire 2000-2010 period.

Practical applications

This study shows that any assessment of a company’s choices in terms of international development cannot be conducted without analyzing other aspects of its strategy (particularly its positioning) and the economic conditions in the company’s home country. The study also suggests that foreign investment is not necessarily the best way forward for international development. The case of the car industry shows that a company can keep a significant part of its production in its home country while remaining efficient, even in a global industry.

By Servane Delanoë-Gueguen

When looking at business creation, people tend to take more interest in the project than in the entrepreneur behind it. However, starting a business has strong personal implications. Assessments of personalized support programs would be more relevant if they paid greater attention to gauging how entrepreneurs feel about their ability to see their project through to completion, particularly as regards the strategic and financial aspects.

What drives someone to want to start a company? Obviously there is the initial project, which may or may not result in the creation of a start-up, but above all there is the individual behind the project, the budding entrepreneur, who will end up transformed by the experience, whatever the result. The process is a form of apprenticeship, during which the business creator acquires new skills, develops new ways of looking at things, and builds networks. If the individuals manage to create their business, this personal transformation will provide them with valuable skills for the company’s development. If not, they will be able to draw on these newly-acquired skills to prepare an entrepreneurial project later in life, or to use their new knowledge working for someone else.

Focusing on perceived abilities rather than the number of businesses created

People with new business projects do not have to go through the process alone. They are even encouraged to participate in support programs, which may have a profound impact on the project as well as on the person behind it. Unfortunately, when assessing such programs, this personal dimension is rarely taken into account: to evaluate their effectiveness, we tend to focus on the participants’ satisfaction with the program or on whether they managed to create their business, but not on the effects the programs have had on the budding entrepreneurs themselves. Our study looked at people participating in a support program set up by the Brittany Chambers of Commerce and Industry (CCI), with the specific aim of analyzing this personal impact. Rather than focusing on the project leaders’ actual skills, we studied their perceived entrepreneurial self-efficacy, i.e. how the individuals perceived their own ability to create a business.

This perceived entrepreneurial self-efficacy – a concept originally developed in the field of psychology – is a key determining factor in the process of creating a company, because not feeling capable can be a major obstacle, while a strong sense of self-efficacy can foster the entrepreneur’s tenacity in the face of difficulties. It remains, however, a perceived ability, which is not necessarily representative of actual ability; indeed, certain individuals tend to underestimate their abilities whereas others overestimate them. Finally, this perception can change, under four major influences: personal experience, observation of others, verbal persuasion by third parties and emotional state.

The shock of reality

The study sought to measure the change in the perceived self-efficacy of budding entrepreneurs who took part in a support program by interviewing them at the beginning of the project, and then a year later. While we might expect participation in a personalized support program to have a positive effect on entrepreneurial self-efficacy (that is to say, the project leaders feel more capable of creating their company), the results of the study actually show an overall decrease in self-efficacy. If we look in more detail, the only positive impact was on entrepreneurial administrative self-efficacy – concerning the planning of the project and formalities – whereas perceptions related to strategy and finance tended to deteriorate.

These results can be explained by what we could term a “reality check”. At the start of the process, many budding entrepreneurs think that the administrative side is highly complex and focus on this aspect; they then realize that it is not actually the most complicated part, particularly since a number of measures have simplified business start-up procedures in recent years. At the same time, they start to realize how difficult it is to find customers and funding, that there are competitors in the market, and that they never have enough time to do everything. All these aspects are often underestimated when they build their project.
However surprising it may be, this result shows the value of an objective assessment of start-up support programs that focuses on personal impacts: the aim of support programs is to help people with start-up projects set up viable businesses and understand the realities of the market, not simply to ensure that the majority of participants actually start their businesses. With this in mind, it is not necessarily a bad thing for prospective business creators to feel less capable at the end of the process than at the beginning. Participants who ultimately decide not to start their business, after appreciating the importance of having a customer base and a network, have the opportunity to ask themselves the right questions, to readjust their perceived ability, and sometimes to realize they are simply not made to be entrepreneurs. They will be better equipped for the next project, or at least they will have more realistic perceptions.

A practical tool for improving programs

This evaluation method is a valuable tool for improving support programs, with practical uses that can be taken advantage of almost immediately. For example, it may be interesting to adopt a differentiated approach depending on whether the people at the start of the program underestimate or overestimate their ability to create a company, in order to help them reach a more realistic self-perception. In relation to the case analyzed in this study, the support programs could focus more on strategic issues and funding.
These results are a step towards achieving an objective assessment of support mechanisms for budding entrepreneurs. Now, it would be useful to fine-tune the results with a more representative sample group of budding entrepreneurs and extend the research to different types of support initiatives.

Servane Delanoë-Gueguen is a research professor in entrepreneurship and business strategy at Toulouse Business School. She is responsible for the TBSeeds incubator and is joint head of the “entrepreneur” vocational option. She has a PhD in emerging entrepreneurship from the Open University (UK). Her research focuses on budding entrepreneurs, entrepreneurial ecosystems, business-creation support programs, entrepreneurial desire and business incubation. This publication is a summary of the article “Aide à la création d’entreprise et auto-efficacité entrepreneuriale” (Support for business creation and entrepreneurial self-efficacy) published in 2015 in the Revue de l’entrepreneuriat.

Methodology

Within the framework of her research, Servane Delanoë-Gueguen conducted a longitudinal study. Based on a literature review, she developed a theoretical model with three research hypotheses concerning the evolution of entrepreneurial self-efficacy over the course of one year among individuals with business start-up projects involved in a support program, whether or not they ultimately created their business, with gender differentiation. The model was then tested with a group of budding entrepreneurs. In the first year, a total of 506 people answered a questionnaire to assess their perception of their entrepreneurial abilities. The following year, she managed to re-contact 394 of them, of whom 325 still had a genuine start-up project in progress. Out of this group, 193 people answered the questionnaire again.
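The attrition across the two survey waves can be quantified directly from the figures above. A minimal sketch (the variable names are ours, not the study’s):

```python
# Panel attrition across the study's two survey waves (figures from the text).
initial_respondents = 506   # answered the wave-1 questionnaire
recontacted = 394           # reached again one year later
active_projects = 325       # of those, still had a genuine start-up project
second_wave = 193           # answered the questionnaire again

recontact_rate = recontacted / initial_respondents
completion_rate = second_wave / active_projects

print(f"re-contact rate: {recontact_rate:.1%}")          # re-contact rate: 77.9%
print(f"second-wave completion: {completion_rate:.1%}")  # second-wave completion: 59.4%
```

Such attrition is typical of longitudinal designs and is one reason the author calls for a larger, more representative sample in future work.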

By Victor Dos Santos Paulino


The case of innovation in the space industry

Innovation is one of the major themes in management, and the capacity to innovate is considered critical for business success. Yet the space industry shows that, for a strategy to succeed, innovation must sometimes be approached with caution.

Conventional wisdom holds that the rapid adoption of new technologies improves the performance and survival of companies. As early as the beginning of the 20th century, Joseph Schumpeter demonstrated the link between innovation and industrial success. In the 1990s, other scholars, such as Joel Mokyr, followed suit, attributing inertia (the slow adoption of new technology) to the phobic and irrational attitudes of managers. Against this backdrop, the space industry provides an interesting, even paradoxical, example: this highly technological sector is a symbol of innovation, yet it considers a cautious approach necessary. This is a requirement for telecommunications satellite operators, for whom reliability matters more than novelty, which entails risk.

Uncertainty in the space industry

Innovation is a complex phenomenon that does not automatically guarantee success, progress and profits. For example, it has been shown that over 60% of innovations end in failure. In addition, many companies legitimately postpone the adoption of innovations in several cases: for example, when an innovation would cannibalize an existing product or make it obsolete, or when the costs turn out to be too high compared with the expected profits. Do these factors explain the inertia-based strategy observed in the space industry?

By its very nature, the use of new technology in the space industry entails risk: ground testing of a component, even under conditions that simulate space, may not accurately predict its behaviour in flight. It may perform perfectly, or prove faulty; no one can be sure ahead of time. As a result, satellite manufacturers tend to favour an inertia-based strategy in which technological changes are adopted extremely cautiously. Only tried and tested innovations are implemented. The cost of failure makes both manufacturers and their customers behave cautiously.

Reliability is a source of competitive advantage in space telecoms

Caution features strongly in the space telecommunications sector, because the reliability of satellites is a major competitive advantage. To ensure the greatest reliability, manufacturers have set up finely tuned organisations and processes. This is why the cycle of design, development and manufacture of satellites is broken down, and must continue to be, into successive phases: Phase 0 > Mission analysis; Phase A > Feasibility study; Phase B > Preliminary design; Phase C > Detailed design; Phase D > Manufacturing and testing; Phase E > Exploitation; Phase F > Decommissioning. While this approach helps ensure high levels of reliability, it also brings considerable structural inertia.

This need for reliability and stability leads space manufacturers to adopt information and communications technologies that have the least impact on the organization. It also leads them not to question technological choices for space telecommunications, choices that increase reliability but allow no savings in production costs. Serge Potteck, a specialist in space project management, notes, for example, that to transmit a signal, engineers prefer to design antennas with a diameter of 60 cm to guard against possible malfunctions, whereas a less costly 55 cm antenna would suffice.

Differences between segments in the space sector

This analysis, however, needs to be refined for the different segments that make up the space sector. They can be classified into three groups. The first consists of telecommunications satellites and rockets (launchers). In this segment, the cost of failure would be very high. It would penalize the manufacturer, who would have produced a non-functioning satellite, the company that operates the launchers and markets launch services, and indeed all the players involved in the business plan. A failure can delay by several years the marketing of new telecommunications services to be delivered by satellite.

The second group consists of spacecraft built for scientific or demonstration purposes and, again, the rockets used to launch them. The governments or space agencies that commission them are not subject to the usual profitability requirements. Here, disruptive technology and its associated risks are part and parcel of a project.

The last group overlaps the space industry and other industries. It encompasses, for example, the tools to operate the geolocation capabilities of the Galileo constellation or the distribution of digital content. In this segment, stability is seen as detrimental to the development of new markets.

An inertia-based strategy… but only at first sight

While the particular environment in which the space sector operates tends to dampen its ability to experiment, it does not entirely prevent innovation. Inertia-based strategies are, in fact, largely a matter of appearance. What we refer to as “inertia” is actually a genuine innovation dynamic: any new technology is studied carefully before being tested, or not, on a new spacecraft, and before its possible subsequent integration. Could such a strategy, then, ensure survival in certain markets? To treat it as a failing to be countered would be a mistake.

The space industry would probably not innovate much if its only clients were commercial satellite operators. However, space agencies are willing to finance experimental spacecraft, thus accepting the financial risk associated with possible failure. It is thanks to them that the manufacturers of commercial satellites are able to validate the technological choices available to them, once those choices have proven their reliability.

From my publications: “Innovation : quand la prudence est la bonne stratégie” [Innovation: when caution is the right strategy], published in TBSearch magazine, No. 6, July 2014, and “Le paradoxe du retard de l’industrie spatiale dans ses formes organisationnelles et dans l’usage des TIC” [The paradox of the delay of the space industry in its organizational forms and in the use of ICT], published in Gérer et comprendre [Managing and understanding], December 2006, No. 86.

Methodology

The analysis of the organizational and technological paradox that characterizes the space industry is based on several types of information: the theoretical literature available (Hannan and Freeman, 1984; Jeantet, Tiger, Vinck and Tichkiewitch, 1996); the work done by engineers in the sector (Potteck, 1999); and field observations made between 2003 and 2007 at one of the leading European prime contractors manufacturing satellites and space probes.

By Lourdes Perez

Contrary to conventional wisdom, small businesses are not condemned to be at a perpetual disadvantage in their relations with large ones. They may have much to gain, provided they find a mode of operation that keeps them from ending up in competition over how the value created is shared.

Under what conditions can both companies in a commercial partnership benefit fully and fairly? Until now, there has been something of a consensus on this issue: above all, the two companies in the partnership needed to be of equivalent size. As the profits from the partnership (development of new products, winning new markets, creation of additional income) are seen as a cake to be shared, they should be distributed according to the respective size and contributions of each partner. Seen from this point of view, an asymmetric partnership between a large company and an SME would most likely be less beneficial for the latter. Moreover, the literature on the subject generally stresses the risks for SMEs, which lack the weapons to defend themselves in this ‘coopetition’ relationship (cooperation-competition).

Complementarity rather than similarity

Contrary to this commonly-held idea, our study shows that asymmetrical relationships actually offer opportunities for small businesses to innovate. This kind of relationship is virtually inevitable in a globalized, ultra-competitive economy, where the most dangerous posture for a small company is to remain isolated. There are many examples of asymmetric partnerships, particularly in predominantly technological sectors, that have proved just as fruitful for the “small” company as for the “large” one. In many such cases, partners have been able to create relationships based on complementarity, which, in the end, is just as important as similarity.

This does not mean denying the risk of failure. The risk remains, but is far from insurmountable, as long as a strategy is devised to meet the challenges of this type of partnership: first, the difficulties of communication related to the differences in scale between the two structures (it’s rare for the head of an SME to have direct access to the Managing Director of a large company); and secondly, the differences in organizational structure and ways of working.

If the small business systematically approaches such a relationship with due respect for a number of basic rules, it increases the chance of a profitable outcome. To reach this conclusion, we analyzed a successful partnership between a small Spanish seafood company that wanted to extend the shelf life of its shellfish and a large Italian company in the energy sector. From this case study, we developed a model that summarizes, in three key steps, the approach a small business should follow to avoid the pitfalls generally associated with asymmetric partnerships.

Three basic steps

The first step requires the selection of only a small number of partners. An SME does not have the means to commit itself seriously to multiple partnerships with large companies because it lacks time, logistical organisation and resources. It therefore has every interest in building a lasting alliance with a partner whose strategic objectives are complementary to its own. In our case, the two companies had very different motivations for forming a relationship: whereas the SME sought a technological solution, the large company saw an opportunity to enter the Spanish market, in a sector where it was not previously present. There was therefore no question of sharing the profits of the partnership, as they were not the same for each partner.

The second step is the construction of a strong and committed relationship that offsets the imbalance between the two structures. This requires a serious commitment on the part of the SME, which must nominate a “champion” within the company, i.e. a privileged contact person, with sufficient clout in the organization, someone who is respected throughout the company and who is capable of defending the project and driving it forward in spite of any resistance and obstacles that may arise.

The third step is to develop proposals of mutual value. At the beginning of the partnership, the SME and the large company each pursue specific objectives. But once the project gets under way, some appear unattainable and others incompatible, while new ones appear. The important thing here is to find the appropriate balance between obstinacy and flexibility: to be able to hold firm to one’s positions while taking the partner and unforeseen events into account, and being prepared to rethink the initial objectives. This requires an ability to listen, an open mind, and knowing the partner, its objectives and its motivations.

100% benefit for each side

The success of this strategy clearly shows that there is no reason why an asymmetrical partnership should inevitably be less beneficial to the smaller partner. In our case study, each partner obtained 100% of what it was seeking, because they had expectations that were in no way mutually exclusive and because they were able to build their compatibility together. This new perception of asymmetry in a cooperative spirit, rather than as an unequal balance of power, opens new perspectives for the understanding and the management of relations between unequal partners.

References: Based on an interview with Lourdes Pérez and the article “Uneven Partners: managing the power balance”, Lourdes Pérez and Jesús J. Cambra-Fierro, Journal of Business Strategy, 2015.

Methodology

Lourdes Pérez and Jesús J. Cambra-Fierro undertook a qualitative case study of two companies, a Spanish SME and a large Italian company, engaged in an asymmetric partnership. The information collected was from a documentary survey (public information, sectoral information, databases) and a review of the scientific literature. Interviews with several qualified people in each company, based on open-ended questionnaires, helped the researchers determine the major themes of the study and build a matrix. The conclusions of the study are a synthesis of these different sources.

By Kévin Carillo

The rapid development of collaborative communication technology as an alternative to e-mail offers companies an opportunity for fundamental transformation, but supporting measures will be required to usher in a genuine culture of knowledge sharing.

The uptake by businesses of collaborative communication technology from Web 2.0 has been both rapid and widespread. Internal social networks, video-conferencing, blogs, micro-blogs, wikis, document sharing: the number of companies adopting these connected tools keeps growing, in the hope of improving productivity and performance. These tools open up vistas of profound change within companies and in the working habits of their staff. Little by little, the traditional ‘silo’ model, in which departments, roles and hierarchies are compartmentalized in a kind of internal competition, is being replaced by the more open Enterprise 2.0 model, based on increased staff collaboration that breaks down this rigid structure and on sharing information through a kind of forum that itself creates knowledge.

Alongside this organizational revolution, collaborative tools may also be an efficient answer to the growing problem of e-mail proliferation. E-mail was revolutionary when it first appeared and was unanimously adopted in the workplace, but it is now a victim of its own success, to the point where overuse has become a serious obstacle to productivity: staff members receive scores of e-mails each day, spend hours reading them, fail to open some, lose others, and watch their in-boxes fill up. Ultimately, communication is hindered and collaboration handicapped. Certain types of interaction currently handled by e-mail would be much more efficient with collaborative communication tools; this is certainly the case, for example, for conversations, sharing of expertise or brainstorming within a group or community.

This said, cooperation and knowledge sharing cannot simply be imposed by decree. Although it is extremely important to give staff access to alternative tools and systems, it is equally important to ensure they adopt them in a productive way, all the more so in that these are disruptive technologies that radically modify work habits and ways of relating.

The essential role of habit

Our research has focussed precisely on determining just how far the habitual use of collaborative tools—their day-to-day and automatic, routine use—influences the inclination of staff to share their knowledge when they no longer have access to e-mail. The theoretical model we developed identified three perceived advantages to using collaborative communication systems: the relative advantage they offer (it’s useful for my job), compatibility (it corresponds to my needs, the tasks I have to accomplish at work and to the nature of my job) and ease of use. We hypothesize that these advantages have a direct effect on user habits and on knowledge sharing. We also postulate that user habit has a catalyzing effect on each of the perceived advantages in relation to knowledge sharing.

To test these hypotheses, we undertook a field study in an information technology (IT) services and consulting firm and obtained the following results: if users see an advantage in using collaborative tools, they are more likely to make a habit of it and to share knowledge; likewise, ease of use also leads to habit formation. On the other hand, we were unable to establish a direct link between ease of use and knowledge sharing, nor was the study able to establish an immediate effect of compatibility on knowledge sharing. Concerning the central focus of the study, the part played by habit, the results show that it is extremely important in that it strengthens the impact of relative advantage and of compatibility on knowledge sharing.
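Statistically, the “catalyzing” role of habit corresponds to a positive interaction term in a regression model. A minimal sketch on simulated data (the variable names and coefficient values are illustrative assumptions, not the study’s actual estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated standardized survey scores: perceived relative advantage
# of the collaborative system, and strength of the usage habit.
advantage = rng.normal(size=n)
habit = rng.normal(size=n)

# Moderation: habit amplifies the effect of perceived advantage on
# knowledge sharing (the 0.4 interaction coefficient is invented).
sharing = (0.5 * advantage + 0.3 * habit + 0.4 * advantage * habit
           + rng.normal(scale=0.5, size=n))

# Fit an ordinary-least-squares model that includes the interaction term.
X = np.column_stack([np.ones(n), advantage, habit, advantage * habit])
coef, *_ = np.linalg.lstsq(X, sharing, rcond=None)
print(coef[3])  # recovered interaction effect, close to the true 0.4
```

A significantly positive interaction coefficient is what “habit strengthens the impact of relative advantage on knowledge sharing” means in such a model.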

Technological evolution and the human factor

The study confirms that access to these technologies, no matter how efficient they are, is not enough to change behavior. Their use must become a habit. The more at ease staff are with collaborative tools, the more naturally they will share knowledge and the more easily they will adopt the codes and methods of Enterprise 2.0.

Consequently, what management has to do is to encourage these habits, and the study shows that there are two important arguments that can help bring this about: lead staff to understand that using a collaborative system is not only extremely useful but also easy. This implies introducing a number of measures, some of which are very simple: communication, incentives, games and competitions, sharing the experiences of advanced users, targeted pedagogical programs, and so on.

At the end of the day, this study underlines a classic finding in information systems research: the importance of the human factor. Simply deploying a collaborative system is not sufficient for an enterprise to become 2.0. A collaborative culture must be created before the tools are implemented.

Based on an interview with Kévin Carillo and the article “Email-free collaboration: An exploratory study on the formation of new work habits among knowledge workers”, Jean-Charles Pillet and Kévin Carillo, International Journal of Information Management, November 2015.

Methodology

In their research, Jean-Charles Pillet and Kévin Carillo carried out a quantitative case study. Starting from a review of the research literature at the time, they constructed a theoretical model based on the idea that ingrained habits strengthen the relationship between the perceived advantages of using a collaborative system and staff knowledge sharing. To test the validity of the nine hypotheses, they drew up a questionnaire with 21 items, each with a 5-point response scale from “totally disagree” to “fully agree”. The study was carried out in August 2014 in an IT services and consulting firm with a workforce of more than 80,000 people spread over some forty countries. Several years beforehand, its executives had launched a global policy of dropping e-mail in favor of a collaborative system comprising three main tools: videoconferencing, an internal social network and a document-sharing system. The study focused on a single department in the company, the one responsible for resolving suspensions of client IT services as quickly as possible. Sixty-six valid responses (55%) were collected from 120 people divided into 5 teams in France and Poland, and their analysis confirmed some of the hypotheses.

By Gilles Lafforgue 

Climate change issues are increasingly the focus of international negotiations. Could carbon capture and storage (CCS) be a promising way to reduce emissions without reducing the consumption of fossil fuels?

Today fossil fuels account for almost 80% of world primary energy consumption, since their relatively low cost makes them more competitive than renewable forms such as solar, wind or biomass energy. Their massive use alone contributes 65% of the greenhouse gases, mainly CO2, that accumulate in the atmosphere and contribute to global warming.

Is CO2 capture and storage a viable alternative?

In the expectation of a transition to a more sustainable energy strategy, Carbon Capture and Storage (CCS) appears to be a viable medium-term alternative for limiting emissions without restricting the consumption of fossil fuels. Developed during the 1970s to improve extraction efficiency from oil wells, CCS involves capturing carbon emissions at source before their release into the atmosphere, then injecting them into natural reservoirs (e.g. saline aquifers, geological formations containing brine unfit for consumption), into former mines or even back into hydrocarbon deposits (still being exploited or else exhausted). CCS would appear to be effective since it can remove 80 to 90% of emissions from gas or coal power stations.

The cost of using such a process remains to be determined. Implementing CCS becomes cost-effective once the rate of carbon taxation reaches between 30 and 45 dollars/metric ton for coal-fired thermal power stations and 60 to 65 dollars/metric ton for gas-fired power stations (and this threshold ought to fall as a consequence of technological change). However, CCS can only be implemented at a reasonable cost in the sectors that produce the largest and most concentrated emissions: heavy industries such as cement or steel works, or conventional electrical power stations (coal especially). This technology is, however, inappropriate for diffuse waste gases of low concentration, such as those emitted by transport or agriculture.

What strategy must therefore be adopted to optimize the sequestration of CO2?

CCS deployment strategies

To answer this question, and to determine a meaningful combination of fossil resource exploitation and CO2 sequestration, we have developed a dynamic model. It defines the optimal pace of CCS deployment, taking three essential parameters into account: the availability of fossil resources, the accumulation of carbon in the atmosphere (and its partial absorption by the biosphere and oceans), and the limited capacity of storage sites. Using the model, we show that it is optimal to sequester the greatest possible share of the CO2 released by industrial activity when the CCS process starts; sequestration then gradually falls until the site is completely filled. Note that as long as CO2 can be sequestered, consumption of fossil fuels remains strong. Consumption slows down once the reservoir is saturated and all the CO2 released becomes subject to the carbon tax. This is where renewable energies come in.

In another research project we sought to determine the optimal policies for capturing CO2 emissions by comparing two sectors. Sector 1, comprising heavy industries such as steel and cement works and conventional thermal power stations with concentrated emissions, has access to CCS and can therefore reduce its emissions at a reasonable cost. Sector 2, the transport sector for example, whose emissions are more diffuse, only has access to a more costly capture technology (e.g. atmospheric capture, a technique that recovers CO2 directly from the air using a chemical process to isolate the polluting molecules). Considering these two heterogeneous sectors, we have been able to show that the optimal strategy is to start by capturing the emissions from Sector 1, before the permissible pollution ceiling is reached. Capture of emissions from Sector 2 starts once the pollution ceiling has been reached, and is only partial. As for the carbon tax, our research shows that it has to increase during the pre-ceiling phase; once the ceiling has been reached, the tax must fall in stages to zero.
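The two-phase structure of this policy can be sketched schematically as follows. The phase dates, the linear tax shapes, and the tax levels are illustrative placeholders, not the calibrated results of the papers.

```python
# Schematic timeline of the two-sector policy: before the ceiling, only
# Sector 1 (concentrated emissions) is captured and the tax rises; at the
# ceiling, partial capture of Sector 2 begins and the tax falls toward
# zero. Dates and tax levels are illustrative assumptions.

def optimal_policy(t, t_ceiling=20, t_end=50, tax_start=40.0, tax_peak=190.0):
    """Return (capture Sector 1?, capture Sector 2?, carbon tax) at time t."""
    if t < t_ceiling:
        # Pre-ceiling phase: the tax rises (linear shape is illustrative).
        tax = tax_start + (tax_peak - tax_start) * t / t_ceiling
        return True, False, tax
    # Ceiling phase: Sector 2 capture starts (only partially, per the
    # research); the tax declines in stages toward zero.
    tax = max(tax_peak * (1 - (t - t_ceiling) / (t_end - t_ceiling)), 0.0)
    return True, True, tax

for t in (0, 10, 20, 35, 50):
    s1, s2, tax = optimal_policy(t)
    print(f"t={t}: sector1={s1}, sector2={s2}, tax={tax:.0f}")
```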

Carbon tax: the optimal cost for CCS competitiveness

In a market economy, it seems clear that the only way to persuade industry to capture and store CO2 is to put a price on carbon, for example by taxing it. Reasoning in terms of “cost-effectiveness”, industrial firms will compare the cost of sequestering a metric ton of carbon with the tax they would have to pay if that same metric ton were released into the atmosphere. This tax must be uniform and applied to all sectors, regardless of their number and nature. What level of tax would guarantee that CCS is competitive and thus ensure its development? According to the IPCC (Intergovernmental Panel on Climate Change), if the global temperature rise is to be limited to 2°C, the atmospheric CO2 concentration must not exceed a ceiling of 450 ppm (parts per million). This equates to a carbon tax of around 40 dollars/metric ton of CO2 in 2015, rising to 190 dollars/metric ton of CO2 in 2055 (the date at which the ceiling is reached), which would strongly stimulate the development of CCS.
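Taken together, these two figures imply a roughly constant annual growth rate for the tax. The sketch below backs out that rate, assuming exponential growth between 2015 and 2055; the exponential shape is our own simplification, in the spirit of standard Hotelling-type carbon pricing.

```python
# Back out the constant annual growth rate implied by a tax of 40 USD/tCO2
# in 2015 rising to 190 USD/tCO2 in 2055. The exponential-path assumption
# is our own simplification, not a figure from the article.

def implied_growth_rate(p0=40.0, p1=190.0, years=2055 - 2015):
    return (p1 / p0) ** (1 / years) - 1

def tax_in_year(year, p0=40.0, base_year=2015):
    g = implied_growth_rate()
    return p0 * (1 + g) ** (year - base_year)

print(f"implied growth rate: {implied_growth_rate():.2%}")  # roughly 4% a year
print(f"tax in 2035: {tax_in_year(2035):.0f} USD/tCO2")
```

Under this simplification the tax grows at roughly 4% per year, passing through about 87 dollars/metric ton at the midpoint in 2035.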

However, it is essential to note that carbon capture is merely a transitional solution for relieving the atmosphere of carbon emissions while continuing to benefit from energy that is relatively cheap compared with renewable sources. Between now and 2030, policymakers should put in place strategies for a sustainable transition to clean energy sources.

[1] Primary energy: energy available in nature before any transformation (natural gas, oil, etc.)

[2] Carbon tax: officially known in France as the Contribution Climat Energie [Climate Energy Contribution] (CCE), the carbon tax is added to the sale price of products or services according to the amount of greenhouse gases (e.g. CO2) emitted during their use. It came into force in January 2015 at 7 euros/metric ton of carbon. The recommended ceiling on the concentration of atmospheric CO2, for its part, was established with the objective of limiting the temperature rise to a target value (e.g. the well-known +2°C).

Sources: Gilles Lafforgue, “Lutte contre le réchauffement climatique : quelle stratégie de séquestration du CO2 ?” (Combating global warming: what CO2 sequestration strategy?), published in Tbsearch magazine; Alain Ayong le Kama (EconomiX, Université Paris Ouest, Nanterre), Mouez Fodha (Paris School of Economics) and Gilles Lafforgue, “Optimal Carbon Capture and Storage Policies” (2013), published in Environmental Modelling and Assessment; and Jean-Pierre Amigues (TSE), Gilles Lafforgue and Michel Moreaux (TSE), “Optimal Timing of CCS Policies with Heterogeneous Energy Consumption Sectors” (2014), published in Environmental and Resource Economics.

Practical applications

The macroeconomic models that have been developed provide insight into how CO2 sequestration can be implemented so as to make an effective contribution to combating global warming while maximizing the benefits of exploiting fossil fuels. Expressed as rates of CO2 emissions to be abated, the theoretical results offer practical guidance, enabling governments to encourage industrial firms to sequester CO2 rather than pay the carbon tax.

Methodology

In the first study, a dynamic model was developed for the optimum management of energy resources, taking account of interactions between the economy and the climate. Carbon was assigned a value that penalized economic activity directly.
For the second model we adopted a “cost-effectiveness” approach. Assuming a maximum emissions threshold that cannot be exceeded (as under the Kyoto Protocol), we determined the scale at which CCS had to be deployed and then ascribed a financial value to carbon.