Passive Stock Investment Strategy: An Alternative Analysis of the Phenomenon.


Introduction

Since the 2007 financial crisis, the passive investment strategy has been on the rise. Each year a new record is set for the amount of capital invested using this strategy. Nowadays, the success of the passive fund industry is so great that it is starting to pose an existential threat to the entire, more classical, active-based Wall Street fund offer. To use a buzzword, the passive fund industry can be described as a disruption of the classical Wall Street ecosystem: the vigor and speed of this process explain the distinctly stinging tone used by both sides in the ongoing so-called active versus passive debate.
Our contribution must be viewed as an attempt to provide a heterodox approach to this debate, by focusing on a neglected aspect: the analysis of investors’ rational reasons for embracing this allocation strategy.
Usually, an investor is assumed to act according to the rationality hypothesis, that is, to seek to maximize the expected return for a given amount of risk. Under this assumption, we can only understand the optimality of the passive choice by acknowledging some deep structural changes.
Our provocative statement is quite straightforward: two major structural changes, in the functioning of the stock market and in the capitalist market economy, have made the passive strategy optimal. On one hand, agents are choosing this strategy because the stock market is drying up in its function as the main reservoir of investment opportunities; the stock market is shrinking in favor of other solutions outside it. On the other hand, our capitalist economy is less competitive, which implies that oligopolistic and even monopolistic markets are increasingly the rule, with dominant firms either listed on a stock exchange or not. Together, these structural changes explain the success of the passive strategy. By focusing too much on analyzing the passive world and its consequences, we are missing the forest for the trees!
Thus, our goal is to bring some light to the forest and to discuss its complexity.
We will proceed as follows.
First, we will offer a short overview of the existing, and endlessly growing, literature about the active versus passive debate.
While the overwhelmingly negative consequences of excessive passive investment for stock market functioning are well discussed and presented, the investor rationales underlying the passive choice are neglected or poorly defined. Our first step will be to provide a critical analysis of this neglected element.
Second, we will introduce some challenging questions:
The literature tends to explain the post-'07 success of the passive strategy without referencing any structural changes. But if no structural changes occurred, why the delay until after the '07 crisis? At its inception in the mid-1970s, the passive index-tracking strategy was a success, but its success was never as dazzling during the '80s or '90s as it has been in the last decade. Why? Does this late success tell us something about the changing structure of the global economy?
Third, to best answer those open queries, we will explain the passive strategy’s triumph by referring to two structural factors.
The first factor is what we call the stock market’s drying-up process.
In our view, since the Internet bubble burst at the beginning of this century, the stock market has gradually but definitively lost its centrality as a source of new investment opportunities. Put differently, the market is drying up because the pool of fresh available opportunities is shrinking, and the few remaining opportunities are hunted too stringently, often on the basis of excessively short-term considerations. Given this phenomenon, an average investor's passive choice is coherent with it: investors expect to receive more from large-capitalization firms and disparage small and medium-capitalization ones because they do not believe in small firms' potential.
Here, investors, sometimes wrongly, believe those rare pearls are too expensive, in terms of analyst fees, to be detected.
The second factor is based on a simple finding: the market web that shapes our capitalist economy is declining in terms of competition. Several key markets are now organized as either oligopolies or even monopolies. This absence of competition tends to favor big listed firms and their forecasts of solid future earnings: the average investor's decision to go passive therefore sounds like a very rational choice.
If the capitalistic features of our economy have fundamentally changed, then it might be normal for some of its traditional frameworks to be, at least partially, disrupted.

I. A short review of the active versus passive literature: Where we stand with this debate.

Recently, the active versus passive debate has become a hot topic, and a long list of studies can be cited.
The bulk of this literature questions the current and growing passive trend: If this trend continues at the same pace, what will be the future of the stock market in a few years’ time? This is the main question and concern.
The stock market scenario under this trend is gloomy and the main reasons can be easily grasped. Two reasons are worth special attention.
1. An overwhelming acceptance of the passive strategy will inevitably generate a huge misallocation of capital. Huge amounts of money flowing into big firms' shares penalize small firms' shares. The argument is quite straightforward: small but dynamic firms, in which capital is likely to generate its best return, get penalized because their share prices will no longer reflect future potential cash flows.
Thus, the price signal generated by the stock market is distorted and investors' savings are no longer allocated in an optimal way. To use Renaud de Planta's words: "That wouldn't be good for productivity and growth" [1] (all notes are at the end of the paper).
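The price-signal argument can be made concrete with a toy discounted-cash-flow valuation (all figures below are illustrative assumptions, not data from the paper): in a functioning market, a share price approximates the present value of the firm's expected future cash flows, and passive flows that ignore those expectations let the traded price drift away from it.

```python
# Toy sketch of the price signal referred to above. All inputs are
# illustrative assumptions: a hypothetical small firm with 15% expected
# cash-flow growth and an 8% discount rate.

def dcf_price(cash_flows, discount_rate):
    """Present value of a stream of expected per-share cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# A small but dynamic firm: modest cash flows today, strong expected growth.
growing = [1.0 * 1.15 ** t for t in range(10)]  # 10 years of 15% growth
fair_price = dcf_price(growing, 0.08)
print(round(fair_price, 2))
# If passive flows dominate trading, the quoted price can stop tracking
# this value, and the allocation signal the passage describes is lost.
```

The point of the sketch is only that the "fair" price is a function of expected future cash flows; when trading no longer responds to those expectations, the signal degrades.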
2. If more and more money is invested in passive funds, less and less will be available to managers of active funds. In other words, the more investment in passive funds grows, the less freedom active fund managers have in terms of both capital availability and stock-picking implementation. A scenario in which only a few active players remain in the market then becomes plausible. But what about the price mechanism under this new extreme regime? What about the efficiency of the resulting price structure? If most of the available shares are held under passive rules, how would the price mechanism work with only a few active funds remaining?
This is an extreme scenario; the odds of reaching this territory are small. Still, where is the process likely to end? Will it stop with enough room left for active players, and therefore ensure the survival of a "fair" price mechanism, or not? No one, at present, can answer these questions: they are nevertheless real threats.
The conclusion is clear: if the widespread passive epidemic continues on the same path, then the stock market price mechanism as we know it will stop working properly. Its ability to deliver an optimal stock price structure will be compromised and the consequences for the economy will be very costly.
Nonetheless, in depicting this gloomy scenario, no one explains why the passive strategy's amazing success story started right after the '07-'08 crisis. The investors' rationales, which bring them to choose this strategy instead of an active one, are poorly treated and explained.
Indeed, only two main arguments are used to explain this choice.
A. By creating index-tracker funds, the passive asset management industry has found a clever way to offer clients a diversified portfolio at very low fees. The fund's holdings mimic those of an index. The way the fund manager allocates money among the stocks is easily defined and depends on the weighting of each company in the index. Most of the time, the index's weighting structure is a function of the size of each company, that is, its market capitalization. There is no real stock-picking mechanism here. The fund manager mimics the existing index, with zero added value but also with zero research cost to determine the right stock picks. Simultaneously, the passive fund manager offers a very transparent investment vehicle: any investor can easily grasp what an index is and how it works! This transparency has likely played an important role in many investors' minds after the Madoff case.
Therefore, the passive offer is cheap compared to a classical active one, easy to understand, i.e., highly transparent, and, a priori, liquid.
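The allocation rule described in A fits in a few lines. A minimal sketch follows, assuming a hypothetical four-stock index (the names and market capitalizations are invented for illustration):

```python
# Minimal sketch of a market-cap-weighted index tracker: each stock's
# weight is its market cap divided by the total market cap of the index,
# and every invested dollar is split according to those weights.

def cap_weights(market_caps):
    """Weight of each stock = its market cap / total index market cap."""
    total = sum(market_caps.values())
    return {name: cap / total for name, cap in market_caps.items()}

def allocate(amount, weights):
    """Money routed to each stock for a given investment amount."""
    return {name: amount * w for name, w in weights.items()}

# Hypothetical four-stock index: one giant, one large, two small caps.
caps = {"MegaCorp": 800e9, "BigCo": 150e9, "SmallCo": 30e9, "TinyCo": 20e9}
w = cap_weights(caps)
print({k: round(v, 3) for k, v in w.items()})
# MegaCorp absorbs 80% of every invested dollar; the two small caps share 5%.
print(allocate(10_000, w))
```

The sketch makes the essay's point mechanical: no research is needed, and big firms automatically receive the bulk of every passive dollar.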
B. Most investors are already aware of an empirical fact: most active fund managers, once returns are calculated net of fees over the medium to long term, do not perform better than the market; they do not consistently beat it. In other words, no one has been shown to be consistently good at picking stocks. The market (or, better, its proxy, the chosen fund's underlying benchmark index) always has periods in which it beats the active fund's performance.
Therefore, over the medium and long term, the stock market is quite efficient. If luck is taken out of the equation, then no magic formula exists. Humans and machines (algorithms today, more and more AI tomorrow) are condemned to accept some losses in return vis-à-vis the market [2].
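A back-of-the-envelope compounding sketch makes the fee side of argument B concrete (the return and fee figures below are illustrative assumptions, not data from the paper):

```python
# Illustrative fee-drag sketch: the same gross market return compounded
# net of two different annual fee levels over twenty years.

def terminal_value(initial, gross_return, annual_fee, years):
    """Value after `years` of compounding at gross_return net of annual_fee."""
    net = 1 + gross_return - annual_fee
    return initial * net ** years

passive = terminal_value(100_000, 0.07, 0.001, 20)   # 0.10% tracker fee
active  = terminal_value(100_000, 0.07, 0.015, 20)   # 1.50% active fee

# For the active fund merely to match the tracker, its manager must beat
# the market by the fee gap (1.4 points per year) every single year.
print(round(passive), round(active), round(passive - active))
```

Even before any stock-picking skill enters the picture, the fee gap alone compounds into a shortfall of tens of thousands on a 100,000 investment, which is why net-of-fees comparisons dominate this argument.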
Intuitively, factor A is crystal clear: passive investment is attractive because it is cheap, easy to understand and, ceteris paribus, liquid. Nonetheless, this was always so. A simple query, then, is why the amazing flow toward this form of investment took place in the last decade. The question remains basically unanswered. This argument is too short to fully explain what is going on.
Factor B is trickier and far more complex. Despite being a powerful argument in favor of a passive attitude, its influence is overestimated for two reasons:
1) The major shortcoming of this argument is, as usual, that past performance does not guarantee future performance. In our view, an average investor who is now deciding whether to embrace a passive strategy or take an active fund option is always more concerned about the future than the past: the decision process is based on what the investor can foresee, not on what he or she has seen.
Put differently, past data and past trends are by no means certain to repeat! If efficiency is taken seriously and investors' rationality is acknowledged, then an investment choice will place more emphasis on the investor's forecast of the future than on a simple set of past performance data.
2) Stock market efficiency is a difficult subject whose analysis strictly depends on the definition of the information used in the market at a given time. This argument needs to be discussed by considering all the complexity of today's stock-picking mechanisms. Ultimately, the arrival first of algorithms and now of AI is moving all those mechanisms toward a more systematic, data-driven process.
Does this evolution ensure more efficiency? What kind of efficiency is likely to result from a generalization of these technologies?
As noted in footnote 2, we will soon devote a paper to those questions.
In any case, our firm belief is that factors A and B are insufficient to fully explain the current run toward passive funds.
Why?
Let’s take a closer look at the decision process of an investor who decides to go passive. This should help find an answer.

II. The choice of passivity: An analysis of the average investor's case.

Is there a list of the main factors likely to explain why an average investor chooses a passive strategy? To answer this difficult question, we need to acknowledge that going passive implies that our savings are allocated following each firm's weighting in a market index, which, because the weighting structure is a function of market capitalization, usually favors big firms and penalizes small ones.
Now, a passive investor is never fully passive. He or she is active when deciding to buy an index-tracker fund and when deciding to leave it: choosing an index-tracking fund implies he or she is acting.
Even though it is intuitively straightforward, the small investor's experience is quite paradoxical. In a passive world in which each participant is supposed not to take action, the average investor decides based on a set of information that is, a priori, more substantial than the simple knowledge of the weighting of each stock in an index.
Simultaneously, the investor’s perception and vision of the investment’s expected future returns are, ultimately, encapsulated by those weightings.
The problem then remains: given those weightings, how can we grasp and understand the investor's underlying motive?
It might be simply by noticing that buying an index-tracker passive fund means embracing a certain investment philosophy. We are conveying a certain vision of the state of the economy and its future, and we are agreeing to follow a corporate-friendly policy.
Here, given the way a stock index tracker is built and the fund’s manager status, the passive choice implies the following:
1) Putting more money in big corporations than in small ones.
2) Taking for granted that your fund manager will not interfere with the way firms are run [3].
Why have these two simple factors become more valuable now than they were in the mid-1970s, when John Bogle created this type of fund?
The answer is simple, but few commentators have given it serious attention.
Two major changes have occurred. On one side, our capitalist economy is less competitive because big firms (whether listed on a stock exchange or not) are much more powerful and widespread across several markets; on the other side, the stock market has faded in its role as a reservoir of value and growth. The stock market is literally drying up!
Equivalently, on one side we have a first set of justifications arising from a paramount evolution of the stock exchanges and, on the other, a second set of arguments related to a profound transformation of the entire capitalist market system in which firms conduct business. Let's start by analyzing the issues related to the evolution of stock exchanges.

III. To understand fully the passive choice, we need to embrace a larger view of the economy: This choice is more rational than it seems.

III. A) Wall Street and its decline: Wall Street fades as a reservoir of small firms' growth and value, the drying-up phenomenon.

As we saw, by going passive an investor weights big firms more heavily than medium and small ones. An element is implicit in this sentence: we are referring to firms listed on a stock exchange. Nowadays, more than in the past, quoted firms represent only a fraction of the web of businesses characterizing a Western economy.
Historically, the total number of publicly traded stocks has been decreasing [4]. Therefore, the pool of firms available to invest in is shrinking.
This result originates from four sources: α) some small-to-medium firms prefer delisting from stock exchanges and using private equity financing and venture capital funds; β) M&A activity has remained historically high since the '08 crisis; γ) fewer IPOs are taking place; and δ) finally, thanks to an exceptionally long and ultra-loose monetary policy, we now have plenty of zombie quoted firms, whose presence is an additional source of noise when someone is looking for valid investment opportunities in the stock markets [5].
If those elements are considered, the stunning increase in passive investors can be seen from a new, refreshing perspective:
Investors are no longer convinced by active strategies because they believe fewer undervalued companies capable of growth are available in the stock market.
Equivalently, investors are now aware and convinced, perhaps wrongly, that the firms likely to be dominant in the future are no longer in the pool of quoted small-capitalization firms: the future champions are outside, financed via other channels.
At first glance, a powerful rationale explains the passive choice: why should I spend money, i.e., pay fees to an active fund manager, to search for growth companies among a shrinking set of firms when I know that growth and value are mainly created outside the classical Wall Street frame?
The answer is frank: if I must invest in Wall Street, it is better to distribute my savings, paying a minimum of fees, predominantly among big firms and only fractionally among small and medium ones: this is exactly the main feature of an index tracker.
Put differently, the success of the passive strategy is a product of the Wall Street crisis. People are choosing this way of investing because Wall Street is drying up in terms of offering exciting new opportunities. Small capitalizations, those with minimal weighting in an index, are at best a secondary repository of future streams of value [6].
But why? The answer is twofold.
First, as already mentioned, most dynamic firms now operate outside the stock market: a firm does not need to become a stock market insider to be properly financed and therefore to prosper.
Second, small-capitalization listed firms' valuations are constantly under stringent data-driven analysis. The generalization of quantitative and AI-driven stock-picking methods implies constant scrutiny of firms' results: quarterly financial figures need to fulfill the market's short-term expectations.
Ultimately, the old mechanism in which Wall Street was the center of the savings allocation process is broken, and it is fading because, basically, too much is asked in terms of short-term results: being in the market is just too stressful; operating and prospering outside it is a lot easier.
With that perspective, it is better to remain outside the stock market and wait until the size of the business (e.g., the firm's turnover, an efficient value chain, and a large distribution web) becomes large enough to eventually envision a public offering.
Here, being part of the market is no longer a way to be financed but a nice-to-have medium/long-term goal once the business is well structured and already successful. Clearly, this is an extreme statement and some cases prove the contrary; still, the trend is there. For a firm, entering the market at maturity makes it easier to keep up with Wall Street's short-term discipline.

Let’s now move on and investigate our second series of arguments.

III. B) Big companies are becoming clusters of innovation to master their future and to monitor a possible disruption process.

A firm's environment is characterized by a twofold threat. On one side, there is the day-by-day need to keep pace with the ongoing digital revolution.
On the other side, there is the need to constantly assess the possibility of a frontal disruption likely to destroy or, at least, seriously reshuffle the entire market.
While all firms face this reality, big firms have an advantage when a defense strategy needs to be defined: they can create a cluster centered around themselves in which to develop and master innovation.
In this situation, they can use at least two available options:
First, a priori, they have at their disposal a certain amount of capital that can be injected into startup initiatives. In addition, they can use their web of relationships with venture capital and private equity firms to increase the size of their projects.
Second, they can optimize their efforts using their own research and development divisions, whose budgets can be huge [7].
By doing so, these important firms try to achieve two goals:
1) To remain a dominant player in terms of the current technology in use in their sector. A big firm protects its market position by endlessly innovating and keeping its degree of efficiency very high. The value chain is endlessly reviewed, changed, and optimized.
2) To try to anticipate a possible disruptive process that could modify the nature of the market in which it operates. The big firm tries to sterilize the disruptive technology by absorbing it into the existing organization. Therefore, big firms are always ready to absorb one or several startups present in their cluster, or to fight fiercely to absorb something outside it. For several senior managers, the firm's real competition takes place more outside the market than inside it: a ghostly, paramount fear can become more real than a tangible threat.

All in all, we see a new rationale for the passive average investor here: at a time when the innovation process has reached a stunning pace, big firms are, perhaps paradoxically, the best equipped to mitigate this breathtaking constraint because they can, figuratively, build (defensive) frameworks around themselves. Once a framework is built, it is likely to act as a shield, guaranteeing better resilience in keeping the business afloat in the medium and long term.
A last remark here: big firms partially behave as venture capital organizations. Now, the final victim of this big-firm cluster philosophy is the stock market. Indeed, most of the startups in a big company's framework will end up never being quoted. Why does this happen?
Three possible ideas might support an answer here:
α) Enough capital is available outside the stock market to ensure proper development. There is no need, then, to go public quickly, and even at a late stage this is no longer crucial!
β) As already stressed in section III.A), by staying private, a startup avoids entering a system in which data (e.g., quarterly financial results) are likely to become far more important than plan and vision in conquering its own market.
The Wall Street apparatus is also in trouble because too much, if not everything, is based on tracking short-term objectives: why should young dynamic firms be interested in entering the Wall Street biotope when their goals are medium to long-term ones?
γ) Big firms become framework builders by surrounding themselves with dynamic pools of startups. This clustering attitude allows these important firms to remain in close contact with another huge startup generator: the academic world.
Here, it can be a lot simpler for a startup to keep a smooth, non-accounting-driven attitude if it avoids an IPO: this can be a positive factor when a firm must attract new talent directly from the academic world.

This final remark closes our analysis of the structural changes in the stock exchanges: the Wall Street ecosystem is, in our view, under attack not only because the passive industry is becoming too important, but mainly because less room for trial and error is available to young small-to-medium firms: collecting capital on a stock exchange is costly, perhaps too costly.
Let's move on to the last rationale, which highlights more general aspects of the state of the capitalist economy we currently live in.

III. C) Most big companies do not operate in a competitive market.

Historically, what are the main forces that have reshaped the entire capitalist system since the financial crisis of 2008?
Most of us would answer by citing the deepening of globalization or the fantastic acceleration of the digital revolution. Few would point out the lack of competition characterizing several markets.
Still, in plenty of sectors, a few powerful firms manage most of the supply side of the market. Consequently, in plenty of markets, insiders' power is becoming progressively concentrated: big firms nowadays operate mostly in non-competitive environments, either oligopolistic or even very close to monopolistic (e.g., Alphabet, to name the most classical example, with its Web search engine).
The US case is representative of a general worldwide tendency: the US capitalist economy now has numerous key sectors dominated by gigantic entities enjoying partially self-generated barriers to entry.
Do we have proof of this statement?
Sure, we have proof. Just take the example of the tech giants: where are the real competitors of Google and Facebook, or even Microsoft and, lately, Amazon?
Nowhere, at least in the Western (geography matters, as we will see in a minute) economic ecosystem. Similarly, consider the big banks: more than 30 big banks existed at the beginning of the '90s, so where do we stand almost 30 years later?
Four huge banks are left in the US!
The same can be said for the automobile, energy, pharma, and logistics markets: the list of sectors in which competition has been reduced to a minimum is massive. Add to this phenomenon the fact that even outside listed corporations, some famous startups with amazing turnover and valued at billions (the famous unicorns) often play in oligopolistic markets (see Airbnb and Uber as the main examples).
In short, we can simply agree with the analysis of Chicago professor Luigi Zingales, who has described in detail the historical background of this concentration process. His work shows how precisely this phenomenon defines the major threat to the American capitalist model, based on a web of free and competitive markets [8].
Now, we are far less nostalgic than professor Zingales.
Why? Because this evolution is just how big Western (once again, geography matters) firms are very rationally and efficiently responding to two extreme and violent forces:
First, a technological challenge that is a constant existential threat to any business via a possible disruption process.
Second, the never-ending globalization dynamic, which brings out new economic powerhouses, firms, and markets but constantly demands reassessing the chain of value and correlated price/offer positioning.

Put differently, Zingales' analysis, with which we agree in several respects, pays almost no attention to two points:
the fact that an economy as large and powerful as the United States is never a closed ecosystem, and, simultaneously, the fact that firms nowadays live under the constant threat of seeing their value chains, and ultimately their final offer, become obsolete in a very short period.

It remains beyond doubt that the lack of competition is a source of inefficiency for final consumers. Antitrust legislation has been created in all (Western) capitalist states to fight this inefficiency.
Why are those policies not triggered?
Most commentators would explain that competition is stunted due to the presence of lobbies at the level of the legislator and the fear of blackmail over job losses.
For us, it is a lot more complicated than that once the globalization factor is properly taken into account:
Antitrust legislation is not triggered because those big firms are our global champions!
Western governments accept the presence of these consumer inefficiencies because those firms are engaged in worldwide competition with other groups outside the country; hence, the political apparatus protects them in the domestic market to guarantee their full strength externally and their ability to compete in the worldwide market [9].
But why is strength outside a country’s borders so important?
Because, in plenty of cases, the place where the key markets are, but also where the value chains are physically set up and the major competitors are located, is Asia.
For instance, it is there, not in the West, that most of the world's population lives; it is there, not in the West, that the middle class, the main target of any mass-production offer, is growing. Finally, it is there, not in the West, that new major groups are growing and planning to conquer traditional Western markets.
Western governments simply acknowledge that competing in Asian markets requires huge effort and expenditure because "local" competitors are well protected, often by governmental measures, and often operate in monopolistic or oligopolistic internal markets.
And here we see the spiral:
Western customers are forced to support inefficient prices, and therefore to lose part of their purchasing power, because their national corporations need to enter the Asian/global market! Western consumers are under the spell of a geopolitical game that is well out of their reach but still costs them (and we are not even talking about their costs as workers).
Moreover, after the crisis, several big listed companies in Western countries received a gift that was quite expensive for taxpayers: several of them are now considered, in one form or another, too big to fail.
If the situation is like this, and it is, we can see a powerful rationale for an average investor to go passive:
Big firms are, in principle, heavily protected in terms of barriers to entry (even though the possibility of technological destruction always remains open). No one will seriously trigger antitrust policy against them. Big firms are global players, ready to play in growth countries and backed by a too-big-to-fail policy that ultimately guarantees solvency against almost all risk!
Why, under these conditions, would I spend time, energy, and money to find small quoted firms that can ensure future cash flows with minimal risk (this is the basic principle of finance) when I have the opportunity to cheaply diversify my allocation among huge, heavily protected firms with almost no risk?
To be complete, under those conditions an investor should prefer a blended allocation strategy: a mix between a passive market strategy and direct participation in the more dynamic outside world, e.g., by investing in venture capital and/or private equity funds. On that side, a strong asset management structure is likely to become the key edge in the private banking industry: the ability to efficiently pinpoint the right mixture at the right time, under the constraint of a "dynamic" client profile, is likely to become the leading ingredient of a winning offer in this industry.
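The blended strategy just described can be sketched in a few lines. This is a minimal illustration, assuming a simple two-bucket split; the profile names and amounts are invented, not prescriptions from the paper:

```python
# Hedged sketch of a blended allocation: part of the capital tracks a
# passive index, the rest goes to private-market vehicles (venture
# capital and/or private equity). The split is the client-profile knob.

def blended_allocation(amount, passive_share):
    """Split capital between a passive tracker and private-market vehicles."""
    assert 0.0 <= passive_share <= 1.0
    passive = amount * passive_share
    return {"index_tracker": passive,
            "private_equity_vc": amount - passive}

# A "dynamic" client profile might tolerate a larger private-market slice;
# a conservative one stays closer to a pure tracker (illustrative splits).
dynamic = blended_allocation(1_000_000, passive_share=0.6)
conservative = blended_allocation(1_000_000, passive_share=0.9)
print(dynamic)
print(conservative)
```

The essay's point is that picking `passive_share` dynamically, per client and per market phase, is where the asset manager's edge would lie; the arithmetic itself is trivial.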

To conclude, the success of passive investment is a complex phenomenon that cannot be explained only by referring to low fees or stock market efficiency:
We have just illustrated another way to consider the phenomenon and tried to trace and discuss all its possible consequences.

NOTES:

[1]
Renaud de Planta, “The hidden dangers of passive investing”, FT.com, May 2017.

[2]
I will discuss in more detail the efficiency of the market and its relationship with AI in the paper "AI and Stock Market Efficiency: Some General Considerations".

[3]
We assume that a passive investor accepts a "[…] hands-off approach to investing [provided by a passive fund]: One reason Vanguard is able to charge such low fees is that it doesn’t expend a lot of resources investigating individual companies or meeting with managers. […] Its index-fund managers don’t engage with companies about their businesses." In Frank Partnoy, September 2017, "Are Index Funds Bad for the Economy?", The Atlantic.

[4]
Please see the raw numbers presented here https://finance.yahoo.com/news/jp-startup-public-companies-fewer-000000709.html  and here https://www.economist.com/news/business/21721153-company-founders-are-reluctant-go-public-and-takeovers-are-soaring-why-decline

[5]
Concerning M&A deals in the US, please see: https://imaa-institute.org/m-and-a-us-united-states/ , US and European IPO data are presented in the OECD Business and Finance Outlook 2015, p. 206.
Finally, the notion and evolution of zombie firms in US stock exchanges are presented here: http://lipperalpha.financial.thomsonreuters.com/2017/09/news-in-charts-the-rise-of-american-zombies/

[6] It is interesting to note that some major university endowment funds have recently decided to pull back from passive ETF exposure, e.g., https://www.reuters.com/article/us-usa-funds-endowment-etfs/largest-u-s-university-endowment-funds-pull-back-on-etf-exposure-idUSKCN1BO2L0 But what do they do instead? They invest directly, mainly in major quoted stocks: the underlying rationale is that returns are more likely to come from owning a few big firms than from selecting a portfolio of small/medium-capitalization firms.

[7] Details of the amazing amounts of R&D expenditure can be seen here: https://www.linkedin.com/feed/update/urn:li:activity:6316546299462193152. Interestingly, it is not only tech companies that are investing huge amounts. The R&D effort covers all major sectors, proving that the concern about how to master this unprecedentedly volatile innovation phase is ubiquitous.

[8] Luigi Zingales, A Capitalism for the People: Recapturing the Lost Genius of American Prosperity, 2012, Basic Books. Other general accounts of this phenomenon can be found here: https://www.theatlantic.com/magazine/archive/2016/10/americas-monopoly-problem/497549/ and here: http://equitablegrowth.org/research-analysis/market-power-in-the-u-s-economy-today/ .

[9] In Switzerland, the classical example is the pharma market: it is a duopolistic situation, explicitly protected by the Swiss political system, which does not complain even though drug prices are well above European standards. This is accepted because research and development are still based in Switzerland, but also because, as everyone knows, R&D is the best way to help this industry compete in international markets. The same can be said about the big Swiss banks and the way they are protected.

A note: Human Intelligence (HI) vs Artificial Intelligence (AI) in the Asset Management (AM) world, or why AI is still not good enough.


1. Introduction.
In the previous essay (the addendum essay) we listed three paradoxes likely to prevent a full implementation of an AI wealth allocation decision process in the AM world. As usual, in doing so we introduced some ambiguous terms and unclear elements, which need to be further developed and discussed. We will concentrate our attention on the most important paradox presented: the stability paradox. This essay’s aim is to show how the discussion of the stability paradox opens a Pandora’s box: new, tricky issues, such as AI consciousness, appear and testify that the road to a fully satisfactory implementation of AI in the AM world remains long. Our investigation will prove that these problems are unavoidable obstacles, explaining why a pure AI solution (that is, one which requires no human intervention) is not yet possible in the AM world and will not be for a while.
We will show that the main flaw characterizing a pure AI solution is a methodological one: an AI is a combination of applied mathematical methods (broadly labelled data science) run at astonishing speed on huge data sets. This science, to quote Chris Anderson, accepts as one of its postulates the possibility of developing a “science without theory”. Because a theoretical narrative is absent, this approach favours the acceptance of black-box solutions, which are far from helping those in the AM industry who need sound, logical and coherent narratives to justify an allocation decision. We believe that moving to a pure AI solution is not yet optimal: at the end of section 5, we will offer a first discussion of why this is the case. Through this discussion, we will introduce the notion of a “dynamic system”, which is likely to be the best approximation of our economic-financial-political world in mathematical terms. If this is the right approximation, then we must acknowledge the need for Human Intelligence (HI) theoretically based investigations and results alongside the perfectly valid and needed AI investigations and results.
To use the terminology presented in the last essay, this is exactly our main conclusion: current AI cannot pretend to match HI, because AI continues to provide only correct scenarios (constantly anchored to data) and cannot formulate true, holistic, scenarios, or scenarios which have this aim. On the contrary, we believe AI is becoming, and will increasingly be, complementary to HI. We will treat these arguments in sections 5-5.1: the right path is, as usual, in the middle; indeed, the wise AM provider will be the one who blends both AI and HI, and the real challenge ahead is to find the right proportions.
Namely, AI is a very powerful device, but it is lacking in at least two key areas: creativity and awareness. Due to these two weaknesses, AI cannot yet pretend to deliver coherent and logically sound asset management advice.
As such, to reinvent the AM industry in this new century, we warmly suggest looking into conviction narrative theory rather than only into AI (David Tuckett and Milena Nikolic presented this theory in detail in a June 2017 SAGE journal article), or, more generally, into the work done by Yuval Harari about how humankind reached its current, technology-driven, condition, i.e. his emphasis on our “capacity to elaborate [share] allegorical stories”, rather than focusing only on developing AI.

2. The current definition of AI in AM and its main limits: Powerful skills are not enough to define an intelligent device.
Nowadays, AI evangelists are used to presenting the intelligence of their processes by referring to two main thoughts (I did not bother to find the exact references; I have seen these two phrases so many times that I have the feeling there is a huge copy-and-paste contest going on): first, “[I]n 2016, a machine beat the world champion of Go – a game renowned for its subtlety and complexity – for the first time. The victory by the AlphaGo programme, developed by Google DeepMind, has put artificial intelligence (AI) back into the spotlight.”, and second, “[The] usage of AI and deep learning will allow us soon to see correlations among time series we will not think of.”
Clearly, these two arguments are marketing statements: they are facts, but they do not define the meaning of the word “intelligence”. In other terms, from these two sentences we do not actually understand why seeing AlphaGo beat a human at Go represents proof of AlphaGo’s intelligence or, similarly, why seeing a device that is able to create and check all sorts of possible correlations among inconceivable numbers of data series, with everything done in the blink of an eye, should define a useful form of intelligence. In my view, in both cases we are describing a very powerful, nice and useful tool endowed with paramount skills, far more efficient and powerful than those of a single human (or even a group of humans): however, a machine continues to lack some major features that define HI, and therefore constantly requires the presence of humans at its side.
The next section will discuss these elements in more detail and present three main problems characterizing the current definition of AI, when applied to the AM world.

3. If faced with the real economy or the allocation of wealth, which require an understanding of the economic system, how is AI supposed to work?
Let’s go back to the statement about AlphaGo: the device beat a human. But can we interpret this victory as a sign of the device’s intelligence? No, clearly not. Why? Because even though AlphaGo learned how to play during the game (we are not denying that), it was playing, by definition, within a stable framework. After all, despite the game’s complexity, Go is defined by a framework and a series of rules, limiting the “creativity”, if we can really use this word (more about our usage/definition in section 5). Each step must be considered to elaborate the best strategy, that is, the move that will be useful to win the game. Fixed rules and a fixed framework are taken for granted: the environment is what it is and the “laws” of the game are set in stone.

What about a changing environment? What about having a “game” in which the rules and framework are constantly redrawn or at least appear to be so? Even better, what about an evolving “game”?
For example, if Go were to transform into another game, TroGo, how would AlphaGo (or any AI device) perform?
Here we have at least three fundamental issues that AI must solve to claim the status of a real, intelligent device:
1) AI needs to be conscious that Go is now TroGo; thus, AI needs to be aware that the old world/game is gone and that it is now living/playing in the new one. 2) AI needs to figure out (fast) the new rules of TroGo to elaborate the optimal choices, and here we take for granted that the AI’s final objective is still to win. As such, the fact that the objective remains the same in Go and TroGo is a strong assumption: it is, indeed, easy to think that in a new environment each player’s objective may also change. 3) AI needs to produce a theory, or at least a theoretical narrative, that will explain and justify the choices taken in 2). Ultimately, the AI needs to explain why some choices are executed and others disregarded. It is important to note that a pure numerical approach, based on the best-fitting algorithm for the historical data sets available, is likely to appear poorly structured if a narrative is not simultaneously developed. Indeed, at any given stage, the AI will need to convince those outside the specialist realm, who cannot follow AI’s mathematical discourse, why it is picking a given move.
Even without entering into a deep discussion of these three points, we realise how overcoming them represents the real AI challenge. Now, I want to stress that full comprehension of these aspects goes beyond my current understanding; I am an economist, after all, and not an AI specialist!
Nevertheless, given that I know the complexities of working as an economist in the AM world, I can explain some simple elements that the AI device should fulfil to deliver a great AM wealth allocation service.

4. AI and science: Why a pattern-seeking device is not good enough.
Let’s consider the problems listed above in order. The first one is amazingly difficult: figuring out that Go is now TroGo requires the AI to be consciously aware of the changes in the world in which it lives. Here, we wonder how an AI can determine that the “game” has changed. To answer, we need, finally, to roughly grasp what AI really means in the AM world: AI is “just” (and it is already a lot) an infinitely more skilful and quicker pattern-seeking device (though AI can be a lot more if economic and financial systems are treated as parts of a dynamic system and maths simulation techniques are used accordingly; see sections 6 and 7). Given the huge amount of historical data sets at its disposal, AI will proceed like a human (but more quickly), trying to find patterns and correlations among them and to establish (statistically robust) links between them. Here, at its essence, nothing is new. This is the basic procedure used in quantitative (quant) finance. Finance algorithms are based on this procedure, and technical analysis charting derives from the same intuition. This is why computers oversee buy and sell trigger signals on trading desks today: if a correlation exists, it might be useful! In other words, if a pattern is out there, it is worth discovering and then using to make money! James Simons earned a lot of money by bringing these ideas and methodologies into finance.

In our case, however, AI faces a more complex task: in Go, a certain set of patterns was established and seen; now, in TroGo, are these patterns still relevant? In other words, those “ancient” correlations might simply cease to be relevant. Meanwhile, the new patterns characterizing TroGo are likely not fully visible yet; the data available do not fully reflect the new patterns (the release frequency of some data can be long). Moreover, in a trickier way, the relevance issue could concern the data series themselves, i.e. the definition of a time series may no longer fit TroGo! Who is assuring us that the data used in Go to depict a certain phenomenon are still those needed to depict the same phenomenon in TroGo? Here, the entire procedure is under the spell of a huge threat, the well-known GIGO issue, that is, garbage in, garbage out.
For instance, before the ’07 crisis, in the US job market the unemployment rate was, from both a statistical and an economic standpoint, a trusted indicator of potential upward pressure on salaries. Nowadays, however, with labour market participation having plummeted and with an unbelievable rate of technical progress, are we still sure that a low unemployment rate is a valid indicator of future pressure on salaries and hence on inflation?
Similarly with money growth and its relationship to inflation: the correlation seems to be broken! But are we sure? Are we sure we are referring to the right money supply aggregate? What if, as in the case of the US M3 aggregate, a series is no longer available? And, more broadly, which among all the series is the most relevant one in TroGo? In a prediction endeavour, if we are using the wrong one, a garbage-in-garbage-out type of error is likely to occur.
In general terms, can we still use these historical relationships?
The data’s relevance is under scrutiny and in doubt!
In other words, the maths on data is fine, and, as I said, it can be a lot more complex than in my simplified description of existing AI (system dynamics simulations and agent-based modelling are vivid examples of this complexity). Still, all these methods share an infinite trust in collected raw data, but raw data are not entities without a life (they transform as well), despite constantly being considered under the same definition. Please note, this kind of consideration would never occur in a pure application of AI within financial markets: for instance, the price of a stock, or of any other market-determined financial instrument, is the ultimate example of a fixed definition. The relevance issue does not apply in these cases; AI and algorithms work well, no doubt, with dead “definitions”. By the way, this explains why Warren Buffett is still out of reach for an AI device: a value investing strategy is not a data-driven decision process (the data are missing); it is based more on visions about future market conditions shared with a firm’s senior management.
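The correlation-breakdown worry can be made concrete with a toy sketch (all series below are synthetic and purely illustrative; nothing stands for real money or inflation data): a rolling-window correlation faithfully reports that a once-strong link has decayed, but nothing in the numbers says why the regime changed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# A synthetic "money growth" series (hypothetical, for illustration only).
money = rng.normal(0.0, 1.0, n)

# "Inflation" tracks money growth in the first regime (Go) ...
inflation = 0.8 * money + rng.normal(0.0, 0.5, n)
# ... but the link is severed halfway through (TroGo): same data
# definitions, new world.
inflation[n // 2:] = rng.normal(0.0, 1.0, n - n // 2)

def rolling_corr(x, y, window):
    """Correlation of x and y over a sliding window."""
    out = np.full(len(x), np.nan)
    for t in range(window, len(x) + 1):
        out[t - 1] = np.corrcoef(x[t - window:t], y[t - window:t])[0, 1]
    return out

corr = rolling_corr(money, inflation, window=60)

# The pattern-seeker "sees" a strong link early on and its decay later,
# but it cannot tell us WHY the regime changed.
print(f"mean corr, first regime:  {np.nanmean(corr[:n // 2]):.2f}")
print(f"mean corr, second regime: {np.nanmean(corr[3 * n // 4:]):.2f}")
```

The device reports the decay perfectly well; the question of whether the old aggregate is still the relevant one is the part no amount of rolling statistics answers.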

At this stage, AI sponsors may tell me: do not worry, Big Data is here, and this is the new data that will tell us where we stand. But Big Data is still in its infancy. All serious tech firms in this domain hire plenty of anthropologists and psychologists to help the maths folks better master the data (in particular those related to human “behavior” and choices), but this procedure will require time. Once again, pretending that this will be done soon enough to help solve the AI consciousness issue and the related relevance question is just asking for too much too quickly.
Ultimately, I do not see how a device based on historical data (a mathematical machine, which is what AI is) will be able to alert me that we are in a new economic era. A new era, characterized by new rules, where the present does not follow previously established “patterns” and which requires new ways of reading raw data; hence the relevance problem: data “definitions” are living parameters themselves. Let’s take a closer look here.

5. Can AI solve these issues? AI is likely to become a black box which requires constant HI presence to be optimal.
Now, an easy solution would be to think of an AI device in which a set of thresholds has been set: if these are triggered, the AI decides it has entered a new world.
However, this is not our point:
How can we ensure that those thresholds are set based on relevant factors, i.e. on the factors that actually change and whose changes matter in the new world?
Who is choosing the right patterns, and which deviation (and how large a one?) will alert us to the change of the world?
This looks like an endless spiral if a theoretical discourse is not simultaneously built. Only a human can justify a choice, not by referring to historical data, but by providing a new theory and a narrative coherent with it. This is creativity! In other words, to solve the problem and to highlight the right thresholds, the AI should create a theory. This is the only way to solve an issue similar to the chicken-and-egg paradox! However, a theory needs creativity.
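The threshold idea can be sketched in a few lines (the function name, the window and the z-score cut-off are all hypothetical choices of mine, not a method from any source cited here); the point of the sketch is that every ingredient of the detector is itself a human decision:

```python
import numpy as np

def regime_break(series, window=60, z_threshold=3.0):
    """Flag a 'new world' when the latest window's mean drifts more than
    z_threshold standard errors away from the history before it.

    Every parameter (which series to watch, window, z_threshold) is a
    HUMAN choice: the detector cannot tell us whether these are the
    factors that actually matter in the new world.
    """
    history, recent = series[:-window], series[-window:]
    se = history.std(ddof=1) / np.sqrt(window)
    z = (recent.mean() - history.mean()) / se
    return abs(z) > z_threshold, z

rng = np.random.default_rng(1)
stable = rng.normal(0.0, 1.0, 300)
shifted = np.concatenate([stable, rng.normal(2.0, 1.0, 60)])  # mean jump

print(regime_break(stable))   # likely no alarm for the stable series
print(regime_break(shifted))  # alarm, but only for the metric WE chose
```

The detector fires on the jump we engineered; had the “new world” shown up in a variable we did not think to monitor, it would stay silent, which is exactly the spiral described above.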
Here is an open question: Is there any real creativity with AI?

I see an easy reply: you can imagine AI devices endowed with deep learning mechanisms. However, as far as I know, deep learning entails a machine extracting information, often in an unsupervised manner, to teach and transform itself. Those new lines of code might end up being unintelligible to the human programmer who first created the device!
If this is the case, how are we supposed to extract a coherent and intelligible theory which can be presented and used to explain AI’s choices?
Using deep learning, we have further proof of the main philosophical principle underlying AI and (partially at least) algorithm usage: I do not know why it works, but it works. It solves the issue, so I accept it!
By doing so, you are implicitly disparaging HI: you are preferring a device solution (which, no doubt, has worked in the past) to human ideas, which may be slower to elaborate, but which can encapsulate both past insights and real creativity!
In other words, you are blaming humans for the failures and requiring them to please and follow, a priori, the machine.
In my view, this is the equivalent of accepting black-box solutions for handling and solving problems. This science-without-theory approach does not really explain a phenomenon. The method is interested in the data, and the data are the phenomenon’s outcomes. The real added value of this activity is to find patterns in these data; thus, once again, a human is viewed as a pattern seeker, and AI just reinforces this view. However, this is a huge statement. I prefer to see humans as “root seekers”: when a phenomenon occurs, independently of the data patterns it is associated with, humans want to understand why the phenomenon is there. Therefore, applied mathematical research, the set of methods used in AI, remains “dry” and unpalatable for someone who wants to explain why an event occurs and not “just” how it happens!
There is an easy solution available to us. Let’s treat AI as it should always be treated: as help, and as an amazing source of advice, but never as the final decider when a given phenomenon is analysed.
Our maths, our data and our computers gave these results, but those results should never fully replace the human decision process, which is not only a data-driven process, because it is based on critical thinking and theoretical structuring. Nota bene: this holistic approach has a cost, always the same one: it needs time. We will discuss these aspects in sections 6 and 7, after some remarks concerning a hypothetical pure AI solution in the AM world.

5.1. AI vs HI: Some general philosophical considerations of the AM industry and its future, or why pure AI would likely be a failure.
For an economist like me, what matters is grasping what is in front of us, in this case the likely outcome of having AM exposed to pure AI. As such, the basic question we should consider is: would a robot advisor alone in front of a client (we can take IPsoft’s Amelia) guarantee a better service, with a higher quality standard and better results, regarding allocation advice?
Sure, Amelia’s AI software will have the best skills on the planet when considering the past, and the future incorporated via thousands of numeric forecasts, but she is not, and will not be, able to be present here, because Amelia’s consciousness of being present is lacking, as explained above.
The client’s time will never be Amelia’s time, even if she is endowed with empathy and plenty of other funny gadgets to mimic a “self”. She will never manage to feel and share a client’s existential struggle of living in the now, and this despite being allowed to use all the client’s Big Data, which are data whose significance may always be challenged.

Beyond these considerations, the key feature for succeeding with a pure AI offer in the AM space is mainly the standardisation (commoditisation) of financial services. Standardisation is nothing new in the AM industry, or in any other industry: it is a basic element that cuts costs and increases margins. From this perspective, AI will be an enhancement which, by definition, tends to reduce the heterogeneity of the AM offering. This is the well-known other side of the standardisation coin: homogenisation of supply and, in the case of finance, a strong tendency to select a few investment vehicles, meaning a high likelihood of ending up with bubble phenomena.
Nonetheless, standardisation will allow two main targets to be achieved: speed and efficiency. At the end of the day, these are the main reasons for moving to a pure AI offer: you can deliver standardised wealth allocation solutions quickly, to plenty of clients at once.
But, as we have already explained, fast allocation advice might become an issue. Namely, consciousness of the data available and the use of real creativity to describe the current economic status require time to be encapsulated in a satisfactory and well-structured wealth allocation advice.
Nothing new under the sun: AI can count on all the power of cloud computing to deliver its fast service, but this has a cost: less quality (no real creativity) and less transparency (no consciousness and so no solution to the data relevance problem). Sadly, an excuse will always be available in case of AI failure: either the raw data were wrongly collected, so the machine is correct and the failure is due to the statistical apparatus, or some data are still missing, and so more data must be collected.
The outcome will always be the same: a few humans willing to think (Warren Buffett’s spirit, please last forever with us) alongside all the others spending their time collecting (soft) data, or even involuntarily producing them, and traditional (hard) data, hoping they will teach us what to do, thanks to a mathematical machine pompously named AI.
All in all, AM service quality will continue to depend on the only real variable that matters in the AM industry: the time an AM provider wants to spend before delivering a wealth allocation advice. Here, I really want to be clear: I am not against those actors in the AM world who want to move as soon as possible to a full AI solution. This is their choice, and we are in a free economy; consumers will decide, after all. My goal here is only to stress the overall consequences and possible risks.
In any case, due to these considerations, smart clients and/or clients passionate about finance will ignore an AI-based advisory offer (or use it as a simple benchmark at best): they will create their own offer (trading websites will provide them with more than enough) and they will look for real creativity and freedom. In our view, this phenomenon is already under way and, with AI involved, will likely amplify. Others will move more and more wealth into venture capital, shadow banking and other, heterodox, solutions: in these places, those involved will have the feeling of financing projects of real value and not trends, patterns and all sorts of statistical considerations, which they may not consider to be part of real finance!
What about the institutional clients of AM in a pure AI world? They are likely to be squeezed between a homogeneous offer and an increasingly standardised regulatory framework, which may soon be based on AI-driven metrics. The likelihood of institutional portfolios being more exposed to systematic risks will increase.

However, are we sure the AM world is really going in this extreme direction, i.e. towards a generalized pure AI solution?
We do not believe so. Why? Because embracing a pure AI solution implies accepting a world of financial advice defined without theories, to rephrase Chris Anderson.
Here, data are mastered but not understood or properly challenged (as we said above, data definitions are taken for granted; raw data are the real masters). Theoretical investigation, on the contrary, often challenges either the relevance of the data set used to describe a phenomenon, or the data as they have been defined!
More dramatically, having all those AI numerical results on board may not be enough if the world we are living in is, to use a mathematical expression, a “dynamic system” (details in section 7 below) characterized by radical uncertainty. Indeed, in this “system”, maths methods on data without theoretical discourse, and therefore without awareness, creativity and intuition, are doomed to fail!
In this world, the presence of several theoretical discourses in addition to, and at times substituting for, the results provided by data science is imperative. Let’s present these arguments in the last two sections of this essay.

6. What is difficult for AI is normal for HI: So why bother!
Funnily enough, questioning whether we are still in a Go game or have moved to TroGo is just the normal daily work of an economist in the AM industry. However, the economist’s starting point is different from the AI one. The first and most basic question is: am I sure that what I know in terms of ideas, from my learning and experience, is still valid for interpreting the current economy, or should I redraw my knowledge and consider new ideas? This starting point is broad, and it does not contain a direct reference to the data sets in our hands. Nonetheless, it is by observing these data that we decide to activate our brains.
Here, an economist’s brain is open to the new environment, and his main query will be: Are the ideas I have in my repertoire useful, or should I add some new ones?
Please note that (and it is not a rare occurrence) these new ideas may not always fit the present data series (the data set could be extremely short). We are nonetheless ready to defend these ideas if the theoretical narrative stands, if it is convincing and if it is logically correct.
An economist will use a new concept because it is required to handle (mentally, at least at first) a new puzzling query (or several). The concept will then generate, in turn, a (series of) correlations to be checked against the data. These data (call them the world, or nature) are now under stress and scrutiny. We are asking the data for proof of whether our ideas are correct, and this is where statistics and maths are supposed to come to the centre of the scene! (This has been the standard modern scientific procedure from Galileo, Descartes and Kant to Popper and Kuhn.) Once again, the starting point of an act of knowledge is to understand a phenomenon, and an economist wants to see if he can understand why and how an economy is changing! The starting point should not only be the (raw) data values or their patterns.
Afterwards, the economist will focus on some sets of data which are supposed to help confirm the new theory:
By so doing, the economist solves the issue of knowing whether we are in Go or in TroGo (the consciousness and relevance issues pointed out above are solved) by building a theory and focusing on certain correlations. Clearly, all these steps have a cost: time. As already said, without time there is no really great service.
The economist points to what is new in the new framework and what really matters, which in this case is understanding the data and justifying them! The hard part of the HI approach can be described in a single sentence: creativity and constant questioning.
The difference between AI and HI is clear now: humans do not go straight to the data, trying to please them. Man first evaluates what he has available in terms of concepts and theoretical background; once this is properly set, he will attack the world, or rather the data. The explanatory, theoretical discourse is basically the set of arguments and ideas which will, eventually, be presented to the client: the pedagogical discourse we must deliver in front of clients is always in our minds. Data will drive many decisions, but, simultaneously, some decisions will be based more on our critical thinking and our consciousness of being in a new economic environment, which calls for new, out-of-the-box actions.
Our actions always have goals, but the goal is not always to find a “regularity” within data, which at best explains the past and is used to extrapolate into the future: we always focus on the elements which, we believe, represent the key aspects defining the new environment, and we ignore others because we believe they are irrelevant in the new framework. The future is then built from those few elements that we believe are key. HI has this unbelievable and unmatched flexibility: if needed, we work in an n-dimensional causality space, but, when required, HI effortlessly shrinks the space to one of lower dimension.
Why this?
Because HI finds the right number of dimensions so as to treat the issue “easily”: this is what we call, broadly, an intuition, and it may appear only by being conscious of the need to reduce complexity.
Once the intuition is there, a discourse can be elaborated and a vision shared, all without, a priori, constantly looking at data. Our democratic political world works like that, but any AM advisory team basically follows the same principle: a client wants a discourse, a frame, not only defined in terms of data but also full of passion. The future will always bring unexpected events, but it is better to prepare to receive them by taking our passion on board and not only our rational calculus.
HI simultaneously solves the three points presented above; that is, HI is conscious that we are in TroGo, and HI learns the new rules associated with the new framework by creating a valid theoretical picture of it, which can be easily shared! Please note: the theoretical picture might be wrong; still, it is based on the awareness that something radical needs to be done to understand the new framework. The scientific debate (and the analysis of the data) will do the rest in establishing the validity of the theoretical construction.
Indeed, economists talk, communicate and share their views. Being in TroGo is, after all, one possibility among others: what matters here is knowing whether the view is shared and agreed upon with others. HI is always a form of social intelligence (SI): these are, by the way, the famous strategic interactions we talked about in our previous essays, and which will be developed in the next one.
Funnily, as a side note, we can even think about the possibility of having several AIs communicating as humans do:
Will this solve anything? How can this be organized when each AI machine is privately owned? What will be the implication when deep learning is present in each AI?
I just believe this is an entirely new Pandora’s box: I will not open this discussion now, maybe in the future. Besides, in any case, I do not see AI solving its awareness issue thanks to communication with its peers.

7. AI vs HI: Why the human touch is definitely needed: We are living in a dynamical system.
As we saw in the previous section, HI has an attribute that AI does not: real creativity. Ideas can be and are created independently from data sets, and some ideas are used if and only if the environment requires them. Now, my long thesis hinges on a simple remark: if Go always remains Go, AI will do the job despite not being fully HI compatible.
Indeed, if Go is Go, it is key to remember that this implies that i) data are defined in an indisputable way and ii) we master those data and we use them, but we do not care to ask ourselves why they are present in the first place. We are in Chris Anderson’s dream world (the end-of-any-theory world), in which the scientific method is considered obsolete. In Chris’s dream, we live in a world shaped by data patterns! Everything will then be rationalized, pictured and forecasted as part of a gigantic, alienating framework, which can only change by following patterns or complex data structures: is this not the best representation of Weber’s Iron Cage?

Fortunately, Anderson’s dream is likely to remain a nightmare (let’s hope forever).
In our view, there are plenty of signals showing that our economy is likely to be a “dynamical” system, to use a term from mathematics and physics, in which data are far from being defined in a fixed, stable way. Creativity and the openness to question and theorise via our old scientific approach still make a lot of sense. That said, a “dynamical” system’s main characteristic is radical uncertainty. Let me quote the excellent FT columnist Wolfgang Münchau here:
“The financial crisis turned what outwardly seemed a stable political and financial [economic] environment into what mathematicians and physicists would call a “dynamical” system. The main characteristic of such systems is radical uncertainty. Such systems are not necessarily chaotic -though some may be- but they are certainly unpredictable. You cannot model them with a few equations… Radical uncertainty is a massive challenge, because you can never be sure of much. In particular, you can no longer be certain that you can extrapolate the trends of the past into the future”.
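Münchau’s point can be made concrete with a toy example that is not an economic model at all, just the textbook logistic map in its chaotic regime: two trajectories starting almost identically become unrelated after a few dozen steps, so extrapolating past behaviour into the future is hopeless.

```python
# Toy illustration of radical uncertainty in a dynamical system:
# the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
# Two nearly identical starting points diverge completely, so past
# observations cannot be extrapolated into the future.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # starting point perturbed by 1e-6

# Early on the two paths are practically indistinguishable...
print(abs(a[5] - b[5]))    # still a tiny difference
# ...but after 50 steps they bear no useful relation to each other.
print(abs(a[50] - b[50]))  # typically far larger than the early gap
```

The sketch is purely illustrative: the economy is not a logistic map, but the qualitative lesson — sensitivity to initial conditions defeats equation-based forecasting — is exactly the “dynamical system” property the quote describes.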
Why are we living in this sort of system? The list is long, so I will give only two main examples here, with the rest to come in the following essays:

1. If we examine the market economy in developed countries, the price system, from the top (finance) to the bottom (consumption goods and services), is deeply distorted.
At the top, we have phenomena like zero or negative interest rates due to massive monetary expansion policies, which have flooded financial markets and distorted numerous prices. In addition, some central banks have invested billions in these markets, swelling their balance sheets and adding further distortions. Furthermore, these phenomena are likely to be amplified by passive investment vehicles, which push whole series of prices up and reduce the liquidity of certain underlying instruments. No surprise, then, that some prices appear unrelated to the business plans of the underlying firms.
At the bottom, in a huge paradigm shift, globalisation and technological progress are constantly pushing prices down, offering more for less and making several prices unintelligible in economic terms.
Why do WhatsApp, Google or Facebook services have no price for the consumer? What does this mean? Does it mean they have no value? Do we have a valid theory for this area? How do we economically interpret the raw data coming from these activities? What about service providers such as Uber, Airbnb and the whole constellation defined by the so-called sharing economy? Are we sure there is really no evil -to rephrase Google’s motto- in these great new services, which are reshaping the entire economy?

2. The way in which we work and interact with each other: everything is changing in those spaces as well. Notions like (un)employment, the job market, career transitions and career paths are constantly being challenged in ways that are not captured by statistics. Furthermore, the stream of revenue associated with our work is also under threat, with huge repercussions in terms of political instability and social frustration. Where do we go from here? What is the added value, or marginal productivity (in value terms), of a person working in an e-service firm when all firms become e-service providers? What is the link with that person’s salary? Is the marginal productivity of labour still a good measure of a worker’s added value and of a worker’s salary?

I will continue to elaborate this list in future essays. Nonetheless, these two aspects are enough to show the dramatic changes we are experiencing. The economy is changing so drastically that we must embrace the dynamical system’s vision of it. This effectively implies more complex and elaborate models on the one hand, and welcomes computer simulation, such as the AI approach to the economy, on the other. However, this approach still leaves huge and important room for human judgment and analysis, because only humans are endowed with the main source of light able to pierce the dark: creativity and constant, critical questioning.

A methodological addendum to the CIO essay: A discussion about the true vs sure approach and the three AI paradoxes in AM.

1. Introduction.
Our previous essay was marked by some form of ambiguity. To highlight the importance of the CIO’s creativity, we used (apparently) ambiguous sentences like this one: “Human beings are truly creative and this is because they look for true scenarios and not merely correct (sure) ones”. Now, to move forward, we need to spell out the meaning of this sentence in a very transparent way: this will allow us to properly introduce the discussion about the strategic interactions among CIOs. Our starting point will be to enrich our allegorical crisis narrative to discuss the assessment of the houses after the fire (the origin of the fire will be discussed in our last essay). These elements will allow us to reach a twofold conclusion:
a) There are three main paradoxes associated with a purely AI (algorithm) based asset allocation process; we will clearly identify and discuss them; and b) we will, once again, clearly stress the importance of the human factor when asset allocation choices need to be made.

2. Why sure scenarios are not enough, or why the human touch is so important.
In our previous essay, a financial crisis was described as a fire breaking out on the ground floor of two major houses. Our narrative was then mostly concerned with the swift intervention of the fire department and its consequences. Now it is time to enrich our story by adding some details about the organisation of the building complex and its ownership. It is important to know that all the families living in the houses are renting them: the costs associated with the fire will be met by the owner first, who will then share this burden with the families. Clearly, the owner wants to assess the real status of the houses after the fire, so as to ensure that the houses are solid and resilient. He can gather advice in two different ways:
i) He can call a construction engineer to inspect the houses or ii) he can commit to spending more by allowing, after the engineer’s work, an analysis to be undertaken by a private investigator (a character like Columbo).

At this point, our metaphor becomes crystal clear: if the owner considers i), he is taking for granted that the engineer’s specialist knowledge, i.e. his proven expertise in evaluating and using data, is enough to guarantee a proper assessment of the houses. The engineer will assess each house by considering a model of the house. He will then collect all sorts of data, such as the materials used to build each house, the dimensions, the quality of the terrain and so on. With this data, and through calculations, he will be able to assess the status of each house, as well as to provide a forecast for each house once some of the works have been undertaken. The engineer’s results are sure ones: given the data and the model, the results are logically, rationally correct (sure). By contrast, with ii) the owner takes another option: he leaves the door open to a more creative analysis in which the technician’s work is acknowledged but further remarks and nuances are incorporated, because it is humans, not machines, who live in the building complex. In this case, the Columbo character will spend time talking to the members of the families (and to some firefighters); he will count the number of paintings hanging on the walls before the fire, evaluate the shape of the new furniture and the reaction of each family member to the new environment, and so on. Here, a holistic approach will provide a big-picture sort of result: the truth is more likely to be found with this sort of global analysis.

3. The end of the story, or why a sure approach is not always the best one.
The above narrative encapsulates at least three main paradoxes that arise when only AI is used and strictly followed:

A. The stability paradox: If we accept the AI approach, based on modelling and data, we are implicitly accepting the stability of the framework. In other terms, to use our narrative, once the house is reinforced, the laws of material physics must, and will, apply. From an AI perspective, the algorithm is correct if, and only if, the economic laws remain the same. This is, by the way, a huge contradiction, because the AI evangelists are simultaneously telling us that everything is changing and that we need to constantly adapt our scenarios. However, the key data needed for algorithmic scenarios are only available at low frequency -monthly, but more often quarterly-. Here, a fixed framework is simply a paramount constraint, imposed by the data themselves, which cannot be overcome.
In any case, underlying any model (algorithm) and its sure (correct) results is a form of stability -this also includes the stability of the random factors used to create “realistic” predictions-. Now, this stability does not prevent the possibility of a major collapse of the system (another fire). But although systemic risk exists and is considered by the AI, systemic risk and paradigm shifts -e.g. no clear link exists, nowadays, between money creation and inflation- are minimized, because they cannot quickly enter the set of formulas defining an algorithm.
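The stability paradox can be sketched in a few lines (the series is hypothetical, invented purely for illustration): an “algorithm” calibrated on a stable regime keeps producing internally correct forecasts after a structural break, and its errors grow without the model ever noticing.

```python
# Sketch of the stability paradox: a model calibrated on a stable
# regime produces "sure" (internally correct) forecasts that fail
# after a structural break. All numbers are hypothetical.
pre_break = [100 + 2 * t for t in range(10)]   # stable regime: +2 per period

# "Algorithm": estimate the per-period change from the old regime.
slope = (pre_break[-1] - pre_break[0]) / (len(pre_break) - 1)

# Reality changes: after the break the series drifts down instead.
post_break = [pre_break[-1] - 3 * t for t in range(1, 6)]

# The algorithm's sure forecast keeps extrapolating the old slope.
forecast = [pre_break[-1] + slope * t for t in range(1, 6)]

errors = [f - a for f, a in zip(forecast, post_break)]
print(errors)  # the error grows by 5 each period; the model never notices
```

Given the pre-break data and the linear model, each forecast is perfectly “correct” in the sure sense; it is only wrong about the world, which is exactly the distinction between sure and true drawn above.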

Besides, the stability paradox is likely to generate short-term strategies. How? Simply because some principles are taken for granted, like the idea of market efficiency and -closely related- the impossibility of beating the market. Here, the door is open to massive investment in passive vehicles, i.e. ETF index trackers. But this is pure short-term finance, because we are buying a basket of instruments instead of carefully picking them one by one: by choosing a basket, we are giving a premium to companies which do not deserve it. Financial markets, like any market, are systems whose processes generate (undistorted) prices, which are guides for efficiently allocating wealth: if prices are distorted by this pooling effect, what will this imply in the medium to long run? What about the role of an unbiased price-signalling system (i.e. the heart of a free market economy)? What are the incentives for firms’ management? What about the good ones versus the bad ones? What about firms’ governance if a section of the stockholders does not essentially care about the firms’ actual business?

On the other hand, with the Columbo investigator, the modelling results will not be the whole result: the character of each family, their relationship with the neighborhood and, even more importantly, the fear of a future accident might fully change the way in which the (rebuilt) structure is used and will evolve over time.

Similarly, the true future destiny of an economy does not depend on numerical factors alone. This is a far more complex business: only with constant pedagogic engagement from CIOs will asset management customers end up capturing this complexity; hence the importance of a robust narrative.

B. The data-mining paradox: It could always be maintained that our first point is valid only because, with AI, the engineer does not have enough data. If we want a better, more stringent and sure prediction, then we should simply collect even more data (among it the famous “big data”).
But here we enter straight into the data-mining spiral: we need to collect more, so we will add data-collection devices everywhere (in our story, we will add smoke detectors inside and outside the house). But this will push agents to change their methods of interaction and behavior, which will in turn require additional data collection to get a better view of their choices and behaviors, and so on and so forth, in an endless cycle. By the way, no one can exclude that the engineer who suggests collecting more data may have colluded with the data providers who want to install more data-generating devices. Metaphor aside, in our world, it might be time to critically scrutinize the influence of the GAFAM members in the current Big Data -AI- mania.
The conclusion is simple: no one is taking the time to think anymore, because we are all just spending our time collecting (soft) data and traditional (hard) data, hoping they will teach us what to do. Two funny consequences of this paradox:
a) we are just piling up data hoping that it will tell us something, thanks to the algorithms we apply to it. But by refusing to think, we are stuck in the “algorithm box”, and any algorithm, even with a built-in self-learning device, is a box!
b) funnily enough, we do not evaluate the numbers anymore, and we are losing our critical view of them. A great example is the analysis of the US job market. This spring, almost all commentators said that the US economy is at near full employment; the data are telling us this without ambiguity!
But is this true? What sort of full employment are we actually talking about? Few are ready to recognize that, while the unemployment rate is indeed low, so is the participation rate in the job market: the rate is 4.5 percentage points lower than at the turn of the century (before the Internet bubble), which amounts to something like 9 million people who are no longer participating in the job market! I guess that is more than enough, politically, to lose a lot of swing states!
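The order of magnitude in the paragraph above can be checked with simple arithmetic; the population figure used here is a rough, assumed round number (about 200 million of working age), not an official statistic.

```python
# Back-of-envelope check of the participation-rate claim above.
# The population figure is a rough, assumed order of magnitude
# (US civilian working-age population, ~200 million), not official data.
working_age_population = 200_000_000
participation_drop = 0.045  # 4.5 percentage points below the 2000 peak

missing_workers = working_age_population * participation_drop
print(f"{missing_workers:,.0f}")  # ≈ 9,000,000 people outside the job market
```

The point of the exercise is the critical habit itself: a headline rate hides an absolute number, and a two-line calculation recovers it.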

C. The human paradox: It is important to notice that Columbo is not denying the importance of the work undertaken by the engineer: he knows that it is valuable, and he has no problem admitting that it is our best method of understanding where we currently stand. The challenge lies more in the ability to forecast. Let’s refer to the narrative one last time: the same powerful fire will generate different behaviors in different families. One family is likely to become very cautious, while another will react with indifference. What matters, once again, is that a priori Columbo’s intuition is not reduced to a (data) box when he is elaborating his ideas.
That said, we can always await the famous AI singularity and its intelligence (NB: how this concept is defined is pure metaphysics). However, the singularity is not here yet, and whether it would in any case represent real progress for mankind is another open metaphysical debate.

In conclusion, we really do not see any valid reason to be so against human intervention in the asset management domain, if we exclude one main element (which no robo-advisor evangelist will tell us): the cost of having human ideas as a plus.
The attitude vis-à-vis AI analysis and forecasting will clearly be a differentiating factor between CIOs, and will partially explain their strategic choices. We are now ready to enter into this last part of the discussion.

Where do we stand with the ’07-’08 crisis, and why is the CIO job so important?

1. Introduction: Storytelling and its insights.

One of the puzzling questions in today’s asset management industry is why the role of Chief Investment Officer (CIO) is still viewed as one of the most important. Indeed, nowadays, artificial intelligence evangelists -believers who trust “only” mathematical algorithms- claim that this job has no future: a well-calibrated (in terms of past-data analysis) piece of software is already allocating assets under management more rationally and cost-efficiently than humans ever could, since we are more likely to draw misleading interpretations from the analysis of big data sets.

In this first of two essays, we will shed light on some factors that answer this question. The main theme will be to justify the CIO role by referring to a neglected feature: the CIO’s ability to elaborate and write down allegorical stories. In other words, the neglected aspect of the CIO job resides in storytelling abilities, the skill of finding the right words to present complex phenomena by creating an allegorical narrative that can easily be shared with clients inside and outside his/her financial institution. This story is key, as it is needed to convince the public of the quality of the allocation scenario offered at a given time and of the timing chosen to modify this allocation. Besides, as we will demonstrate, this ability enhances a CIO’s major quality: creativity.
It is because the CIO is a fallible, error-prone creature that they will invent scenarios, non-conventional rules and paths: human creativity is what matters, and AI is far from having a gram of it.

2. An example of a CIO narrative based on the ’07-’08 crisis, or how to point out some central facts.

As a starting point, let’s take the role of the CIO who must create a nice and powerful narrative to describe today’s global economic situation.
Now, it has been exactly ten years since the beginning of the worst financial and economic crisis since the Great Depression: What is the best metaphor to describe the current situation?
Without being very original, we can build a scenario by describing the ’07-’08 crisis as a fire breaking out in a modern residential area, in which different types of houses (i.e. modest and complex ones) have been built close enough for flames to be expected to propagate from one house to the next.
In this residential complex, the fire simultaneously started on the ground floor of two major houses. The fire department came swiftly; this did not prevent some major pieces of furniture from being fully destroyed in both houses, but it saved both houses’ structures and several items. A few minutes later, a third, very complex house was also hit by fire: we disregard this case because the fire department is still working on it and no clear evaluation of the damage is yet available.
The intervention of the fire department entailed two major shortcomings (a third one is presented in my next essay):
A. Given the degree of urgency and the fear of a propagating fire, the amount of water used was excessive. Moreover, in the heat of the action, no one evaluated how much water was absorbed by the walls, how much ran into the basement and whether some was finally absorbed by the ground: these effects are likely to weaken the solidity of the buildings, thereby changing their structures.
B. The fire department did not have enough expertise: they saved some items and let others burn without a clear idea of each item’s value and, even more inconveniently, without knowing all the consequences of letting some burn. Moreover, despite the huge effort, we are still unsure whether all the sources of the fire are out.

3. From a narrative into its analysis: why human creativity matters so much

As stated, our allegory is not very original, but it allows us to highlight two main sets of the crisis’ consequences in simple terms. Let’s now use financial/ economic terminology to obtain an equivalent translation of these two sets:
To fight the crisis, point A, the monetary authority injected too much liquidity (i.e. the famous QE experiment) to avoid a generalized credit crunch: the goal of this liquidity creation was to calm down the financial/banking system. Here, the point is to know when the excess liquidity and the associated price distortions will eventually be digested. Nevertheless, a major question remains unanswered, even ten years later: at what speed does the monetary authority need to retreat?
In theory, i.e. under rational expectations and general market equilibrium, we all know that any liquidity excess gets corrected -in the long run- through an inflationary process. But even this long-term wisdom is challenged nowadays because, on the one hand, globalization and technological progress are generating strong deflationary pressure and, on the other, the banking industry’s rules have changed because of the crisis (e.g. new capital requirements -the Basel agreements-), so that large-scale monetary creation has been prevented.
In this fundamentally new environment, how and when will price increases be registered, i.e. at what speed are the houses’ walls supposed to dry up? Is inflation likely to come soon? And without the inflationary guide, how is a central bank supposed to steer itself?
Now, what really matters is not to own a set of definitive answers to all those questions but, instead, to constantly add new puzzling queries, which also implies assessing those new questions so as to review and rewrite future scenarios.
The CIO’s mind, not algorithms, is what matters: humans have a unique ability to re-analyze and re-interpret a phenomenon, even with no new data. It is because we consider our conclusions to be true conclusions that we constantly re-evaluate them. An algorithm will provide correct (logical and rational) conclusions given the data, but these cannot be discussed in terms of true or false!

Merely by referring to his allegorical story and his holistic and critical view of the economy, the CIO will create new questions and define new future challenges to justify both the current asset allocation choices and how to modify them. The key factor is human creativity: only a human being, a being prone to mistakes, is always ready to rethink and rewrite his narrative of a financial and economic phenomenon.

The intervention, point B, has dramatically reshaped financial markets, without anyone really knowing whether this will cause or prevent future problems, or whether we are now following a path that will ensure a better allocation of our savings. For instance, the crisis had serious consequences in terms of public debt (i.e. some needed furniture was replaced to keep the ground floor functional):
How will this fact influence the future allocation of private savings?
Besides, while some investment vehicles have disappeared, others are now dictating the financial tempo in the asset management space, e.g. passive investment vehicles. However, what about a critical analysis of these new instruments? Are they as transparent, cheap and liquid as claimed? What about the judgement on ETFs being biased by roughly eight years of bull markets in the USA? What will be the long-term consequences of a generalization of these vehicles? Are we out of trouble? Are we sure that no bubbles are currently building in the financial markets?

Once again, to consider and evaluate all these questions we need a more holistic and critical approach, which only the human brain can offer.

4. Conclusion and a few steps into our future analysis: Strategic considerations and investment timing.

As we saw above, the main objective of the CIO is to explain why a given asset allocation is chosen at a given time. The narrative is there to prepare and justify further choices. Only time provides the answers to all the open questions, so what matters is to implement choices over time that maximise the expected gains once these answers have materialized. As we have already pointed out, these choices are likely to be based on a very holistic and critical assessment of the current situation.
Still, and this will represent the essence of my second essay, those moves are also dictated by strategic considerations: a CIO is not isolated.
He knows that other CIOs share a similar financial and economic outlook, i.e. the data used and the economic and financial notions are common knowledge among them: therefore, the timing of a reallocation is more likely to be set by strategic considerations, a variant of the Keynesian beauty contest type of game. But then, once again, a robust and well-structured allegorical story may reinforce the chances of being among the winners, because the analysis is likely to spread widely.
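The beauty-contest logic can be sketched with the standard p-beauty-contest game from experimental economics (a textbook illustration, not a model of actual CIO behaviour): each player picks a number in [0, 100], the winner is the one closest to p times the average guess, and a player reasoning k levels deep about everyone else guesses 50·p^k.

```python
# The p-beauty contest: players pick numbers in [0, 100]; the winner
# is closest to p times the average guess. A "level-k" player assumes
# the others reason k-1 levels deep, so their guess is 50 * p**k.
# Textbook illustration only, not a model of real CIO behaviour.
def level_k_guess(k, p=2/3, naive_guess=50.0):
    guess = naive_guess
    for _ in range(k):
        guess *= p  # each extra level of reasoning shrinks the guess
    return guess

for k in range(4):
    print(k, round(level_k_guess(k), 2))
# Deeper strategic reasoning converges toward the Nash equilibrium of 0.
```

The analogy with reallocation timing is that what each player should do depends on what the others are expected to do, not on fundamentals alone, which is exactly Keynes’ point about anticipating average opinion.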

More details will be given in the second essay. It is the presence of this strategic environment that will represent a further main obstacle to replacing CIOs with machines.

Decentralisation as a solution to globalisation: A first analysis.

One observation is inescapable after the elections that punctuated late 2016 and early 2017: using the word globalisation has become a perilous exercise for any political actor, so negative is the term’s connotation now among voters. Worse, in debates often steeped in populism and oversimplification, the word has become an outright insult, to be hurled at an adversary guilty of not wanting to impose restrictions on an international trade that has become the ultimate source of every problem.
Yet, historically, if there is one defining Western characteristic, it is, ontologically, that of being a carrier of globalisation.
If the origin of the West is indeed to be found in the Greek cities, we also know, without lapsing into misplaced lyricism, that those cities were built around epics of insubordination, restlessness and, ultimately, dissatisfaction. Ever since, these factors marking Western cities have shaped the ideology of urban elites, pushing them to look elsewhere: economically, hence the interest in trade and in the exploration of new trade routes, or purely intellectually, which gave rise to the appetite for philosophical and scientific inquiry.
As a consequence, the city gradually freed itself from its immediate surroundings to integrate into one (or several) network(s) structuring ever more complex geographical spaces. Paradoxically, those same surroundings were affected: on the one hand, agriculture had to become ever more efficient to meet the city’s needs (and those of its trade); on the other, part of the city’s productive activities (industrial or otherwise) was outsourced.

Nowadays, whole swathes of the West are fighting to slow globalisation down, which is the sign of a radical change. We must therefore ask: why does part of the West no longer want this process, but also, and this is discussed less often, why does another part want (or at least seem) to keep believing in it?

If we go by the electoral results, the (very large) Western cities are ready to accept globalisation. Indeed, it is on the basis of the “Western city” that worldwide globalisation (the ultimate stage of Western globalisation) was shaped: just as the Western university became the universal paradigm for promulgating and spreading Western technical and scientific knowledge, the great “Western cities” have become the indispensable structures, in every corner of the globe, for supporting a production and distribution of goods and services conceived and carried out on a worldwide scale. They cannot deny their nature, which is to have become the foundations of a house named globalisation, even though the anchoring of this phenomenon is no longer fundamentally Western but tends to follow the size of markets and to migrate towards the most populated parts of the planet (Asia first of all).
One would thus be tempted to conclude that the great Western cities are content with this role, proud to count among the capitals of the world, despite an internal division: the presence of a rebellious youth (the latest examples being the vote for Mélenchon in France, or for Sanders in the United States) that would like a better distribution of the gains tied to globalisation.

That said, what is more surprising is to find support for this process outside the walls of a great city, and hence at the national level: why is a country like Germany more inclined to accept this process when so many others would so ardently like to turn back the clock?
To try to answer this question, many factors should be considered, and the available literature is already abundant. However, one element is often forgotten: in Germany, the spirit of the city has literally entered the workings of the productive system; it has spread, in capillary fashion, throughout the whole country.
More precisely, in this country each firm is highly autonomous and decision-making power is highly decentralised: this makes it possible, on the one hand, to be extremely flexible in wage negotiations and, on the other, to distribute profits according to medium- to long-term objectives thanks, among other things, to trade unions sitting on the boards of directors.
Moreover, thanks to this productive autonomy, so far removed from any kind of central planning, Germany has managed to preserve a very strong productive diversification: the presence of a dense fabric of highly effective small and medium-sized enterprises makes it easier to absorb possible job losses in one sector through the existence of other activities.
In conclusion, the more or less marked rejection of globalisation within a country’s population is perhaps, after all, linked to the presence or absence of strong decentralisation, of diffuse and weakly centralised power: many Western countries would do well to reflect on this.

Trump and Congress: A drawback can easily become a plus.

The US Congress in Washington finally delivered its first major decision: for the time being, Obamacare is safe, and we will not see any new healthcare reform soon.
This outcome is a major setback for the new administration. Nonetheless, from a strategic perspective, this loss could (easily) become an asset for Trump, for three major reasons:
A. The battle over Obamacare allowed the new administration to count (and observe) its majority in Congress. Trump’s team wanted (and needed) to know its Republican troops better; they wanted to separate those who will follow the new president from those full of doubts. These objectives were fully achieved.
B. President Trump’s main reason for fighting Obamacare disappeared once he was elected: his fight was about securing a strong Tea Party vote on Election Day. In his view, reshaping or even fully eliminating Obamacare was never a primary goal; it was always a secondary one. The main political goal of Trump’s agenda is to implement an unprecedented US fiscal reform to reinvigorate (and, to some extent, reindustrialise) the American economy. Now, it is fair to say that by eliminating Obamacare the government would recover several billions, which would give it some room to finance other ideas and programs. To be honest, I believe Trump does not care much about this money: increasing the government debt, in his view, is not such a big issue. What matters is to change the fiscal structure and the way in which globalization is conducted. Besides, given that he is convinced he can re-establish a growth rate of 4%, the debt problem might be solved by the economy itself!
C. It was therefore no surprise that, simultaneously with the announcement of this important defeat, the administration proclaimed its decision to rapidly submit to Congress a tax reform concerning firms and international trade.
This reform is a twofold package: On the one hand, a first set of measures will mainly reduce taxes for American firms; on the other hand, a more unorthodox set of measures is aimed at taxing imports and favouring US exports.
The tax cut concerning firms’ profits and income taxes will be easily obtained by the administration. From President Reagan onward, Republicans have always been in favour of this sort of manoeuvre to increase economic activity. On the contrary, the second part of the project is trickier and more controversial for any traditional (pro-trade) Republican member. Therefore, the administration needed to put some pressure on the Republican members of Congress. Now, Trump’s team can count on more solid loyalty in Congress: several Republican members can hardly hold up Trump’s measures after their failure to support a quick Obamacare reform. The Republican majority needs to quickly approve measures which will clearly help implement Trump’s promises.

These three elements favour Trump in his struggle to impose some form of protectionist trade measures.
However, the outcome remains uncertain, and it will heavily depend on the following: a) the lists of protectionist measures, mainly import taxes and/or export subsidies; and b) the expected reactions from all other players involved in the global economy game (i.e. other countries).
On that note, the administration should never forget that these countries are key markets for several major US corporations (for instance, the Silicon Valley giants). The risk of huge reputational damage for those firms, and of an endless trade war, needs to be carefully considered by both Trump's government and Congress.

Will Congress be ready to follow Trump along this dangerous path?

Despite the pressure put on Congress after the Obamacare case, the administration does not have a clear, positive answer.

Quantitative easing: A first analysis of this policy based on US and Euro zone cases.

Officially, the Fed decided to use quantitative easing (QE) after the 2007-09 crisis. QE1 was already deployed in 2008, but this first phase was more an intervention to stabilize the mortgage market and banks' balance sheets than a "real" QE exercise; the last intervention, QE3, ended in October 2014. The Euro Zone (EZ) central bank, the ECB, followed this unorthodox approach only in January 2015, and its program is still running. Despite the clear time lag between their implementations, these interventions shared a common goal: the Fed and the ECB were changing their policy target, moving from stabilising the long-term inflation rate to reinvigorating economic activity through the stimulation of aggregate demand.

Now, in this essay, we do not want to focus on the unorthodox nature of this policy, which can be summarised as central banks shifting down the medium- and long-term yield curve (often by massively buying medium- and long-term government bonds) instead of performing the usual money-market interventions (which affect only short-term rates).
What matters for us is to stress and discuss the underlying theoretical premises of this policy. This discussion will allow us to answer some basic questions which are often overlooked:
Why do central banks believe in the potency of QE? What is the main mechanism intended to guarantee the success of this policy? Are we sure that the forces underlying the success of QE are present in our economies? And if QE is not working as expected, what dangers might be foreseen for our economies?
Let’s start with a clear definition of a QE policy and its underlying mechanism.

To start with, QE brings, as does any economic policy, a distortion of market forces: by purchasing medium- and long-term (high-grade) bonds, central banks inject liquidity into the financial system and push up the price of those assets (which explains why yields move lower). This new liquidity ends up in the hands of financial institutions, mainly banks and insurance companies (the latter are major players in the EZ case). Those institutions are supposed to play the role of middleman: they should channel this excess money into the real economy by expanding their credit lines to firms and entrepreneurs. This is the (classic) channel through which any central bank expects to stimulate the economy: credit becomes cheaper, and this should encourage more (private) investment. From this point of view, it only remains for central banks to gauge the economic reaction to these stimuli, and ultimately to stop them once inflation starts to wake up. Clearly, if inflation remains quiet, the central bank can keep running its QE policy. There are numerous caveats underlying this ideal process; we will stress those which most challenge the success of QE.
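The first step of this mechanism (bond purchases pushing prices up, which is the same thing as yields moving down) can be sketched numerically. The bond below is purely hypothetical (a 10-year note with a 2% annual coupon and face value 100), chosen for illustration only, not market data:

```python
# Minimal sketch of the inverse bond price / yield relationship behind QE:
# when central-bank buying bids the price up, the implied yield falls.
# Hypothetical bond: 10-year maturity, 2% annual coupon, face value 100.

def bond_price(face, coupon_rate, ytm, years):
    """Present value of the coupon stream plus the face value,
    discounted at the yield-to-maturity ytm."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

price_at_2pct = bond_price(100, 0.02, 0.02, 10)  # par: yield equals coupon
price_at_1pct = bond_price(100, 0.02, 0.01, 10)  # post-intervention scenario

# A lower yield corresponds to a higher price, and vice versa:
assert price_at_1pct > price_at_2pct
```

Reading the sketch the other way round captures the QE trade: the central bank's bid moves the price from par towards the higher level, and the market-implied yield falls accordingly.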

I. Historically, in both the US and EZ cases, central banks started QE because they were obliged to fight a phenomenal debt crisis. This crisis had a common root in both cases: a huge transfer of money from wealthy areas (the US coasts, the Northern EZ countries) into peripheral areas whose main economic activities were (are) specialised either in housing construction or, in the EZ, in activities funded by government grants. When debt creation dried up (when families' income streams were no longer in line with house prices, for instance), the transfer mechanism broke and the crisis started. At that moment, in both cases, economic heterogeneity increased: the lack of demand hit the remote areas violently, since their economic activity was highly dependent on the money transfer, while in the wealthy parts the willingness to transfer additional money into poor areas decreased (i.e. faith in a stable financial system decreased). This element was (is) key in the EZ case, where the monetary union was (is) not supported by a federal state, i.e. a political entity which might autonomously force a fiscal stimulus or a transfer to the poor areas.
Given this heterogeneity, is QE the right policy to reinvigorate economic activity? It is doubtful.
As mentioned earlier, the goal of a QE policy was (is) to favour private investment. Still, in depressed areas, there will simply not be enough credit-hungry entrepreneurs to absorb the banks' easy credit offers. In those areas, one needs to intervene with fiscal policy: public spending on infrastructure, workforce-skills upgrade programs, research and development, and so on.
In conclusion, QE's aim (reactivating the economy) has been only partially achieved. Its main positive impact has been on banks' balance sheets: here, thanks to QE, banks (and with them the wealthy areas) gained the time and means to offload bad debts from their books and restore faith in the credit system.
Nonetheless, politically, this operation is costly: if no fiscal-policy initiatives are taken alongside QE, the heterogeneous character of the economy worsens and, in the weak areas, discontent and anger will only increase.

II. With QE, central banks assume that the source of the crisis is a lack of demand in the economy rather than a structural change. As we have already pointed out, this lack of demand is the right diagnosis for the post-crisis peripheral areas, but what about the wealthy areas? Those areas operate on a radically different plane, one in which aggregate demand is likely to be far more resilient, even in a crisis, and far more responsive to other forces.
Before the crisis, the wealthy parts of both the US and the EZ fully embraced (and even gave life to) two forces: globalisation and the digital revolution.
Let's take a close look at globalisation through the seminal German case. Germany was known as the sick man of Europe in the late 1990s, largely because of a job market widely seen as in need of reform. In 2000 the European Union agreed to implement the Lisbon agenda: in order to reinvigorate growth in Europe, EU members were to stimulate investment in new technologies and adopt more liberal labour-market policies. The German government focused on this last part of the plan, and in 2003 a major reform of the welfare state and the job market was announced and implemented (the famous Agenda 2010). The outcome of this policy, helped also by a huge new wave of decentralised, firm-by-firm wage bargaining, was a dramatic increase in the competitiveness of the German economy within the EZ, obtained by lowering the labour cost of a highly skilled and motivated workforce.
Now, in globalized and highly technological areas like western Germany, consumer prices were pushed down. Yet aggregate demand in these areas kept increasing, driven mainly by the arrival of new (highly skilled) workers migrating from the remote areas. Neither the downward movement of prices nor the healthy aggregate demand stopped with the onset of the crisis.
Under these conditions, judging when to stop QE based only on the inflation flag seems particularly misleading: in a heterogeneous economic environment, can we be sure that inflation is the right indicator to gauge whether the economy is out of the crisis?
What about shifting the focus to a statistic like the unemployment rate instead? But are we sure that QE needs to be stopped only once real economic indicators are moving?
Or have the current experiments told us something different, namely that we should stop this policy much sooner, without waiting for a signal from the real economy?

III. QE's dream scenario envisaged that financial institutions would find credit-hungry actors in the real economy. As already discussed, this is nothing more than wishful thinking in the remote (non-globalized) areas; but even in the globalized and wealthy areas, no such massive hunger for new credit exists: what needs to be financed is already financed, and QE's new means are superfluous.
So the assumption that the flow of new money injected into the financial system will quickly (or with a reasonable delay) migrate into the real economy remains an unrealistic one.
Under these conditions, the likely outcome of a large QE is the creation of one or several bubbles in one or several financial markets, e.g. the current situation of the Euro-zone government bond market or the current valuation of US stocks. This implies an increase in the volatility of financial assets, which in turn is likely to amplify an overall misallocation of resources.
These bubbles indicate that banks are only marginally playing their role as middleman between the financial system and the real economy. Besides, in the Euro-zone case, the crisis has reinforced the division between the Northern and the peripheral banking systems; we have thus seen the creation of a dual banking system. In this configuration, inter-bank credit between "wealthy" Northern players and peripheral (Southern) ones dried up: another factor supporting the idea that banks are not fully playing their expected middleman role. Meanwhile, the ECB continues to play the game because this is the only way to keep the Southern banking system afloat and to assure those banks low charges in terms of interest payments.

So, to conclude this last point: in addition to the initial distortion related to the QE policy itself, central banks are likely to face further distortions in several financial markets. This phenomenon clearly complicates a soft landing for these markets once QE ends. Moreover, the presence of market bubbles muddies the waters even further, making the decision on when to stop QE more difficult and increasing the fragility of the entire system!

On Donald Trump's economic vision, or why globalization matters.

Thanks to Trump's speech to the US Congress, we can start to fully apprehend his economic plan and, perhaps, also begin to grasp the underlying economic vision guiding his choices.
What matters now is to ask ourselves: is this plan coherent? Is there a solid economic rationale behind it? And if so, can we capture the main contours of this vision in economic terms?
The main objective of this article is to illustrate the means Trump wants to mobilize and how he intends to reach his goals.
Two main elements can be highlighted from the speech:
I. Trump is asking the United States Congress to approve cuts to several existing programs and various administration costs. Meanwhile, he is asking it to increase military expenditure and (surprisingly) to approve a huge infrastructure expenditure plan. Currently, the contours of these cuts and expenditure increases remain largely unclear (apart from the military part).
Later in this article we will focus mainly on the infrastructure expenditure plan, which is indeed the surprising element, whereas the cuts in administration and the increase in military expenditure are typical Republican policy.
II. Trump proposes an important cut in corporate and individual income taxation, coupled with a partial or full rollback of the Dodd-Frank bank regulation act. The congressional process required to roll back the act will likely be quite complex and time-consuming. By contrast, the tax cuts are likely to happen more quickly, despite being politically trickier: Trump has promised a massive tax cut for the American middle class, not another reform benefiting the top one percent.

Now, we can ask ourselves two seminal questions: A) what is the rationale behind these two sets of measures? B) Is there a link between them, i.e. why is it key to present both simultaneously?

To answer, let's take a closer look at the most unusual aspect of point I above:
the one-trillion-dollar infrastructure expenditure plan.
A lot of caveats are associated with this plan; two of them deserve further scrutiny:

a) Who will bear this cost? It seems the administration wants some sort of partnership with the private sector, using tax credits. But then, how much will be borne by taxpayers and how much will fall directly on the private sector's shoulders, i.e. banks and the financial market in general? There is no clarity at this stage.

b) Who will build this infrastructure, and who will decide what to do and when? Such a humongous plan needs a lot of construction workers and a lot of coordination (potentially an entire bureaucracy). Is the administration sure there will be enough workers (without the Mexicans) available for such a long time, spread across the entire country? Who will coordinate the project? A new federal agency? All these "details" remain undefined.

For the time being, it is far more pertinent to investigate point a): why would the private sector be interested in financing such a plan, and with which money (especially considering that interest rates are likely to rise soon)?

Take a step back and look at the other part of the program, namely point II: Trump wants to implement a substantial corporate tax cut, which aims to bring back the vast amount of liquidity amassed by major US companies overseas, the main example being Apple, which held an estimated $246 billion in cash overseas.
If the corporate tax cut is implemented, these major corporations are expected to repatriate the cash. Firms would then face a twofold dilemma:

1. They can decide to increase dividends, which might explain why the Oracle of Omaha (Warren Buffett) bought Apple shares heavily last year; or 2. they can decide to invest in new production plants in the US (almost certainly the preferred outcome in Trump's mind).
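The scale of the incentive behind this dilemma can be sketched with back-of-the-envelope arithmetic. The $246bn cash pile is the Apple estimate cited above; the 35% figure is the pre-reform US statutory corporate rate, while the 10% reduced rate is purely a hypothetical assumption for illustration:

```python
# Back-of-the-envelope sketch of the repatriation incentive.
# The $246bn overseas cash pile is the Apple estimate cited in the text;
# 35% was the pre-reform US statutory corporate rate; the 10% reduced
# rate is a hypothetical assumption, not an announced policy figure.

overseas_cash_bn = 246.0

def repatriation_tax(cash_bn, rate):
    """Tax bill (in $bn) for bringing cash_bn home at the given rate."""
    return cash_bn * rate

tax_at_statutory = repatriation_tax(overseas_cash_bn, 0.35)
tax_at_reduced = repatriation_tax(overseas_cash_bn, 0.10)

# The gap between the two bills is the incentive the tax cut creates:
incentive_bn = tax_at_statutory - tax_at_reduced
assert incentive_bn > 0
```

Under these illustrative rates, the tax cut turns a bill of roughly $86bn into one of roughly $25bn for a firm of Apple's size, which is why repatriation only becomes attractive once the rate falls substantially.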
Now, option 2 makes economic sense only if you accept the following hypothesis: the current chain of production and distribution used by major American firms is inefficient, and efficiency gains can be obtained by re-investing in the US. This is a strong hypothesis: large US firms' production chains (which include several plants in several countries working in a tight web of relationships) have reached a very high standard of efficiency, thanks mainly to the digital revolution.
Clearly, the US administration can disrupt these chains by using trade barriers. Nevertheless, many of these groups are fighting a global commercial battle (Apple versus Samsung being the main example), and the US market, important as it is, is not the only one for them (globalization is now played by an orchestra of countries, no longer by a few Western ones).
As such, those firms cannot modify something that works and easily switch to some undefined, unprepared, more US-centric solution: that would only increase their likelihood of losing the global commercial battle. In addition, trade barriers remain a dangerous option for Trump's team: if the US starts a trade war, why would the others remain silent?
I continue to believe that President Trump will end up moderating this part of his program, because major US businesses simply have too much to lose.

From this analysis, I reckon that the most likely outcome of the corporate tax cut would be an increase in firms' cash distributions, i.e. option 1 above.
The shareholders (mainly the famous 1%) will then receive a lot of new money, which they will need to invest: by imposing favourable fiscal terms, the US government would then try to channel these additional funds into its infrastructure plan. Interestingly, the fact that banks would simultaneously become more risk-taking actors (due to the reduction in banking regulation) should also help the process.

Three final remarks/warnings need to be stressed here:
A. Trump's team needs to come up with a solid scheme for moving money from private individuals and institutions into a semi-public (government-regulated?) system. The last time we saw a president trying to favour this sort of transfer, we ended up with the subprime crisis: the seeds of the mortgage meltdown were planted during Bill Clinton's presidency, when he decided to loosen housing rules by rewriting the Community Reinvestment Act, which pushed banks to over-invest in the home mortgage market.

B. Trump's administration cannot simultaneously ask major US firms to bring home tons of cash earned in foreign countries (i.e. efficient production plants) and foreign markets (i.e. key selling points) and then fight commercial and fiscal wars with those same countries.
As explained above, Trump needs this money back, but he also needs to keep the flow alive: those groups need to remain highly profitable. From this perspective, we foresee no benefit for the US government in entering any commercial or fiscal war with foreign countries.

C. Politically, Trump needs a successful infrastructure plan. He was elected on a gloomy and pessimistic economic picture of his country; many of his fellow citizens agreed with this representation and cast a Trump vote on the promise of bringing back some form of decent (secure) work and thus prosperity. Now, as explained above, and despite some exceptional cases, we do not believe in any large re-investment movement coming from the major groups: on that side, we forecast only minor effects on job creation in poor areas.
It is precisely for this reason that Trump needs his infrastructure plan up and running as soon as possible: this plan is needed to create jobs among the "rust belt" (low-human-capital) population. For them, "America first" clearly means the creation of thousands of stable jobs.
In conclusion, underlying Trump's announcements there is one main target: to assure a transfer of wealth from globalization's winners into the hands of its losers. Only time will tell whether this (still highly hypothetical and unclear) scheme will prove solid enough to move money (in a normal, rising-interest-rate environment, by the way) from the affluent people of the US coasts to the poor (low-skilled) people of its inner territories via new job creation.
That said, from a medium- to long-term perspective, only one factor can really save these areas: a huge investment in education and new technologies. Ideally this should be done in parallel with the implementation of the infrastructure plan, but we may doubt it will happen anytime soon!

A why for this site.

The starting point, as is often the case, is an observation: we live in a time of drowning, under a tidal wave of numbers that engulfs us. We are, in many respects, "Drowning by Numbers", and this is particularly true in the economic (and therefore political) sphere. How can we avoid the numbness that comes with this state of affairs? How can we return to a humanly manageable measure of the flow, so as to avoid losing our way and chasing false leads? This blog is therefore intended as a path, and a tool, for freeing ourselves from the easy fashions which feed on this digital tidal wave.

From Homo Economicus to an approach closer to the human condition: man as Thrown Project.

The notion of Homo Economicus is, by definition, one of the foundations of traditional economic theory. According to this vision, man resembles a calculating machine: he collects all the relevant information capable of influencing his choices (roughly, the full set of prices and the amount of his available resources), and he rationally chooses the best allocation (basket) of goods guaranteeing the maximisation of his satisfaction.
From a purely philosophical point of view, we are swimming in pure Cartesianism: in the age of artificial intelligence, we have no trouble imagining this Homo Economicus replaced by an algorithm; well calibrated, the algorithm should have no difficulty reproducing his (rational and maximising) choices.
Let us be clear: economic science is well aware of the partial and reductive character of such an approach. It has therefore integrated imperfect information (only part of the prices are known at the moment of choice), uncertainty and even interaction with others at the moment of choice, through the introduction of strategic behaviour.
However, one constant has always been jealously guarded (or far less often abandoned): the maximising character of the choices made by Homo Economicus. Yet thanks to philosophy and its recent developments, we can envisage an alternative to this hypothesis, which will give us another, fresher and more "human" point of view.
Existentialist man offers us this fruitful way forward, which takes us away from Cartesian mechanics. Man is, simplifying this philosophical current to the extreme, in his essence and above all, a Thrown Project. The definition seems unintelligible and devoid of any usefulness; in fact, quite the opposite is true.
Let us analyse this definition word by word, so as to apply it to today's world and understand its richness.
A. Each of us is a "Thrown Being". This speaks to us intuitively: at birth, each of us is "thrown" into a reality, an environment, or more precisely into a world. Above all, we are "thrown" into the world with (and thanks to) others, and it is these others who will ensure our life (and often even our survival) in the world.
Now, objects (goods) come to us through others: who, for example, has not observed a child's ease in handling a mobile phone? We are always surprised. Yet, on reflection, the child is simply applying mimetic behaviour: he has observed our way of using the object, so he simply reproduces its handling. The child, like all of us for that matter, does not bother to understand the mobile-phone object (and its possible dangers) before using it. His interest is to use it (as quickly as possible) on a par with all the people around him.
Imitating others in their choices is therefore a behaviour worth highlighting, because it is what has allowed us to integrate into, and adapt to, our world. In many respects, it operates throughout our lives: if not this force, what else can explain herd behaviour during the inflation of a speculative bubble in a financial market, for example?
Thus the desire to be like others can easily make us forget our rationality (a rationality which is itself a strategy, a methodology, learned thanks to others) and lead us to bad choices.
On this subject, if you have a friend who is a banker, do not hesitate to ask him how many times he has had to explain to his clients that the past performance of an asset is no guarantee of its future evolution. A price rise over several consecutive years is, all else being equal, a sign of an inflated price rather than of an opportunity to seize.
In summary, the "thrownness" of our existence is at the root of our conformist attitude, of our herd-following: many choices can thus be explained on the basis of these considerations.
B. The being is also "Project": indeed, we are constantly planning and projecting ourselves into, and thus living in, the future. The present may tick by as it pleases; what we desire is constantly ahead of us. From this point of view, individual projection forward requires visibility on the one hand and stability on the other: man as project needs at least to glimpse the results of his plans in the future.
It goes without saying that in a world where certainties have become pure chimeras, the individual will once again forget his maximising rationality. His choices will instead be dictated by his desire to return to a supposedly known and stable world of the past.
But does projecting oneself towards the future while hoping for the resurgence of the past make any sense? No, certainly not.
History passes and marks the world indelibly (think only of the traces left by modern technology): going back is pure illusion (and besides, back to which point, eliminating what along the way?).
The only solution for each of us is to remain open to change: we must accept a perpetual aggiornamento.
But aggiornamento has a cost: significant, individual help is needed here. This help represents the price to be paid at the social level to obtain a serene individual projection.
Thus, helping those left behind necessarily implies the presence of a system of wealth redistribution among individuals, but also, and above all, of a high-performing education system: indeed, the only real antidote to growing instability is to enrich one's own human capital and thus guarantee a (positive) openness to the world.