The Digital Revolution of the Asset Management World: Is Artificial Intelligence the Right Answer? Some Notes on AI Demystification in Finance

 

Part I. Finance as a Data-Driven Ecosystem: Time, More than Data, is the Ultimate Limit, or Why Data Have Angles.

I. Introduction.

The artificial intelligence (AI) revolution is touching the asset management (AM) industry, and plenty of actors seem ill-prepared to fully embrace this major change. Why does AI feel more like a major threat than an opportunity to move forward in the asset allocation industry? Why is it so difficult to envision a harmonious integration?

The answer is possibly that the change involves decision makers whose skills were considered out of reach for algorithm-based devices, whereas the digitalization of some aspects of the AM business, such as the streamlining of several repetitive but also “seemingly expert” tasks, had already taken place. In any case, the arrival of AI is a quite logical consequence of the sequential increase in computational capabilities associated with Moore’s law. If computation is becoming less expensive and more powerful and tons of additional digitalized data are available, then it is simply absurd not to take advantage of it in a domain where human decision making is complex and the expertise is highly sophisticated and (if possible) quickly delivered.
This discourse seems to fit the reality of the financial sector perfectly. Indeed, more than in other sectors, decision makers operate in an ecosystem that creates and digests huge amounts of data, and everyone is eagerly figuring out how to use the data in forecasting to optimize their decision processes.
On that note, it is well known that the AM industry has always been keen on convincing its customers to use data-driven decision-making tools. For instance, what could be more convincing to an investor regarding the validity of an investment proposal (bearing in mind that before becoming a choice, the investment is always a proposal) than a historical graph of the past performance of a selected financial vehicle or a series of historical financial ratios associated with a given asset?
Generally speaking, two features often characterize a standard AM offer:
On the one hand, any AM investment proposal implies a subsequent customer choice; on the other hand, the proposal is often built around data-driven arguments, which take for granted that clients want to receive this sort of information. Yet despite all the recent “know your customer” efforts and clients’ digital profiles, the idea of providing clients with a proposal based on data-driven information is at most an educated guess about each client’s expectations. Basically, we are guessing about a customer’s ability to understand a proposal, and we assume that the client would readily approve a data-driven offer.

Moving away from quantitative-based suggestions implies opening a Pandora’s box of tricky questions.
If numerical arguments are not enough, then what should we use instead as additional arguments?
Should we open a transparent discussion about a proposal by discussing the data used, and how far should we be ready to push? To what extent do we meet this challenge? Is there a natural boundary in our ability to discuss a proposal? If yes, based on what considerations?

In a sense, the rest of this paper could be read as an attempt to answer those questions and explain why they would remain significant even in a hyper-connected and soon heavily disintermediated financial world.
Depending too much on data-driven solutions has spread an infinitely more debatable and dangerous message among investors:
If a series of wealth allocation decisions are made based on solidly grounded data-driven analyses, then clients become inclined to expect positive returns and properly mastered risk.
In other words, humans are easily impressed by numbers because, on one hand, they seem clear and straightforward; numbers do not lie, so if an action is based on them, it ought to be right. On the other hand, a presentation based on data tends to blur the boundary between simple knowledge and full understanding of a given phenomenon. The core message of this paper is precisely to offer a series of hints allowing a more fruitful discussion of this second aspect.

II. From Data to Data Biases and Data Angles.
II.A. The Notion of Data Biases and Why It Is So Important in Science.

By definition of a rigorous, scientific approach, when data are used to describe an economy in a given time period, the information conveyed by the data is considered in a strictly unidimensional way: Given the data definition and the methodology used to determine a value, it must be taken as a cornerstone from which science and knowledge can be built. On the contrary, financial “data” are essentially social and thus call for a much more cautious approach.
For instance, the way in which a measure is defined is infinitely more convoluted, because the numerical “rock” is, by definition, fragile, mainly due to its historical uniqueness: it is effectively impossible to recreate the same measure under the same conditions. Finance and economics are social sciences: Everything we collect as a number is unique because the ecosystem surrounding each value appears, by definition, only once. Any statistical (thus mathematical) analysis of these data should always acknowledge this key feature.

In this context, the famous Nate Silver quote “there is no such thing as unbiased data. Bias is the natural state of all data [added emphasis is mine]” takes on its full value. This claim sounds intuitive and easily believable (which is why it is so often quoted) but, in the end, fully grasping its essence is very difficult.
For instance, what is the exact meaning of the term “bias” in this sentence? Are we likely to face more bias in the social data realm?
If this is the case, then is it a sign of a high degree of complexity? Why is it important in finance and economics to accept and carry with us this complexity instead of refusing it?
To answer these questions, I point out a key fact:
While numerical data measure “something” of a phenomenon, the measure will never be fully accurate because we define it using a historically grounded data definition. This does not mean disparaging the measure; it simply reminds us of the importance of the historical context and of the fragility of any measured value.

II.B. From Data Biases to Data Angles.
Let’s consider an example of bias due to a rigid, inflexible definition of a measure.
The unemployment rate has been statistically defined given a certain ecosystem and (quite) strong separation between employed and unemployed statuses.
Today, that barrier is blurred and various types of work relationships exist: How valid, then, is the unemployment rate as an indicator of job market frictions in a given country? Basically, we should try to modify the measure. But then by how much? How do we determine the new, better-performing definition? Moreover, how do we judge and redefine the historical values? Is it not better to create and elaborate a web of new measures to describe the complexity? A fair solution, but, if we do this, then we will lack historical values.
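To make this definitional sensitivity concrete, here is a minimal sketch with entirely hypothetical figures: the same labor market yields very different headline rates depending on how precarious work relationships are classified.

```python
# Illustrative sketch (all figures are hypothetical): how a change in the
# definition of "unemployed" shifts the headline unemployment rate.

labor_force = 1000        # people in the labor force
unemployed_strict = 50    # jobless and actively searching
gig_workers = 80          # part-time/gig workers with no stable contract

# Classic definition: only the strictly unemployed count.
rate_classic = unemployed_strict / labor_force

# A broader definition that counts half of precarious gig work as
# under-employment (the 0.5 weight is an arbitrary illustrative choice).
rate_broad = (unemployed_strict + 0.5 * gig_workers) / labor_force

print(f"classic: {rate_classic:.1%}, broad: {rate_broad:.1%}")
# The same economy reads 5.0% or 9.0% depending on the definition.
```

Neither number is “false”; each is true under its own definition, which is precisely why the historical series breaks when the definition changes.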
To create a valid answer for this evolution we need time: The phenomenon is evolving, so we must describe it and we must agree on a new theoretical frame. Why?
Because, as Albert Einstein once wrote: “It is the theory [i.e. the concepts we set and how we use them] that decides what can be observed”. Now, a detailed discussion of this key methodological aspect would take too long and is too complex; we will not fiddle while Rome burns [1]. (All notes are at the end of the paper.)

At this point, I must stress two points:

First, despite the historical definition of a measure, its evolution still tells us something. For example, regardless of the definition of “unemployment rate” that we consider, an increasing unemployment rate is a bad signal.
Second, given the definition and the value associated with a measure, the value is, a priori, true. Our discourse here is not a moral one; no government plot is in motion to produce false or tainted data.

What matters is that a data series is likely to lose its value with time: it decays and no longer describes with sufficient precision what it is supposed to. There is no plot forcing acceptance of the measure, and we can improve our knowledge of a phenomenon by starting another measure, which basically implies embracing another theoretical frame, or at least changing the old one. [2]
Now, it becomes clear that any data series implies a definition, which carries an angle: The data is biased because we establish a single “socially” accepted and shared way to interpret it, and we continue to carry it forward.
Numerical data define a value and, simultaneously, a theoretical and historical background. By definition, a data point, or even a vast set of data points, cannot define the “full reality” because the real world is complex and dynamic, two elements constantly at work to reshape our representation of the world.
Therefore, all data is a filter because it takes for granted the validity of a vast number of definitions used to set the measures in use. The historical time frame is a huge burden that we cannot refuse to carry with us. The real challenge is to constantly acknowledge the presence of this burden and thus to remain modest about what we know (and how) and what we do not know (and how much).

III. From Data Angles to Ethical Concerns.
III.A. Why Discourses and Ethical Considerations Matter.

In the previous paragraph we only started to grasp that, too often, we tend to forget that a given number is polyhedral in essence, so its significance might change in the blink of an eye. Therefore, its usage in any decision-making process, thus also in finance, needs to be reviewed and constantly challenged. Why? Because an allocation proposal must always be accepted by an (often reluctant) investor; therefore, words matter as much as numbers when a proposal is analysed at a given time. Again, words are needed to picture the context.
On that note, we have the chance to witness one of the most extreme displays of the power of words and discourses overcoming that of numbers and data: The Tesla case.
Here, we have a firm whose market capitalization is greater than those of well-established and profitable car producers, e.g. Ford and GM, despite past and present financial losses. Here, the value of the stock is used to holistically evaluate Tesla’s future and thus the expected (great) success of a hypothetical (for now) mass production of its cozy electric cars; a big bet, albeit one typical of an idealized capitalist saga.
Now, the foreseen numbers and data are interpreted and considered from a single, very optimistic but fragile, angle:
Consider a coming change, for instance, a more serious cost assessment or possible future turmoil in the battery supply chain, the core element of an electric car. Such a change of scenario for a single stock might generate a domino effect: In Tesla’s case, does the shift affect the car industry or the tech industry? How would the entire tech ecosystem be affected if a catastrophic scenario were to materialize?

This extreme case shows how numbers or their absence are always used in sync with a discourse, which embraces a broad holistic view of reality and in particular of the future.
As individuals, we accept or refuse the angle from which data (or even their absence) are interpreted depending on whether we perceive as good the picture of the future associated with that choice in a particular domain. Our choices are based on a biased (e.g. gloomy or rosy) judgement.
Finally, the essence, the real engine motivating an investor’s choice, appears, and it is always based on ethical and therefore moral considerations:
It is always a battle between good and bad that determines any AM customer’s choices.

One of the main illusions of the AM industry is the belief that clients do not constantly wear ethical lenses: They are always on investors’ noses, even more so when investors are evaluating and discussing numbers. Here is a hint to consider: What a client is looking for (and this will increasingly be the case, as a data-overflow environment is just ahead of us) is a series of ethical considerations allowing him or her to step beyond the numerical presentations and results.
This is basically why a client (and we are all clients) at the moment of an important decision wants to understand and not simply know.

Let’s stop here and breathe deeply. The last couple of paragraphs shed some light on the element that is easily and often drowned in numbers when an AM proposal is delivered. Now, in defence of the usual AM proposal, ethical considerations are terribly difficult to determine and set a priori. But are we really so sure about this? Or can we try to highlight a relatively easy procedure? Is it out there?
Clearly, we believe this is the case. To illustrate some of its main features, let’s analyse a few more examples in depth.

III.B. From Data Angles to Ethical Concerns: Some Further Examples.
III.B.i. The Facebook case: What is good or bad in a given number?

To start, we can refer to a recent case in which a key number that was used to justify the success of a firm’s strategy suddenly became a major sign of its weakness. Unsurprisingly, we highlight the famous Facebook case. Historically, the existence of a broad (and growing) user base was a strength and a sign of Facebook’s success. Everything was based on a simple equation: More users imply more possibilities to sell user profiles to other firms keen on organizing tailored marketing campaigns on the platform. Besides, as any economic management textbook will confirm, big is always a priori better because production costs can be scaled with large numbers.
Clearly, any financial analyst seriously following Facebook’s stock was aware of and (openly) pleased with this definition of Facebook’s core business plan.
However, quite certainly, some of those analysts were worried about this business plan’s resilience.
Indeed, any wise and therefore sceptical analysis of a (mainly) new phenomenon is always based on a series of basic questions:
How long will the “good star” shine over the new business?
What factors are likely to dim this light, and when?
Ultimately, time is the sharp sword that will cut the Gordian numerical knot and determine if those numbers were “good” or “bad”.
Those analysts were justly concerned with how the company managed its users’ profiles day after day and with the criteria for selling those profiles to external companies: They saw the possible source of poison, but would they leave the business and refuse to buy its stock while the company was surfing on its momentum? No way. After all, historically, the firm’s data (and chiefly the increasing number of users) were there to prove that this part of operations was smoothly managed (if we exclude the European buzz raised by an obscure Austrian dude) and under control.
So why search for fire if (at this point in time) there is no smoke? Why not follow the majority? Following the group is not only tempting, it is also economically rewarding, because investors gain money and please their clients.
Thus, the sceptical analysts are simply silenced, drowned out by numbers and Mark’s globally recognized genius.

Still, in the blink of an eye, the same number became a weakness because of the simple principle that more users mean more potential damage. One starts with Trump but ends up with Brexit, the German elections and you name it, in a typical avalanche effect.
Thus, the entire structure becomes more fragile: if Facebook’s number of users had been “small”, the fire would have been easier to contain. In other words, something that was seen as a benefit suddenly became a major source of problems because the number of users was interpreted using a single, restrictive angle/perspective.

By doing so, the firm and the majority of the analysts devoted to following and tackling the platform giant forgot an ancient warning raised when this business was in its infancy: Facebook is a social medium, and as a medium it necessarily educates its users. Facebook clients are informed via their feeds, which subjects the clients’ perceptions and interpretations of the world to a series of filters. Here, one needs to be extremely careful when manipulating those filters for commercial reasons: Facebook users use the platform because they trust the firm and its ethics. Admittedly, the vast majority of users are ready to accept a certain degree of commercial noise in the feed, and Web-personalized ads may appear funny and, sometimes, useful.
We all tolerate this noise in the real world; we are used to it, and we know it is the way firms earn their money. But users do not accept fallacious attempts to reshuffle their political opinions. This is not about suggesting the best car given that I live in Geneva and often drive on mountain roads; this is about what I am supposed to think about other human beings and the way in which I want to organize my life with others!
Here, the trust factor is disrupted, damaged and weakened, putting the survival of the entire network at risk.

III.B.ii. The Food Industry Case and the Short-term/Long-term Risk Fallacy.
At this stage, one could always say that this sceptical data analysis does not apply often.
But can one be sure? Consider the food industry. All major actors there use too much sugar in many precooked dishes. We all know that: hundreds of news pieces, scientific studies and details about processed foods are coming from everywhere.
To be fair, companies in this sector have started to recognize the problem, as indicated by the famous Milan Declaration of two years ago, in which all producers agreed to voluntarily reduce the usage of sugar. Still, the presence of sugar remains unhealthily high by any healthy standard (chiefly in the case of processed foods sold in developing countries), because sugar is very addictive to consumers. Its presence ensures a positive consumer experience and consumer loyalty, and the producer can use a very cheap ingredient to ensure plenty of nice features in any precooked dish.
But, do many financial analysts covering this industry track the risks inherent in continuing this policy? Is it not time to evaluate the likelihood of a possible legal action and the consequences for the entire industry?
One of the roles of financial apparatuses, after all, is to help efficiently allocate savings, which entails transparent judgements and the foresight to avoid the risks and threats characterizing a particular business activity, considering that those activities take place in real time.

A discussion needs to begin at this point about the misleading usage of “short term” and “long term” in the analysis of human activities.
Take the last example we discussed. Again, adding sugar represents a risk factor for the food industry, but is it a short-term or a long-term risk?
No one knows. If tomorrow an important new study proves that the majority of our diabetes problems come from the food industry, and this study happens to be backed by several experts and spreads over the Internet, would it be the expression of a short- or long-term risk?
Honestly, it is just something that explodes thanks to the normal flow of time and its endless supply of surprises.
When we analyse human activities and risks, we should minimize the use of those two concepts. The materialization of something happens in time but remains a question of fatalism more than of any short- or long-term horizon. Therefore, if an accident does not happen, there is only one radical explanation: luck. The reader should note that without any accident, we cannot know the system’s resilience. Facing some disasters (on a mild scale, if possible) here and there is favourable for the survival of humankind and its technological progress.
The 2011 Japanese tsunami provides a very telling case study of a poorly anticipated fatal accident. The event materialized, and the infrastructure was apparently ready to support it up to a certain threshold: The surprise lay in the tsunami’s intensity, and the problem’s depth was proven by the tragedy of the Fukushima nuclear plant.
This extreme case had exactly the same chance of occurring the day after the Fukushima plant opened as it did years afterwards. There is (was) no short-term/long-term argument here; there is (was) only the possibility that a disastrous event would occur whose impact would initiate a chain reaction far too complex to be fully understood and mastered.
The Japanese case clearly illustrates how the problem lay in the plant’s structure (its business model, so to speak) rather than in the extreme event itself.
The extreme event and its full destructive potential had been there since the plant was built. Moreover, the very fact that the plant was built created the probability of an accident (before the plant, the risk was clearly zero): Generally speaking, it is the act of doing something new that generates a chain reaction which, eventually, might end up in a serious issue. The natural disaster was magnified by a technological achievement, which in the blink of an eye was revealed as a nightmare.

But after all, is this not the usual way of any human endeavor? If we decide to do something, then we modify (often more so at the beginning) our risk exposure, and if the technology we decide to use is complex, then it is likely that the problem, in the case of an accident, will be very difficult to handle. Here, we can better appreciate Pascal’s quote: “All of humanity’s problems stem from man’s inability to sit quietly in a room alone.”
We are constantly searching for new solutions and new ways of doing and mastering processes, and this generates future rewards and risks; risks, in turn, that cannot be fully evaluated because the extent of the damage remains unknown: We know a risk exists, but we do not fully understand the mechanism that generates it (e.g. how an extremely powerful earthquake will occur).

Considering this fact, I strongly believe that a serious data-driven AI investigation may shed some fresh light, helping us to evaluate all sorts of scenarios and prepare better answers to potential problems and related risks. Specifically, rather than spending too much time trying to foresee a chimerical when, we can better prepare for the after. In other words, an AI approach to envisioning and preparing contingency plans seems very interesting because it could surface many more side effects and possibly hidden correlations.
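As a purely illustrative sketch of this “prepare for the after” stance (the distributions and parameters below are invented for the example, not a real risk model), one can simulate many hypothetical post-shock scenarios and size a contingency buffer from their tail, rather than trying to forecast when the shock occurs:

```python
# Minimal Monte Carlo sketch (hypothetical model): instead of forecasting
# *when* a shock hits, draw many "after" scenarios and size a buffer.
import random

random.seed(0)  # reproducible illustration

def post_shock_loss():
    """Draw one hypothetical loss (fraction of portfolio) after a shock."""
    severity = random.lognormvariate(0, 1)   # heavy-tailed shock severity
    exposure = random.uniform(0.05, 0.20)    # share of portfolio exposed
    return min(severity * exposure, 1.0)     # loss capped at 100%

losses = sorted(post_shock_loss() for _ in range(10_000))
var_95 = losses[int(0.95 * len(losses))]     # 95th-percentile scenario loss
print(f"95% scenario loss: {var_95:.1%}")
```

The point of the sketch is methodological: the simulation says nothing about when the shock materializes; it only helps quantify how bad the after could be under the assumed model.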

Returning to the food industry example, if one refers to the numerical results of a firm that sells processed sugary foods, then one can determine how much profit was made from an unhealthy product. Here, once again, is this number positive or negative? If the selling trend is upward, then what social consequences should the firm prepare for? Which angle does it prefer? Which angle does it decide to stress? And which angle should a financial analyst consider?
Usually, since the “water” is quiet, superficial sector analysts accept the firm’s smooth and continuous senior-management discourse. Everything is fine, folks, just enjoy. The numbers do not lie, and they are all positive. Why bother, then? No fire, no smoke.
It is the reign of “business as usual” complacency, which is reinforced by the firms’ numbers and data widely accepted by the financial community. We can see a spiral created by the usual force, that is, trust. Indeed, ultimately, numbers are trusted to be “valid” signals of the present and future status of the business.

III.B.iii. The GDP Saga: A bad Interpretation of a Historical Evolution can be very costly.
Consider a last example.
The most debated macro number used to indicate an economy’s health is its GDP and its historical evolution. Everyone knows that today’s GDP is something we cannot read with the perspective of 20 or even 5 years ago. Indeed, any economist readily recognizes that elements like major sectoral shifts affect overall economic activity, such as the change of developed economies from an industrial base to a digitalized, immaterial service base (the monetary value of a service is far more difficult to capture properly than the value of an industrial item), or a changing demographic regime, such as fewer young people and more retired workers.
Those cases are constant sources of new questions about the type of information we can grasp from this statistic. For example, can we really judge overall economic activity when services are valued at their cost of production? What about all the work done without direct compensation because, for instance, there is no longer a clear cut between working and non-working hours? And what about activities like the personal marketing and networking value added on platforms like LinkedIn: how can all this be evaluated?

Nonetheless, the election of President Trump and the Brexit vote stressed another, more hidden and often forgotten element: Although GDP is an indicator of economic activity, it is, by definition, silent about the distribution or concentration of that economic activity in a given territory.
In the famous Mark Blyth anecdote, he participated in a Remain (anti-Brexit) rally in northern England. At the Q&A session, a member of the audience said, “You guys for Remain have a serious issue here: You keep saying that GDP has grown as never before since the UK has been in the EU. Still, this is your GDP in London, not ours here!”
This story reveals the core of the issue. A politician’s objective is definitely to ensure GDP growth, but, simultaneously, he or she must ensure that the whole territory enjoys it.
In other words, once again, the number is telling us something which, a priori, is uncontroversial, but we may miss the forest for the trees by paying too much attention to it. Its value should always be considered alongside other data so as to embrace a more holistic view. Our understanding is not fixed but rather changing and calling for new perspectives. In short, we should always remember Henry Miller’s wise words, not so far from Einstein’s quoted previously: “What goes wrong is not the world, it’s our way of looking at it”.

IV. Data, Time and Data-Decision Process: A first Assessment.

All these examples share a common theme: Data are not tricky per se; they become tricky as time goes by. Thus, if I want to define an optimal data-driven choice, then I must assume that data definitions remain constant, which, in turn, implies abstracting away the time that must inevitably go by.
Here, the core of the problem is defined.
If we acknowledge that agents live in a truly dynamic ecosystem, then data definitions, or the way in which agents see and understand the data, should not be set in stone; but if agents’ choices are considered optimal, maximizing their objective, then those definitions must be set in stone.
This is a paradox: Widespread, fully coherent maximization behaviours among agents require a set of fixed data definitions, but a satisfactory recognition of the real world and its dynamics implies the changeable nature of those definitions. We need time to understand everything that surrounds us, but, due to our maximization choice, we do not have it, because an optimal choice needs to be implemented.

We are all facing this issue. On one hand, we set an action to maximize our objective, which requires stability, when, on the other hand, by carrying out the optimal action we embrace the presence of time and its dynamics, and therefore the fact that nothing is really “stable” or fixed as assumed, which implies risks, failures and misjudgments.

The Facebook saga perfectly illustrates this dilemma. Since its inception, the number of users has been the measure used to judge its success. This was and is right: Facebook runs on ads (to paraphrase the words of Mark Zuckerberg before the US Congress), and more users imply more business and therefore more money.
But more business implies successful Web-ad campaigns, which require tailored commercial messages, which in turn imply the creation and maintenance of a state-of-the-art user database (rich in numerical details), as tailored ad campaigns require a deep knowledge of users.
The simple reference to the number of users was hiding something much more complex and highly sensitive: the presence of a database whose content is not transparently presented to the public but frequently used and presented to corporations ready to run their Web ad campaigns on the platform. In Facebook’s eyes, more users are beneficial if and only if more details about them are simultaneously extracted and added to the database.
Here is the business plan’s weakness: To maximize its profit, the firm needs to show third parties ready to run “campaigns” that it owns “rich” user profiles. But who guarantees the morally “fair” usage of those “rich” user data once a third party is allowed to collect or access users’ Facebook data? A “strict” Facebook policy would, quite certainly, seriously dent its profits by dramatically increasing its costs and by preventing some companies from running their campaigns.

In any case, considering a numerical value from a single, apparently indisputable, angle is a fallacy. Indisputable angles do not exist as time passes. Everything is doomed to be reviewed through various perspectives. Basically, time’s main role is to reveal fragility and ambiguities in historical numerical data and, by doing so, to degrade and possibly destroy the knowledge built on those numbers.

Hence, we might clearly know that Facebook is a firm running on ads, and that a growing number of users was positive, but it was only with time that the features of Mark Zuckerberg’s business were eventually unveiled and therefore understood by many.
Ultimately, the understanding was there to be acquired from day one with minimal effort, but the knowledge that a growing number of users was correlated with an increase in revenue was enough.
One could simply compare Facebook’s ad revenue with Google’s to judge the quality of the two business plans. But few understood how these new forms of business extracted their revenue, that is, why having a database of digital profiles is extremely important.
Time is simply a mechanism pushing people toward the understanding of a phenomenon instead of the acceptance of “simple” knowledge, with the help of human nature and qualities like curiosity, willingness, pride and the fear of being the only one who doesn’t understand.

V. Data, Time and Data-Decision Process: Conclusions.

One can draw at least two main conclusions:

A. The real “magic” word that does not appear in any discussion about data but should always be considered is trust. A person does something because he or she has a project or a plan that will be completed in the future. To realize that plan, one needs a “trusty” vision of the past and present, so as to confidently project the future. Data and their analysis provide a way to establish these cornerstones which, in principle, should help build a solid decision.
The problem is that we tend to forget how biased the data are and how data might fail to convey a true understanding of a phenomenon, despite offering an “easy” (superficial) knowledge of it.
Everyone can acknowledge the extent of the uncertainty carried by the market due to the presence of these multiple data biases. In the Facebook case, savvy analysts equipped with a solid understanding were certainly aware that the firm was/is running an “addictive” algorithm to increase users’ time spent on its service and therefore guarantee high ad campaign visibility.
To develop such an engine, one must presuppose an underlying database with a wealth of individuals’ information. On that side, Google’s success indicates that this firm also owns a huge record of our Internet habits and therefore lives under very similar constraints [3]. From those elements, one can conclude that the firm’s weakness was/is its database and that protecting it was a natural step to take.

Why did the financial markets not pay attention to this argument until the scandal?
There were warnings, and there were discussions, but the revenue was growing steadily, and the firm was wisely using the money to diversify its audience by investing in other platform-related applications.
There was no smoke and therefore no fire!
Ultimately, financial markets and data-driven decision makers (human or machine) do not like to review their notions or their hard data.
Again, pushing the analysis beyond the point where most stop may be very rewarding intellectually, but what about present decision making?
Basically, the problem is always the same: If one is preaching alone that something is wrong out there, it may cost him or her a lot of money. For instance, before the 2007 subprime crisis, there were at least two years of frequent discussions on business channels like Bloomberg and CNBC about the existence of a possible bubble in the US housing market. There was no “Black Swan” here: People like Michael Burry, who took a huge bet against the market by forecasting a major crash, were losing money until the crisis began.
This scenario shows how turning points in data’s meanings cannot be forecast.
Would AI have any advantage on that side? To be honest, no, which is why AI funds in the active world are, by default, “quantamental”: The human touch is likely to intervene even after the AI’s recommendations are established.
It is clear that social-network “Big Data” should help build better sentiment indicators, which in turn should predict those famous turning points.
Nonetheless, those new indicators are likely to be very biased.
After all, they are based on “reactions” or approval-disapproval rates to news or discussions on social media by a “special” subset of the population likely to be far from representative. Hence, the real sentiment is likely to remain well-hidden among a silent majority.
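A toy simulation can make this sampling bias concrete. Everything below is invented for illustration, including the assumption that agents with stronger (and especially negative) opinions post more often:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: each agent holds a sentiment in [-1, 1]
# (negative = bearish, positive = bullish). True mean is mildly bullish.
population = rng.normal(loc=0.10, scale=0.40, size=1_000_000)
population = np.clip(population, -1.0, 1.0)

# Assumption: agents with stronger (especially negative) opinions are far
# more likely to post on social media -- the "vocal minority" effect.
post_prob = 0.01 + 0.20 * np.abs(population) + 0.10 * (population < 0)
posts = population[rng.random(population.size) < post_prob]

true_sentiment = population.mean()
observed_sentiment = posts.mean()

print(f"true mean sentiment:   {true_sentiment:+.3f}")
print(f"observed (posts only): {observed_sentiment:+.3f}")
# The indicator built from posts is systematically more bearish than
# the silent majority it claims to measure.
```

Under these (made-up) posting probabilities, the post-based indicator is visibly more pessimistic than the population it is supposed to summarize: the silent majority never enters the sample.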
Here, despite all our efforts, the choice of standing with or against the market remains a bet:
Fundamental analysis is basically silent about precise investment timing.

B. All these caveats concern so-called “fundamental” data, that is, the data describing and defining the environment in which market actors operate: business results, economic data, etc. These caveats should not apply to market data themselves: prices and the various analytics we can derive from them, from simple means of prices to stochastic oscillators. These are the real hard data, right? A price is a price?
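As a minimal sketch of such derived analytics (toy prices; a simplified, close-only variant of the stochastic %K, which in practice uses the period’s highs and lows):

```python
import numpy as np

# A toy daily close series (hypothetical prices, for illustration only).
closes = np.array([100.0, 101.5, 99.8, 102.2, 103.0, 101.1,
                   104.5, 105.2, 103.8, 106.0, 107.3, 105.9])

window = 5

# Simple moving average over the last `window` closes.
sma = closes[-window:].mean()

# Stochastic oscillator %K (close-only simplification): where the latest
# close sits inside the lookback window's range, scaled to 0..100.
lo, hi = closes[-window:].min(), closes[-window:].max()
percent_k = 100.0 * (closes[-1] - lo) / (hi - lo)

print(f"SMA({window}) = {sma:.2f}")
print(f"%K({window})  = {percent_k:.1f}")
```

Both quantities are mechanical transforms of prices, which is exactly why they feel like “hard” data; the next paragraph shows why even the underlying price can deserve scrutiny.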
Libor was supposedly a “fair” and transparent rate until it was revealed that some banks were manipulating its calculation by submitting unfair quotes, well aware of the math used to define this interbank rate. So, a price is a price, but it will always call for close scrutiny and a dose of uncertainty.
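The mechanics are easy to sketch. The following toy example (hypothetical quotes) uses the interquartile trimmed mean that Libor relied on, dropping the four highest and four lowest of sixteen submissions, and shows how a single shaded quote can still nudge the published fixing:

```python
import numpy as np

# Hypothetical panel of 16 bank submissions (percent). The benchmark is
# the interquartile trimmed mean: drop the 4 highest and 4 lowest quotes
# and average the middle 8 -- the scheme Libor used.
honest = np.array([2.30, 2.31, 2.31, 2.32, 2.33, 2.33, 2.34, 2.34,
                   2.35, 2.35, 2.36, 2.36, 2.37, 2.38, 2.38, 2.39])

def trimmed_benchmark(quotes, trim=4):
    middle = np.sort(quotes)[trim:-trim]
    return middle.mean()

baseline = trimmed_benchmark(honest)

# A single manipulator shades its quote downward. Trimming caps but does
# not eliminate the effect: the low quote pushes an honest quote into the
# averaged middle band, moving the published rate.
shaded = honest.copy()
shaded[-1] = 2.20   # the 2.39 submitter quotes 2.20 instead
moved = trimmed_benchmark(shaded)

print(f"honest fixing:      {baseline:.4f}")
print(f"manipulated fixing: {moved:.4f}")
```

Trimming was meant to neutralize outliers, yet one dishonest submission still shifts the “price” that thousands of contracts reference.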
Certainly, AI, or more specifically Machine Learning, is very useful, but even with these more solid data, the outcome obtained should be taken with a pinch of salt, in a modest and humble mood.
On that side, recent events, chiefly the return of “normal” (i.e., in line with historical values) financial market volatility, seem to prove the need for this humble attitude. Loosely speaking, and as we will discuss in more detail in our second part, AI is a trend-hunting device, but to be useful it needs “stable” trends that keep running long enough. Now, if the turmoil is important, as in the first quarter of this year and likely for the remainder of the year, trends tend to become short-lived, and this can lead to important losses [4].
Why?
Because, in short, when a market regime change occurs, psychological and common-knowledge aspects are likely to play a far more important role than quantitative analysis. Again, we will come back to this in detail in the next part.
We should always remember that we are living in the era of unicorn tech firms, the amazing growth of private equity funds and plenty of other out-of-market activities. Are these a sign of fatigue vis-à-vis the market mechanism? Equivalently, is the fact that plenty of financial activities are set up and carried out outside the markets not a sign of a lack of trust in this institutional arrangement? On that side, during the last ten years a “great distortion” of the financial market price mechanism was implemented through central banks’ policy, which has certainly helped erode trust in this institution.
To conclude, what we can really say is that when markets become too complex, when prices are too difficult to figure out (that is, to understand, not simply to know), chiefly in their short-term evolution, several actors may be tempted to move away and find other institutional frameworks in which investments are evaluated on the basis of calculus and numerical data, no doubt, but also, and primarily, on human and shared long-term visions and considerations.

 

Notes.

[1.] The interested reader can refer to this excellent article: http://creativethinking.net/your-theory-determines-what-you-observe/#sthash.goBkQ9KR.dpbs . Basically, in Einstein’s tale the professor is a knowledgeable person, but he does not understand (figure out) the concepts he is manipulating. Very often we are victims of the same trap: By referring to our knowledge we pretend to understand (we know how our television set works, but we do not understand it). We will disseminate hints about this key distinction between knowledge and understanding throughout this paper.

[2.] It goes without saying that here we see one of the main advantages of living in a free society: A datum’s definition can be freely discussed and challenged, which implies a critical review and therefore favours the formulation of alternatives, which might lead to better knowledge of a phenomenon. We also know that trouble often starts when people refer to a single value as the right guide and the right tool to approach and solve an issue. Again, democratic and free-speech regimes allow the denouncement of this obtuse attitude and push us to consider alternative views and angles. Clearly, the same remarks apply when society faces a new phenomenon: A priori, open societies are better equipped to elaborate and fine-tune measures, tools and, finally, solutions.

[3.] It is interesting to note that one’s Google profile is likely to differ from one’s Facebook profile. When I am searching on the Web, for example, I am more open to a “guess, learn and review” behavior, but on Facebook I am facing a medium whose usage reveals more details about my real preferences. Funnily enough, even Big Data are likely to be biased: Google’s profile of me is shaped by and contains features that we will not find in Facebook’s, and vice versa.

[4.] On that side, this February AI funds posted their worst monthly performance ever, see here: https://www.bloomberg.com/news/articles/2018-03-12/robot-takeover-stalls-in-worst-slump-for-ai-funds-on-record, which has reopened the question of whether AI and Machine Learning are overhyped, as presented here: https://www.zerohedge.com/news/2017-10-23/machine-learnings-overhyped-potential-headed-toward-trough-disillusionment

The Active Stock-Picking Fund Industry: The Current Status and a Few Open Queries on AI & Big Data Consequences.


“Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom.” (Clifford Stoll)

The latest available data seem to confirm that the success of the passive stock fund industry is founded on an additional monetary outflow from the active fund industry [1].

Apparently, the sunny days of some are the rainy days of others. What is certain, in 2017 at least, is that the capital inflow into this sort of vehicle seems as unstoppable as the bull stock market itself.
In addition, during the summer some active stock-picking fund managers became vocal about their daily struggle to keep their clients. What needs to be done to save the active stock fund business? Should managers simply await “the great correlation collapse”, as analysts at US money manager Bernstein call the end of the unprecedentedly synchronous stock movements that have characterized the last ten years? After all, a few months ago this was the magic recipe pointed to as the way to finally fight effectively the mounting threat posed by the passive fund industry [1]. On that note, it is likely that we will know soon: Since the beginning of this year, stock volatility seems to be definitively back at a normal level.

But will this be enough? Will these performances, if confirmed at the end of this year, last long enough to bring back a portion of the money lost to passive funds?

In response to these existential threats, most active fund groups are structuring their answers around a central focal element: The only solution available is to keep investing in technology and data to beat the market.

From an outsider’s purposefully naïve perspective we can then raise a further, deeper series of questions:
Are we sure this choice defines the right answer? Why are fund managers considering only these two pillars?
Two main and, apparently, unquestionable facts seem to reinforce the idea that this is the correct path to follow:

1. The active fund industry needs to lower its fees because passive index-tracking funds are, by definition, very cheap [2].
To fulfill this requirement, there is only one option: Generally speaking, fund managers must use software instead of humans. In other words, fund managers must increase the proportion of their choices made via quantitative digital data analysis, which implies increasing the share of systematic decisions. A priori, by doing so, more efficient actions, that is, actions based on larger sets of information, would be implemented while keeping costs down.
2. Simultaneously, by increasing its technical endowment, a fund’s organization becomes ready to embrace the AI-driven stock-picking revolution. Automatically, it will then be ready to absorb and fully profit from the next data wave: the huge amount of often highly granular, but also highly unstructured, Big Data collected mainly thanks to Web 2.0 services.

Once again, from a purely outside perspective, both points seem to boil down to a simple and general idea: To beat the market, to outperform your benchmark year after year, that is, to create the famous alpha as an active fund manager, you need more data and more data-analysis capabilities. Nowadays, in data we believe seems to be the only motto widely accepted among active fund managers, instead of a healthier in the future we believe.
That said, a well-known shortcoming is widely acknowledged among all fund managers and experts:
Although they have access to an unbelievable and fast-growing amount of digital data, data are, by definition, not information. A detailed discussion of this fact would take too long and be too complex (cf. the famous Clifford Stoll quote above); we will not fiddle while Rome burns.

What matters is that two main insights arise once we acknowledge the existence of a difference between data and information:
i) Data need to be assembled and ordered so they can be properly analyzed and then encapsulated in an informational framework. This, loosely defined, is the process of extracting information from data.
Often, this implies answering questions like the following: Are the raw data sufficient to describe the phenomenon in informative terms? What is the most appropriate way to process the data? Which intervals should be used to analyze them? Should we transform the raw data (e.g., into rates of growth) to obtain information? Many other questions in this vein exist and need to be explored.
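A minimal example of such a transform, with invented quarterly revenue figures, shows how a raw series becomes more informative once reworked into growth rates:

```python
# A minimal sketch of "extracting information from data": the raw series
# (hypothetical quarterly revenue, in millions) says little by itself; a
# growth-rate transform turns it into something an analyst can judge.
revenue = [120.0, 126.0, 130.0, 128.0, 141.0, 152.0]

# Quarter-over-quarter growth rate, in percent.
growth = [100.0 * (b - a) / a for a, b in zip(revenue, revenue[1:])]

for q, g in enumerate(growth, start=2):
    print(f"Q{q}: {g:+.1f}%")
```

The levels look uniformly healthy; the transformed series immediately reveals a soft quarter and a reacceleration, which is the kind of information the raw data were hiding.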
Clearly, this sort of investigation is not free of cost. The cost arises from, to use a buzzword, the process of data mining on huge and unstructured data sets, and it is likely to become more and more important once web-based Big Data sets come to be considered.
ii) Some information cannot and will never be available in a digital data version or, more precisely, will never be obtained by working on data sets. Concerning this problem, the formula developed by Emmanuel Tahar is spot on and deserves to be quoted: “After all, out of all information out there, only some of it is indexable [i.e., can be read and used by computers]. Out of all that is indexable, only some of it is indexed [i.e., ready for a digital analysis]” [3].

In several hedge funds, where the most sophisticated variant of AI, Machine Learning (ML), has already been used for years, these concerns have already been addressed and answered:
They use a blended solution of AI alongside Human Intelligence (HI) to guarantee that the quant program’s AI-ML suggestions are always, ultimately, validated by human eyes and thought.
You might think this is common wisdom: By definition, not all information can first be extracted and then processed by software. Therefore, AI choices cannot encapsulate the whole “present informational reality”. There is always a “mechanical” part in any AI solution, a lack of finesse, because the AI is not fully present and therefore involved: Basically, AI answers using a “reality” built and derived from data, but data, despite all the effort spent preparing them, never capture the full reality, and the missing part may play a huge role if the system is dynamic.
This part will be analyzed in a further paper.
Some authors have nicknamed these sorts of fund strategies “quantamental”, combining the traditional stock-picking skills of fund managers with the use of data and computing power [4].

I do not fully agree with this labeling: HI is there because some information cannot be extracted from data when it needs to be considered. More than a combination of insights, a human is the only one who can really feel the presence and the weight of what is going on and then make a holistic decision, full stop.

As a side note, another argument in favor of a quantamental solution is the well-known fact that ML suggestions may be very tricky for humans to fully comprehend. This is the famous black-box case, in which the relationship found by the machine is too complex for a human mind to handle, a sort of “ML intuition” that, as with human thinking, is difficult to explain in plain English.
Here, humans can decide to follow the machine’s insight, even without having a full understanding of the procedure [5].

In any case, once the decision is made, it can always be managed and followed using the standard risk procedures: The starting point may be unclear, but the outcome is simply a change in the holdings of a fund, whose risk can be handled wisely like that of any other modification.
In summary, what defines the current status of the active stock-picking fund industry? A paradox.
On the one hand, active fund managers need to lower fees to stop the money outflow toward the passive industry; on the other hand, they must invest hugely in data analysis and in teams well equipped to handle this new task, which implies an increase in the cost base. Additionally, we have seen some crucial reasons why HI has a key role in any new AI-oriented fund offer. Rather than sticking solely to AI, the fund industry is enjoying an augmented form of intelligence by combining HI and AI. This is the right paradigm to embrace in our industry: AI will enhance HI, but HI will simultaneously cover some AI blind spots that are there to stay, whichever data set we consider.

Here, a harsh approach that accepts only pure data-driven decisions in stock picking is condemned to fail sooner or later. Two main reasons can be highlighted to explain this failure:
1. There are plenty of cases in which the data do not tell enough or, paradoxically, show too much: They are just too noisy.
This is why statistics was created in the first place, and data science cannot entirely dismiss its skeptical core: Critical thinking remains key to highlighting the limits and to building feasible and acceptable workarounds. Once again, we will soon publish a further paper devoted to these arguments, but generally speaking, data are like a streetlight at night: We can see a “reality” around it, but not the “real”, which, by definition, also includes the dark around it.
Besides, if plenty of data are used to analyze a share price, then its past movements are “explained” using this huge set. This implies plenty of possible trends and correlations, any of which could potentially keep running ahead: Which one should a fund manager choose?
A data-driven procedure may easily become a Pandora’s box that, instead of calming down a noisy market, increases complexity where prices need to be understood.
Such procedures are likely to add further noise to an already hyper-complex (hence noisy) market frame: How and why some movements occur and persist will become more and more complicated to explain.
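This multiplicity is easy to demonstrate: with enough candidate series, an apparently convincing correlation with a price path always turns up, even when every candidate is pure noise by construction. A toy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_series = 250, 2000

# One target "share price" and 2000 unrelated candidate predictors,
# all simulated as independent random walks.
target = np.cumsum(rng.normal(size=n_days))
candidates = np.cumsum(rng.normal(size=(n_series, n_days)), axis=1)

# Correlation of each candidate series with the target path.
corrs = np.array([np.corrcoef(target, c)[0, 1] for c in candidates])

best = np.abs(corrs).max()
print(f"strongest |correlation| found: {best:.2f}")
# Some candidate will look like a "driver" of the price, even though
# every predictor here is noise by construction.
```

A screening procedure run over thousands of features will therefore always surface “trends” to follow; nothing in the data alone says which, if any, is real.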
2. As simple as it may appear, and as in the case of autonomous cars driving alongside human drivers, the real issue with pure data-driven stock picking is that humans will still be acting in the market. The “competitive” presence of humans and AI-driven actors, like cars sharing the road, is likely to induce both accidents and misunderstandings and, paradoxically, new opportunities for fast drivers!
Indeed, is the human decision always made based on data? Certainly not. What about, for instance, decisions based on strategic, and thus perfectly rational, considerations? To forecast the outcome of our decisions we must assume other participants’ behavior: Are we sure we get it right? If a fund manager is bound to a purely discretionary strategy in his or her choices, what about taking action while expecting a series of “mechanical” reactions from the AI-driven “competitor” side? Nowadays, this already happens, with traders speculating on the regular readjustments of passive index trackers.
Furthermore, a more fundamental query remains: Are active fund powerhouses truly obliged to follow this path of investing so massively in AI? Are those strategies really well-founded?

In our Why Mr. Keynes still matters: The stock market and its dysfunction (here), we will present some arguments showing why active fund powerhouses ultimately cannot escape their data-AI-driven fate, given their objective of beating the market.

NOTES:

[1.] Just a few days ago the FT online reported that 2017 was a further record year for the passive fund industry, with global growth of 460 billion. All details here: https://www.ft.com/content/09cb4a5e-e4dc-11e7-a685-5634466a6915

[2.] For an overview and in-depth, analysis of the passive industry, I suggest that the reader consult my most recent article, https://www.linkedin.com/feed/update/urn:li:activity:6322720846750326784/

[3.] Emmanuel Tahar’s “Busting the Bot Myth: Why the Investing World Still Needs Humans” is available here: https://themarketmogul.com/busting-bot-myth-investing-humans/

[4.] For more details please refer to Joshua Maxey “The Rise of the Quant Fund: It’s Not Only About the Machines”, which can be found here: https://themarketmogul.com/rise-quant-fund/

[5.] This point has been analyzed in a one of my previous papers and in a very interesting article. My contribution can be found here: https://www.linkedin.com/feed/update/urn:li:activity:6294165616588922880/ The article, which describes stock-picking decision-making in one of the first hedge fund powerhouses when AI/ML was used alongside humans, can be found here: https://www.bloomberg.com/news/features/2017-09-27/the-massive-hedge-fund-betting-on-ai

Why Mr. Keynes still matters: The stock market and its dysfunction.

Keynes was the greatest economist of the last century. He was acute and most of the time right in his analysis. Why? Because he embraced a very modest and humble attitude: To understand what is going on, to provide advice and so improve the current state of affairs, you need to be present and involved. That is, you must patiently spend time carefully describing the mechanisms that define an economy. What matters is to face what is actually in use in the economy, not what we believe is used. A theoretical debate is useful for shedding light on human choices. However, institutions such as the stock market must be carefully described for what they are, because their mechanisms deeply influence agents’ choices!

In short, Keynes was like a biologist: What kills a virus today is unlikely to work tomorrow, because the virus will evolve into something different and therefore unknown. To understand the virus’s evolution, you need to carefully study its environment. As an observer, you must constantly be open to reviewing your analysis and starting from scratch!
It is not surprising, then, that at the core of his main contribution, The General Theory [1], he spent a chapter, the twelfth, titled “The State of Long-Term Expectation” [2], depicting the structures of the financial markets of his time.
Is this detailed description still useful for us?
Yes, it is for at least two reasons:

A. Keynes acknowledged a key factor: The stock market runs on a fuel called uncertainty. The only sure element you can take for granted in this market is that all participants share the same burden: the inability to forecast the future. This implies that predictions of the monetary streams that determine the fundamental value of a share are not certain, owing to our inability to forecast the events that will shape the firm’s future business environment. There is only one way to narrow down our uncertainty about a share price: You must forget your own estimate of the fundamental value of a share and focus on what the other participants think its value is.
Alternatively, the idea goes like this: You can easily have in your hands the most sophisticated price-forecasting mechanism on Earth, but if the other participants do not share your view, the price today and its evolution ahead are likely to differ from your forecast. The market is a social institution, not an abstract mechanism in which quantities or values are judged by an impartial auctioneer: To cope with it, you need to foresee others’ estimates because, ultimately, these are the factors that will determine prices, and thus values, and their dynamics.

Basically, the central message underlying the Keynesian beauty contest analogy is twofold: On the one hand, the likely winner in the stock market game will be the one who evaluates better than others where the majority thinks the “right” prices are. On the other hand, as a participant, you have to live with your time; that is, investment ideas and strategies that were valid in the past are likely to fail today, simply because those procedures have become common knowledge among participants [3]. Today’s financial literature often refers to this process with the term “model decay”: The underlying idea is always the same; a winning strategy will always be copied and arbitraged away, becoming “standard”. On that side, there is nothing more frustrating and useless than nostalgia for old methods and investment procedures. Given that the majority of participants are interested in maximizing their returns, past successful ideas are not gone; they are integrated into the background, routinely applied by various participants. Therefore, they are simply unfit to be alpha generators.
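The decay can be sketched with a toy model (all numbers hypothetical): assume the strategy’s excess return shrinks in proportion to the share of participants who have copied it, with adoption following a logistic curve:

```python
import numpy as np

# Toy illustration of "model decay" (all figures invented): a strategy's
# excess return (alpha) shrinks as the fraction of market participants
# who have copied it grows along a logistic adoption curve.
years = np.arange(0, 15)
adoption = 1.0 / (1.0 + np.exp(-(years - 7)))   # fraction of copycats
alpha = 5.0 * (1.0 - adoption)                  # initial edge: 5% a year

for y, a in zip(years, alpha):
    print(f"year {y:2d}: alpha ~ {a:4.1f}%")
# Once the idea is common knowledge, the edge is gone: it has been
# absorbed into the market "background".
```

The exact curve is irrelevant; the point is the monotone erosion, which is Keynes’s common-knowledge argument restated in quant vocabulary.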

Now, the active fund manager, who offers an investment vehicle that promises to beat a given benchmark index, must track the market as closely as he or she can and, simultaneously, must depart from it through a series of “discretionary” choices [4].
It is this departure, this willingness to seize opportunities while strictly observing others, that represents the fund manager’s real challenge and added value. Given this duality, the active fund’s success will entirely depend on two main factors:

(i) What others are doing, which is basically the way in which agents are selecting, judging stocks and finally acting given the set of opportunities at their disposal.

(ii) The pool of opportunities available for “departing” from the market, which in the case of a fund means from its chosen benchmark.
Presently, as an example of (i), we can definitely consider the current huge investment by active fund managers in AI-ML.

Those procedures for detecting opportunities are not yet commoditized, so they are viewed as a promising source of signals about unexploited actions or informational gaps among participants. Today, a fund manager is ready to run a stringent AI-ML inquiry on the available data and then use the results to bet on the ability of the market, that is, ultimately, of the other participants, to reabsorb those gaps, which would imply a reward as first mover.

Clearly, as time goes by, the standardization of procedures and data will continue, and thus these opportunities will decrease. Investment in AI-ML-Big Data sounds like a valid decision, but you should not count on it too much in the medium or long term.
Despite having a larger potential, the AI-ML-Big Data wave is similar, on that side, to the high-frequency trading wave, and to any generalization of a decision process in the stock-picking domain: Time means standardization, and it will always work against new techniques. Ultimately, we should always bear in mind the words of Ben Graham, a Keynesian fellow, so to speak, who, just before he died (1976), was asked whether the detailed analysis of individual stocks, a tactic he became famous for, was still a rewarding option: “This was a rewarding activity, say, 40 years ago, when our textbook was first published. But the situation has changed a great deal since then.” [5]

We cannot pretend to offer an investment vehicle without considering today’s market forces, chiefly the information, technologies and procedures in use: Many ways exist to select opportunities, and those methods are constantly evolving; you need to be up to date to have a chance of winning your battle against the market. There is no problem on that side. Still, one needs to evaluate the real factor that, whether in the short term or the long term, represents the main deal of any active fund proposition: the presence of numerous, or at least enough, opportunities for “departing from the market”, as noted in point (ii) above.

The fund manager’s hope of beating the market is related to having a pool of opportunities “large” enough.
The key queries become the following: Are we sure that today’s pool is “large” enough?

What exactly do we mean by “large”? Is it not rather a question of the availability of enough “fresh”, dynamic stocks in the market? Is this dynamism not the real indicator of stock market health? And finally, if that is the case, what happens to the fulfillment of the market’s social role when it becomes less dynamic?
We should always remember, as in the case of biological evolution, that if an institutional framework loses its primary function, another institutional arrangement appears and gradually takes its place.
Few are interested in these questions, when this is exactly where we should start if we want to judge and evaluate the stock market. The last part of this article is devoted to outlining the answers to these questions and thus depicting the contours and the limits of the new institutional arrangement already partially in place.

B. The beauty contest analogy was encapsulated in a section called “The Inducement to Invest”, where Keynes was investigating the motivations to invest. As discussed in point A, Keynes alerted us that the stock market can easily become a purely sophisticated game among insiders, in which participants anticipate each other’s moves and the subsequent price movements by analyzing all sorts of information. It is precisely in this section that Keynes presented the famous analogy of the stock market appearing as a casino in the public eye: The strategic choices made by the market’s players easily appear incomprehensible to the layperson.

Nonetheless, despite this negative outlook from the market’s outsiders, the system will always manage to provide a price structure, helping to efficiently allocate people’s savings and thus to guarantee cheap capital to the “right” producers, that is, those who are expected to be in line with future social needs. After all, the key benefit of a capitalistic system is that markets are institutions that allow optimal social choices to be determined and properly financed simultaneously.
Equivalently, economic growth will be dictated by people’s aspirations: In particular, savings will be allocated to generate the capital goods used to produce the items that will be in heavy demand because, a priori, they will increase people’s well-being.

What was Keynes’s real worry about this institutional arrangement?

To be honest, I believe that his main concern was about seeing everyone participating in the game. His view was quite straightforward: As long as the participants were people with enough wealth to occasionally absorb significant losses, the game was harmless to the economy.
No one is free from investment misjudgments, because no one knows the future, what the economy will be like or what will happen, i.e., what the main activities ahead will be. By investing, you are taking a bet, which you can definitely lose!
On that point, the trouble would start only once a number of ordinary people entered without a buffer to absorb losses. These participants would borrow to have a say in the market. Rightly, Keynes saw coming a spiral of private debt leverage that would be dangerous because it was borne by weak shoulders. It is no surprise that the brutality and widespread impact of the 1929 stock crash were also due to the burst of a generalized debt spiral (as in 2007-08, by the way). This last aspect is fundamentally one of the core facets of a free economy, and nothing has changed with time: We have just changed the scale; the numbers associated with the different forms of leverage are simply bigger, more out of control and more difficult to evaluate in their full, dangerous potential.

Despite this despicable debt-related feature, Keynes did not castigate the system. Clearly, this magnanimous attitude towards the capitalistic economy was, on one side, historically based: Keynes had no idea of the cost in terms of negative externalities, e.g. environmental costs, of an economic growth process set on “biased” prices. On the other side, his analysis was more concerned with what to do next and how to repair an economy in trouble.

Generally speaking, excessive euphoria followed by slumps and depressions was part of an overall picture defining the essence of a capitalistic economy. There were definitely measures to limit excesses, e.g. less generalized leverage, and to fight slumps, e.g. the active intervention of the government. But what matters more is that Keynes took the centrality of the stock market for granted. The stock market was fundamentally the right place to guide economic social choices, because a very large set of alternative projects were evaluated and available to investors. To be complete, Keynes also took another aspect for granted: Firms had the possibility of financing their projects using debt, but this route was not viewed as a valid long-term solution. Basically, for long-term projects an entrepreneur was almost forced to consider the stock market solution. As a side note, we should stress that, at least in several developed countries over the last few decades, firms’ debt financing has been excessively favored fiscally compared to equity financing schemes.

In his time, a dynamic entrepreneur, at the moment of financing a project, would quite certainly share the plan with the market by going public.
It is specifically on this point that our world differs substantially from the Keynesian one. The size and the role of the allocation of capital outside the market, i.e. without a public offering on a centralized marketplace, have totally changed since his time, and this has a direct consequence on the number of fresh opportunities in the markets.
Three main factors can be considered to explain this sharply different ecosystem:

1- In Keynes's time, the stock market was “alive” and at the core of the economy. New firms in need of capital, and older ones already quoted, were actively using the market, for instance by publicly issuing new shares. Nowadays, this centrality is not fully gone, but it is seriously fading away. Indeed, considering the biggest and most developed stock market of all, the US stock market is literally “drying up”. This is due to a series of phenomena that include fewer IPOs, notably for very young small-cap firms (data from the OECD Business and Finance Outlook 2015, pp. 211-14; the latest data for the last two years confirm the trend), as well as constantly high numbers of M&A deals, stock buy-back operations and, finally, delistings of small and medium caps [6].

Here, we can easily see one reason for the difficulty an active stock fund has in beating the market. Despite the efforts to find fresh, not-yet-correctly-priced stocks, the bundle of available opportunities is shrinking over time. It is like having to catch a fish in a barrel: it is a lot easier to catch one if the barrel contains plenty of fish rather than a few. Clearly, in the case of the US, we cannot forget that there is also a lot of competition to catch the remaining fish, which drastically reduces each fund manager's chances of getting the right one and thus reaching the frantic goal of beating the market.
What is really interesting, and rarely commented on, is that the US story does not generalize so easily to other old, developed capitalistic economies. In Europe, for instance, we observe that active funds beat the market more easily [7].

Personally, we do not believe that this European performance is due only to less intense competition among fund managers. In our view, the result is also partially explained by a less dramatic “drying up” phenomenon on this side of the Atlantic. The presence of dynamic medium and small firms on European exchanges is less under threat than in the US; for instance, M&A activity is less intense in Europe due to national and cultural barriers.
Besides, the US and Europe differ in another key respect: the organization and professionalism, mainly in terms of money availability, of the out-of-the-market framework. These features constitute our second and third points.

2- In the US there is a huge web of out-of-the-market financial institutions, from private equity solutions to venture capital (and recently ICOs) and business angel structures, which can easily provide important capital injections into new firms. After the 2007-08 crisis, the extremely loose Fed monetary policy has certainly favored the availability of out-of-the-market money ready to be invested in this start-up ecosystem. On that note, we cannot exclude that a too-brutal end to the loose monetary policy would generate a sharp decrease in this form of financing: the Fed's very cautious tapering approach can be explained by this concern.
What is clear is that under these conditions there is no need to go public quickly; even at a late stage, going public is no longer crucial! Going public is a late option, often delayed and seriously considered only once the firm's founder wants to monetize part of his/her success.

3- US big firms, more so than European ones, are behaving more and more like venture capital organizations. At a time when the innovation process has reached a stunning pace, big firms are perhaps, paradoxically, the best equipped to mitigate this breathtaking constraint, because they can figuratively build (defensive) frameworks around themselves. Big firms become framework builders by surrounding themselves with dynamic pools of start-ups.

The key question remains:

Is this new institutional arrangement a better solution or a worse one from an economic point of view?
This is an extremely difficult question to consider. We can only guess at some insights:

We should always remember that, at any point in time and whether or not AI and Big Data are available, we have no idea about, for instance, the steps and the path that will lead us to our next major technology, which will be the real force reshaping our economic system.

It is the Blade Runner paradox, in reference to the original film released in 1982: the hero of this famous science-fiction film communicates with a replicant using telephone boxes!

When we imagine the future, we think about big changes (replicants, flying cars), but we are less ready to consider the bundle of key technological breakthroughs that turn out to be the real modifiers of our lives. Mobile web technologies are the best example ever on that score, because they emerged from an unpredictable marriage between new telecommunication infrastructure, phone-industry improvements and web-IT technologies, all available and released within a short period of time!

This misjudgment is everywhere, and it thus influences people, and AI machines (which look at the past by definition), in financial markets as well. Building the economic future is a huge and messy stop-and-go game, in which the final result is also determined by an ultimate arbiter: luck. That is to say, often a technological advancement can be achieved because several other small parts are released simultaneously, or are available but used for other purposes. It is mainly this shift in purpose that explains why engineers are extremely creative human beings, and also why the timing of success of a promising commercial idea is so difficult to forecast.

Now, if we think of a centralized market as a fantastic institutional framework that aggregates the intelligence of all participants, then a priori, despite the noise of excessive speculative games, the “fair” value will be determined and investors may use it as a guide. The share price should fairly reflect a wise, intelligent and rationally based consensus about the future quality of the firm's proposed production.
A priori, being a young innovative firm quoted early on the stock market implies having its shares available to everyone and submitting to some high-quality “standard” in terms of the financial results displayed.
The lottery ticket is available to all participants and, in particular, to the fund manager eager to beat the market.

Now, this is all in theory. In practice, overly harsh short-term concerns may simply prevent innovative entrepreneurs from focusing on their long-term goals. Today, the market generally asks for money too quickly, and thus entrepreneurs may end up in an optimization trap; that is, to paraphrase Alan Perlis, they may be forced to choose between “optimization and evolution” [8].
The centrality of the stock market is at risk due to excessive pressure on short-term objectives. The market is the reign of optimization, more than ever before. Firms devote their R&D budgets to development efforts and far less to research plans; everything is done to obtain quick and high monetary returns for shareholders, and groups are too often seen and treated as cash cows, basically.
This is why research efforts are likely to be better assessed, understood and financed outside the market, as are any projects with long-term prospects and low short-term returns.
However, this framework has at least two major shortcomings:

(i) Being out of the market currently implies being unavailable to the majority of people's savings, savings which they “happily” invest in tracking the market's momentum, for instance by buying an index-tracking ETF, which means favoring big companies and disparaging small ones!
Now, the money to finance the “evolution” is likely to be in short supply, even though existential worldwide risks lie ahead that could be solved only by collectively embracing and accelerating the effort at “evolution”.
On that point, the truly sad part is to see institutional investors (i.e. our pension funds) being prevented, by governments' overly strict risk rules, from diversifying their portfolios into these out-of-the-market activities: not really a sign of intergenerational solidarity, nor a coherent message, since environmental concerns are endlessly repeated by world political leaders, with few exceptions of course!

(ii) At the same time, the out-of-the-market allocation process needs to be protected and helped. For instance, if current market players enter this out-of-the-market world with the same eagerness and short-term habits, the “evolution” is likely to be restricted and squandered on lunch money.
Nowadays, the stock market lacks real evolution giants: we are financing new digitalized distribution platforms, keeping oil companies among the big capitalizations and guaranteeing stellar quotations to huge AI-driven advertising powerhouses like Facebook and, partially, Google, while grid energy storage and distribution firms and other innovative de-pollution firms, such as Tesla, are scattered across companies' portfolios and thus not properly financed and pushed to the front of the system!
I am perhaps exaggerating, but we are too focused on analyzing and pricing assets based on current sales and costs, and not enough on investments and plans (with only a few exceptions). The future is simply too undervalued and unseen. If, as is true, the best way to have an amazing future is to build it, then today's market activity is just too focused on keeping the present.

The unacknowledged market crisis is a lack of appetite for risk; it is the reign of optimization, which prevents us from accelerating and pushing through private investments.
AI/ML, or any other digitalized analytic technique, will not change this gloomy picture one iota: the spiral of asking firms to distribute more money to push consumption, and thus confirm the sales figures the computers are asking for, is tragically already in place!
Nor will AI be the device that helps to challenge this “keep the present running” spiral: AI is an amazing tool for detecting trends in data, and its logical decision-making is based, basically, on keeping those trends running.
How to solve this problem? Two axes seem feasible.

First, on the political side, we should expect a change in fiscal rules to favor access to out-of-the-market vehicles for both institutional and private investors. This fiscal reform should include all sorts of ethical/long-term environmental projects: if our savings are used in these domains, we should receive a fiscal discount whether the investment is made in the market or out of it. Meanwhile, all the fiscal holes and blind spots which still exist today and benefit polluting activities must be removed. It is definitely time to accelerate the movement and seriously move into our next economic paradigm.

Second, the private banking industry should reshuffle its offer to take advantage of this new fiscal environment. On that note, the private banking industry should urgently take on board and develop the ability to provide as large a spectrum of investments as possible to its clients.
In the future, millennials will not readily accept in-the-market solutions only; they are likely to ask for hybrid solutions, where classical in-the-market funds are mixed with (a priori less liquid, therefore riskier) out-of-the-market solutions. They may ask for direct participation in those investments, which may also cover and satisfy their ethical standards, alongside a more cautious and picky selection of market stocks: the first criterion in choosing how to invest is likely to become durability and environmental concerns, not just short-term returns.

In any case, a pedagogical effort will be required: on a planet finally recognized by all its wealthy inhabitants as being finite, that is, as having finite resources and production capabilities, unhealthy and unrealistic rates of return on investment should simply be banned. Moreover, if tailored and more accurate selections in and out of the market must be defined, then clients should be ready to pay the price.
On that side, we should also see new thematic stock funds flourish, devoted to investments in firms with long-term objectives. But, maybe even more importantly, at the level of the regulator we should see some anti-trust enforcement coming back: some firms are just too big, and they are preventing free ideas from being properly financed!

This message needs to enter the financial world once and for all, and the proper method is to use fiscal incentives wisely.
One of Keynes's implicit messages was also the following: if we are like biologists studying the dynamics of a virus, we can definitely try to modify the environment in which the virus develops.
But to modify it, we must start thinking. Therefore, we should stop referring to data as a given. We must, instead, constantly challenge them using a critical and skeptical approach: only then are we thinking, that is, really trying hard to solve a given problem.
Now, governments are the ultimate source of legal power: laws and international agreements are powerful weapons if private agents clearly see governments' will and commitment.
We must abandon, once and for all, the carpe diem (or “après moi le déluge”, to use a famous French expression) ideology, which unfortunately still defines the cornerstone of our collective choices!

In conclusion, we should remember that one of the key roles of the regulator should always be to try to ensure that the right mix of long- and short-term considerations is followed by the actors in charge of allocating people's savings and, therefore, of creating our stock of capital: historically, if something is unbalanced, then the solution is to try to fix it via direct fiscal intervention.

NOTES:

[1.] The correct and complete title of this book is “The General Theory of Employment, Interest and Money”, Keynes, John Maynard (1936). New York: Harcourt Brace and Co.

[2.] It is very interesting to note that this chapter represents the core of his Book IV, “The Inducement to Invest”: Keynes explicitly recognized the centrality of the role of finance in a capitalistic economy. Essentially, how finance works and its analysis cannot be treated as a minor or technical matter; understanding this is key if we want to have a proper view of the capitalistic ecosystem and of the forces shaping its evolution. Finance, by enhancing ex nihilo creation of capital, is the key place to measure the rhythm of transformation in the economy.

[3.] It is important to note that Keynes lived on a very different financial planet from our current one: for instance, at that time there was no debate about an individual stock's or a portfolio's performance versus a benchmark index. The index was already there, but it was used simply as a way to measure the evolution of the market as a whole. The key problem at that time was to pick the right stock, one that would guarantee a good return over a certain, often long-term, horizon. Clearly, we can easily guess that in this social environment short-term returns were less valued than long-term ones. But, once again, this is part of what we encapsulate with the expression “being there”, or handling our own time.

[4.] “Discretionary” has become an ambiguous term nowadays. A fund can be classified as active, stock-picking, despite following a sort of mechanical “tit-for-tat” stock-selection strategy. A major example is the so-called smart beta fund family.

[5.] I have to acknowledge and express my gratitude to the content of the article “The Unsolvable Puzzle” by Morgan Housel, available here http://www.collaborativefund.com/blog/the-unsolvable-puzzle/ , where Graham’s quote was first used.

[6.] For a full discussion, see my last paper: https://www.linkedin.com/feed/update/urn:li:activity:6322720846750326784/

[7.] We refer to a UBS paper “Active vs Passive: Why Does the Myth Persist that Passive Performs Better than Active in Europe?”, https://neo.ubs.com/shared/d1Hfi6ic5mMgrF/

[8.] The original Alan Perlis quote is “optimization hinders evolution”, from his “Epigrams on Programming” (1982).

Article I: The active stock picking fund industry and its future: An attempt to depict the current status.

AI-KEYNES-1

A new year is coming, and the available data seem to confirm that the success of the passive stock fund industry is founded on an additional monetary outflow from the active fund industry. Apparently, the sunny days of some are the rainy days of others. What is certain is that the capital inflow into this sort of vehicle seems as unstoppable as the bull stock market itself. In addition, during the summer some active stock-picking fund managers became vocal about their daily struggle to keep their clients. What needs to be done to save the active stock fund business? Should managers simply await “the great correlation collapse”, as analysts at US money manager Bernstein call the unprecedented synchronized stock movements that have characterized the last ten years? On that note, active funds' results for the first half of this year seem to confirm that stock correlations have decreased, as the stocks most favored by active stock-picking funds finally performed better than their benchmarks.

But will this be enough? Will these performances, if confirmed at the end of this year, last long enough to bring back a portion of the money?
In response to these existential threats, most active fund groups are structuring their answers around a central focal element: The only solution available is to keep investing in technology and data to beat the market.

From an outsider’s purposefully naïve perspective we can then raise a further, deeper series of questions:
Are we sure this choice defines the right answer? Why are fund managers considering only these two pillars?
Two main and, apparently, unquestionable facts seem to reinforce the idea that this is the correct path to follow:

A. The active fund industry needs to lower its fees because passive index-tracking funds are, by definition, very cheap [1].
To fulfill this requirement, there is only one option: generally speaking, fund managers must use software instead of humans. In other words, fund managers must increase the proportion of their choices made via quantitative digital data analysis, which implies increasing the share of systematic decisions. A priori, by doing so, more efficient actions, that is, actions based on larger sets of information, would be implemented while keeping costs down.

B. Simultaneously, by increasing its technical endowment, your fund's organization would become ready to embrace the AI-driven stock-picking revolution. You will then automatically be ready to absorb and fully profit from the next data wave: the huge amount of often highly granular, but also highly unstructured, Big Data collected mainly thanks to Web 2.0 services.

Once again, from a purely outside perspective, both points seem to boil down to a simple and general idea: to beat the market, to outperform your benchmark year after year, that is, to create the famous alpha as an active fund manager, you need more data and more data-analysis capabilities. Nowadays, “in data we believe” seems to be the only motto widely accepted among active fund managers, instead of a healthier “in the future we believe” (see the final considerations at the end of our second article for a discussion of this sad state of affairs). That said, a well-known shortcoming is widely acknowledged among all fund managers and experts:
Although they have access to an unbelievable and fast-growing amount of digital data, data is, by definition, not information. A detailed discussion of this fact would take too long and is too complex (e.g. the famous Clifford Stoll quote); we will not fiddle while Rome burns.
What matters is that two main insights arise once we acknowledge the existence of a difference between data and information:

i) Data need to be assembled and ordered to be properly analyzed, in preparation for being encapsulated in an informational framework. Basically, this is loosely defined as the process of extracting information from data.
Often, this implies answering questions like the following: Is the raw data sufficient to describe the phenomenon in informative terms? What is the most appropriate way to process the data? Which intervals should be used to analyze them? Should we transform the raw data (e.g. into rates of growth) to obtain information? Many other questions in this vein exist and need to be explored.
Clearly, this sort of investigation is not free of cost. The cost arises from, to use a buzzword, the process of data mining on huge and unstructured data sets, and it is likely to become more and more significant once web-based Big Data sets come to be considered.
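The questions above can be made concrete with a minimal sketch (the asset and all numbers are hypothetical, purely for illustration): the same raw price series yields a different "informational" picture depending on the transformation and the interval chosen.

```python
# Minimal sketch: the same raw data, transformed two ways, gives two
# different "informational" pictures. Prices and intervals are hypothetical.

def rate_of_growth(series, lag=1):
    """Percentage growth over `lag` periods."""
    return [(series[i] - series[i - lag]) / series[i - lag]
            for i in range(lag, len(series))]

# Hypothetical monthly closing prices of some asset.
prices = [100.0, 102.0, 99.0, 103.0, 108.0, 104.0, 110.0]

monthly = rate_of_growth(prices, lag=1)    # month-over-month growth
quarterly = rate_of_growth(prices, lag=3)  # 3-month growth, same raw data

print([round(r, 3) for r in monthly])      # noisy, alternating signs
print([round(r, 3) for r in quarterly])    # smoother, all positive here
```

The point is simply that neither output is "the" information: choosing the lag already embeds an analytical judgment about the phenomenon being described.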

ii) Some information cannot and will never be available in a digital data version or, more precisely, will never be obtained by working on data sets. Concerning this problem, the formula developed by Emmanuel Tahar is spot on and deserves to be quoted: “After all, out of all information out there, only some of it is indexable [i.e., can be read and used by computers]. Out of all that is indexable, only some of it is indexed [i.e., ready for a digital analysis]” [2].

In several hedge funds where the most sophisticated variant of AI, Machine Learning (ML), has already been used for years, these concerns have already been addressed and answered:
They use a blended solution of AI alongside Human Intelligence (HI) to guarantee that the quant program's AI-ML suggestions are always, ultimately, validated by human eyes and thought. You might think this is common wisdom: by definition, not all information can be first extracted and then processed by software. Therefore, AI choices cannot encapsulate the whole “present informational reality”. There is always a mechanical, tit-for-tat part in any AI solution, a lack of finesse, because AI is not fully present and therefore not involved.
Some authors have nicknamed these sorts of funds’ strategies “quantamental”, which combines the traditional stock-picking skills of fund managers with the use of data and computing power [3].

I do not fully agree with this labeling: HI is there because some information that needs to be considered cannot be extracted from data. More than a combination of insights, a human is the only one who can really feel the presence and the weight of what is going on and then make a holistic decision, full stop.

As a side note, another argument in favor of a quantamental solution is the well-known fact that ML suggestions may be very tricky for humans to fully comprehend. It is the famous black-box case, in which the relationship found by the machine is too complex for a human mind to handle: a sort of “ML intuition” that, as with human thinking, is difficult to explain in plain English.
Here, humans can decide to follow the machine's insight even without fully understanding the procedure [4].

In any case, once the decision is made, it can always be managed and followed using standard risk procedures: the starting point may be deemed unclear, but the outcome is simply a change in the fund's holdings, whose risk can be handled wisely like any other modification.
In summary, what defines the current status of the active stock-picking fund industry? It is a paradox.
On the one hand, active fund managers need to lower fees to stop the money outflow toward the passive industry; on the other hand, they must invest hugely in data analysis and in teams well equipped to handle this new task, which implies an increase in the cost base. Additionally, we have seen some crucial reasons why HI has a key role in any new AI-oriented fund offer. Rather than sticking solely to AI, the fund industry is enjoying an augmented form of intelligence by combining HI and AI. This is the right paradigm for our industry to embrace: AI will enhance HI, while HI will compensate for some AI blind spots that are there to stay, whichever data set we consider.

Here, a harsh approach that accepts only pure data-driven stock-picking decisions is condemned to fail sooner or later. Two main reasons can be highlighted to explain this failure:

a. There are plenty of cases in which the data do not tell enough or, paradoxically, show too much. In the latter case, they are just too noisy. We should always bear in mind the words of a leading statistician, Nate Silver: “There is no such thing as unbiased data. Bias is the natural state of all data.” This is why statistics was created in the first place, and data science cannot entirely dismiss its skeptical core: critical thinking remains key to highlighting the limits and building feasible, acceptable workarounds.
Equivalently, if plenty of data are used to analyze a share price, then its past movements are “explained” using this huge set. This implies plenty of possible trends and correlations, any of which may potentially keep running ahead: which one should a fund manager choose? A data-driven procedure may easily become a Pandora's box that, instead of calming down a noisy market, increases complexity when prices need to be understood.
Such procedures are likely to add further noise to an already hyper-complex (hence noisy) market frame: how and why some movements occur and persist will become more and more complicated to explain.
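The multiplicity problem hinted at above can be illustrated with a small, purely synthetic simulation (all series here are random noise, not market data): scan enough unrelated series against a "price" and some of them will appear to explain it by pure chance.

```python
# Synthetic sketch of spurious correlation: with enough candidate series,
# at least one "explains" a random price path by chance alone.
import random

random.seed(42)

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# One "share price" path and 500 candidate explanatory series, all pure noise.
price = [random.gauss(0, 1) for _ in range(60)]
candidates = [[random.gauss(0, 1) for _ in range(60)] for _ in range(500)]

best = max(abs(corr(price, c)) for c in candidates)
print(f"best spurious |correlation| among 500 random series: {best:.2f}")
```

None of the candidate series carries any information about the price, yet the best-scoring one will look persuasive: exactly the Pandora's box the fund manager faces when many trends compete for attention.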

b. As simple as it may appear, as in the case of autonomous cars driving alongside human drivers, the real issue with pure data-driven stock picking is that humans will still be acting in the market. The “competitive” presence of humans and AI-driven players, like cars sharing the road, is likely to induce accidents and misunderstandings and, paradoxically, new opportunities for fast drivers!
Indeed, is a human decision always made based on data? Certainly not. What about, for instance, decisions based on strategic, thus perfectly rational, considerations? To forecast the outcome of our decisions, we make assumptions about other participants' behavior: are we sure to get it right? If a fund manager follows a purely discretionary strategy in his/her choices, what about taking action while expecting a series of “mechanical” reactions from the AI-driven “competitor” side? Furthermore, a more fundamental query remains: are active fund powerhouses truly obliged to follow this path of investing so massively in AI? Are those strategies really well founded?

In our second article, we will present some arguments showing why active fund powerhouses ultimately cannot escape their data-AI-driven fate, given their objective of beating the market.

Clearly, nonconventional alternatives exist and can be defined, but to define them we must first acknowledge an unprecedented feature of our economy: namely, the stock market has historically occupied a central role as the ultimate allocator of people's savings, but this role is now challenged by the existence of a powerful out-of-the-market frame.

This ongoing complexity has a series of key consequences that need to be considered if we want to judge the active fund industry and its performance fairly. All this will be done once I have provided a clear illustration of why and how the Keynesian stock market analysis is still valid and helpful. The fact that the centrality of the stock market is fading will allow us to end our analysis by pointing out the main challenges ahead. These challenges are key both for the fund industry and for our economy as a whole.

NOTES:

[1.] For an overview and in-depth analysis of the passive industry, I suggest that the reader consult my most recent article, https://www.linkedin.com/feed/update/urn:li:activity:6322720846750326784/

[2.] Emmanuel Tahar “Busting the Bot Myth: Why the Investing World Still Needs Humans” is available here: https://themarketmogul.com/busting-bot-myth-investing-humans/

[3.] For more details please refer to Joshua Maxey “The Rise of the Quant Fund: It’s Not Only About the Machines”, which can be found here: https://themarketmogul.com/rise-quant-fund/

[4.] This point has been analyzed in one of my previous papers and in a very interesting article. My contribution can be found here: https://www.linkedin.com/feed/update/urn:li:activity:6294165616588922880/ The article, which describes stock-picking decision-making in one of the first hedge fund powerhouses where AI/ML was used alongside humans, can be found here: https://www.bloomberg.com/news/features/2017-09-27/the-massive-hedge-fund-betting-on-ai

Article II. The active fund industry facing its fate: or why Mr. Keynes still matters.

AI-KEYNES-2

Keynes was the greatest economist of the last century. He was acute and, most of the time, right in his analysis. Why? Because he embraced a very modest and humble attitude: to understand what is going on, to provide advice, and so to improve the current state of affairs, you need to be present and involved. That is, you must patiently spend time carefully describing the mechanisms that define an economy. What matters is to face what is in use in the economy, not what we believe is in use. A theoretical debate is useful for shedding light on human choices. However, institutions, such as the stock market, must be carefully described for what they are, because their mechanisms deeply influence agents' choices!

In short, Keynes was like a biologist: what kills a virus today is unlikely to work tomorrow, because the virus will evolve into something different and therefore unknown. But to understand the virus's evolution, you need to carefully study its environment. As an observer, you must constantly be open to reviewing your analysis and starting from scratch!
It is not surprising, then, that at the core of his main contribution, The General Theory [1], he devoted a chapter, the twelfth, titled “The State of Long-Term Expectation” [2], to depicting the structure of the financial markets of his time.
Is this detailed description still useful for us?
Yes, it is for at least two reasons:

A. Keynes acknowledged a key factor: the stock market runs on a fuel called uncertainty. The only sure element you can take for granted in this market is that all participants share the same burden: the inability to forecast the future. This implies that predictions about the monetary streams that determine the fundamental value of a share are uncertain, due to our inability to forecast the events that will determine the firm's future business environment. There is only one way to narrow down our uncertainty about a share price: you must forget your own estimate of the fundamental value of a share and focus on what the other participants think its value is.
Alternatively, the idea goes like this: you can easily have in your hands the most sophisticated price-forecasting mechanism on Earth, but if the other participants do not share your view, the price today and its evolution ahead are likely to differ from your forecast. The market is a social institution, not an abstract mechanism in which quantities or values are judged by an impartial auctioneer: to cope with it, you need to foresee others' estimates, because, ultimately, these are the factors that will determine prices (thus values) and their dynamics.

Basically, the central message underlying the Keynesian beauty contest analogy is twofold: On the one hand, the likely winner in the stock market game will be the one who judges better than others where the majority thinks the “right” prices are. On the other hand, as a participant, you have to live with your time; that is, investment ideas and strategies that were valid in the past are likely to fail today simply because those procedures have become common knowledge among participants [3]. Today’s financial literature often refers to this process with the term “model decay”: The underlying idea is always the same; a winning strategy will always be copied and arbitraged away, so as to become “standard”. On that side, there is nothing more frustrating and useless than nostalgia for old methods and investment procedures. Given that the majority of the participants are interested in maximizing their returns, past successful ideas are not gone; they are integrated in the background, routinely applied by various participants. Therefore, they are simply unfit to be alpha generators.
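The contest analogy above has a standard textbook formalization, the “guess p times the average” game, which the sketch below simulates. The function name and numbers are illustrative, not from the article: each extra level of reasoning about what *other* participants will guess pushes the winning guess lower, a toy picture of why anticipating others, rather than fundamentals, drives the outcome.

```python
# Illustrative sketch (assumption: the classic "guess 2/3 of the average"
# formalization of the Keynesian beauty contest, on the interval [0, 100]).

def beauty_contest_levels(p=2/3, start=50.0, levels=5):
    """Return the guesses of level-0 through level-`levels` reasoners.

    Level 0 naively guesses `start` (the midpoint of [0, 100]);
    level k best-responds to a population of level k-1 reasoners,
    i.e. guesses p times the average it expects.
    """
    guesses = [start]
    for _ in range(levels):
        guesses.append(p * guesses[-1])  # best response to the previous level
    return guesses

if __name__ == "__main__":
    for k, g in enumerate(beauty_contest_levels()):
        print(f"level-{k} guess: {g:.2f}")
```

The guesses shrink geometrically toward zero, which is the game’s unique equilibrium: once a level of reasoning becomes common knowledge, it stops being an edge, the same dynamic the text calls “model decay”.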
Now, the active fund manager, who offers an investment vehicle that promises to beat a given benchmark index, must track the market as closely as he/she can and, simultaneously, must depart from it through a series of “discretionary” choices [4].
It is this departure, this willingness to seize opportunities while strictly observing others, which represents the fund manager’s real challenge and added value. Given this duality, the active fund’s success will entirely depend on two main factors:

(i) What others are doing, which is basically the way in which agents are selecting and judging stocks, and finally acting, given the set of opportunities at their disposal.

(ii) The pool of opportunities available for “departing” from the market, which in the case of a fund means from its chosen benchmark.
Presently, as an example of (i), we can definitely consider the current huge investment by active fund managers in AI-ML.
Those procedures for detecting opportunities are not yet commoditized, so they are viewed as a promising source of signals about unexploited actions or informational gaps among participants. Today, a fund manager is ready to run a stringent AI-ML inquiry into the available data and then use the results to bet on the ability of the market, that is, ultimately, of the other participants, to reabsorb the gaps, which would imply receiving a reward as first mover.
Clearly, as time goes by, the standardization of procedures and data will continue and, thus, these opportunities will decrease. Investment in AI-ML-Big Data sounds like a valid decision, but you should not count on it too much in the medium or long term.
Despite having a larger potential, the AI-ML-Big Data wave is similar, on that side, to the high-frequency trading wave, as well as to any generalization of a decision process in the stock-picking domain: Time means standardization, and it will always work against new techniques. Ultimately, we should always bear in mind the words of Ben Graham, a Keynesian fellow so to speak, who, just before he died (1976), was asked whether a detailed analysis of individual stocks, a tactic he became famous for, was still a rewarding option: “This was a rewarding activity, say, 40 years ago, when our textbook was first published. But the situation has changed a great deal since then.” [5]

We cannot pretend to offer an investment vehicle without considering today’s market forces, chiefly the information, technologies and procedures in use: Many ways exist by which you can select opportunities, and those methods are constantly evolving; you need to be up to date to have a chance of winning your battle against the market. There is no problem on that side. Still, one needs to evaluate the real factor that, whether in the short term or in the long term, represents the main issue of any active fund proposition: the presence of numerous, or at least enough, opportunities for “departing from the market”, as noted in point (ii) above.

The fund manager’s hope of beating the market is related to having a pool of opportunities “large” enough.
The key queries become the following: Are we sure that today’s pool is “large” enough?

What exactly do we mean by “large”? Is it not more a question of the availability of enough “fresh”, dynamic stocks in the market? Is not this dynamism the real indicator of stock market health? And finally, if that is the case, what happens to the fulfillment of the market’s social role when it becomes less dynamic?
We should always remember, as in the case of biological evolution, that if an institutional framework loses its primary function, another institutional arrangement appears and gradually takes its place.
Few are interested in those questions, when this is exactly where we should start if we want to judge and evaluate the stock market. The last part of this article is devoted to outlining the answers to these questions and thus depicting the contours and the limits of the new institutional arrangement partially already in place.

B. The beauty contest analogy was encapsulated in a section called “The Inducement to Invest”. Keynes was investigating the motivations to invest. As discussed in point A., Keynes was alerting us that the stock market can easily become a purely sophisticated game among insiders, in which participants anticipate each other’s moves and the subsequent price movements by analyzing all sorts of information. It is precisely in this section that Keynes presented the famous analogy of the stock market appearing, in the public eye, as a casino: The strategic choices made by the market’s players easily appear incomprehensible to the layperson.
Nonetheless, despite this negative outlook from the market’s outsiders, the system will always manage to provide a price structure, helping to efficiently allocate people’s savings and thus to guarantee cheap capital to the “right” producers, that is, those who are expected to be in line with future social needs. After all, the key benefit of a capitalistic system is that markets are institutions that allow optimal social choices to be determined and properly financed simultaneously.
Equivalently, economic growth will be dictated by people’s aspirations: In particular, savings will be allocated to generate the capital goods used to produce items, which will be in heavy demand because, a priori, they will increase people’s well-being.
What was Keynes’s real worry about this institutional arrangement?
To be honest, I believe that his main concern was more about seeing everyone participating in the game. His view was quite straightforward: As long as the participants were people with enough wealth to occasionally absorb significant losses, the game was harmless to the economy.
No one is free from investment misjudgments, because no one knows the future: what the economy will be like or what will happen, i.e., what the main activities ahead will be. By investing, you are taking a bet, which you can definitely lose!
On that point, the trouble would start only once a number of ordinary people entered without a buffer to carry losses. These participants would borrow to have a say in the market. Keynes rightly saw coming a spiral of private debt leverage that would be dangerous because it was borne by weak shoulders. It is no surprise that the brutality and widespread impact of the 1929 stock crash was also due to the burst of a generalized debt spiral (as in 2007-08, by the way). This last aspect is fundamentally one of the core facets of a free economy, and nothing has changed with time: We have just changed the scale; the numbers associated with the different forms of leverage are simply bigger, more out of control and more difficult to evaluate in their full, dangerous potential.

Despite this despicable debt-related feature, Keynes was not castigating the system. Clearly, this magnanimous attitude towards the capitalistic economy was, on one side, historically based: Keynes had no idea about the cost in terms of negative externalities, e.g., environmental costs, of an economic growth process set on “biased” prices. On the other side, his analysis was more concerned with what to do next and how to repair an economy in trouble.

Generally speaking, excessive euphoria followed by slumps and depressions was part of an overall picture defining the essence of a capitalistic economy. There were definitely measures to limit excesses, e.g., less generalized leverage, and to fight slumps, e.g., the active intervention of the government. But what matters more is that Keynes took the centrality of the stock market for granted. The stock market was fundamentally the right place to guide economic social choices, because a very large set of alternative projects was evaluated and available to investors. To be complete, Keynes also took another aspect for granted: Firms had the possibility of financing their projects using debt, but this route was not viewed as a valid long-term solution. Basically, for long-term projects an entrepreneur was almost forced to consider the stock market solution. As a side note, we should stress that, at least in several developed countries in the last few decades, firms’ debt financing has been excessively favored fiscally compared to equity financing schemes.
In his time, a dynamic entrepreneur, at the moment of financing a project, would quite certainly share the plan with the market by going public.
It is specifically on this point that our world differs substantially from the Keynesian one. The size and the role of the allocation of capital outside the market, i.e., without a public offering on a centralized marketplace, have totally changed since his time, and this has a direct consequence on the number of fresh opportunities in the markets.
Three main factors can be considered to explain this sharply different ecosystem:

1- In Keynes’s time, the stock market was “alive” and at the core of the economy. New firms in need of capital, or old ones already quoted, were actively using the market, for instance by publicly issuing new shares or by listing for the first time. Nowadays, that centrality is not fully gone but is seriously fading away. Indeed, the biggest and most developed stock market of all, the US stock market, is literally “drying up”. This is due to a series of phenomena that include fewer IPOs, notably for very young, small-cap firms (data from the OECD Business and Finance Outlook 2015, pp. 211-14; the latest data concerning the last two years confirm the trend), as well as persistently high numbers of M&A deals, stock buy-back operations and, finally, delistings of small and medium caps [6].

Here, we can easily see a reason for the difficulty of beating the market for an active stock fund. Despite the efforts to find fresh, not yet correctly priced stocks, the bundle of available opportunities is shrinking over time. It is like having to catch a fish in a barrel: It is a lot easier to catch one if the barrel contains plenty of fish instead of a few. Now, clearly, in the case of the US, we cannot forget that there is a lot of competition to catch the remaining fish, which drastically reduces each fund manager’s chances of getting the right one and thus reaching the frantic goal of beating the market.
What is really interesting, and rarely commented on, is that the US story does not generalize so easily to other developed, old, capitalistic economies. In Europe, for instance, we observe that active funds beat the market more easily. [7]

Personally, we do not believe that such European performance is only due to less heavy competition among fund managers. In our view, this result is also partially explained by a less dramatic “drying up” phenomenon on this side of the Atlantic. The presence of dynamic medium and small firms on European exchanges is less under threat than in the US. For instance, M&A activity is less intense in Europe due to national and cultural barriers.
Besides, the US and Europe differ in another key respect: the organization and professionalism, mainly in terms of money availability, of the out-of-the-market framework. These features constitute our second and third points.

2- In the US we have a huge web of out-of-the-market financial institutions, from private equity solutions to venture capital and business angel structures, which can easily provide important capital injections into new firms. After the 2007-08 crisis, the extremely loose Fed monetary policy has certainly favored the availability of out-of-the-market money ready to be invested in this start-up ecosystem. On that note, we cannot exclude that a too-brutal end to the loose monetary policy would generate a sharp decrease in this form of financing: The very cautious Fed tapering approach can be explained by this concern.
What is clear is that under these conditions there is no need to go public quickly; even at a late stage, becoming public is no longer crucial! Going public is a late option, often delayed and seriously considered only once the firm’s founder wants to monetize part of his/her success.

3- US big firms, more so than European ones, are behaving more and more like venture capital organizations. In a time when the innovation process has reached a stunning pace, big firms are perhaps, paradoxically, the best equipped to mitigate this breathtaking constraint, because they can figuratively build (defensive) frameworks around themselves. Big firms become framework builders by surrounding themselves with dynamic pools of start-ups.

The key question remains:

Is this new institutional arrangement a better solution or a worse one from an economic point of view?
This is an extremely difficult question to consider. We can only guess at some insights:

We should always remember that at any point, whether or not AI and Big Data are available, we have no idea about, for instance, the steps and the path that will lead us to our next major technology, which will be the real force reshaping our economic system.

It is the Blade Runner paradox, in reference to the original film released in 1982: The hero of this famous science fiction film communicates with a replicant of humankind using telephone boxes!

When we imagine the future, we think about big changes (replicants, flying cars), but we are less ready to consider the bundle of key technological breakthroughs that turn out to be the real modifiers of our lives. Mobile web technologies are the best example ever on that side, because they emerged from an unpredictable marriage between new telecommunication infrastructure, phone industry improvements and web-IT technologies, all available and released in a short period of time!

This misjudgment is everywhere and thus influences the people, and the AI machines (which, by definition, look at the past), in financial markets as well. Building the economic future is a huge and messy stop-and-go game, in which the final result is also determined by an ultimate great decider: luck. That is to say, often a technological advancement can be achieved because several other small parts are released simultaneously, or are available but used for other purposes. It is mainly this shift in purpose that explains why engineers are extremely creative human beings and also why the timing of success of a promising commercial idea is so difficult to forecast.

Now, think of a centralized market as a fantastic institutional framework that aggregates the intelligence of all participants: A priori, and despite the noise of excessive speculative games, the “fair” value will be determined, and investors may use it as a guide. The share price should fairly reflect a wise, intelligent and rationally based consensus about the future quality of the firm’s proposed production.
A priori, being a young innovative firm quoted early on the stock market implies having its shares available to everyone and submitting to some high-quality “standard” in terms of the financial results displayed.
The lottery ticket is available to all participants and, in particular, to the fund manager eager to beat the market.

Now, this is all in theory. In practice, overly harsh short-term concerns may simply prevent innovative entrepreneurs from focusing on their long-term goals. Today, the market is generally asking for money too quickly, and thus entrepreneurs may end up in an optimization trap; that is, to paraphrase Alan Perlis, they may be forced to choose between “optimization and evolution” [8].
The centrality of the stock market is at risk due to excessive pressure on short-term objectives. The market is the reign of optimization more than ever before. Firms are devoting their R&D budgets to development efforts and a lot less to research plans; everything is done to obtain quick and high monetary returns for shareholders; groups are too often seen and treated as cash cows, basically.
This is why research efforts, as well as any project with long-term prospects and low short-term returns, are likely to be better assessed, understood and financed out of the market.
However, this framework has at least two major shortcomings:

(i) Being out of the market currently implies being unavailable to the majority of people’s savings, savings which they “happily” invest in tracking the market’s momentum, such as by buying an index-tracking ETF, which means favoring big companies and disparaging small ones!
Now, the money to finance the ‘evolution’ is likely to be in short supply, in the face of existential worldwide risks ahead that could be solved only by collectively embracing and accelerating the effort at ‘evolution’.
On that point, the truly sad part is to see institutional investors (i.e., our pension funds) being prevented, by governments’ overly strict risk rules, from diversifying their portfolios into these out-of-the-market activities: not really a sign of intergenerational solidarity, nor a coherent message, since environmental concerns are endlessly repeated by world political leaders, with few exceptions of course!

(ii) At the same time, the out-of-the-market allocation process needs to be protected and helped. For instance, if current market players enter this out-of-the-market world with the same eagerness and short-term habits, the “evolution” is likely to be restricted to lunch money.
Nowadays, the stock market lacks real evolution giants: We are financing new digitalized distribution platforms, keeping oil companies among the big capitalizations and guaranteeing stellar quotations to huge AI-driven advertising powerhouses like Facebook and, partially, Google, while grid energy storage and distribution firms and other innovative de-pollution efforts are scattered across companies’ portfolios, such as Tesla’s, and thus not properly financed and pushed to the front of the system!
I am perhaps exaggerating, but we are too focused on analyzing and pricing assets based on current sales and costs and not enough on investments and plans (with only a few exceptions): The future is simply too undervalued and unseen. If, as is true, the best way to have an amazing future is to build it, then today’s market activity is just too focused on keeping the present.
The unrecognized market crisis is a lack of appetite for risk; it is the reign of optimization, which prevents us from accelerating and pushing through private investments.
AI/ML, or any other digitalized analytic technique, will not change this gloomy picture one iota: The spiral of asking firms to distribute more money to push consumption, and thus confirm the sales figures the computers are asking for, is tragically already in place!
Alternatively put, AI will not be the device that helps challenge this “keep the present running” spiral: AI is an amazing tool for detecting trends in data, and its decision-making logic is based, basically, on keeping those trends running.
How to solve this problem? Two axes seem feasible.

First, from the political side, we should expect a change in fiscal rules to favor access to out-of-the-market vehicles for both institutional and private investors. This fiscal reform should include all sorts of ethical/long-term environmental projects: If our savings are used in these domains, we should receive a fiscal discount, whether the investment is done in the market or out of it. Meanwhile, all sorts of fiscal loopholes and blind spots that still exist today and benefit polluting activities must be removed. It is definitely time to accelerate the movement and to seriously move into our next economic paradigm.
Second, the private banking industry should reshuffle its offer to exploit this new fiscal environment. On that note, the private banking industry should urgently take on board, and develop, the ability to provide as large a spectrum of investments as possible to its clients.
In the future, millennials will not readily accept only in-the-market solutions; they are likely to ask for hybrid solutions, in which classical in-the-market funds are mixed with (a priori less liquid, therefore riskier) out-of-the-market solutions. They may ask for direct participation in those investments, which may also cover and satisfy their ethical standards, alongside a more cautious and picky selection of market stocks: The first criterion in choosing how to invest is likely to become durability and environmental concern, not just short-term returns.

In any case, a pedagogical effort will be required: On a planet finally recognized by all its wealthy inhabitants as finite, that is, with finite resources and production capabilities, unhealthy and unrealistic rates of return on investment should simply be banned. Moreover, if tailored and more accurate selections in and out of the market must be defined, then clients should be ready to pay the price.
On that side, we should also see new thematic stock funds flourish, devoted to investments in firms whose objectives are long-term ones. But, maybe even more importantly, at the level of the regulator we should see some antitrust applications coming back: Some firms are just too big, and they are preventing free ideas from being properly financed!

This message needs to enter the financial world once and for all, and the proper method is to use fiscal incentives wisely.
One of the implicit Keynesian messages was also the following: If we are like biologists studying the dynamics of a virus, we can definitely try to modify the environment in which the virus is developing.
But to modify it, we must start thinking. Therefore, we should stop referring to data as a given. We must, instead, constantly challenge them using a critical and skeptical approach: Only then are we thinking; that is, really trying hard to solve a given problem.
Now, governments are the ultimate source of legal power: Laws and international agreements are powerful weapons if private agents clearly see governments’ will and commitment.
We must abandon, once and for all, the carpe diem ideology, or “après moi le déluge” to use a famous French expression, which unfortunately still defines the cornerstone of our collective choices!

In conclusion, we should remember that one of the key roles of the regulator should always be to try to ensure that the right mix of long- and short-term considerations is followed by the actors in charge of allocating people’s savings and, therefore, of creating our stock of capital: Historically, if something is unbalanced, then the solution is to try to fix it via direct fiscal intervention.

NOTES:

[1.] The correct and complete title of this book is “The General Theory of Employment, Interest and Money”, Keynes, John Maynard (1936). New York: Harcourt Brace and Co.

[2.] It is very interesting to note that this chapter represents the core of his Book IV, “The Inducement to Invest”: Keynes explicitly recognized the centrality of the role of finance in a capitalistic economy. Essentially, how finance works, and its analysis, cannot be treated as a minor or technical matter; understanding this is key if we want to have a proper view of the capitalistic ecosystem and of the forces shaping its evolution. Finance, by enabling the ex nihilo creation of capital, is the key place to measure the rhythm of transformation in the economy.

[3.] It is important to note that Keynes lived on a very different financial planet compared to our current situation: For instance, at that time there was no debate about an individual stock’s or a portfolio’s performance versus a benchmark index. The index was already there, but it was used simply as a way to measure the evolution of the market as a whole. The key problem during that time was to pick the right stock that would guarantee a good return given a certain, often long-term, horizon. Clearly, we can easily guess that in this social environment, short-term returns were less valuable than long-term ones. But, once again, this is part of what we encapsulate with the expression “being there”, or handling our own time.

[4.] “Discretionary” has become an ambiguous term nowadays. A fund can be classified as active stock picking despite following a sort of mechanical “tit-for-tat” stock selection strategy. A major example is the so-called smart beta funds family.

[5.] I have to acknowledge and express my gratitude to the content of the article “The Unsolvable Puzzle” by Morgan Housel, available here http://www.collaborativefund.com/blog/the-unsolvable-puzzle/ , where Graham’s quote was first used.

[6.] For a full discussion, see my last paper: https://www.linkedin.com/feed/update/urn:li:activity:6322720846750326784/

[7.] We refer to a UBS paper “Active vs Passive: Why Does the Myth Persist that Passive Performs Better than Active in Europe?”, https://neo.ubs.com/shared/d1Hfi6ic5mMgrF/

[8.] The original Alan Perlis quote is “optimization hinders evolution”.

Passive Stock Investment Strategy: An Alternative Analysis of the Phenomenon.


Introduction

Since the 2007 financial crisis, the passive investment strategy has been on the rise. Each year a new record is set for the amount of capital invested using this strategy. Nowadays, the success of the passive funds industry is so great that it is starting to pose an existential threat to the entire, more classical, active-based Wall Street funds offer. To use a buzzword, the passive funds industry can be defined as a disruption of the classical Wall Street ecosystem: The vigor and speed of this process explain the distinctly stinging tone used by both sides in the ongoing so-called active versus passive debate.
Our contribution must be viewed as an attempt to provide a heterodox approach to this debate, by focusing on a neglected aspect: the analysis of investors’ rational reasons for embracing this allocation strategy.
If an investor is assumed to act according to the rationality hypothesis, that is, with an aspiration to maximize the expected return for a given amount of risk, then we can only understand the optimality of the passive choice by acknowledging some deep structural changes.
Our provocative statement is quite straightforward: Investors go passive because two major structural changes, in terms of stock market functionality and of the capitalist market economy, have made the passive strategy optimal. On the one hand, agents are specifically choosing this strategy because the stock market is drying up in its function as the main reservoir of investment opportunities; the stock market is shrinking in favor of other, out-of-the-market, solutions. On the other hand, our capitalistic economy is less competitive, which implies that oligopolistic and even monopolistic markets are increasingly the rule, with dominant firms either listed on a stock exchange or not. Together, these structural changes explain the success of the passive strategy. By focusing too much on analyzing the passive world and its consequences, we are missing the forest for the trees!
Thus, our goal is to bring some light to the forest and to discuss its complexity.
We will proceed as follows.
First, we will offer a short overview of the existing, and endlessly growing, literature about the active versus passive debate.
While the overwhelmingly negative consequences of excessive passive investment for stock market functioning are well discussed and presented, the investor rationales underlying the passive choice are neglected or badly defined. Our first step will be to provide a critical analysis of this neglected element.
Second, we will introduce some challenging questions:
The literature tends to explain the post-’07 success of the passive strategy without referencing any structural changes. But if no structural changes occurred, then why this delay until after the ’07 crisis? At its inception in the mid-1970s, the passive index-tracking strategy was a success, but its success was never as dazzling during the ’80s or ’90s as in the last decade. Why? Does this late success tell us something about the changing structure of the global economy?
Third, to best answer those open queries, we will explain the passive strategy’s triumph by referring to two structural factors.
The first factor is what we call the stock market’s drying-up process.
In our view, since the Internet bubble burst at the beginning of this century, the stock market has gradually but definitely lost its centrality as a source of new investment opportunities. Put differently, the market is drying up because the pool of fresh available opportunities is shrinking, and the few remaining opportunities are too stringently hunted on often harshly short-term considerations. Given this phenomenon, an ordinary investor’s opting for a passive investment choice is coherent: Investors expect to receive more from big-capitalization firms and so disparage small- and medium-capitalization ones, because investors do not believe in small firms’ potential.
Here, investors, sometimes wrongly, believe those rare pearls are too expensive, in terms of analyst fees, to detect.
The second factor is based on a simple finding: The web of markets that shapes our capitalistic economy is declining in terms of competition. Several key markets are now organized as oligopolies or even monopolies. Basically, this absence of competition tends to favor big listed firms and their forecasts of solid future earnings: Therefore, the ordinary investor’s decision to go passive sounds like a very rational choice.
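The declining competition invoked above can be quantified. One standard gauge, not mentioned in the article but widely used in antitrust practice, is the Herfindahl-Hirschman Index (HHI): the sum of squared market shares in percent, with values above roughly 2500 treated as highly concentrated. The market shares below are invented purely for illustration.

```python
# Hypothetical sketch: measuring market concentration with the HHI.
# Shares are in percent and should sum to roughly 100.

def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared percentage market shares."""
    return sum(s ** 2 for s in shares_pct)

competitive = [10] * 10          # ten equal firms
oligopoly = [40, 30, 20, 10]     # four dominant firms

print(hhi(competitive))  # 1000: unconcentrated market
print(hhi(oligopoly))    # 3000: highly concentrated market
```

The same revenue split across fewer, bigger firms triples the index here, which is the structural shift the text argues makes cap-weighted passive exposure to the dominant firms look rational.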
If the capitalistic features of our economy have fundamentally changed, then it might be normal for some of its traditional frameworks to be, at least partially, disrupted.

I. A short review of the active versus passive literature: Where we stand with this debate.

Recently, the active versus passive debate has become a hot topic, and a long list of studies can be cited.
The bulk of this literature questions the current and growing passive trend: If this trend continues at the same pace, what will be the future of the stock market in a few years’ time? This is the main question and concern.
The stock market scenario under this trend is gloomy and the main reasons can be easily grasped. Two reasons are worth special attention.
1. An overwhelming acceptance of the passive strategy will inevitably generate a huge misallocation of capital. Huge amounts of money going to big firms’ shares penalize small firms’ shares. Here, the argument is quite straightforward: Small but dynamic firms, in which capital is likely to generate its best return, are penalized because their share prices will no longer reflect potential future cash flows.
Thus, the price signal generated by the stock market is distorted, and investors’ savings are no longer allocated in an optimal way. To use Renaud de Planta’s words: “That wouldn’t be good for productivity and growth” [1] (all notes are at the end of the paper).
2. If more and more money is invested in passive funds, less and less will be available to managers of active funds. In other words, the more investment in passive funds grows, the less freedom active fund managers have in terms of both capital availability and stock-picking implementation. Here, a scenario where only a few active players remain in the market becomes plausible. But then, what about the price mechanism under this new extreme regime? What about the efficiency of the price structure so obtained? If most of the available shares are held under passive rules, how would the price mechanism work with only a few active funds remaining?
This is an extreme scenario; the odds of reaching this territory are small. But, still, where is the process likely to end? Will it stop with enough room left for active players and, therefore, ensure the survival of a "fair" price mechanism, or not? No one, at present, can answer those questions: They are nevertheless real threats.
The conclusion is clear: If the widespread passive epidemic continues its progress on the same path, then the stock market price mechanism as we know it will stop working properly. Its ability to deliver an optimal stock price structure will be compromised, and the consequences for the economy will be very costly.
Nonetheless, in depicting this gloomy scenario, no one can explain why the passive strategy's amazing success story started right after the '07-'08 crisis. The rationales that bring investors to choose this strategy instead of an active one are poorly treated and explained.
Indeed, only two main arguments are used to explain this choice.
A. By creating index tracker funds, the passive asset management industry has found a bright way to offer a diversified portfolio at very low fees for the clients. The funds' holdings mimic those of an index. The way in which a fund manager allocates money among the stocks is easily defined and depends on the weighting of each company in the index. Most of the time, the index's weighting structure is a function of the size of the company, that is, its market capitalization. There is no real stock-picking mechanism here. The fund manager mimics the existing index, with zero added value but also with zero cost in terms of research to determine the right stock picks. Simultaneously, the passive fund manager offers a very transparent investment vehicle: Any investor can easily grasp what an index is and how it works! For instance, this transparency has likely played an important role in many investors' minds after the Madoff case.
Therefore, the passive offer is cheap compared to a classical active one; it is easy to understand, i.e., highly transparent, and, a priori, liquid.
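To make the mechanism concrete, here is a minimal sketch of how a cap-weighted index tracker allocates money. The firm names and market capitalizations are invented purely for illustration:

```python
# Hypothetical sketch of cap-weighted index allocation.
# Firm names and market caps are invented for illustration only.

market_caps = {
    "BigCorp": 900e9,    # the large capitalization dominates the index
    "MidCorp": 80e9,
    "SmallCorp": 20e9,
}

total_cap = sum(market_caps.values())
weights = {name: cap / total_cap for name, cap in market_caps.items()}

def allocate(investment, weights):
    """Split an investment across stocks proportionally to index weights."""
    return {name: investment * w for name, w in weights.items()}

allocation = allocate(10_000, weights)
print(weights["BigCorp"])     # 0.9
print(allocation["BigCorp"])  # 9000.0
```

Under such a weighting, the largest firm absorbs most of every new dollar invested with no stock-picking research at all, which is precisely the big-firm bias discussed throughout this paper.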
B. Most investors are already aware of an empirical fact: Most active fund managers, once returns are calculated net of fees and the medium/long term is considered, do not perform better than the market; they do not consistently beat it. In other words, no one has been shown to be consistently good at picking stocks. The market always has periods in which it (or, better, its proxy represented by a chosen fund's underlying benchmark index) will beat the active fund's performance.
Therefore, the stock market, over the medium and long term, is quite efficient. If luck is taken out of the equation, then no magic formula exists. Humans and machines (algorithms today, more and more AI tomorrow) are condemned to accept some losses in return vis-à-vis the market.
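The net-of-fees point can be made concrete with a toy compounding exercise; the return and fee figures below are hypothetical, chosen only to illustrate the order of magnitude of fee drag over the long term:

```python
# Hypothetical illustration of fee drag: identical 6% gross annual return,
# an active fund charging 1.5% per year versus a passive tracker at 0.2%.

def terminal_value(initial, gross_return, annual_fee, years):
    """Compound an investment at (gross return - annual fee) over `years`."""
    net = gross_return - annual_fee
    return initial * (1 + net) ** years

active = terminal_value(10_000, 0.06, 0.015, 20)   # ~24,117
passive = terminal_value(10_000, 0.06, 0.002, 20)  # ~30,883
```

Even with identical gross performance, the fee differential alone leaves the passive investor roughly a quarter ahead after twenty years, which is why the comparison must always be made net of fees.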
Intuitively, factor A. is crystal clear: Passive investment is attractive because it is cheap, easy to understand and, ceteris paribus, liquid. Nonetheless, this was always so. A simple query, then, is why the amazing flow towards this form of investment took place over the last decade. The question remains basically unanswered: This argument is too thin to fully explain what is going on.
Factor B. is trickier and far more complex. Despite being a powerful argument in favor of a passive attitude, its influence is overestimated for two reasons:
1) The major shortcoming of this argument is, as usual, that past performance does not guarantee future performance. A priori, in our view, a lambda investor who is now deciding whether to embrace a passive strategy or take an active fund option is always more concerned about the future than the past: The decision process is based on what the investor can foresee, not on what he or she has seen.
Put differently, past data and past trends are by no means sure to be repeated! If efficiency is taken seriously and the investor's rationality is acknowledged, then an investment choice will place more emphasis on the investor's forecast of the future than on a simple set of past performance data.
2) Stock market efficiency is a difficult subject whose analysis strictly depends on the definition of information used in the market at a certain time. This argument needs to be discussed by considering all the complexity of today’s stock-picking mechanisms. Ultimately, the arrival of first algorithms and now AI is moving all those mechanisms toward a more systematic data-driven process.
Does this evolution ensure more efficiency? What kind of efficiency is likely to result from a generalization of these technologies?
As said in footnote 2, we will soon devote a paper to those questions.
In any case, our firm belief is that factors A. and B. are insufficient to fully explain the current run toward passive funds.
Why?
Let’s take a closer look at the decision process of an investor who decides to go passive. This should help find an answer.

II. The choice of passivity: An analysis of the lambda investor’s case.

Does a list exist of the main factors likely to explain why a lambda investor chooses a passive strategy? To answer this difficult question, we need to acknowledge that becoming passive implies that our savings are allocated following firms' weightings in each market index, which usually, because the weighting structure is a function of market capitalization, favors big firms and penalizes small ones.
Now, a passive investor is never fully passive. He or she is active when deciding to buy an index tracker fund and when deciding to leave it: Choosing an index-tracking fund implies he or she is acting.
Even though intuitively straightforward, the small investor's experience is quite paradoxical. In a passive world in which each participant is supposed not to take action, the lambda investor decides based on a set of information that is, a priori, more substantial than simple knowledge of the weighting of each stock in an index.
Simultaneously, the investor’s perception and vision of the investment’s expected future returns are, ultimately, encapsulated by those weightings.
The problem then remains: Given those weightings, how can we grasp and understand the investor's underlying motive?
The answer might come simply from noticing that buying an index-tracker passive fund means embracing a certain investment philosophy. We are conveying a certain vision of the status of the economy and its future, and we are agreeing to follow a corporate-friendly policy.
Here, given the way a stock index tracker is built and the fund’s manager status, the passive choice implies the following:
1) Putting more money in big corporations than in small ones.
2) Taking for granted that your fund manager will not interfere with the way firms are run [2].
Why have these two simple factors become more valuable now than they were back in the mid-1970s, when John Bogle created this type of fund?
The answer is simple, but few commentators have given it serious attention.
Two major changes occurred. On one side, our capitalistic economy is less competitive because big firms (either listed or not listed on a stock exchange) are much more powerful and widespread in several markets; on the other side, the stock market has faded in its role as a reservoir of value and growth. The stock market is literally drying up!
Equivalently, on one side, we have a first set of justifications arising from a paramount stock exchange evolution, and, on the other side, another set of arguments related to a profound transformation of the entire market’s capitalistic system in which firms conduct business. Let’s start by analyzing the issues related to the evolution of stock exchanges.

III. To understand fully the passive choice, we need to embrace a larger view of the economy: This choice is more rational than it seems.

III. A) Wall Street and its decline: Wall Street fades as a reservoir of small firms' growth and value, the drying-up phenomenon.

As we saw, by going passive, an investor weights big firms more than medium and small ones. An element is implicit in this sentence: We are referring to firms listed on a stock exchange. Nowadays, more than in the past, quoted firms represent only a fraction of the web of businesses characterizing an occidental economy.
Historically, the total number of publicly traded stocks has been decreasing [3]. Therefore, the pool of firms available to invest in, is shrinking.
This result originates from four sources: α) Some small-to-medium firms prefer delisting from stock exchanges and using private equity financing and venture capital funds; ß) M&As remain historically high after the '08 crisis; δ) fewer IPOs are taking place; and μ) finally, thanks to an exceptionally long and ultra-loose monetary policy, we now have plenty of zombie quoted firms: Their presence is an additional noise factor when someone is looking for valid investment opportunities in the stock markets [4].
If those elements are considered, the stunning increase in passive investing can be seen from a new, refreshing perspective:
Investors are no longer convinced by active strategies because they believe fewer undervalued companies capable of growth are available in the stock market.
Equivalently, investors are now aware and convinced, perhaps wrongly so, that the firms likely to be dominant in the future are no longer in the pool of quoted small-capitalization firms: The future champions are outside, financed via other channels.
At first glance, a powerful rationale explains the passive choice: Why should I spend money, which means paying fees to an active fund manager, to search for growth companies among a shrinking set of firms when I know that growth and value are mainly created outside the classical Wall Street frame?
The answer is frank: If I must invest in Wall Street, it is better to distribute my savings, paying a minimum of fees, predominantly among big firms and only fractionally among small and medium ones: This is exactly the main feature of an index tracker.
Put differently, the success of the passive strategy is a product of the Wall Street crisis. People are choosing this way of investing because Wall Street is drying up in terms of new exciting opportunities. Small capitalizations, those with minimal weighting in an index, are, at best, a secondary repository of future streams of value [5].
But why? The answer is twofold.
First, as already mentioned, most of the dynamic firms now operate outside the stock market: A firm does not need to become a stock market insider to be properly financed and therefore prosper.
Second, small-capitalization listed firms' valuations are constantly under stringent data-driven analysis. The generalization of quantitative and AI-driven stock-picking methods implies a constant scrutiny of firms' results: Quarterly financial values need to fulfill the market's short-term expectations.
Ultimately, the old mechanism in which Wall Street was the center of the savings allocation process is broken, and it is fading because, basically, too much is asked in terms of short-term results: Being in the market is just too stressful; operating and prospering outside it is a lot easier.
With that perspective, it is better to remain outside the stock market and wait until the size of the business (e.g., the firm's turnover, an efficient value chain, a large distribution web) becomes large enough to eventually envision a public market offering.
Here, being part of the market is no longer a way to be financed but a nice-to-have medium/long-term goal once the business is well structured and already successful. Clearly, this is an extreme statement, and some cases prove the contrary; still, the trend is there. Entering the market at maturity makes it easier for a firm to cope with Wall Street's short-term discipline.

Let’s now move on and investigate our second series of arguments.

III. B) Big companies are becoming clusters of innovation to master their future and to monitor a possible disruption process.

A firm's environment is characterized by a twofold threat. On one side is the need, day by day, to keep pace with the ongoing digital revolution.
On the other side is the need to constantly assess the possibility of a frontal disruption likely to destroy or, at least, seriously reshuffle the entire market.
If all firms are facing this reality, then big firms have an advantage when a defense strategy needs to be defined. They can create a cluster centered around them in which to develop and master innovation.
In this situation, they can use -at least- two available options:
First, a priori, they have at their disposal a certain amount of capital that can be injected into startup initiatives. In addition, they can use their web of relationships with venture capital and private equity firms to increase the size of their projects.
Second, they can optimize their efforts using their own research and development divisions, whose budgets can be huge [6].
By so doing, the important firms will try to achieve two goals:
1) To remain a dominant player in terms of the current technology in use in its sector. A big firm protects its market position by endlessly innovating and keeping its degree of efficiency very high. The chain of value is endlessly reviewed, changed, and optimized.
2) To try to anticipate a possible disruptive process that could modify the nature of the market in which it operates. The big firm tries to sterilize the disruptive technology by absorbing it into the existing organization. Therefore, big firms are always ready to absorb one or several startups present in their cluster, or to fight fiercely to acquire one from outside it. For several senior managers, the firm's real competition takes place more outside the market than inside it: A ghostly, paramount fear can become more real than a tangible threat.

All in all, we see a new passive lambda rationale here: In a time when the innovation process has reached a stunning pace, big firms are, perhaps paradoxically, the best equipped to mitigate this breathtaking constraint because they can, figuratively, build (defense) frameworks around themselves. Once a framework is built, it is likely to act as a shield, guaranteeing better resilience in keeping the business afloat in the medium and long term.
A last remark here: Big firms partially behave as venture capital organizations. Now, the final victim of this big firms' cluster philosophy is the stock market. Indeed, most of the startups in a big company's framework will end up never being quoted. And why does this happen?
Three possible ideas might support an answer here:
α) Enough capital is available outside the stock market to ensure proper development. No need, then, to go public quickly; even at a late stage, this is no longer crucial!
ß) As already stressed in section III.A), by doing so, a startup will simply avoid entering a system in which data (e.g., quarterly financial results) are likely to become far more important than plan and vision in conquering its own market.
The Wall Street apparatus is also in trouble because too much, if not everything, is based on tracking short-term objectives: Why should young, dynamic firms be interested in entering the Wall Street biotope when their goals are medium- to long-term ones?
δ) Big firms become framework builders by surrounding themselves with dynamic pools of startups. This clustering attitude allows these important firms to remain in close contact with another huge startup generator: the academic world.
Here, it can be a lot simpler to keep a smooth, non-accounting-driven attitude in a firm's startups if an IPO is avoided: This can be a positive factor when a firm must attract new talent directly from the academic world.

This final remark closes our analysis of the structural changes in the stock exchanges: The Wall Street ecosystem is, in our view, under attack not only because the passive industry is becoming too important, but mainly because less room for trial and error is available to young small to medium firms: collecting capital in a stock exchange is costly, perhaps, too costly.
Let's move on to the last rationale, which highlights more general aspects of the status of the capitalistic economy we currently live in.

III. C) Most big companies do not operate in a competitive market.

Historically, what are the main forces that have reshaped the entire capitalist system since the financial crisis of 2008?
Most of us would answer by citing the deepening of the globalization phenomenon or the fantastic acceleration of the digital revolution. Few would point out the lack of competition characterizing several markets.
Still, in plenty of sectors, a few powerful firms manage most of the supply side of the market. Consequently, in plenty of markets, insiders' power is becoming progressively concentrated: Big firms nowadays operate mostly in non-competitive environments, either oligopolistic or even very close to monopolistic (e.g., Alphabet, to name the most classical example, with its Web search engine).
The US case is representative of a general worldwide tendency, as the US capitalistic economy is now under the spell of having numerous key sectors dominated by gigantic entities enjoying partially self-generated barriers to entry.
Do we have proof of this statement?
Sure, we have proof. Just take the example of the tech giants: Where are the real competitors of Google and Facebook or even Microsoft and lately Amazon?
Nowhere, at least in the occidental- (geography matters as we will see in a minute)- economic ecosystem. Similarly, we have the case of big banks: More than 30 big banks existed at the beginning of the ‘90s, so where do we stand almost 30 years later?
Four huge banks are left in the US!
But the same can be said for the automobile, energy, pharma, and logistics markets: The list of sectors in which competition has been reduced to a minimum is massive. Add to this phenomenon the fact that even outside listed corporations, the famous startups, some of them with amazing turnover and valued at billions (the famous unicorns), often play in oligopolistic markets (see Airbnb and Uber as the main examples).
In short, we can simply agree with the analysis of Chicago Professor Luigi Zingales, who described in detail the historical background of this concentration-process. His work shows how it is precisely this phenomenon that defines the major threat for the American capitalistic model, based on a web of free and competitive markets [7].
Now, we are far less nostalgic than Professor Zingales.
Why? Because this evolution is just how big occidental- (once again, geography matters) -firms are very rationally and efficiently responding to two extreme and violent forces:
First, a technological challenge that is a constant existential threat to any business via a possible disruption process.
Second, the never-ending globalization dynamic, which brings out new economic powerhouses, firms, and markets but constantly demands reassessing the chain of value and correlated price/offer positioning.

That said, Zingales' analysis, with which we agree in several aspects, pays almost no attention to two points:
The fact that an economy as large and powerful as the United States is never a closed ecosystem, and, simultaneously, that firms nowadays live under the constant threat of seeing their value chains, and ultimately their final offer, become obsolete in a very short period.

It remains beyond doubt that the lack of competition is a source of inefficiency for final consumers. Antitrust legislation has been created in all (occidental) capitalistic states to fight this inefficiency.
Why are those policies not triggered?
Most commentators will explain that competition is stunted due to the presence of lobbies at the level of the legislator and the fear of blackmail over losing jobs.
For us, it is a lot more complicated than that, once the globalization factor is taken properly into account:
Antitrust legislation is not triggered because those big firms are our global champions!
Occidental governments accept the presence of those consumer inefficiencies because those firms are engaged in worldwide competition with groups outside the country; hence, the political apparatus protects them in the domestic market to guarantee their full strength externally and their ability to compete in the worldwide market [8].
But why is strength outside a country’s borders so important?
Because, in plenty of cases, the place where the key markets are, but also where the value chains are physically set and where the major competitors are, is Asia.
For instance, it is there, not in the West, where most of the world's population lives, and it is there, not in the West, where the middle class, the main target for any mass production offer, is growing. Finally, it is there, not in the West, where new major groups grow and plan to conquer traditional occidental markets.
Occidental governments simply acknowledge that competing in Asian markets implies huge effort and expenditure because "local" competitors are well protected, often by governmental measures, and often operate in mono/oligopolistic internal markets.
And here we see the spiral:
Occidental customers are forced to support inefficient prices, and therefore lose purchasing power, because their national corporations need to enter the Asian/global market! Occidental consumers are under the spell of a geopolitical game that is well out of their reach but still costs them (and we are not even talking about their cost as workers).
Moreover, after the crisis, several big listed companies in occidental countries received a gift that was quite expensive for taxpayers: Several of them are now considered, in one form or another, too big to fail.
If the situation is like this, and it is, we can see a powerful rationale for a lambda investor to go passive:
Big firms are, in principle, heavily protected in terms of barriers to entry (even though the possibility of a technological disruption always remains open). No one will seriously trigger antitrust policy against them. Big firms are global players, ready to play in growth countries and endorsed by a too-big-to-fail policy that ultimately guarantees solvency against almost all risks!
Why, under these conditions, would I spend time, energy, and money to find small quoted firms that can ensure future cash flows with minimal risks (this is the basic principle of finance) when I have the opportunity to cheaply diversify my allocations among huge, heavily protected firms with almost no risk?
To be complete, under those conditions, an investor should prefer a blended allocation strategy: a mix between a passive market strategy and direct participation in the outside, more dynamic world, e.g., by investing in venture capital and/or private equity funds. On that side, a strong asset management structure is likely to become the key edge in the private banking industry: The ability to efficiently pinpoint the right mixture at the right time, under the constraint of a "dynamic" client profile, is likely to become the leading ingredient of a winning offer in this industry.

To conclude, the success of passive investment is a complex phenomenon that cannot be explained only by referring to low fees or the stock market efficiency:
We have just illustrated another way to consider the phenomenon and tried to trace and discuss all its possible consequences.

NOTES:

[1]
Renaud de Planta, “The hidden dangers of passive investing”, FT.com, May 2017.

[2]
We assume that a passive investor accepts a “[…] hands-off approach to investing [provided by a passive fund]: One reason Vanguard is able to charge such low fees is that it doesn’t expend a lot of resources investigating individual companies or meeting with managers. […] Its index-fund managers don’t engage with companies about their businesses.” In Frank Partnoy, September 2017, “Are Index Funds Bad for the Economy?”, The Atlantic.

[3]
Please see the raw numbers presented here https://finance.yahoo.com/news/jp-startup-public-companies-fewer-000000709.html  and here https://www.economist.com/news/business/21721153-company-founders-are-reluctant-go-public-and-takeovers-are-soaring-why-decline

[4]
Concerning M&A deals in the US, please see: https://imaa-institute.org/m-and-a-us-united-states/ , US and European IPO data are presented in the OECD Business and Finance Outlook 2015, p. 210-212.
And finally the notion and the evolution of zombie firms in US stock exchanges is presented here: http://lipperalpha.financial.thomsonreuters.com/2017/09/news-in-charts-the-rise-of-american-zombies/

[5] It is interesting to note that some major university endowment funds have recently decided to pull back from passive ETF exposure, e.g., https://www.reuters.com/article/us-usa-funds-endowment-etfs/largest-u-s-university-endowment-funds-pull-back-on-etf-exposure-idUSKCN1BO2L0 But what to do instead? Well, invest directly, mainly in major quoted stocks: The underlying rationale is that returns are more likely to come from owning a few big firms than from selecting a portfolio of small/medium-capitalization firms.

[6] Details of the amazing amounts of R&D expenditures can be seen here: https://www.linkedin.com/feed/update/urn:li:activity:6316546299462193152. Interestingly, not only tech companies are investing huge amounts. The R&D effort covers all major sectors, proving that the concern about how to master this unprecedentedly volatile innovation phase is ubiquitous.

[7] Luigi Zingales, A Capitalism for the People, Recapturing the Lost Genius of American Prosperity, 2012, Basicbook. Other general accounts of this phenomenon can be found here: https://www.theatlantic.com/magazine/archive/2016/10/americas-monopoly-problem/497549/ and here: http://equitablegrowth.org/research-analysis/market-power-in-the-u-s-economy-today/ .

[8] In Switzerland, the classical example is the pharma market: It is a duopolistic situation, explicitly protected by the Swiss political system, which does not complain even though drug prices are well above European standards. This is accepted because research and development are still based in Switzerland, but also because, as everyone knows, R&D is the best way to help this industry fight in international markets. The same can be said about the Swiss big banks and the way they are protected.

A note: Human Intelligence (HI) vs Artificial Intelligence (AI) in the Asset Management (AM) world: Why AI is still not good enough.


1. Introduction.
In the previous essay (the addendum essay), we listed three paradoxes likely to prevent a full implementation of an AI wealth allocation decision process in the AM world. As usual, by doing so, we introduced some ambiguous terms and unclear elements, which need to be further developed and discussed. We will concentrate our attention on the most important paradox presented: the stability paradox. This essay's aim is to show how the discussion of the stability paradox opens a Pandora's box: New, tricky issues, such as AI consciousness, appear and testify that the road to a fully satisfactory implementation of AI in the AM world remains long. Our investigation will prove that these problems are unavoidable obstacles, explaining why a pure AI solution (that is, one which requires no human intervention) is not yet possible in the AM world and will not be for a while.
We will show that the main flaw characterizing a pure AI solution is a methodological one: An AI is a combination of applied mathematical methods, broadly labeled data science, used at an astonishing speed on huge data sets. This science, to quote Chris Anderson, accepts the possibility of developing a "science without theory" as one of its postulates. The absence of a theoretical narrative favors the acceptance of black-box solutions, which are far from helping those in the AM industry who need sound, logical and coherent narratives to justify an allocation decision. We believe that moving to a pure AI solution is not yet optimal: At the end of section 5, we will offer a first discussion of why this is the case. Through this discussion, we will introduce the notion of a "dynamic system", which is likely to be the best approximation of our economic-financial-political world in mathematical terms. If this is the right approximation, then we must acknowledge the need for Human Intelligence (HI), with its theoretically based investigations and results, alongside the perfectly valid and needed AI investigations and results.
Using the terminology presented in the last essay, this is exactly our main conclusion: Current AI cannot pretend to match HI, because AI continues to provide only correct scenarios, constantly anchored in data, and cannot formulate true, holistic, scenarios, nor does it have this aim. On the contrary, we believe AI is becoming, and will increasingly become, complementary to HI. We will treat these arguments in sections 5-5.1: The right path is, as usual, in the middle; indeed, the wise AM provider will be the one who blends both AI and HI, and the real challenge ahead is to find the right proportions.
Namely, AI is a very powerful device, but it is lacking in at least two key areas: creativity and awareness. Due to these two weaknesses, AI cannot yet pretend to deliver coherent and logically sound asset management advice.
As such, to reinvent the AM industry in this new century, we warmly suggest looking into conviction narrative theory rather than only into AI: for example, the theory presented in detail by David Tuckett and Milena Nikolic in the June 2017 SAGE Journal, or, more generally, the work done by Yuval Harari on how humankind reached its current, technology-driven condition, i.e., his emphasis on our "capacity to elaborate [share] allegorical stories".

2.The current definition of AI in AM and its main limits: Powerful skills are not enough to define an intelligent device.
Nowadays, AI evangelists tend to present the intelligence of their processes by referring to two main thoughts (I did not bother to quote the exact reference; I have seen these two phrases so many times that I have the feeling there is a huge copy-and-paste contest going on): First, "[I]n 2016, a machine beat the world champion of Go – a game renowned for its subtlety and complexity – for the first time. The victory by the AlphaGo programme, developed by Google DeepMind, has put artificial intelligence (AI) back into the spotlight."; and second, "[The] usage of AI and deep learning will allow us soon to see correlations among time series we will not think of."
Clearly, these two arguments are marketing statements: They are facts, but they do not define the meaning of the word "intelligence". In other terms, from these two sentences we do not actually understand why seeing AlphaGo beat a human at Go represents proof of AlphaGo's intelligence or, similarly, why seeing a device able to create and check all sorts of possible correlations among inconceivable numbers of data series, with everything done in the blink of an eye, should define a useful form of intelligence. In my view, in both cases, we are describing a very powerful, nice and useful tool endowed with paramount skills, far more efficient and powerful than those of a single human (or even a group of humans): However, a machine continues to lack some major features that define HI, and therefore constantly requires the presence of humans at its side.
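The second quote describes a raw scanning skill, not intelligence, and the point can be illustrated in a few lines of code: A machine can exhaustively test every pairwise correlation among hundreds of series in a blink and, even on purely random data, it will surface some high correlations by chance alone. The sketch below uses only invented random series:

```python
# A minimal sketch of the "scan every correlation" skill from the marketing
# quote: exhaustively checking all pairs of purely random series.
# With enough pairs, high correlations appear by pure chance.

import random
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
series = [[random.gauss(0, 1) for _ in range(50)] for _ in range(200)]

# 200 series -> 19,900 pairs, scanned in well under a second.
best = max(abs(pearson(a, b)) for a, b in combinations(series, 2))
```

On this pure noise, the best of the nearly 20,000 scanned pairs typically shows a correlation a naive reader would call "significant": speed at finding correlations is not the same thing as understanding them.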
The next section will discuss these elements in more detail and present three main problems characterizing the current definition of AI when applied to the AM world.

3. If faced with the real economy or the allocation of wealth, both of which require an understanding of the economic system, how is AI supposed to work?
Let’s go back to the statement about AlphaGo: The device beat a human. But can we interpret this victory as a sign of the device’s intelligence? No, clearly not. Why? Because even though AlphaGo learned how to play during the game (we are not denying that), it was playing, by definition, within a stable framework. After all, despite the game’s complexity, Go is defined by a framework and a series of rules, which limit its “creativity”, if we can really use this word (more about our usage and definition in section 5). Each step must be considered to elaborate the best strategy, that is, the move that will be useful to win the game. Fixed rules and a fixed framework are taken for granted: The environment is what it is, and the “laws” of the game are set in stone.

What about a changing environment? What about having a “game” in which the rules and framework are constantly redrawn or at least appear to be so? Even better, what about an evolving “game”?
For example, if Go were to transform into another game, TroGo, how would AlphaGo, or any AI device, perform?
Here we have at least three fundamental issues that AI must solve before it can pretend to the status of a real, intelligent device:
1) AI needs to be conscious that Go is now TroGo; that is, AI needs to be aware that the old world/game is gone and that it is now living/playing in the new one.
2) AI needs to figure out (fast) the new rules of TroGo in order to elaborate the optimal choices, and here we take for granted that the AI’s final objective is still to win. The assumption that the objective remains the same in Go and TroGo is a strong one: It is easy to imagine that in a new environment each player’s objective may also change.
3) AI needs to produce a theory, or at least a theoretical narrative, that explains and justifies the choices made in 2). Ultimately, the AI needs to explain why some choices are executed and others disregarded. It is important to note that a purely numerical approach, based on the best-fitting algorithm for the historical data sets available, is likely to appear poorly structured if a narrative is not developed alongside it. Indeed, at every stage the AI will need to convince those outside the specialist realm, who cannot follow AI’s mathematical discourse, why it is picking a given move.
Even without entering into a deep discussion of these three points, we realise how overcoming them represents the real AI challenge. Now, I want to stress that a full comprehension of these aspects goes beyond my current understanding; I am an economist, after all, and not an AI specialist!
Nevertheless, since I know the complexities of working as an economist in the AM world, I can outline some simple conditions an AI device should fulfil to deliver a great AM wealth allocation service.

4. AI and science: Why a pattern-seeking device is not good enough.
Let’s consider the problems listed above in order. The first one is amazingly difficult: Figuring out that Go is now TroGo requires the AI to be consciously aware of changes in the world in which it lives. Here, we wonder how an AI can determine that the “game” has changed. To answer, we finally need to grasp, at least roughly, what AI really means in the AM world: AI is “just” (and it is already a lot) an infinitely more skilful and quicker pattern-seeking device, though it can be much more if economic and financial systems are treated as parts of a dynamic system and mathematical simulation techniques are used accordingly (see sections 6 and 7). Given the huge amount of historical data sets at its disposal, AI will proceed like a human (but more quickly), trying to find patterns and correlations and to establish (statistically robust) links between series. At its essence, nothing here is new. This is the basic procedure used in quantitative (quant) finance. Finance algorithms are based on this procedure, and technical analysis charting derives from the same intuition. This is why computers oversee buy and sell trigger signals on trading desks today: If a correlation exists, it might be useful! In other words, if a pattern is out there, it is worth discovering and then using it to make money! James Simons earned a lot of money by bringing these ideas and methodologies into finance.
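The pattern-seeking loop described above can be sketched in a few lines. This is a purely illustrative toy (the series names, window length and synthetic data are my assumptions, not real market data): two series are tightly linked in the “Go” regime and unrelated in the “TroGo” regime, and a rolling correlation is the kind of pattern such a device hunts for.

```python
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rolling_correlation(xs, ys, window=24):
    """Correlation over a sliding window: the basic pattern-seeking loop."""
    return [pearson(xs[i:i + window], ys[i:i + window])
            for i in range(len(xs) - window + 1)]

# Synthetic example: the link holds in the first 60 observations ("Go")
# and breaks in the last 60 ("TroGo").
random.seed(0)
money_growth = [random.gauss(0, 1) for _ in range(120)]
inflation = ([m + random.gauss(0, 0.2) for m in money_growth[:60]] +
             [random.gauss(0, 1) for _ in range(60)])

corr = rolling_correlation(money_growth, inflation)
print(f"early-window correlation: {corr[0]:.2f}, late-window: {corr[-1]:.2f}")
```

The device dutifully reports that the correlation has collapsed, but nothing in the code can say why, or whether “inflation” is even still the right series to watch.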

In our case, however, AI faces a more complex task: In Go, a certain set of patterns was established and observed; now, in TroGo, are these patterns still relevant? In other words, those “ancient” correlations might simply cease to be relevant. Meanwhile, the new patterns characterizing TroGo are likely not fully visible yet; the available data do not fully reflect them (the release frequency of some data can be long). Moreover, in a trickier way, the relevance issue can concern the data series themselves, i.e. the definition of a time series may be unfitted to TroGo! Who assures us that some of the data used in Go to depict a certain phenomenon are still those needed to depict the same phenomenon in TroGo? Here, the entire procedure is under the spell of a huge threat, the well-known GIGO issue: garbage in, garbage out.
For instance, before the 2007 crisis, the US unemployment rate was, from both a statistical and an economic standpoint, a trusted indicator of potential upward pressure on salaries. Nowadays, however, with labour market participation having plummeted and with an unbelievable rate of technical progress, are we still sure that a low unemployment rate is a valid indicator of future pressure on salaries, and hence on inflation?
Similarly with money growth and its relationship to inflation: The correlation seems to be broken! But are we sure? Are we sure we are referring to the right money supply aggregate? What if, as in the case of the US M3 aggregate, a series is no longer available? And more broadly, which series, among all of them, is the most relevant one in TroGo? In a prediction endeavour, if we are using the wrong one, a garbage-in-garbage-out type of error is likely to occur.
In general terms, can we still use these historical relationships?
The data’s relevance is under scrutiny and in doubt!
In other words, the maths on data is fine, and, as I said, it can be far more complex than my simplified description of existing AI suggests (system dynamics simulations and agent-based modelling are vivid examples of this complexity). Still, all these methods share an infinite trust in the collected raw data. But raw data are not entities without a life; they transform as well, despite constantly being considered under the same definition. Please note that this kind of consideration would never occur in a pure application of AI within financial markets: The price of a stock, or of any other market-determined instrument, is the ultimate example of a fixed definition. The relevance issue does not apply in these cases; AI and algorithms work well, no doubt, with “dead” definitions. By the way, this explains why Warren Buffett is still out of reach for an AI device: A value investing strategy is not a data-driven decision process (the data are missing); it is based rather on visions about future market conditions shared with a firm’s senior management.

At this stage, AI sponsors may tell me: Do not worry, Big Data is here, and these are the new data that will tell us where we stand. But Big Data is still in its infancy. All serious tech firms in this domain hire plenty of anthropologists and psychologists to help the maths folks better master the data, in particular those related to human behaviour and choices, but this will require time. Once again, pretending that this will be done soon enough to solve the AI consciousness issue and the related relevance question is just asking for too much too quickly.
Ultimately, I do not see how a device based on historical data, a mathematical machine, which is what AI is, will be able to alert me that we are in a new economic era: an era characterized by new rules, where the present does not follow previously established “patterns” and which requires new ways of reading raw data. Hence the relevance problem: Data “definitions” are living parameters themselves. Let’s take a closer look.

5. Can AI solve these issues? AI is likely to become a black box which requires constant HI presence to be optimal.
Now, an easy solution would be to imagine an AI device in which a set of thresholds has been set: If those are triggered, the AI will decide it has entered a new world.
However, this misses our point:
How can we ensure that those thresholds are set on relevant factors, i.e. on the changes that actually matter in the new world?
Who chooses the right patterns, and which deviation (and how large?) will alert us that the world has changed?
This looks like an endless spiral if a theoretical discourse is not simultaneously built. Only a human can justify a choice, not by referring to historical data, but by providing a new theory and a narrative coherent with it. This is creativity! In other words, to solve the problem and to highlight the right thresholds, the AI should create a theory. This is the only way out of what resembles a chicken-and-egg paradox! However, a theory needs creativity.
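To make the circularity concrete, here is a toy version of such a threshold device (my own illustrative sketch, not a real AM system): every ingredient, which series to watch, the window length, the trigger level, is a human assumption smuggled in before the machine “decides” anything.

```python
from statistics import mean, stdev

def regime_changed(series, window=12, z_threshold=3.0):
    """Flag a 'new world' when the latest observation deviates from the
    recent window mean by more than z_threshold standard deviations.
    Note: window and z_threshold are human choices; the device cannot
    justify them, it can only apply them."""
    history, latest = series[-window - 1:-1], series[-1]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False
    return abs(latest - mu) / sigma > z_threshold

# A calm series followed by a jump: the detector fires, but only because
# we, the humans, decided that "more than 3 sigma" means "TroGo".
calm = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.05, 0.95]
print(regime_changed(calm + [1.02]))  # small move: no alarm (False)
print(regime_changed(calm + [5.0]))   # large move: alarm (True)
```

The detector answers “has my threshold been crossed?”, never “was this the right threshold, on the right series, for the new world?”. That second question is exactly the one only a theory can answer.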
Here is an open question: Is there any real creativity with AI?

I foresee an easy reply: You can imagine AI devices endowed with deep learning mechanisms. However, as far as I know, deep learning entails a machine extracting information, often in an unsupervised manner, to teach and transform itself. Those new lines of code might end up being unintelligible to the human programmer who first created the device!
If this is the case, how are we supposed to extract a coherent and intelligible theory which can be presented and used to explain AI’s choices?
With deep learning, we have further proof of the main philosophical principle underlying AI and (partially at least) algorithm usage: I do not know why it works, but it works. It solves the issue, so I accept it!
By doing so, you are implicitly disparaging HI: You are preferring a device solution, which, no doubt, has worked in the past, to human ideas, which may be slower to elaborate but which can encapsulate both past insights and real creativity!
In other words, you are charging humans with the failures while requiring them to please and follow, a priori, the machine.
In my view, this is the equivalent of accepting black-box solutions for handling and solving problems. This science-without-theory approach does not really explain a phenomenon. The method is interested in the data, and the data are the phenomenon’s outcomes. The real added value of this activity is to find patterns in these data; thus, once again, a human is viewed as a pattern seeker, and AI just reinforces this view. However, this is a huge statement. I prefer to see humans as “root seekers”: When a phenomenon occurs, independently of the patterns in the associated data, humans want to understand why the phenomenon is there. Therefore, applied mathematical research, the method used in AI, remains “dry” and unpalatable for someone who wants to explain why an event occurs and not “just” how it happens!
There is an easy solution available to us. Let’s treat AI as it should always be treated: as help, and as an amazing source of advice, but never as the final decider when a given phenomenon is analysed.
Our maths, our data and our computers gave these results, but those results should never fully replace the human decision process, which is not only data driven, because it is based on critical thinking and theoretical structuring. Nota bene: This holistic approach has a cost, always the same one: It needs time. We will discuss these aspects in sections 6 and 7, but first some remarks concerning a hypothetical pure AI solution in the AM world.

5.1. AI vs HI: Some general philosophical considerations on the AM industry and its future, or why pure AI would likely be a failure.
For an economist like me, what matters is grasping what is in front of us, in this case the likely outcome of exposing AM to pure AI. As such, the basic question we should consider is: Would a robo-advisor alone in front of a client (take IPsoft’s Amelia) guarantee a better service, with a high quality standard and better results, when it comes to allocation advice?
Sure, Amelia’s AI software will have the best skills on the planet when considering the past, and the future incorporated via thousands of numeric forecasts, but she is not, and will not be, able to be present here and now, because Amelia lacks the consciousness of being present, as explained above.
The client’s time will never be Amelia’s time, even if she is endowed with empathy and plenty of other funny gadgets to mimic a “self”. She will never manage to feel and share a client’s existential struggle of living in the now, and this despite being allowed to use all the client’s Big Data, data whose significance may always be challenged.

Beyond these considerations, the key to succeeding with a pure AI offering in the AM space is mainly the standardisation (commoditisation) of financial services. Standardisation is nothing new in the AM industry, or in any other industry: It is a basic element that cuts costs and increases margins. From this perspective, AI will be an enhancement which, by definition, tends to reduce the heterogeneity of the AM offering. This is the well-known other side of the standardisation coin: homogenisation of supply and, in the case of finance, a strong tendency to select few investment vehicles, meaning a high likelihood of ending up with some bubble phenomena.
Nonetheless, standardisation will allow two main targets to be achieved: speed and efficiency. At the end of the day, these are the main reasons for moving to a pure AI offer: You can quickly deliver standardised wealth allocation solutions to plenty of clients at once.
But, as we have already explained, fast allocation advice might become an issue. Namely, consciousness of the available data and the use of real creativity to describe the current economic status require time to be encapsulated in satisfactory, well-structured wealth allocation advice.
Nothing new under the sun: AI can count on all the power of cloud computing to deliver its fast service, but this has a cost: less quality (no real creativity) and less transparency (no consciousness, and so no solution to the data relevance problem). Sadly, an excuse will always be available in the case of AI failure: Either the raw data were wrongly collected, so the machine is correct and the failure is due to the statistical apparatus, or some data are still missing, and so more data must be collected.
The outcome will always be the same: a few humans willing to think (may Warren Buffett’s spirit last forever with us) alongside all the others spending their time collecting (soft) data, or even involuntarily producing them, and traditional (hard) data, hoping these will teach us what to do, thanks to a mathematical machine pompously named AI.
All in all, AM service quality will continue to depend on the only real variable that matters in the AM industry: the time an AM provider is willing to spend before delivering wealth allocation advice. Here, I really want to be clear: I am not against those actors in the AM world who want to move as soon as possible to a full AI solution. This is their choice, and we are in a free economy; consumers will decide after all. My goal here is only to stress the overall consequences and possible risks.
In any case, given these considerations, smart clients and/or clients passionate about finance will ignore an AI-based advisory offer, or use it as a simple benchmark at best: They will create their own offer (trading websites will provide them more than enough) and they will look for real creativity and freedom. In our view, this phenomenon is already ongoing, and with AI involved it will likely amplify. Others will move more and more wealth into venture capital, shadow banking and other heterodox solutions: There, those involved will have the feeling of financing projects with real value, and not trends, patterns and all sorts of statistical considerations, which they may not consider to be part of real finance!
What about the institutional clients of AM in a pure AI world? They are likely to be squeezed between a homogeneous offer and an increasingly standardised regulatory framework, which may soon be based on AI-driven metrics. The likelihood of institutional portfolios being more exposed to systematic risks will increase.

However, are we sure the AM world is really going in this extreme direction, i.e. towards a generalized pure AI solution?
We do not believe so. Why? Because embracing a pure AI solution would imply accepting a world of financial advice defined without theories, to rephrase Chris Anderson.
Here, data are mastered but not understood or properly challenged (as we said above, data definitions are taken for granted; raw data are the real masters). Theoretical investigation, on the contrary, often challenges either the relevance of the data set used to describe a phenomenon, or the data as they have been defined!
More dramatically, having all those AI numerical results on board may not be enough if the world we are living in is, to use a mathematical expression, a “dynamical system” (details in section 7 below) characterized by radical uncertainty. Indeed, in such a system, mathematical methods on data without a theoretical discourse, therefore without awareness, creativity and intuition, are doomed to fail!
In this world, the presence of several theoretical discourses in addition to, and at times substituting for, the results provided by data science is imperative. Let’s present these arguments in the last two sections of this essay.

6. What is difficult for AI is normal for HI: So why bother!
Funnily enough, questioning whether we are still in a Go game or whether we have moved to TroGo is just the normal daily work of an economist in the AM industry. However, the economist’s starting point is different from the AI’s. The first and most basic question is: Am I sure that what I know in terms of ideas, from my learning and experience, is still valid for interpreting the current economy, or should I redraw my knowledge and consider new ideas? This starting point is broad, and it does not contain a direct reference to the data sets in our hands. Nonetheless, it is by observing these data that we decide to activate our brains.
Here, an economist’s brain is open to the new environment, and his main query will be: Are the ideas in my repertoire useful, or should I add some new ones?
Please note that, and it is not a rare occurrence, these new ideas may not always fit the present data series (the data set could be extremely short). We are nonetheless ready to defend these ideas if the theoretical narrative stands, if it is convincing and logically correct.
An economist will use a new concept because it is required to handle (mentally, at least at first) a new puzzling query (or several). The concept will then generate, in turn, a (series of) correlations to be checked against the data. These data, call them the world or nature, are now under stress and scrutiny. We are asking the data for proof of whether our ideas are correct, and this is where statistics and maths are supposed to come to the centre of the scene! This has been the standard modern scientific procedure from Galileo, Descartes and Kant to Popper and Kuhn: Once again, the starting point of an act of knowledge is to understand a phenomenon, and an economist wants to see if he can understand why and how an economy is changing! The starting point should not only be the (raw) data value or its patterns.
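The “ask the data for proof” step has a standard statistical form. As a hedged sketch (synthetic numbers, not a real macro series), a theorized link can be confronted with observations via the usual t-statistic for a correlation coefficient, t = r·sqrt(n−2)/sqrt(1−r²):

```python
import math
import random

def corr_t_stat(xs, ys):
    """Pearson correlation and its t-statistic, the classic way of asking
    the data whether a theorized link is actually there."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    r = cov / math.sqrt(var_x * var_y)
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    return r, t

# The theory comes first ("x should drive y"); only then do we confront it
# with observations, here a synthetic sample in which the link really exists.
random.seed(1)
x = [random.gauss(0, 1) for _ in range(50)]
y = [0.6 * v + random.gauss(0, 1) for v in x]
r, t = corr_t_stat(x, y)
print(f"r = {r:.2f}, t = {t:.1f}")  # a |t| well above 2 counts as evidence
```

The order of operations is the point: the concept generates the correlation to be tested, not the other way round.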
Afterwards, the economist will focus on the sets of data which are supposed to help confirm the new theory.
By so doing, the economist solves the issue of knowing whether we are in Go or in TroGo (the consciousness and relevance issues pointed out above are solved) by building a theory and focusing on certain correlations. Clearly, all these steps have a cost: time. As already said, without time there is no really great service.
The economist points towards what is new in the new framework and what really matters, which in this case is understanding the data and justifying them! The hard part of the HI approach can be described in a single phrase: creativity and constant questioning.
The difference between AI and HI is now clear: Humans do not go straight to the data, trying to please them. Man first evaluates what he has available in terms of concepts and theoretical background; once this is set properly, he will attack the world, or rather the data. The explanatory, theoretical discourse is basically the set of arguments and ideas which will eventually be presented to the client: The pedagogical discourse we must deliver in front of clients is always in our minds. Data will drive many decisions, but simultaneously, some decisions will be based more on our critical thinking and our consciousness of being in a new economic environment, one which calls for new, out-of-the-box actions.
Our actions always have goals, but the goal is not always to find a “regularity” within data, which at best explains the past and is used to extrapolate into the future: We focus on the elements which, we believe, represent the key aspects defining the new environment, and we ignore others because we believe they are irrelevant in the new framework. The future is then built from those few elements we believe are key: HI has an unbelievable and unmatched flexibility here; if needed, we work in an n-dimensional causality space, but when needed, HI effortlessly shrinks the space to one of fewer dimensions.
Why?
Because HI finds the right number of dimensions to treat the issue “easily”: This is what we broadly call an intuition, and it can appear only through being conscious of the need for a reduction in complexity.
Once the intuition is there, a discourse can be elaborated and a vision shared, all without, a priori, constantly looking at data. Our democratic political world works like that, but any AM advisory team basically follows the same principle: A client wants a discourse, a frame, not only defined in terms of data but also full of passion. The future will always bring unexpected events, but it is better to prepare to receive them by taking our passion on board and not only our rational calculus.
HI simultaneously solves the three points presented above: HI is conscious that we are in TroGo, and HI learns the new rules associated with the new framework by creating a valid theoretical picture of it, which can be easily shared! Please note: The theoretical picture might be wrong; still, it is based on the awareness that something radical needs to be done to understand the new framework. The scientific debate (and the analysis of the data) will do the rest regarding the validity of the theoretical construction.
Indeed, economists talk, communicate and share their views. Being in TroGo is, after all, one possibility among others: What matters here is knowing whether the view is shared and agreed upon with others. HI is always a form of social intelligence (SI): This is, by the way, the famous strategic interaction we talked about in our previous essays, and which will be developed in the next one.
Funnily, as a side note, we can even imagine several AIs communicating as humans do:
Will this solve anything? How can this be organized when each AI machine is privately owned? What will be the implications when deep learning is present in each AI?
I just believe this is an entirely new Pandora’s box: I will not open this discussion now; maybe in the future. In any case, I do not see AI solving its awareness issue through communication with its peers.

7. AI vs HI: Why the human touch is definitely needed: We are living in a dynamical system.
As we saw in the previous section, HI has an attribute that AI does not: real creativity. Ideas can be, and are, created independently of data sets, and some ideas are used if and only if the environment requires them. Now, my long thesis hinges on a simple remark: If Go always remains Go, AI will do the job despite not being fully HI compatible.
Indeed, if Go is Go, this implies that i) data are defined in an indisputable way and ii) we master those data and use them, but we do not care to ask ourselves why they are present in the first place. We are in Chris Anderson’s dream world, the end-of-any-theory world, in which the scientific method is considered obsolete. In Chris’s dream, we live in a world shaped by data patterns! Everything will then be rationalized, pictured and forecast as part of a gigantic, alienating framework, which can only change by following patterns or complex data structures: Is this not the best representation of Weber’s Iron Cage?

Fortunately, Anderson’s dream (to us, a nightmare) is likely to remain just a dream (let’s hope forever).
In our view, there are plenty of signals showing that our economy is likely to be a “dynamical” system, to use a term from mathematics and physics, in which data are far from being defined in a fixed, stable way. Creativity and the openness to question and theorise via our old scientific approach still make a lot of sense. That said, a dynamical system’s main characteristic is radical uncertainty. Let me quote the excellent FT columnist Wolfgang Münchau here:
“The financial crisis turned what outwardly seemed a stable political and financial [economic] environment into what mathematicians and physicists would call a “dynamical” system. The main characteristic of such systems is radical uncertainty. Such systems are not necessarily chaotic-though some may be- but they are certainly unpredictable. You cannot model them with a few equations…Radical uncertainty is a massive challenge, because you can never be sure of much. In particular, you can no longer be certain that you can extrapolate the trends of the past into the future”.
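A minimal numerical illustration (my own toy example, not from Münchau) of why such systems resist extrapolation: in the logistic map, a textbook dynamical system, two trajectories starting a hair apart diverge completely within a few dozen steps, so the shared past says nothing about the divergent future.

```python
# Sensitive dependence on initial conditions in a chaotic dynamical system:
# the logistic map x_{t+1} = r * x_t * (1 - x_t) with r = 4.
def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # initial condition differs by 1e-6

# The two paths share an early history, then drift apart until the gap is
# of order one: extrapolating the past trend tells us nothing useful.
gaps = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gaps[0]:.1e}, largest later gap: {max(gaps):.2f}")
```

Note that the map itself is fully deterministic and needs one line of maths; the unpredictability comes from the dynamics, not from missing data, which is exactly the point about radical uncertainty.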
Why are we living in this sort of system? The list is long, so I will give only two main examples, with the rest to come in following essays:

1. If we examine the market economy in developed countries, the price system mechanism from the top (finance) to the bottom (consumption goods and services) is in full distortion.
At the top, we have phenomena like zero or negative interest rates due to massive monetary expansion policies, which have drowned financial markets in liquidity and distorted numerous prices. In addition, some central banks have invested billions in these markets, swelling their balance sheets and adding further distortions. Furthermore, these phenomena are likely to be amplified by passive investment vehicles, which push series of prices up and reduce the liquidity of certain underlying instruments. No surprise, then, that some prices appear unrelated to the business plans of the underlying firms.
At the bottom, in a huge paradigm shift, globalisation and technological progress are constantly pushing prices down, offering more for less and making several prices unintelligible in economic terms.
Why do WhatsApp, Google or Facebook services have no price for the consumer? What does this mean? Does it mean they have no value? Do we have a valid theory for this area? How do we economically interpret the raw data coming from these activities? What about service providers such as Uber, Airbnb and the whole constellation defined by the so-called sharing economy? Are we sure there is really no evil, to rephrase Google’s motto, in these new great services, which are reshaping the entire economy?

2. The ways in which we work and interact with each other reflect that everything is changing in those spaces as well: Notions like (un)employment, the job market, career transitions and career paths are constantly challenged in ways not captured in statistics. Furthermore, the stream of revenue associated with our work is also under threat, with huge repercussions in terms of political instability and social frustration: Where do we go from here? What is the added value, or marginal productivity (in value terms), of a person working in an e-service firm when all firms become e-service providers? What are the links with that person’s salary? Is the marginal productivity of labour still a good measure of a worker’s added value and of a worker’s salary?

I will continue to elaborate this list in future essays. Nonetheless, these two aspects are enough to show the dramatic changes we are experiencing. The economy is changing so drastically that we must embrace the dynamical system’s vision of it. This effectively implies more complex and elaborate models on the one hand, and welcomes computer simulation, such as the AI approach to the economy, on the other. However, this approach still leaves a huge and important role for human judgment and analysis, because only humans are endowed with the main sources of light able to pierce the dark: creativity and constant critical questioning.

A methodological addendum to the CIO essay: A discussion of the true vs sure approach and the three AI paradoxes in AM.

1. Introduction.
Our previous essay was marked by some ambiguity. To highlight the importance of CIO creativity, we used (apparently) ambiguous sentences like this one: “Human beings are truly creative and this is because they look for true scenarios and not merely correct (sure) ones”. Now, to move forward, we need to make the meaning of this sentence fully transparent: This will allow us to properly introduce the discussion of strategic interactions among CIOs. Our starting point will be to enrich our allegorical crisis narrative by discussing the assessment of the houses after the fire (the origin of the fire will be discussed in our last essay). These elements will allow us to reach a twofold conclusion:
a) There are three main paradoxes associated with a pure AI (algorithm-based) asset allocation process; we will clearly identify and discuss them; and b) we will, once again, clearly stress the importance of the human factor when asset allocation choices need to be made.

2. Why sure scenarios are not enough, or why the human touch is so important.
In our previous essay, a financial crisis was described as a fire declared on the ground floor of two major houses. Our narrative was then mostly concerned with the swift intervention of the fire department and its consequences. Now, it is time to enrich our story with some details about the organisation of the building complex and its ownership. It is important to know that all the families living in the houses are renting them: The costs associated with the fire will be met first by the owner, who will then share this burden with the families. Clearly, the owner wants to assess the real status of the houses after the fire, and thus ensure that the houses are solid and resilient. He can gather advice in two different ways:
i) He can call a construction engineer to inspect the houses, or ii) he can commit to spending more by allowing, after the engineer’s work, an analysis to be undertaken by a private investigator (a character like Columbo).

At this point, our metaphor becomes crystal clear: if the owner chooses i), he is taking for granted that the engineer's specialist knowledge, i.e. his proven expertise in evaluating and using data, is enough to guarantee a proper assessment of the houses. The engineer will assess each house by considering a model of the house. He will then collect all sorts of data, such as the materials used to build each house, the dimensions, the quality of the terrain, and so on and so forth. With this data, and through calculation, he will then be able to assess the status of each house, as well as to provide a forecast for each house once some of the works have been undertaken. The engineer's results are sure ones: given the data and the model, the results are logically, rationally correct (sure). By contrast, with ii) the owner takes another option: he leaves the door open to a more creative analysis in which the technician's work is acknowledged but further remarks and nuances are incorporated, because it is humans, not machines, who live in the building complex. In this case, the Columbo character will spend time talking to the members of the families (and to some firefighters); he will count the number of paintings hung on the walls before the fire, evaluate the shape of the new furniture and the reaction of each family member to the new environment, and so on. Here, a holistic approach will provide a big-picture kind of result: the truth is more likely to be found with this sort of global analysis.

3. The end of the story, and why a sure approach is not always the best one.
The above narrative clearly encapsulates at least three main paradoxes that arise when only AI is used and strictly followed:

A. The stability paradox: if we accept the AI approach, based on modelling and data, we are implicitly accepting the stability of the framework. In other words, to use our narrative, once the house is reinforced, the laws of material physics must, and will, apply. From an AI perspective, the algorithm is correct if, and only if, the economic laws remain the same. This is, by the way, a huge contradiction, because the AI evangelists are simultaneously telling us that everything is changing and that we need to constantly adapt our scenarios. However, key data needed for algorithmic scenarios are only available at low frequency (monthly, but more often quarterly). Here a fixed framework is simply a paramount constraint, imposed by the data themselves, which cannot be defeated.
In any case, underlying any model (algorithm) and its sure (correct) results is a form of stability (this also includes the stability of the random factors used to create "realistic" predictions). Now, this stability does not prevent the possibility of a major collapse of the system (another fire). But although systemic risk exists and is considered by the AI, both systemic risk and paradigm shifts (e.g. nowadays no clear link exists anymore between money creation and inflation) are minimized, because they cannot quickly enter the set of formulas defining an algorithm.

Besides, the stability paradox is likely to generate short-term strategies. How? Simply because some principles are taken for granted, like the idea of market efficiency and, closely related, the impossibility of beating the market. Here, the door is open to massive investment in passive vehicles, i.e. ETF index trackers. But this is pure short-term finance, because we are buying a basket of instruments instead of carefully picking them one by one: by choosing a basket, we are giving a premium to companies that do not deserve it. Financial markets, like any market, are systems whose processes generate (undistorted) prices, which are guides for efficiently allocating wealth: if prices are distorted by this pooling effect, what will this imply in the medium to long run? What about the role of an unbiased price-signalling system (i.e. the heart of a free-market economy)? What are the incentives for the firms' management? What about the good firms versus the bad ones? What about the firms' governance if a section of the stockholders does not essentially care about the firms' actual business?
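The pooling effect just described can be made concrete with a toy calculation (a minimal sketch; the firm names, market caps and the dollar-for-dollar price-impact rule are all illustrative assumptions, not market data): a cap-weighted passive inflow buys every constituent in proportion to its current weight, so a weak firm receives buying pressure purely because it sits in the basket.

```python
# Toy illustration of the pooling effect of passive (index) flows.
# All figures are illustrative assumptions, not market data.
market_caps = {"strong_firm": 80.0, "weak_firm": 20.0}  # assumed caps, in billions
passive_inflow = 10.0                                   # assumed inflow into an index tracker

total = sum(market_caps.values())
# Simplistic impact assumption: a dollar of buying raises a firm's cap by a
# dollar (ignores price elasticity; enough to show the direction of the effect).
after = {name: cap + passive_inflow * cap / total for name, cap in market_caps.items()}

for name, cap in after.items():
    print(f"{name}: {market_caps[name]:.1f} -> {cap:.1f}")
# Both firms' caps rise by the same 10%: the flow carries no information
# about which firm deserves the capital.
```

The point of the sketch is only directional: because the flow is allocated by current weight rather than by judgement, relative prices, and hence the market's signal about firm quality, are left untouched no matter how the fundamentals differ.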

On the other hand, with the Columbo investigator, the modelling results will not be the whole result: the character of each family, their relationship with the neighborhood and, even more importantly, the fear of a future accident might fully change the way in which the (rebuilt) structure is used and will evolve over time.

Similarly, the true future destiny of an economy does not depend on numerical factors alone. This is a far more complex business: only with constant pedagogic engagement from CIOs will asset management customers end up capturing this complexity; hence the importance of a robust narrative.

B. The data-mining paradox: it could always be maintained that our first point is valid only because, with AI, the engineer does not have enough data. If we want a better, more stringent and sure prediction, then we should simply collect even more data (among it the famous "big data").
But here we enter straight into the data-mining spiral: we need to collect more, so we will add data-collection devices everywhere (in our story, smoke detectors inside and outside the house). This will push agents to change their methods of interaction and behavior, which will in turn require additional data collection to get a better view of their choices and behaviors, and so on and so forth, in an endless cycle. By the way, no one can exclude that the engineer who suggests collecting more data may have colluded with the data providers who want to install more data-generating devices. Stepping out of the metaphor: in our world, it might be time to critically scrutinize the influence of the GAFAM members in the current Big Data (AI) mania.
The conclusion is simple: no one takes the time to think anymore, because we all just spend our time collecting soft data and traditional (hard) data, hoping they will teach us what to do. Two curious consequences of this paradox:
a) we are just piling up data hoping that it will tell us something, thanks to the algorithms we apply to it. But by refusing to think, we are stuck in the “algorithm box”, and any algorithm, even with a built-in self-learning device, is a box!
b) funnily enough, we do not evaluate the numbers anymore, and we are losing our critical view of them. A great example is the analysis of the US job market. This spring almost all commentators said that the US economy is at near full employment; the data tell us this without ambiguity!
But is this true? What sort of full employment are we actually talking about? Few are ready to recognize that, while the unemployment rate is indeed low, so is the participation rate in the job market: the rate is 4.5 percentage points lower than at the turn of the century (before the Internet bubble), which amounts to something like 9 million people who are no longer participating in the job market! I guess that is more than enough, politically, to lose a lot of swing states!
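The arithmetic behind the 9 million figure can be checked with a quick back-of-envelope calculation (the adult-population base used here is an assumed round number consistent with the text's figure, not an official statistic):

```python
# Back-of-envelope check of the participation-rate claim in the text.
# The population base is an illustrative assumption, not an official statistic.
adult_population = 200_000_000   # assumed US civilian adult population (rough order of magnitude)
participation_drop = 0.045       # 4.5 percentage points, as stated in the text

missing_workers = adult_population * participation_drop
print(f"People no longer in the labor force: {missing_workers / 1e6:.0f} million")
# -> People no longer in the labor force: 9 million
```

The exercise illustrates the essay's broader point: the headline unemployment rate and the participation rate are two different numbers, and reading only the first one hides millions of people from the picture.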

C. The human paradox: it is important to notice that Columbo is not denying the importance of the work undertaken by the engineer: he knows that it is valuable, and he has no problem admitting that it is our best method of understanding where we currently stand. The challenge lies more in the ability to forecast. Let's refer to the narrative one last time: the same powerful fire will generate different behaviors in different families. One family is likely to become very cautious, while another will react with indifference. What matters, once again, is that a priori Columbo's intuition is not reduced to a (data) box when he is elaborating his ideas.
That said, we can always await the famous AI singularity and its intelligence (NB: how this concept is defined is pure metaphysics). However, the singularity is not here yet, and whether it would, in any case, represent real progress for mankind is another open metaphysical debate.

In conclusion, we really do not see any valid reason to be so set against human intervention in the asset management domain, if we exclude one main element (and no robo-advisor evangelist will tell us this): the cost factor of having human ideas as a plus.
The attitude vis-à-vis AI analysis and forecasting will clearly be a differentiating factor between CIOs, and will partially explain their strategic choices. We are now ready to enter this last part of the discussion.

Where do we stand with the ’07-’08 crisis, and why is the CIO job so important?

1. Introduction: storytelling and its insights.

One of the puzzling questions in today's asset management industry is why the role of Chief Investment Officer (CIO) is still viewed as one of the most important. Indeed, nowadays, artificial intelligence evangelists (believers who trust "only" mathematical algorithms) claim that this job has no future: well-calibrated software (in terms of past data analysis) is already allocating assets under management in a more rational and cost-efficient way than humans ever could, since we are more likely to draw misleading interpretations from the analysis of big data sets.

In this first of two essays, we will shed light on some factors that answer this question. The main theme will be to justify the CIO role by referring to a neglected feature: the CIO's ability to elaborate and write down allegorical stories. In other words, the neglected aspect of the CIO job resides in their storytelling abilities, the skill of finding the right words to present complex phenomena by creating an allegorical narrative that can easily be shared with clients inside and outside his/her financial institution. This story is key, as it is needed to convince the public about the quality of the allocation scenario offered at a given time and about the timing chosen to modify this allocation. Besides, as we will demonstrate, this ability enhances a CIO's major quality: creativity.
It is because the CIO is a fallible, error-prone creature that they will invent scenarios, unconventional rules and paths: human creativity is what matters, and AI is far from having a gram of it.

2. An example of a CIO narrative based on the '07-'08 crisis, or how to point out some central facts.

As a starting point, let’s take the role of the CIO who must create a nice and powerful narrative to describe today’s global economic situation.
Now, it has been exactly ten years since the beginning of the worst financial and economic crisis since the Great Depression: what is the best metaphor to describe the current situation?
Without being very original, we can build a scenario by referring to the '07-'08 crisis as a fire breaking out in a modern residential area, in which different types of houses (i.e. modest and complex ones) have been built closely enough for flames to propagate from one house to the next.
In this residential complex, the fire started simultaneously on the ground floor of two major houses. The fire department came swiftly, which did not prevent some major pieces of furniture from being fully destroyed in both houses, but it saved both houses' structures and several items. A few minutes later, a third, very complex house was also hit by the fire: we disregard this case because the fire department is still working on it and no clear evaluation of the damage is yet available.
The intervention of the fire department entails two major shortcomings (a third one is presented in my next essay):
A. Given the degree of urgency and the fear of a propagating fire, the amount of water used was excessive. Moreover, in the heat of the action, no one evaluated how much water was absorbed by the walls, how much ran into the basement and whether some was finally absorbed by the ground: these effects are likely to weaken the solidity of the buildings, thereby changing their structures.
B. The fire department did not have enough expertise: they saved some items and let others burn without a clear idea of each item's value and, even more inconveniently, without knowing all the consequences of letting some burn. Moreover, despite the huge effort, we are still unsure whether all the sources of the fire are extinguished.

3. From a narrative to its analysis: why human creativity matters so much

As stated, our allegory is not very original, but it allows us to highlight two main sets of the crisis's consequences in simple terms. Let's now use financial/economic terminology to obtain an equivalent translation of these two sets:
To fight the crisis (point A), the monetary authority injected too much liquidity (i.e. the famous QE experiment) to avoid a generalized credit crunch: the goal of this liquidity creation was to calm down the financial/banking system. Here, the point is to know when the excess liquidity and the associated price distortion will eventually be digested. Nevertheless, a major unanswered question remains, even ten years later: at what speed does the monetary authority need to retreat?
In theory, i.e. under rational expectations and general market equilibrium, we all know that any liquidity excess gets corrected, in the long run, through an inflationary process. But even this long-term wisdom is challenged nowadays because, on the one hand, globalization and technological progress are generating strong deflationary pressure and, on the other, the banking industry's rules have changed due to the crisis (e.g. new capital requirements, the Basel agreements), so that large-scale monetary creation has been prevented.
In this fundamentally new environment, how and when will price increases be registered, i.e. at what speed are the houses' walls supposed to dry? Is inflation likely to come soon? And without the inflationary guide, how is a central bank supposed to steer itself?
Now, what really matters is not to own a set of definitive answers to all those questions, but instead to constantly add new puzzling queries, which also implies assessing those new questions so as to review and rewrite future scenarios.
The CIO's mind, not algorithms, is what matters: humans have a unique ability to re-analyze and re-interpret a phenomenon despite having no new data. It is because we consider our conclusions to be true conclusions that we constantly re-evaluate them: an algorithm will provide correct (logical, rational) conclusions given the data, but these cannot be discussed in terms of true or false!

Simply by referring to his allegorical story and his holistic and critical view of the economy, new questions will be created and new future challenges defined, justifying both the current asset allocation choices and how to modify them. The key factor is human creativity: only a human being, a being prone to mistakes, is always ready to rethink and rewrite his narrative of a financial and economic phenomenon.

The intervention (point B) has dramatically reshaped financial markets, without anyone really knowing whether this will ensure or prevent some future problems, or whether we are now following a path that will guarantee a better allocation of our savings. For instance, the crisis had serious consequences in terms of public debt (i.e. some needed furniture was replaced to ensure a functional ground floor):
How will this fact influence the future allocation of private savings?
Besides, while some investment vehicles have disappeared, others now dictate the financial tempo in the asset management space, e.g. passive investment vehicles. But what about a critical analysis of these new instruments? Are they as transparent, cheap and liquid as claimed? What if our judgement of ETFs is biased by roughly eight years of bull markets in the USA? What will be the long-term consequences of a generalization of these vehicles? Are we out of trouble? Are we sure that no bubbles are currently running in the financial markets?

Once again, to consider and evaluate all these questions, we need a more holistic and critical approach, which only the human brain can offer.

4. Conclusion and a few steps toward our future analysis: strategic considerations and investment timing.

As we saw above, the main objective of the CIO is to explain why a given asset allocation is chosen at a given time. The narrative is there to prepare and justify further choices. Namely, only time provides the answers to all open questions, so what matters is to implement choices over time that maximise the expected gains once these answers have materialized. As we have already pointed out, these choices are likely to be based on a very holistic and critical view of the current situation.
Still, and this will represent the essence of my second essay, those moves are also dictated by strategic considerations: a CIO is not isolated.
He knows that other CIOs share a close financial and economic outlook, i.e. the data used and the economic and financial notions are common knowledge among them: therefore, the timing of a reallocation is more likely to be set by strategic considerations, a variant of the Keynesian beauty contest type of game. But then, once again, a robust and well-structured allegorical story may improve the chances of being among the winners, because the analysis is likely to spread widely.
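The beauty-contest logic mentioned above can be sketched with a toy simulation (a minimal illustration under assumed parameters, not a market model): each player wins by guessing closest to two-thirds of the average guess, so success depends on anticipating others' beliefs rather than on any fundamental value. The level-k players below model increasing depths of "thinking about what the others think".

```python
# Toy Keynesian beauty contest: the winner is the guess closest to
# 2/3 of the average guess, i.e. success comes from anticipating the
# crowd, not from fundamentals. All parameters are assumptions.

def beauty_contest(guesses, factor=2/3):
    """Return the target value and the index of the winning guess."""
    target = factor * sum(guesses) / len(guesses)
    winner = min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))
    return target, winner

def level_k_guess(level, anchor=50.0, factor=2/3):
    """Level-0 guesses the anchor; each higher level best-responds to
    the level below by multiplying by 2/3."""
    return anchor * factor ** level

guesses = [level_k_guess(k) for k in range(5)]  # players of reasoning depth 0..4
target, winner = beauty_contest(guesses)
print(f"target = {target:.2f}, winner reasons at level {winner}")
# -> target = 17.37, winner reasons at level 3
```

The sketch shows the essay's point in miniature: the winner is neither the naive player nor the deepest thinker, but the one whose reasoning depth best matches the crowd's, which is exactly why a widely shared narrative can shift where that crowd lands.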

More details will be given in the second essay. It is the presence of this strategic environment that will represent a further main obstacle to substituting CIOs with machines.

Decentralization as a solution to globalization: the beginning of an analysis.

One observation is unavoidable after the elections that punctuated late 2016 and early 2017: using the word "globalization" has become a perilous exercise for any political actor, so negative has the term's connotation become among voters. Worse, in debates often marked by populism and simplification, the word has become outright an insult to hurl at an adversary guilty of not wanting to impose restrictions on an international trade that has become the ultimate source of every problem.
Yet, historically, if there is one Western characteristic, it is, ontologically, that of being a carrier of globalization.
If the origin of the West is indeed to be found in the Greek cities, we also know, without lapsing into misplaced lyricism, that these cities were built around epics of insubordination, restlessness and, ultimately, dissatisfaction. Ever since, these factors marking Western cities have shaped the ideology of urban elites, pushing them to look elsewhere: either economically, hence the interest in trade and the exploration of new trade routes, or purely intellectually, which gave rise to the desire for philosophical-scientific inquiry.
As a consequence, the city gradually freed itself from its immediate surroundings to integrate into one (or several) network(s) structuring increasingly complex geographical spaces. Paradoxically, this same immediate environment is affected: on the one hand, agriculture must become ever more efficient in order to meet the city's needs (and those of its trade) and, on the other, part of the city's productive activities (industrial or not) is outsourced.

Nowadays, whole swathes of the West are fighting to slow globalization down, which is the sign of a radical change. We must therefore ask: why does one part of the West no longer want this process, but also, and this is less often discussed, why does another part want (or at least seem) to keep believing in it?

If we go by the electoral results, the (very large) Western cities are ready to accept globalization. Indeed, it is on the basis of the "Western city" that worldwide integration (the ultimate stage of Western globalization) took shape: just as the Western university has become the universal paradigm for promulgating and diffusing Western technical-scientific knowledge, the great "Western cities" have become the indispensable structures, in every corner of the globe, for supporting a production and distribution of goods and services conceived and carried out on a world scale. They cannot deny their nature, which is to have become the foundations of a house called globalization, and this despite the fact that the anchoring of this phenomenon is no longer fundamentally Western but tends to follow the size of markets, migrating toward the most populated places on the planet (Asia first and foremost).
One would thus be tempted to conclude that the great Western cities are content in this role, proud to be among the capitals of the world, and this despite an internal division: the presence of a rebellious youth (the latest examples being the vote for Mélenchon in France, or for Sanders in the United States) that would like a better distribution of the gains from globalization.

That said, what is more surprising is to find support for this process outside the walls of a great city, and consequently at the national level: why is a country like Germany more inclined to accept this process when so many others would so ardently like to turn back the clock?
To try to answer this question, many factors should be considered, and the available literature is already abundant. However, one element is often forgotten: in Germany, the spirit of the city has literally entered the workings of the productive system; it has spread, capillary-like, throughout the whole country.
More precisely, in this country each firm is highly autonomous and decision-making power is highly decentralized: this makes it possible, on the one hand, to be extremely flexible in wage negotiations and, on the other, to distribute profits according to medium- to long-term objectives thanks, among other things, to unions that sit on management boards.
Moreover, thanks to this productive autonomy, so far removed from any kind of central planning, Germany has managed to keep a very strong productive diversification:
the presence of a dense fabric of highly performing small and medium-sized enterprises makes it easier to absorb potential job losses in one sector through the existence of other activities.
In conclusion, the more or less marked rejection of globalization among a country's population is perhaps, after all, linked to the presence or absence of strong decentralization, of diffuse and weakly centralized power:
many Western countries would do well to reflect on this.