A methodological addendum to the CIO essay: a discussion of the true vs. sure approach and the three AI paradoxes in AM.

Our previous essay was marked by a certain ambiguity. To highlight the importance of the CIO’s creativity we used (apparently) ambiguous sentences like this one: “Human beings are truly creative, and this is because they look for true scenarios and not merely correct (sure) ones”. Now, to move forward, we need to make the meaning of this sentence fully transparent: this will allow us to properly introduce the discussion about the strategic interactions among CIOs. Our starting point will be to enrich our allegorical crisis narrative to discuss the assessment of the houses after the fire (the origin of the fire will be discussed in our last essay). These elements will allow us to reach a twofold conclusion:
a) There are three main paradoxes associated with a purely AI (algorithm) based asset allocation process; we will clearly identify and discuss them; and b) we will, once again, stress the importance of the human factor when asset allocation choices need to be made.

2. Why sure scenarios are not enough, or why the human touch is so important.
In our previous essay a financial crisis was described as a fire breaking out on the ground floor of two major houses. Our narrative was then mostly concerned with the swift intervention of the fire department and its consequences. Now it is time to enrich our story by adding some details about the organisation of the building complex and its ownership. It is important to know that all the families living in the houses are renting them: the costs associated with the fire will be met first by the owner, who will then share this burden with the families. Clearly, the owner wants to assess the real status of the houses after the fire, and so ensure that they are solid and resilient. He can gather advice in two different ways:
i) he can call a construction engineer to inspect the houses; or ii) he can commit to spending more by allowing, after the engineer’s work, an analysis to be undertaken by a private investigator (a character like Columbo).

At this point, our metaphor becomes crystal clear: if the owner chooses i), he is taking for granted that the engineer’s specialist knowledge, i.e. his proven expertise in evaluating and using data, is enough to guarantee a proper assessment of the houses. The engineer will assess each house by considering a model of the house. He will then collect all sorts of data, such as the materials used to build each house, the dimensions, the quality of the terrain, and so on and so forth. With this data, and through calculation, he will be able to assess the status of each house, as well as to provide a forecast for each house once some of the works have been undertaken. The engineer’s results are sure ones: given the data and the model, the results are logically, rationally correct (sure). By contrast, with ii) the owner takes another option: he leaves the door open to a more creative analysis in which the technician’s work is acknowledged but further remarks and nuances are incorporated in addition. This is because it is humans, not machines, who are living in the building complex. In this case, the Columbo character will spend time talking to the members of the families (and to some firefighters); he will count the number of paintings hanging on the walls before the fire, evaluate the shape of the new furniture and the reactions of each family member to the new environment, and so on. Here, a holistic approach will provide big-picture results: the truth is more likely to be found with this sort of global analysis.

3. The end of the story, or why a sure approach is not always the best one.
The above narrative clearly encapsulates at least three main paradoxes that arise when AI alone is used and strictly followed:

A. The stability paradox: If we accept the AI approach, based on modelling and data, we are implicitly accepting the stability of the framework. In other terms, to use our narrative, once the house is reinforced, the laws of material physics must, and will, apply. From an AI perspective, the algorithm is correct if, and only if, the economic laws remain the same. This is, by the way, a huge contradiction, because the AI evangelists are simultaneously telling us that everything is changing and that we need to constantly adapt our scenarios. However, the key data needed for algorithmic scenarios are only available at low frequency (monthly, but more often quarterly). Here a fixed framework is a paramount constraint, imposed by the data themselves, which cannot be escaped.
In any case, underpinning any model (algorithm) and its sure (correct) results is a form of stability (this also includes the stability of the random factors which are included to create “realistic” predictions). Now, this stability does not prevent the possibility of a major collapse of the system (another fire). But although systemic risk exists and is considered by the AI, systemic risk and paradigm shifts (e.g. no clear link exists, nowadays, between money creation and inflation) are minimized, and this is because they cannot quickly enter the set of formulas defining an algorithm.

Besides, the stability paradox is likely to generate short-term strategies. How? Simply because some principles are taken for granted, like the idea of market efficiency and, closely related, the impossibility of beating the market. Here the door is open to investing massively in passive vehicles, i.e. index-tracking ETFs. But this is pure short-term finance, because we are buying a basket of instruments instead of carefully picking them one by one: by choosing a basket, we are giving a premium to companies which do not deserve it. Financial markets, like any market, are systems whose processes generate (undistorted) prices, which are guides for efficiently allocating wealth: if prices are distorted because of this pooling effect, what will this imply in the medium to long run? What about the role of an unbiased price-signalling system (i.e. the heart of a free market economy)? What are the incentives for the firms’ management? What about the good ones versus the bad ones? What about the firms’ governance if a section of the stockholders does not care, essentially, about the firms’ actual business?

On the other hand, with the Columbo investigator, the modelling results will not be the final word: the character of each family, their relationship with the neighborhood and, even more importantly, the fear of a future accident might fully change the way in which the (rebuilt) house structure is used and will evolve in time.

Similarly, the true future destiny of an economy does not depend on numerical factors alone. This is a far more complex business: only through constant pedagogical engagement from CIOs will asset management customers end up capturing this complexity; hence the importance of a robust narrative.

B. The data-mining paradox: It could always be maintained that our first point is valid only because, with AI, the engineer does not have enough data. If we want a better, more stringent and sure prediction, then we should simply collect even more data (among it the famous “big data”).
But here we enter straight into the data-mining spiral: we need to collect more, so we will add data collection devices everywhere (in our story, we will add smoke detectors inside and outside the house). But this will push agents to change their methods of interaction and behavior, which will in turn require additional data collection to get a better view of their choices and behaviors, and so on and so forth, in an endless cycle. By the way, no one can exclude that the engineer who suggests collecting more data may have colluded with the data providers who want to install more data-generating devices. Outside the metaphor: in our world, it might be time to critically scrutinize the influence of the GAFAM members in the current Big Data (AI) mania.
The conclusion is simple: no one is taking the time to think anymore, because we are all just spending our time collecting (soft) data and traditional (hard) data, hoping they will teach us what to do. Two curious consequences of this paradox follow:
a) we are just piling up data hoping that it will tell us something, thanks to the algorithms we apply to it. But by refusing to think, we are stuck in the “algorithm box”, and any algorithm, even with a built-in self-learning device, is a box!
b) funnily enough, we no longer evaluate the numbers, and we are losing our critical view of them. A great example is the analysis of the US job market. This spring, almost all commentators said that the US economy was at near full employment; the data tell us this without ambiguity!
But is this true? What sort of full employment are we actually talking about? Few are ready to recognize that, while the unemployment rate is indeed low, so is the labor-market participation rate: it is some 4.5 percentage points lower than at the turn of the century (before the Internet bubble), which amounts to something like 9 million people who are no longer participating in the job market! I guess that’s more than enough, politically, to lose a lot of swing states!
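As a back-of-the-envelope check of that figure (a minimal sketch, assuming for illustration a working-age population of roughly 200 million; the exact base is not given in the text):

```python
# Rough check of the participation-rate claim.
# Assumed, illustrative figures -- not official BLS data:
working_age_population = 200_000_000  # assumed working-age population base
participation_drop = 0.045            # 4.5 percentage-point decline since ~2000

# People who would be in the labor force at the old participation rate
# but are not at the current one.
missing_workers = working_age_population * participation_drop
print(f"{missing_workers:,.0f} people no longer in the labor force")
# → 9,000,000 people no longer in the labor force
```

Under these assumed numbers, a 4.5-point decline does indeed translate into roughly 9 million people, consistent with the order of magnitude cited above.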

C. The human paradox: It is important to notice that Columbo is not denying the importance of the work undertaken by the engineer: he knows that it is valuable, and he has no problem admitting that it is our best method of understanding where we currently stand. The challenge lies more in the ability to forecast. Let’s refer to the narrative one last time: the same powerful fire will generate different behaviors in different families. One family is likely to become very cautious, while another will react indifferently. What matters, once again, is that a priori Columbo’s intuition is not reduced to a (data) box while he is elaborating his ideas.
That said, we can always await the famous AI singularity and its intelligence (NB: how this concept is defined is pure metaphysics). However, the singularity is not here yet, and whether it would in any case represent real progress for mankind is another open metaphysical debate.

In conclusion, we really do not see any valid reason to be so against human intervention in the asset management domain, unless we include one main element (which no robo-advisor evangelist will tell us): the cost of having human ideas as a plus.
The attitude vis-à-vis AI analysis and forecasting will clearly be a differentiating factor among CIOs, and will partially explain their strategic choices. We are now ready to enter this last part of the discussion.
