In the previous essay (the addendum essay) we listed three paradoxes likely to prevent a full implementation of an AI wealth allocation decision process in the AM world. As usual, by doing so, we introduced some ambiguous terms and unclear elements, which need to be further developed and discussed. We will concentrate our attention on the most important paradox presented: the stability paradox. This essay's aim is to show how the discussion of the stability paradox opens a Pandora's box: new, tricky issues, such as AI consciousness, appear, testifying that the road to a fully satisfactory implementation of AI in the AM world remains long. Our investigation will show that these problems are unavoidable obstacles, and explain why a pure AI solution, that is, one which requires no human intervention, is not yet possible in the AM world and will not be for a while.
We will show that the main flaw of a pure AI solution is a methodological one: an AI is a combination of applied mathematical methods, broadly labelled data science, applied at astonishing speed to huge data sets. This science, to quote Chris Anderson, accepts as one of its postulates the possibility of developing a "science without theory". Because a theoretical narrative is absent, this favours the acceptance of black-box solutions, which are far from helping those in the AM industry who need sound, logical and coherent narratives to justify an allocation decision. We believe that moving to a pure AI solution is not yet optimal: at the end of section 5, we will offer a first discussion of why this is the case. Through this discussion, we will introduce the notion of a "dynamic system", which is likely the best approximation of our economic-financial-political world in mathematical terms. If this is the right approximation, then we must acknowledge the need for theoretically based Human Intelligence (HI) investigations and results alongside the perfectly valid and needed AI investigations and results.
Using the terminology presented in the last essay, this is exactly our main conclusion: current AI cannot pretend to match HI, because AI continues to provide only correct scenarios, constantly anchored to data, and cannot formulate true, holistic, scenarios, or even aim to. On the contrary, we believe AI is becoming, and will increasingly become, complementary to HI. We will treat these arguments in sections 5-5.1: the right path is, as usual, in the middle; indeed, the wise AM provider will be the one who blends both AI and HI, and the real challenge ahead is to find the right proportions.
Namely, AI is a very powerful device, but it is lacking in at least two key areas: creativity and awareness. Due to these two weaknesses, AI cannot yet pretend to deliver coherent and logically sound asset management advice.
As such, to reinvent the AM industry in this new century, we warmly suggest looking further into conviction narrative theory than into AI (for example, David Tuckett and Milena Nikolic presented this theory in detail in the June 2017 SAGE Journal) or, more generally, into the work done by Yuval Harari on how humankind reached its current, technology-driven, condition, i.e. his emphasis on our "capacity to elaborate [share] allegorical stories", rather than focusing only on developing AI.
2. The current definition of AI in AM and its main limits: Powerful skills are not enough to define an intelligent device.
Nowadays, AI evangelists tend to present the intelligence of their processes by referring to two main thoughts (I did not bother to quote the exact reference; I have seen these two phrases so many times that I have the feeling there is a huge copy-and-paste contest going on): first, "[I]n 2016, a machine beat the world champion of Go – a game renowned for its subtlety and complexity – for the first time. The victory by the AlphaGo programme, developed by Google DeepMind, has put artificial intelligence (AI) back into the spotlight.", and second, "[The] usage of AI and deep learning will allow us soon to see correlations among time series we will not think of."
Clearly, these two arguments are marketing statements: they are facts, but they do not define the meaning of the word "intelligence". In other terms, from these two sentences we do not actually understand why seeing AlphaGo beat a human at Go represents proof of AlphaGo's intelligence or, similarly, why seeing a device able to create and check all sorts of possible correlations among inconceivable numbers of data series, everything done in the blink of an eye, should define a useful form of intelligence. In my view, in both cases we are describing a very powerful, nice and useful tool endowed with paramount skills, far more efficient and powerful than those of a single human (or even a group of humans). However, a machine continues to lack some major features that define HI, and therefore constantly requires the presence of humans at its side.
The next section will discuss these elements in more detail and present three main problems characterizing the current definition of AI, when applied to the AM world.
3. If faced with the real economy or the allocation of wealth, both of which require an understanding of the economic system, how is AI supposed to work?
Let's go back to the statement about AlphaGo: the device beat a human. But can we interpret this victory as a sign of a device's intelligence? No, clearly not. Why? Because even though AlphaGo learned how to play during the game (we are not denying that), it was playing, by definition, within a stable framework. After all, despite the game's complexity, the game is defined by a framework and a series of rules, limiting the "creativity" involved, if we can really use this word (more about our usage and definition in section 5). Each step must be considered to elaborate the best strategy, that is, the move that will be useful to win the game. Fixed rules and a fixed framework are taken for granted: the environment is what it is, and the "laws" of the game are set in stone.
What about a changing environment? What about having a “game” in which the rules and framework are constantly redrawn or at least appear to be so? Even better, what about an evolving “game”?
For example, if Go were to transform into another game, TroGo, how would AlphaGo, or any AI device, perform?
Here we have at least three fundamental issues that AI must solve to claim the status of a real, intelligent device:
1) AI needs to be conscious that Go is now TroGo; thus, AI needs to be aware that the old world/game is gone and that it is now living/playing in the new one.
2) AI needs to figure out (fast) the new rules of TroGo to elaborate the optimal choices, and here we take for granted that AI's final objective is still to win. As such, the assumption that the objective remains the same in Go and TroGo is a strong one: it is, indeed, easy to imagine that in a new environment each player's objective may also change.
3) AI needs to build a theory, or at least a theoretical narrative, that will explain and justify the choices taken in 2). Ultimately, AI needs to explain why some choices are executed and others disregarded. It is important to note that a pure numerical approach, based on the best-fitting algorithm for the historical data sets available, is likely to appear poorly structured if a narrative is not simultaneously developed. Indeed, at any given stage, AI will need to convince those outside the specialist realm, who cannot follow AI's mathematical discourse, why AI is picking a given move.
Even without entering a deep discussion of these three points, we realise how overcoming them represents the real AI challenge. Now, I want to stress that full comprehension of these aspects goes beyond my current understanding; I am an economist, after all, and not an AI specialist!
Nevertheless, given that I know the complexities of working as an economist in the AM world, I can explain some simple conditions that an AI device should fulfil to deliver a great AM wealth allocation service.
4. AI and science: Why a pattern-seeking device is not good enough.
Let's consider the problems listed above in order. The first one is amazingly difficult: figuring out that Go is now TroGo requires the AI to be consciously aware of the changes in the world in which it lives. Here, we wonder how an AI can determine that the "game" has changed. To answer, we finally need to roughly grasp what AI really means in the AM world: AI is "just" (and it is already a lot) an infinitely skilful and quicker pattern-seeking device, though AI can be a lot more if economic and financial systems are treated as parts of a dynamic system and mathematical simulation techniques are used accordingly (see sections 6 and 7). Given the huge amount of historical data sets at its disposal, AI will proceed like a human (but more quickly), trying to find patterns and correlations and to establish (statistically robust) links between series. Here, at its essence, nothing is new. This is the basic procedure used in quantitative (quant) finance. Finance algorithms are based on this procedure, and technical analysis charting derives from the same intuition. This is why computers oversee buy and sell trigger signals on trading desks today: if a correlation exists, it might be useful! In other words, if a pattern is out there, it is worth discovering and then using to make money! James Simons earned a lot of money by bringing these ideas and methodologies into finance.
In our case, however, AI faces a more complex task: in Go, a certain set of patterns was established and seen; now, in TroGo, are these patterns still relevant? In other words, those "ancient" correlations might simply cease to be relevant. Meanwhile, the new patterns characterizing TroGo are likely not fully visible yet; the data available do not yet reflect the new patterns (the release frequency of some data can be long). Moreover, in a trickier way, the relevance issue can concern the data series themselves, i.e. the very definition of a time series may no longer fit TroGo! Who assures us that the data used in Go to depict a certain phenomenon are still those needed to depict the same phenomenon in TroGo? Here, the entire procedure is under a huge threat: the well-known GIGO issue, that is, garbage in, garbage out.
For instance, before the '07 crisis, the US unemployment rate was, from both a statistical and an economic standpoint, a trusted indicator of potential upward pressure on salaries. Nowadays, however, with labour market participation having plummeted and with an unbelievable rate of technical progress, are we still sure that a low unemployment rate is a valid indicator of future pressure on salaries and, in turn, on inflation?
Similarly with money growth and its relationship to inflation: the correlation seems to be broken! But are we sure? Are we sure we are referring to the right money supply aggregate? What if, as in the case of the US M3 aggregate, a series is no longer available? And more broadly, which, among all the series, is the most relevant one in TroGo? In a prediction endeavour, if we are using the wrong one, a garbage-in-garbage-out type of error is likely to occur.
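To make this worry concrete, here is a minimal, purely illustrative Python sketch (all series are synthetic; no real money or inflation data are used, and the "regimes" are invented for the example): a pattern-seeking device measuring a rolling correlation will happily report a strong link in the "Go" half of the sample and then silently watch it vanish in the "TroGo" half, without any notion of why.

```python
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation between two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 200
money = [random.gauss(2.0, 1.0) for _ in range(n)]
# First regime ("Go"): inflation tracks money growth closely.
# Second regime ("TroGo"): the link silently disappears.
inflation = [0.8 * m + random.gauss(0, 0.3) if t < n // 2
             else random.gauss(1.0, 1.0)
             for t, m in enumerate(money)]

# A rolling-window correlation scan, the bread and butter of quant finance:
window = 40
for start in range(0, n - window + 1, 40):
    x = money[start:start + window]
    y = inflation[start:start + window]
    print(f"t={start:3d}..{start + window - 1}: rho={pearson(x, y):+.2f}")
```

The machine reports the numbers faithfully in both halves; nothing in its output says that the definition or the relevance of the series has changed, which is exactly the point.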
In general terms, can we still use these historical relationships?
The data’s relevance is under scrutiny and in doubt!
In other words, the maths on data is fine, and, as I said, it can be a lot more complex than in my simplified description of existing AI (system dynamics simulations and agent-based modelling are vivid examples of this complexity). Still, all these methods share an infinite trust in collected raw data, but raw data are not entities without a life: they transform as well, despite constantly being treated under the same definition. Please note that this kind of consideration would never arise in a pure application of AI within financial markets: for instance, the price of a stock, or of any other market-determined financial instrument, is the ultimate example of a fixed definition. The relevance issue does not apply in these cases; AI and algorithms work well, no doubt, with dead "definitions". By the way, this explains why Warren Buffett is still out of reach for an AI device: a value-investing strategy is not a data-driven decision process (the data are missing); it is based more on visions about future market conditions shared with a firm's senior management.
At this stage, AI sponsors may tell me: do not worry, Big Data is here, and this is the new data that will tell us where we stand. But Big Data is still in its infancy. All serious tech firms in this domain hire plenty of anthropologists and psychologists to help the maths folks better master the data, in particular those related to human behaviour and choices, but this procedure will require time. Once again, pretending that this will be done soon enough to help solve the AI consciousness issue and the related relevance question is just asking for too much too quickly.
Ultimately, I do not see how a device based on historical data (a mathematical machine, which is what AI is) will be able to alert me that we are in a new economic era: a new era characterized by new rules, where the present does not follow previously established "patterns" and which requires new ways of reading raw data; hence the relevance problem: data "definitions" are living parameters themselves. Let's take a closer look here.
5. Can AI solve these issues? AI is likely to become a black box which requires constant HI presence to be optimal.
Now, an easy solution would be to imagine an AI device in which a set of thresholds has been set: if those are triggered, the AI will decide that it has entered a new world.
However, this is not our point:
How can we ensure that those thresholds are set on relevant factors, i.e. factors which reveal the changes, and which changes matter, in the new world?
Who chooses the right patterns, and which deviation (and of what size?) will alert us that the world has changed?
This looks like an endless spiral if a theoretical discourse is not simultaneously built. Only a human can justify a choice not by referring to historical data but by providing a new theory and a narrative coherent with it. This is creativity! In other words, to solve the problem and to highlight the right thresholds, the AI should create a theory. This is the only way to escape an issue similar to the chicken-and-egg paradox! However, a theory needs creativity.
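The circularity can be illustrated with a small Python sketch (entirely hypothetical numbers and thresholds): a naive detector that declares a "new world" whenever recent observations drift more than k standard deviations from the historical mean. Whether the regime change is "detected" depends entirely on the window and the k a human chose in advance, which is precisely the chicken-and-egg problem.

```python
import random
import statistics

def regime_changed(series, window, k):
    """Naive detector: flag a 'new world' when the mean of the last
    `window` observations drifts more than `k` historical standard
    deviations away from the mean of everything that came before."""
    history, recent = series[:-window], series[-window:]
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history)
    drift = abs(statistics.fmean(recent) - mu)
    return drift > k * sigma

random.seed(1)
# A synthetic indicator whose underlying mean shifts upward at the end.
series = [random.gauss(0.0, 1.0) for _ in range(160)]
series += [random.gauss(1.0, 1.0) for _ in range(40)]

# The verdict flips with parameters a human had to pre-commit to:
for window, k in [(40, 0.25), (40, 2.0)]:
    print(f"window={window}, k={k}: changed={regime_changed(series, window, k)}")
```

With a loose threshold the detector cries "TroGo"; with a strict one it sees nothing, and nothing in the data itself tells us which k is the "right" one. That choice is a theoretical, human one.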
Here is an open question: is there any real creativity within AI?
I see an easy reply: you can imagine AI devices endowed with deep learning mechanisms. However, as far as I know, deep learning entails a machine extracting information, often in an unsupervised manner, to teach and transform itself. Those new lines of code might end up being unintelligible to the human programmer who first created the device!
If this is the case, how are we supposed to extract a coherent and intelligible theory which can be presented and used to explain AI’s choices?
With deep learning, we have further proof of the main philosophical principle underlying AI and (at least partially) algorithm usage: I do not know why it works, but it works. It solves the issue, so I accept it!
By doing so, you are implicitly disparaging HI: you are preferring a device solution (which, no doubt, has worked in the past) to human ideas, which may be slower to elaborate but which can encapsulate both past insights and real creativity!
In other words, you are charging humans with the failures while asking them to please and follow the machine a priori.
In my view, this is the equivalent of accepting black-box solutions for handling and solving problems. This science-without-theory approach does not really explain a phenomenon. The method is interested in the data, and the data are the phenomenon's outcomes. The real added value of the activity is to find patterns in these data; thus, once again, the human is viewed as a pattern seeker, and AI just reinforces this view. However, this is a huge statement; I prefer to see humans as "root seekers": when a phenomenon occurs, independently of the patterns its data may reveal, humans want to understand why the phenomenon is there. Therefore, applied mathematical research, the set of methods used in AI, remains "dry" and unpalatable to anyone who wants to explain why an event occurs and not "just" how it happens!
There is an easy solution available to us: let's treat AI as it should always be treated, as help and as an amazing source of advice, but never as the final decider when a given phenomenon is analysed.
Our maths, our data and our computers gave these results, but those results should never fully replace the human decision process, which is not only data driven, because it is based on critical thinking and theoretical structuring. Nota bene: this holistic approach has a cost, always the same one: it needs time. We will discuss these aspects in sections 6 and 7, but first, some remarks concerning a hypothetical pure AI solution in the AM world.
5.1. AI vs HI: Some general philosophical considerations about the AM industry and its future, or why pure AI would likely be a failure.
For an economist like me, what matters is grasping what is in front of us, in this case the likely outcome of having AM exposed to pure AI. As such, the basic question we should consider is: would a robot advisor alone in front of a client (take IPsoft's Amelia) guarantee a better service, with a higher quality standard and better results, regarding allocation advice?
Sure, Amelia's AI software will have the best skills on the planet when considering the past, and the future incorporated via thousands of numerical forecasts, but she is not, and will not be, able to be present, here, because Amelia's consciousness of being present is lacking, as explained above.
The client's time will never be Amelia's time, even if she is endowed with empathy and plenty of other funny gadgets to mimic a "self". She will never manage to feel and share a client's existential struggle of living in the now, even if she is allowed to use all the client's Big Data, whose significance, as data, may always be challenged.
Beyond these considerations, the key to succeeding with a pure AI offer in the AM space is mainly the standardisation (commoditisation) of financial services. Standardisation is nothing new in the AM industry, or in any other industry: it is a basic element that cuts costs and increases margins. From this perspective, AI will be an enhancement which, by definition, tends to reduce the heterogeneity of the AM offering. This is the well-known other side of the standardisation coin: homogenisation of supply and, in the case of finance, a huge tendency to select a few investment vehicles, meaning a high likelihood of ending up with some bubble phenomena.
Nonetheless, standardisation will allow two main targets to be achieved: speed and efficiency. At the end of the day, these are the main reasons for moving to a pure AI offer: you can quickly deliver standardised wealth allocation solutions to plenty of clients at once.
But, as we have already explained, fast allocation advice might become an issue. Namely, consciousness of the data available and the use of real creativity to describe the current economic status require time to be encapsulated in satisfactory, well-structured wealth allocation advice.
Nothing new under the sun: AI can count on all the power of cloud computing to deliver its fast service, but this has a cost: less quality (no real creativity) and less transparency (no consciousness, and so no solution for the data relevance problem). Sadly, an excuse will always be available in the case of AI failure: either the raw data were wrongly collected, so the machine is correct and the failure is due to the statistical apparatus, or some data are still missing, and so more data must be collected.
The outcome will always be the same: a few humans willing to think (may Warren Buffett's spirit last forever with us) alongside all the others spending their time collecting (soft) data, or even involuntarily producing them, and traditional (hard) data, hoping these will teach us what to do, thanks to a mathematical machine pompously named AI.
All in all, AM service quality will continue to depend on the only real variable that matters in the AM industry: the time an AM provider is willing to spend before delivering wealth allocation advice. Here, I really want to be clear: I am not against those actors in the AM world who want to move as soon as possible to a full AI solution. This is their choice, and we are in a free economy; consumers will decide, after all. My goal here is only to stress the overall consequences and possible risks.
In any case, due to these considerations, smart clients and/or clients passionate about finance will ignore an AI-based advisory offer (or use it as a simple benchmark at best): they will create their own offer (trading websites will provide them with more than enough) and they will look for real creativity and freedom. In our view, this phenomenon is already ongoing and, with AI involved, will likely amplify. Others will move more and more wealth into venture capital, shadow banking and other heterodox solutions: in these places, those involved will have the feeling of financing projects of real value and not trends, patterns and all sorts of statistical considerations, which they may not consider part of real finance!
What about the institutional clients of AM in a pure AI world? They are likely to be squeezed between a homogeneous offer and an ever more standardised regulatory framework, which may soon be based on AI-driven metrics. The likelihood of institutional portfolios being more exposed to systematic risks will increase.
However, are we sure the AM world is really going in this extreme direction, i.e. towards a generalized pure AI solution?
We do not believe so. Why? Because embracing a pure AI solution would imply accepting a world of financial advice defined without theories, to rephrase Chris Anderson.
Here, data are mastered but not understood or properly challenged (as we said above, data definitions are taken for granted; raw data are the real masters). Theoretical investigation, on the contrary, often challenges either the relevance of the data set used to describe a phenomenon, or the data as they have been defined!
More dramatically, having all those AI numerical results on board may not be enough if the world we are living in is, to use a mathematical expression, a "dynamic system" (details in section 7 below) characterized by radical uncertainty. Indeed, in such a "system", mathematical methods applied to data without a theoretical discourse, and therefore without awareness, creativity and intuition, are doomed to fail!
In this world, the presence of several theoretical discourses, in addition to, and at times substituting for, the results provided by data science, is imperative. Let's present these arguments in the last two sections of this essay.
6. What is difficult for AI is normal for HI: So why bother!
Funnily enough, questioning whether we are still in a Go game or have moved to TroGo is just the normal daily work of an economist in the AM industry. However, the economist's starting point is different from AI's. The first and most basic question is: am I sure that what I know in terms of ideas, from my learning and experience, is still valid for interpreting the current economy, or should I redraw my knowledge and consider new ideas? This starting point is broad, and it contains no direct reference to the data sets in our hands. Nonetheless, it is by observing these data that we decide to activate our brains.
Here, an economist's brain is open to the new environment, and his main query will be: are the ideas in my repertoire useful, or should I add some new ones?
Please note that, and it is not a rare occurrence, these new ideas may not always fit the present data series (the data set could be extremely short). We are nonetheless ready to defend these ideas if the theoretical narrative stands, if it is convincing and if it is logically correct.
An economist will use a new concept because it is required to handle (mentally, at least at first) a new puzzling query (or several). The concept will then generate, in turn, a (series of) correlations to be checked against the data. These data (call them the world, or nature) are now under stress and scrutiny. We are asking the data for proof of whether our ideas are correct, and this is where statistics and maths are supposed to come to the centre of the scene! This has been the standard modern scientific procedure from Galileo, Descartes and Kant to Popper and Kuhn: once again, the starting point of an act of knowledge is to understand a phenomenon, and an economist wants to see whether he can understand why and how an economy is changing! The starting point should not only be the (raw) data value or its patterns.
Afterwards, the economist will focus on the sets of data which are supposed to help confirm the new theory:
By so doing, the economist solves the issue of knowing whether we are in Go or in TroGo (the consciousness and relevance issues pointed out above are solved) by building a theory and focusing on certain correlations. Clearly, all these steps have a cost: time. As already said, without time there is no really great service.
The economist points to what is new in the new framework and what really matters, which in this case is understanding the data and justifying them! The hard part of the HI approach can be described in a single phrase: creativity and constant questioning.
The difference between AI and HI is clear now: humans do not go straight to the data and try to please it. A human first evaluates what he has available in terms of concepts and theoretical background; once this is set properly, he will attack the world, or rather, the data. The explanatory, theoretical discourse is basically the set of arguments and ideas which will eventually be presented to the client: the pedagogical discourse we must deliver in front of clients is always in our minds. Data will drive many decisions, but, simultaneously, some decisions will be based more on our critical thinking and our consciousness of being in a new economic environment, which demands new, out-of-the-box actions.
Our actions always have goals, but the goal is not always to find a "regularity" within data, which at best explains the past and is used to extrapolate into the future: we always focus on the elements which, we believe, represent the key aspects defining the new environment, and we ignore others because we believe they are irrelevant in the new framework. The future is then built from those few elements that we believe are key: HI has this unbelievable and unmatched flexibility; if needed, we work in an n-dimensional causality space, but, if needed, HI effortlessly shrinks the space to one of fewer dimensions.
This is because HI finds the right number of dimensions with which to treat the issue "easily": this is what we broadly call an intuition, and it may appear only through consciousness of the need to reduce complexity.
Once the intuition is there, a discourse can be elaborated and a vision shared, all without constantly looking at the data a priori. Our democratic political world works like that, and any AM advisory team basically follows the same principle: a client wants a discourse, a frame, not only defined in terms of data but also full of passion. The future will always bring unexpected events, but it is better to prepare to receive them by taking our passion on board, and not only our rational calculus.
HI simultaneously solves the three points presented above: HI is conscious that we are in TroGo, and HI learns the new rules of the new framework by creating a valid theoretical picture of it, which can be easily shared! Please note: the theoretical picture might be wrong; still, it is based on the awareness that something radical needs to be done to understand the new framework. The scientific debate (and the analysis of the data) will do the rest in establishing the validity of the theoretical construction.
Indeed, economists talk, communicate and share their views. Being in TroGo is, after all, one possibility among others: what matters here is knowing whether the view is shared and agreed upon with others. HI is always a form of social intelligence (SI): these are, by the way, the famous strategic interactions we talked about in our previous essays, and which will be developed in the next one.
Funnily, as a side note, we can even think about the possibility of having several AIs communicating as humans do:
Will this solve anything? How can this be organized when each AI machine is privately owned? What will be the implications when deep learning is present in each AI?
I just believe this is an entirely new Pandora's box: I will not open this discussion now, maybe in the future. Besides, in any case, I do not see AI solving its awareness issue through communication with its peers.
7. AI vs HI: Why the human touch is definitely needed: We are living in a dynamical system.
As we saw in the previous section, HI has an attribute that AI does not: real creativity. Ideas can be, and are, created independently of data sets, and some ideas are used if and only if the environment requires them. Now, my long thesis hinges on a simple remark: if Go always remains Go, AI will do the job despite not being fully HI compatible.
Indeed, if Go is Go, it is key to remember what this implies: i) data are defined in an indisputable way, and ii) we master those data and we use them, but we do not care to ask ourselves why they are present in the first place. We are in Chris Anderson's dream world, the end-of-any-theory world, in which the scientific method is considered obsolete. In Chris's dream, we live in a world shaped by data patterns! Everything will then be rationalized, pictured and forecasted as part of a gigantic, alienating framework, which can only change by following patterns or complex data structures: is this not the best representation of Weber's Iron Cage?
Fortunately, Anderson’s dream is likely to remain a nightmare (let’s hope forever).
In our view, there are plenty of signals showing that our economy is likely a "dynamic" system, to use a term from mathematics and physics, in which data are far from being defined in a fixed, stable way. Creativity and the openness to question and theorise via our old scientific approach still make a lot of sense. That said, a "dynamic" system's main characteristic is radical uncertainty. Let me quote the excellent FT columnist Wolfgang Munchau here:
“The financial crisis turned what outwardly seemed a stable political and financial [economic] environment into what mathematicians and physicists would call a “dynamical” system. The main characteristic of such systems is radical uncertainty. Such systems are not necessarily chaotic-though some may be- but they are certainly unpredictable. You cannot model them with a few equations…Radical uncertainty is a massive challenge, because you can never be sure of much. In particular, you can no longer be certain that you can extrapolate the trends of the past into the future”.
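Munchau's point can be made tangible with the logistic map, a textbook example of such a "dynamical" system. The following deliberately toy Python sketch (not a model of any real economy) shows why "you can no longer be certain that you can extrapolate the trends of the past into the future": two trajectories that start almost identically soon diverge completely.

```python
def logistic(x, r=3.9):
    """One step of the logistic map x -> r*x*(1-x), chaotic for r = 3.9."""
    return r * x * (1 - x)

# Two "economies" whose starting conditions differ by one millionth.
a, b = 0.500000, 0.500001
max_gap = 0.0
for t in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# A single, perfectly known equation, and yet no useful long-run prediction.
print(f"after 60 steps: a={a:.4f}, b={b:.4f}, largest gap so far={max_gap:.4f}")
```

No amount of historical data on trajectory a helps forecast trajectory b, even though the governing rule is simple and fully known: this is the radical uncertainty the quote refers to.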
Why are we living in this sort of system? The list is long, so I will give only two main examples, with the rest to come in following essays:
1. If we examine the market economy in developed countries, the price system, from the top (finance) to the bottom (consumption goods and services), is in full distortion.
At the top, we have phenomena like zero or negative interest rates due to massive monetary expansion policies, which have flooded financial markets and distorted numerous prices. In addition, some central banks have invested billions in these markets via their swollen balance sheets, adding further distortions. Furthermore, these phenomena are likely to be amplified by passive investment vehicles, which push series of prices up and reduce the liquidity of certain underlying instruments. It is no surprise, then, that some prices appear unrelated to the business plans of the underlying firms.
At the bottom, in a huge paradigm shift, globalisation and technological progress are constantly pushing prices down, offering more for less and making several prices unintelligible in economic terms.
Why do WhatsApp, Google or Facebook services have no price for the consumer? What does this mean? Does it mean they have no value? Do we have a valid theory for this area? How do we economically interpret the raw data coming from these activities? What about service providers such as Uber, Airbnb and all the constellations of the so-called sharing economy? Are we sure there is really no evil (to rephrase Google's motto) in these new great services, which are reshaping the entire economy?
2. The ways in which we work and interact with each other reflect that everything is changing in those spaces as well: notions like (un)employment, the job market, career transitions and career paths are constant challenges which are not captured in statistics. Furthermore, the stream of revenue associated with our work is also under threat, with huge repercussions in terms of political instability and social frustration. Where do we go from here? What is the added value, or marginal productivity (in value terms), of a person working in an e-service firm when all firms become e-service providers? What are the links with that person's salary? Is the marginal productivity of labour still a good measure of a worker's added value and of a worker's salary?
I will continue to elaborate this list in future essays. Nonetheless, these two aspects are enough to show the dramatic changes we are experiencing. The economy is changing so drastically that we must embrace the dynamic system's vision of it. This effectively implies more complex and elaborate models on the one hand, and welcomes computer simulation, such as the AI approach to an economy, on the other. However, this approach is no substitute for human judgment and analysis, because only humans are endowed with the main sources of light that allow us to go through the dark: creativity and constant, critical questioning.