In the last few years the perennial debate about technological progress has moved beyond “generic technology” to focus on robotics and artificial intelligence and their potential impact. The debate has mostly taken the shape of a feud between two opposing factions: one paints a paradise-like future in which humans spend all their time engrossed in their favourite leisure activities while robots do all the work; the other a hell-like future in which humans are jobless, enslaved to some kind of Übermensch and his army of thinking robots. It is more plausible than not that Aristotle will be proven right once again, that here too “μέσον τε καὶ ἄριστον”, that “in medio stat virtus” (as the Romans would put it), that the shape the future takes will lie somewhere between these two visions.
Erik Brynjolfsson and Andrew McAfee, two MIT professors and directors of the MIT Initiative on the Digital Economy, stay on the hopeful side of things with their 2014 book The Second Machine Age, while Martin Ford, with Rise of the Robots, winner of the prestigious Financial Times and McKinsey Business Book of the Year Award in 2015, stays on the gloomier one.
The Second Machine Age is built around three fundamental conclusions. First, digital technologies, whose core components are hardware, software and networks, are turning the world we live in upside down, and we are only just beginning. Second, these transformations will be profoundly beneficial, in that both the variety and the volume of “consumption” will increase. Third, digitization will bring challenges of its own: just as there has never been a better time to have the right education and skills to create and capture the value these technologies generate, so, the authors argue, there has never been a worse time to offer only ordinary abilities and skills, as computers and robots are getting ever better at automating those jobs away.
Brynjolfsson and McAfee give numerous examples of digital technologies that are changing the landscape, from self-driving cars to the dawn of 3D printing, passing through IBM’s Watson, which beat the human champions of the quiz show Jeopardy!, and instantaneous translation between multiple languages. To grasp what lies behind such changes, and what is still to come, one needs to understand the three fundamental characteristics of technological progress in the twenty-first century: it is exponential, digital and combinatorial.
The famous Moore’s law is the first building block. It is not a classic law of physics; rather, it describes what computer-industry engineers and scientists have managed to accomplish: the number of transistors on a chip doubles roughly every two years.
In no other domain can such sustained progress be seen. The splendour of recent technological development is that components like the microphones, cameras and accelerometers found in everybody’s smartphone have been turned from analog into digital. In other words, they have become chips.
And, as such, Moore’s law applies to them, with the progress we can all appreciate every day. The best part? We are only at the beginning of the curve’s upward tilt.
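To get a feel for what that exponential curve implies, here is a minimal illustrative sketch. The two-year doubling period is the conventional statement of Moore’s law; the starting figure (the Intel 4004’s roughly 2,300 transistors in 1971) and the forty-year horizon are my own choices for illustration, not from the books.

```python
# Illustrative sketch of Moore's law: a quantity that doubles every two years.
# Starting count and horizon are assumptions chosen for illustration.

def transistors(start_count: int, years: int, doubling_period: float = 2.0) -> int:
    """Project a transistor count forward under a fixed doubling period."""
    return int(start_count * 2 ** (years / doubling_period))

count = 2_300  # roughly the Intel 4004's transistor count (1971)
for years in (0, 10, 20, 30, 40):
    print(years, transistors(count, years))
# After 40 years the same law takes ~2,300 transistors past 2.4 billion,
# in the ballpark of actual chips around 2011.
```

Twenty doublings turn thousands into billions: that is the “upward tilt” the authors are pointing at.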
As most are aware, the technological change we are witnessing is digital in nature. What this means is actually pretty straightforward, but it is still worth making clear beyond any doubt: it is the process of turning all kinds of information and media, from text to video, passing through sound and photos, into binary code, which is, after all, the language of computers and the like. Digital information has two important characteristics: first, it is non-rival (an economic term meaning that consumption by one agent does not hinder consumption by another) and, second, its marginal cost of reproduction is close to zero. This allows usage on an unfathomable scale.
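As a concrete illustration of what “turning information into binary code” means, here is a minimal sketch that encodes a short piece of text as bits. UTF-8 is one common encoding; the sample string is an arbitrary choice of mine, not something from either book.

```python
# Minimal sketch of digitization: text becomes bytes, bytes become bits.
text = "AI"                           # arbitrary sample string
data = text.encode("utf-8")           # b'AI' -> two bytes
bits = " ".join(f"{byte:08b}" for byte in data)
print(bits)  # 01000001 01001001
```

Once information is reduced to bits like these, copying it is essentially free, which is exactly where the non-rivalry and near-zero marginal cost come from.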
We have the world in our hands, essentially, and the cost for that is that of a smartphone.
What can come out of more than 7 billion brains (we are slowly getting to the point where everybody in the world will own a smartphone) having access to such a skyrocketing amount of data and information is anyone’s guess.
Robert Gordon, an eminent economist at Northwestern University, published in 2016 an immense tome, The Rise and Fall of American Growth, in which he argues, in a nutshell, that we have run out of truly significant innovation (he does so by looking at productivity numbers and comparing them across different periods over the last 150 years) and that no, Facebook is not the steam engine or electricity. The latter are what economists call General Purpose Technologies (GPTs): technologies that meet three criteria, being pervasive, improving over time and able to spawn new innovations. The two MIT scientists distance themselves from Gordon’s position, arguing instead that ICT qualifies as a GPT as well. Which side to take is up to each reader.
At any rate, this progress has sparked the creation of real and useful AI and the connection of almost everybody on the planet through a common network.
Both events are watershed moments in the history of mankind. While both will create plenty of bounty, as is easy to guess, the spread is also going to increase (unless something is done about it). Three groups will reap most of the bounty: those who have invested in non-human capital, those who have invested in the right human capital, and superstars with super talents. Fundamentally, Brynjolfsson and McAfee suggest that instead of racing against the machines we should try to race with the machines: it’s the skills, stupid (right, Bill?).
While I said that Rise of the Robots is the more pessimistic book, there is a fundamental point of agreement between the two: that ICT is indeed different from anything that has come before it. The reasons given differ slightly, but the fundamentals are there. Beyond that, the books diverge in their assessments. For example, in the second chapter Ford looks back at the last 30 years and discusses stagnant wages; the decline of labour’s share of total income and the corresponding rise of the profit share; the decline in labour-force participation; a diminishing job-creation rate paired with soaring long-term unemployment; inequality; polarization and the rise of part-time jobs; and the declining incomes and underemployment of college graduates.
Ford also makes the case that even the “right skills” and the “right education” might not be enough: machine-learning and deep-learning systems are coming for knowledge-based jobs too. A widely cited Oxford study estimated that close to 50% of jobs might be subject to automation in a not-so-distant future. The “no one is safe” message is repeated time and again, and Ford dedicates chapters to possible turmoil in higher education and healthcare; not even those sectors, full of workers with postgraduate education, will be immune to ICT. Arguably the most attention-grabbing chapter is the one dedicated to superintelligence and the Singularity. While he shows that truly thinking machines, advanced nanotechnology and, above all, the Singularity still look like pie-in-the-sky stuff, he does not dismiss them entirely, discussing instead possible consequences and scenarios should they come to pass.
Neither book exhausts the debate on twenty-first-century technological change, nor does either provide easy-to-accomplish, satisfying policy prescriptions. But they do make the reader think. Both sides have their merits, and reading only one of the two would surely be insufficient, as both works contain fascinating insights. They ask some very important questions. And that is what ultimately matters. After all, François-Marie Arouet, known to most as Voltaire, hardly a simpleton, famously stated: “judge a man by his questions, not his answers”.