Advances in artificial intelligence and machine learning have raised fears of large-scale job losses. And while labor-market adaptation is likely to stave off permanent high unemployment, it cannot be counted on to prevent a sharp rise in inequality.
Perhaps no single aspect of the digital revolution has received more attention than the effect of automation on jobs, work, employment, and incomes. There is at least one very good reason for that – but it is probably not the one most people would cite.
Using machines to augment productivity is nothing new. In so far as any tool is a machine, humans have been doing it for most of our short history on this planet. But, since the first Industrial Revolution – when steam power and mechanization produced a huge, sustained increase in productivity – this process has gone into overdrive.
Not everyone welcomed this transition. Many worried that reduced demand for human labor would lead to permanently high unemployment. But that didn’t happen. Instead, rising productivity and incomes bolstered demand, and thus economic activity. Over time, labor markets adapted in terms of skills, and eventually working hours declined, as the income-leisure balance shifted.
And yet, as augmentation of human labor gives way to automation – with machines performing a growing number of tasks autonomously in the information, control, and transactions segments of the economy – fears of large-scale job losses are again proliferating. After all, white- and blue-collar jobs involving mostly routine – that is, easily codified – tasks have been disappearing at an accelerating rate, especially since 2000. Because many of these jobs occupied the middle of the income distribution, this process has fueled job and income polarization.
As in the nineteenth century, however, labor markets are adapting. At first, displaced workers may seek new employment in jobs requiring their pre-existing skills. But, facing limited opportunities, they soon begin pursuing jobs with lower (or easily attainable) skill requirements, including part-time jobs in the internet-enabled gig economy, even if it means accepting a lower income.
Over time, a growing number of workers begin investing in acquiring skills that are in demand in non-routine, higher-paying job categories. This is generally a more time-consuming process, though it has been accelerated in some countries, including the United States, by initiatives involving government, businesses, and educational institutions.
But, even with institutional support mechanisms, access to skills development is usually far from equitable. Only those with sufficient time and financial resources can make the needed investment, and in a highly unequal society, many workers are excluded from this group. Against this background, we should probably worry less about large-scale permanent unemployment and more about an uptick in inequality and its social and political ramifications.
To be sure, technological adaptation may reduce the magnitude of the skills-acquisition problem. After all, markets reward innovations that make digital equipment and systems easier to use. For example, the graphical user interface, which enables us to interact with electronic devices through icons and other visual representations, is now so pervasive that we take it for granted. As such intuitive approaches are applied to increasingly complex technological processes, the need for re-training – and, thus, the digital revolution’s distributional impact – will be diminished.
Progress in artificial intelligence will also have an impact. Until about ten years ago, automation relied on the codification of tasks: machines were programmed with a set of instructions that reproduced the logic of human decision-making. But what about tasks that cannot be distilled into a series of logical, predefined steps? From understanding natural language to recognizing objects visually, a surprisingly large number of activities – even ostensibly simple ones – fit into this category. This has kept many jobs “safe” from automation, but not for much longer, owing to advances in machine learning.
Machine learning is essentially very sophisticated pattern recognition. Using large pools of data and massive computing power, machines learn from examples, rather than from rules-based logic, to do things we cannot code. Advances in machine learning have opened vast new areas of automation: robotics, autonomous vehicles, and scanning the technical medical literature for key articles. In many areas – such as pattern recognition in genetics and biomedical science – machines can not only replace human workers; in certain respects, their capabilities dwarf those of any human.
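The contrast between codified automation and machine learning can be made concrete with a toy sketch. The task, thresholds, and labeled examples below are invented for illustration; they stand in for the far larger datasets and models real systems use.

```python
# Codified automation: a human writes the decision logic explicitly.
def rule_based(x):
    # The "10" threshold is a hand-coded rule, not learned from data.
    return "large" if x >= 10 else "small"

# Machine learning: no rule is written. The program generalizes from
# labeled examples instead (here, a one-nearest-neighbor classifier).
examples = [(1, "small"), (3, "small"), (12, "large"), (15, "large")]

def learned(x):
    # Predict the label of whichever training example is closest.
    nearest = min(examples, key=lambda ex: abs(ex[0] - x))
    return nearest[1]
```

The rule-based function works only as long as someone can articulate the rule; the learned one needs nothing but examples, which is why tasks that resist codification – speech, vision, diagnosis – become automatable once enough data and computing power are available.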
This is better news than it may seem. Yes, far more tasks and subtasks will be reallocated to machines. But the purpose and end point of the digital revolution must be to turn automation of work into digital augmentation. And when machines perform tasks humans cannot, augmentation is precisely what we are getting.