macroresilience

resilience, not stability


Advances in Technology and Artificial Intelligence: Implications for Education and Employment


In a recent article, Paul Krugman pointed out the fallacies in the widely held belief that more education for all will lead to better jobs, lower unemployment and reduced inequality in the economy. The underlying thesis in Krugman’s argument (drawn from Autor, Levy and Murnane) is fairly straightforward and compelling: advances in computerisation do not increase the demand for all “skilled” labour. Instead, they reduce the demand for routine tasks, including many tasks that we currently perceive as skilled and that require significant formal education for a human being to carry out effectively.

This post is my take on what advances in technology, in particular artificial intelligence, imply for the nature of employment and education in our economy. In a nutshell, advances in artificial intelligence and robotics mean that the type of education and employment that has been dominant throughout the past century is now almost obsolete. The routine jobs of 20th-century manufacturing and services that were so amenable to creating mass employment are increasingly a thing of the past. This does not imply that college education is irrelevant. But it does imply that our current educational system, which is geared towards imparting routine and systematic skills and knowledge, needs a radical overhaul.

As Autor et al. note, routine human tasks have gradually been replaced by machinery and technology since at least the advent of the Industrial Revolution. What has changed in the last twenty years with the advent of computerisation is that the sphere of human activities that can be replaced by technology has broadened significantly. But there are still some significant holes. The skills that Autor et al. identify as complementary to, rather than substitutable by, computerisation are precisely those that have proved most challenging for AI researchers to replicate. The inability to automate many tasks that require human sensory and motor skills is an example of what AI researchers call Moravec’s paradox: Hans Moravec observed that it is much easier to engineer apparently complex computational tasks, such as the ability to play chess, than it is to engineer the sensorimotor abilities of a one-year-old child. In a sense, computers find it hard to mimic some of our animalistic skills and relatively easy to mimic many of the abilities that we have long thought of as separating us from other animals. Moravec’s paradox explains why many manual jobs, such as driving a car, have so far resisted automation. At the same time, AI has also found it hard to replicate some key non-routine cognitive tasks, such as the ability to generate creative and novel solutions under conditions of significant irreducible uncertainty.

One of the popular misconceptions about the limits of AI and technology is the notion that the engineered alternative must mimic the human skillset completely in order to replace it. In many tasks, the human method is not the only way or even the best way to get the job done. For example, robots like the Roomba, built on subsumption architectures, do not need to operate like a human being to clean a floor. Similarly, a chess program can compete with a human player even though the brute-force search of the computer has very little in common with the pattern-recognising, intuitive method of the grandmaster. Moreover, automating away human intervention frequently involves a redesign of the operating environment in which the task is performed so as to reduce uncertainty, transforming the underlying task into a routine and automatable one. Herbert Simon identified this long ago when he noted: “If we want an organism or mechanism to behave effectively in a complex and changing environment, we can design into it adaptive mechanisms that allow it to respond flexibly to the demands the environment places on it. Alternatively, we can try to simplify and stabilize the environment. We can adapt organism to environment or environment to organism”. To hazard a guess, the advent of the “car that drives itself” will probably involve a significant redesign of the layout and rules of our roads.
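To make the brute-force point concrete, here is a toy sketch of my own (not drawn from any real chess engine, and the game itself is invented for illustration): exhaustive game-tree search for a trivial Nim-like game. The machine “understands” nothing about the game; it simply enumerates every line of play to the end, which is the antithesis of the grandmaster’s intuition.

```python
def best_move(pile, take_options=(1, 2, 3)):
    """Exhaustive search for a Nim-like toy game: players alternately
    remove 1-3 stones, and whoever takes the last stone wins.
    No intuition, no pattern recognition -- just enumeration."""
    def wins(p):
        # True if the player to move can force a win from p stones.
        return any(not wins(p - t) for t in take_options if t <= p)
    for t in take_options:
        if t <= pile and not wins(pile - t):
            return t          # a move that leaves the opponent lost
    return None               # every move loses against optimal play
```

Against optimal play, the losing positions in this toy game are the multiples of 4, so `best_move(4)` returns `None` while `best_move(6)` returns `2`. A real chess engine searches a vastly larger tree with pruning and heuristics, but the spirit is the same: search rather than insight.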

This redesign of the work environment to reduce uncertainty lies at the heart of the Taylorist/Fordist logic that brought us the assembly-line production system and has now been applied to many white-collar office jobs. Of course, this uncertainty is not eliminated. As Richard Langlois notes, it is “pushed up the hierarchy to be dealt with by adaptable and less-specialized humans”, or in many cases it can even be pushed out of the organisation itself. Either way, what is indisputable is that for the vast majority of employees, whether on an assembly line at Foxconn or in a call centre in India, the job content is strictly codified and routine. Ironically, this very process of transforming a job into one amenable to mass employment makes the job that much more likely to be automated in the future, as the sphere of activities protected by Moravec’s paradox shrinks. For example, we may prefer competent customer service from our bank, but we have long since reconciled ourselves to sub-standard customer service as the price we pay for cheap banking. Once we have replaced the “tacit knowledge” of the “expert” customer-service agent with an inexperienced agent who needs to be provided with clear rules, we are that much closer to removing the agent from the process altogether.

The implication of my long-winded argument is that even Moravec’s paradox will not shield otherwise close-to-routine activities from automation in the long run. That leaves employment opportunities necessarily concentrated in significantly non-routine tasks, cognitive or otherwise, that are hard to replicate effectively through computational means. It is easy to understand why the generation of novel and creative solutions is difficult to replicate in a systematic manner, but this is not the only class of activities that falls under this umbrella. Also relevant are many activities that require what Hubert and Stuart Dreyfus call expert know-how. In their study of skill acquisition and training, which was to form the basis of their influential critique of AI, they note that as one moves from being a novice at an activity to being an expert, the role of rules and algorithms in guiding our actions diminishes, replaced by an intuitive, tacit understanding. As Hubert Dreyfus notes, “a chess grandmaster not only sees the issues in a position almost immediately, but the right response just pops into his or her head.”

The irony, of course, is that the Taylorist logic of the last century has been focused precisely on eliminating the need for such expert know-how, in the process driving our educational system to de-emphasise it. What we need is not so much more education as a radically different kind of education. Frank Levy himself made this very point in an article a few years ago, but the need to overhaul our industrial-age education system has been most eloquently championed by Sir Ken Robinson [1,2]. To say that our educational system needs to focus on “creativity” is not to claim that we all need to become artists and scientists. Creativity here means simply the ability to explore effectively rather than follow an algorithmic routine, a role that many of our current methods of “teaching” are not set up to achieve. It applies as much to the intuitive, unpredictable nature of biomedical research detailed by James Austin as it does to the job of an expert motorcycle mechanic that Matthew Crawford describes so eloquently. The need to move beyond a simple, algorithmic level of expertise is driven not by sentiment but increasingly by necessity, as the scope of tasks that can be performed by AI agents expands.

A corollary of this line of thought is that jobs that can provide “mass” employment will likely be increasingly hard to find. This does not mean that full employment is impossible, simply that any job routine enough to employ a large number of people doing a very similar role is likely to be automated sooner or later.

 


Written by Ashwin Parameswaran

March 15th, 2011 at 1:43 pm