macroresilience

resilience, not stability

Archive for May, 2013

Employment In A World Where Androids Can Dream Of Electric Sheep


If a robot could do everything that a human could, then why would any human be employed? The pragmatist would respond that robots still cannot do everything that a human being can (e.g. sensory and motor skills). Some would even argue that robots will never match the creative skills of a human being. But it is often taken for granted that if robots were equivalent to humans in an objective sense, then there would be no demand for human “work”. Is this assumption correct?

In Philip K. Dick’s novel ‘Do Androids Dream of Electric Sheep?’, androids and synthetic animals are almost indistinguishable from human beings and real animals. Yet every human being wants a “real” animal, even though a real animal costs far more than an artificial one that can do everything the “natural” animal can. A real ostrich costs $30,000 and an equivalent synthetic ostrich costs $800, but everyone wants the real thing. Real animals are prized not for their perfection but for their imperfection. The sloppiness and disorder of real life are so highly valued that fake animals even have a “disease circuit” that simulates biological illness when their circuits malfunction.

Dick’s vision is a perfect analogy for the dynamics of value in the near-automated economy. Even in a world where the human contribution has little objective value, it has subjective value in the economy. And this subjective value comes not from its perfection but from its imperfection, its sloppiness, its humanness. Even in a world where androids can dream of electric sheep, technological unemployment can be avoided.

In many respects, we already live in such a world. Isn’t much of the demand for organic food simply a desire for food that has been grown by local human beings rather than distant machines? Isn’t the success of Kickstarter driven by our desire to consume goods and services from people we know rather than from bureaucratic, “robotic” corporate organisations?

However, even if the human contribution is not an expert contribution, it must be a uniquely human contribution. Unfortunately, our educational system is geared to produce automatons: mediocre imitations of androids rather than superior, or even average, human beings.


Written by Ashwin Parameswaran

May 13th, 2013 at 8:51 pm

Deskilling and The Cul-de-Sac of Near Perfect Automation


One of the core ideas in my essay ‘People Make Poor Monitors For Computers’ was the deskilling of human operators whose sole responsibility is to monitor automated systems. The ability of the automated system to deal with most scenarios on ‘auto-pilot’ results in a deskilled human operator whose skill level never rises above that of a novice and who is ill-equipped to cope with the rare but inevitable instances when the system fails. As James Reason notes[1] (emphasis mine):

Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills. One of the consequences of automation, therefore, is that operators become de-skilled in precisely those activities that justify their marginalised existence. But when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions. Duncan (1987, p. 266) makes the same point: “The more reliable the plant, the less opportunity there will be for the operator to practise direct intervention, and the more difficult will be the demands of the remaining tasks requiring operator intervention.”

‘Humans monitoring near-autonomous systems’ is not just one way to make a system more automated. It is in fact the most common strategy for increasing automation within complex domains. For example, drone warfare largely consists of providing robots with increasing autonomy such that “the human operator is only responsible for the most strategic decisions, with robots making every tactical choice”[2].

But if this model of automation deskills the human operator, then why does anyone choose it in the first place? The answer is that the deskilling, and the fragility that comes with it, do not appear instantaneously. The first-generation automated system piggybacks upon the existing expertise of human operators who became experts by operating within a less-automated domain. In fact, expert human operators are often the most eager to automate away parts of their role and are the most comfortable with a monitoring role. The experience of having learnt on less-automated systems gives them adequate domain expertise to handle the strategic decisions and edge cases that remain.

The fragility arises when second-generation human operators, who have never practised routine tactical activities and interventions, take over the monitoring role. This problem can be mitigated by retaining the less-automated domain as a training tool for new human operators. But in many domains there is no substitute for the real thing, and most of the learning happens ‘on the job’. This is certainly true of trading in financial markets, and it is almost certainly true of combat. Derivative traders who have spent most of their careers hacking away at simple, tool-like models can usually sense when their complex pricing/trading system is malfunctioning. But what about the novice trader who has spent his entire career working with a complex, illegible system?

In some domains, like finance and airplane automation, this problem is already visible. But there are many other domains in which we can expect the same pattern to arise in the future. An experienced driver today is probably competent enough to monitor a self-driving car, but what about a driver twenty years from now who will likely not have spent any meaningful amount of time driving a car manually? An experienced teacher today is probably good enough to extract good results from a classroom where so much of the process of instruction and evaluation is automated, but what about the next generation of teachers? An experienced soldier or pilot with years of real combat experience is probably competent enough to manage a fleet of drones, but what about the next generation of combat soldiers whose only experience of warfare is through a computer screen?

Near-autonomous systems are perfect for ‘machine learning’ but almost useless for ‘human learning’. The system generates increasing amounts of data to improve the performance of its automated components. But it cannot provide the practice and experience that are required to build human expertise.

Automation is often seen as a way to avoid ‘irrational’ or sloppy human errors. By deskilling the human operator, this justification becomes a self-fulfilling prophecy. By making it harder for the human operator to achieve expertise, automation increases the proportion of apparently irrational errors. Each failure is inevitably taken as evidence of human unreliability, whereupon the system is made even more automated, further exacerbating the problem of deskilling.
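The self-reinforcing nature of this loop is easy to see in a toy simulation. The sketch below is purely illustrative: the ‘skill’ and ‘automation’ variables, the update rules, and every parameter are assumptions invented for this example, not anything measured or drawn from the essays cited here. It simply encodes the loop described above: manual practice builds skill, automation removes practice, and every error is read as a reason for more automation.

```python
"""Toy model of the deskilling feedback loop (illustrative assumptions only)."""

def simulate(periods=50, automation=0.5, skill=0.9,
             learn=0.2, decay=0.1, step=0.05):
    """Each period: skill rises with manual practice and decays without it;
    takeover errors scale with (1 - skill); every error is interpreted as
    'human failure', so the automation level is ratcheted up by `step`."""
    for t in range(periods):
        practice = 1.0 - automation                 # share of work still done manually
        skill += learn * practice * (1.0 - skill)   # practice builds skill...
        skill -= decay * automation * skill         # ...automation erodes it
        error_rate = 1.0 - skill                    # takeover failures when skill is low
        # the self-fulfilling step: errors justify yet more automation
        automation = min(1.0, automation + step * error_rate)
        yield t, automation, skill

for t, a, s in simulate():
    if t % 10 == 0:
        print(f"t={t:2d}  automation={a:.2f}  skill={s:.2f}")
```

Under these assumed dynamics, automation only ever rises, practice shrinks, skill collapses, and the rising error rate is then cited as the reason for the next round of automation. The point is not the numbers but the one-way ratchet built into the last update.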

The delayed deskilling of the human operators also means that the transition to a near-automated system is almost impossible to reverse. Simply reverting to the old, less-automated, tool-like system actually makes things worse, as the second-generation human operators have no experience with using these tools. Compared to carving out an increased role for the now-deskilled human operator, more automation always looks like the best option. If we eventually reach the dream of perfectly autonomous robotic systems, then the deskilling may prove to be a temporary blip. But what if we never get there?

Note: Apart from ‘People Make Poor Monitors For Computers’, my essay ‘The Control Revolution And Its Discontents’ also touches upon similar topics, within the broader context of how the move towards near-perfect algorithmic systems fits into the ‘Control Revolution’.


  1. ‘Human Error’ by James Reason (1990), p. 180.

  2. ‘Robot Futures’ by Illah Reza Nourbakhsh (2013), p. 76.


Written by Ashwin Parameswaran

May 9th, 2013 at 5:35 pm