macroresilience

resilience, not stability

Archive for the ‘Artificial Intelligence’ Category

Explaining The Neglect of Doug Engelbart’s Vision: The Economic Irrelevance of Human Intelligence Augmentation

with 8 comments

Doug Engelbart’s work was driven by his vision of “augmenting the human intellect”:

By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.

Alan Kay summarised the most common argument as to why Engelbart’s vision never came to fruition1:

Engelbart, for better or for worse, was trying to make a violin…most people don’t want to learn the violin.

This explanation makes sense within the market for mass computing. Engelbart was dismissive of the need for computing systems to be easy to use. And ease of use is everything in the mass market. Most people do not want to improve their skills at executing a task. They want to minimise the skill required to execute a task. The average photographer would rather buy an easy-to-use camera than teach himself how to use a professional camera. And there’s nothing wrong with this trend.

But why would this argument hold for professional computing? Surely a professional barista would be incentivised to become an expert even if it meant having to master a difficult skill and operate a complex coffee machine? Engelbart’s dismissal of the need for computing systems to be easy to use was not irrational. As Stanislav Datskovskiy argues, Engelbart’s primary concern was that the computing system should reward learning. And Engelbart knew that systems that were easy to use the first time around did not reward learning in the long run. There is no meaningful way in which anyone can be an expert user of most easy-to-use mass computing systems. And surely professional users need to be experts within their domain?

The somewhat surprising answer is: No, they do not. From an economic perspective, it is not worthwhile to maximise the skill of the human user of the system. What matters and needs to be optimised is total system performance. In the era of the ‘control revolution’, optimising total system performance involves making the machine smarter and the human operator dumber. Choosing to make your computing systems smarter and your employees dumber also helps keep costs down. Low-skilled employees are a lot easier to replace than highly skilled employees.

The increasing automation of the manufacturing sector has led to the progressive deskilling of the human workforce. For example, below is a simplified version of the empirical relationship between mechanisation and human skill that James Bright documented in 1958 (via Harry Braverman’s ‘Labor and Monopoly Capital’). However, although human performance has suffered, total system performance has improved dramatically and the cost of running the modern automated system is much lower than that of the preceding artisanal system.

AUTOMATION AND DESKILLING OF THE HUMAN OPERATOR

Since the advent of the assembly line, the skill level required of manufacturing workers has fallen. And in the era of increasingly autonomous algorithmic systems, the same is true of “information workers”. For example, since my time working within the derivatives trading businesses of investment banks, these businesses have made a significant effort to reduce the amount of skill and know-how required to price and trade financial derivatives. Trading systems have been progressively modified so that as much knowledge as possible is embedded within the software.

Engelbart’s vision runs counter to the overwhelming trend of the modern era. Moreover, as Thierry Bardini argues in his fascinating book, Engelbart’s vision was also neglected within his own field, which was much more focused on ‘artificial intelligence’ than on ‘intelligence augmentation’. The best description of the ‘artificial intelligence’ program that eventually won the day was given by J.C.R. Licklider in his remarkably prescient paper ‘Man-Computer Symbiosis’ (emphasis mine):

As a concept, man-computer symbiosis is different in an important way from what North has called “mechanically extended man.” In the man-machine systems of the past, the human operator supplied the initiative, the direction, the integration, and the criterion. The mechanical parts of the systems were mere extensions, first of the human arm, then of the human eye….

In one sense of course, any man-made system is intended to help man….If we focus upon the human operator within the system, however, we see that, in some areas of technology, a fantastic change has taken place during the last few years. “Mechanical extension” has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped. In some instances, particularly in large computer-centered information and control systems, the human operators are responsible mainly for functions that it proved infeasible to automate…They are “semi-automatic” systems, systems that started out to be fully automatic but fell short of the goal.

Licklider also correctly predicted that the interim period before full automation would be long and that for the foreseeable future, man and computer would have to work together in “intimate association”. And herein lies the downside of the neglect of Engelbart’s program. Although computers do most tasks, we still need skilled humans to monitor them and take care of unusual scenarios which cannot be fully automated. And humans are uniquely unsuited to a role where they exercise minimal discretion and skill most of the time but nevertheless need to display heroic prowess when things go awry. As I noted in an earlier essay, “the ability of the automated system to deal with most scenarios on ‘auto-pilot’ results in a deskilled human operator whose skill level never rises above that of a novice and who is ill-equipped to cope with the rare but inevitable instances when the system fails”.

In other words, ‘people make poor monitors for computers’. I have illustrated this principle in the context of airplane pilots and derivatives traders but Atul Varma finds an equally relevant example in the ‘near fully-automated’ coffee machine which is “comparatively easy to use, and makes fine drinks at the push of a button—until something goes wrong in the opaque innards of the machine”. Thierry Bardini quips that arguments against Engelbart’s vision always boiled down to the same objection – let the machine do the work! But in a world where machines do most of the work, how do humans become skilled enough so that they can take over during the inevitable emergency when the machine breaks down?


Written by Ashwin Parameswaran

July 8th, 2013 at 3:54 pm

Employment In A World Where Androids Can Dream Of Electric Sheep

with 9 comments

If a robot could do everything that a human could, then why would any human be employed? The pragmatist would respond that robots still cannot do everything that a human being can (e.g. sensory and motor skills). Some would even argue that robots will never match the creative skills of a human being. But it is often taken for granted that if robots were equivalent to humans in an objective sense, then there would be no demand for human “work”. Is this assumption correct?

In Philip K. Dick’s novel ‘Do Androids Dream of Electric Sheep?’, androids and synthetic animals are almost indistinguishable from human beings and real animals. Yet every human being wants a “real” animal despite the fact that a real animal costs much more than an artificial animal that can do everything that the “natural” animal can. A real ostrich costs $30,000 and an equivalent synthetic ostrich costs $800 but everyone wants the real thing. Real animals are prized not for their perfection but for their imperfection. The sloppiness and disorder of real life is so highly valued that fake animals have a “disease circuit” that simulates biological illness when their circuits malfunction.

Dick’s vision is a perfect analogy for the dynamics of value in the near-automated economy. Even in a world where the human contribution has little objective value, it has subjective value in the economy. And this subjective value comes not from its perfection but from its imperfection, its sloppiness, its humanness. Even in a world where androids can dream of electric sheep, technological unemployment can be avoided.

In many respects, we already live in such a world. Isn’t much of the demand for organic food simply a desire for food that has been grown by local human beings rather than distant machines? Isn’t the success of Kickstarter driven by our desire to consume goods and services from people we know rather than from bureaucratic, “robotic” corporate organisations?

However, even if the human contribution is not an expert contribution, it must be a uniquely human contribution. Unfortunately, our educational system is geared to produce automatons, mediocre imitations of androids rather than superior, or even average, human beings.


Written by Ashwin Parameswaran

May 13th, 2013 at 8:51 pm

Deskilling and The Cul-de-Sac of Near Perfect Automation

with 5 comments

One of the core ideas in my essay ‘People Make Poor Monitors For Computers’ was the deskilling of human operators whose sole responsibility is to monitor automated systems. The ability of the automated system to deal with most scenarios on ‘auto-pilot’ results in a deskilled human operator whose skill level never rises above that of a novice and who is ill-equipped to cope with the rare but inevitable instances when the system fails. As James Reason notes1 (emphasis mine):

Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills. One of the consequences of automation, therefore, is that operators become de-skilled in precisely those activities that justify their marginalised existence. But when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions. Duncan (1987, p. 266) makes the same point: “The more reliable the plant, the less opportunity there will be for the operator to practise direct intervention, and the more difficult will be the demands of the remaining tasks requiring operator intervention.”

‘Humans monitoring near-autonomous systems’ is not just one way to make a system more automated. It is in fact the most common strategy to increase automation within complex domains. For example, drone warfare largely consists of providing robots with increasing autonomy such that “the human operator is only responsible for the most strategic decisions, with robots making every tactical choice”2.

But if this model of automation deskills the human operator, then why does anyone choose it in the first place? The answer is that the deskilling, and the fragility that comes with it, is not an instantaneous phenomenon. The first-generation automated system piggybacks on the existing expertise of the human operators, who have become experts by operating within a less-automated domain. In fact, expert human operators are often the most eager to automate away parts of their role and are the most comfortable with a monitoring role. The experience of having learnt on less automated systems gives them adequate domain expertise to manage only the strategic decisions and edge cases.

The fragility arises when the second-generation human operators, who have no experience of ever having practised routine tactical activities and interventions, have to take over the monitoring role. This problem can be mitigated by retaining the less-automated domain as a learning tool to train new human operators. But in many domains, there is no substitute for the real thing and most of the learning happens ‘on the job’. Certainly this is true of financial trading and it is almost certainly true of combat. Derivatives traders who have spent most of their careers hacking away at simple tool-like models can usually sense when their complex pricing/trading system is malfunctioning. But what about the novice trader who has spent his entire career working with a complex, illegible system?

In some domains like finance and airplane automation, this problem is already visible. But there are many other domains in which we can expect the same pattern to arise in the future. An experienced driver today is probably competent enough to monitor a self-driving car, but what about a driver twenty years from today who will likely not have spent any meaningful amount of time driving a manual car? An experienced teacher today is probably good enough to extract good results from a classroom where so much of the process of instruction and evaluation is automated, but what about the next generation of teachers? An experienced soldier or pilot with years of real combat experience is probably competent enough to manage a fleet of drones, but what about the next generation of combat soldiers whose only experience of warfare is through a computer screen?

Near-autonomous systems are perfect for ‘machine learning’ but almost useless for ‘human learning’. The system generates increasing amounts of data to improve the performance of the automated component within the system. But the system cannot provide the practice and experience that are required to enable human expertise.
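To make this concrete, here is a toy sketch of the dynamic (my own illustration, not a model from the original post): operator skill is a crude 0-to-1 proxy that rises only through hands-on practice and fades without it, and the level of automation determines how much practice the operator ever gets. The function name and all parameter values are invented purely for illustration.

```python
# Toy sketch (illustrative assumptions only): skill grows with hands-on practice
# and decays without it; automation determines the share of events handled by hand.

def operator_skill(periods, automation_level, learn=0.05, decay=0.01):
    """Return a rough long-run skill proxy in [0, 1]."""
    skill = 0.0
    manual_share = 1.0 - automation_level   # fraction of events the operator still handles
    for _ in range(periods):
        skill += learn * manual_share * (1.0 - skill)   # diminishing returns on practice
        skill -= decay * skill                          # skill fades when not reinforced
    return skill

for automation in (0.0, 0.5, 0.9, 0.99):
    print(f"automation={automation:.2f}: long-run skill ~ {operator_skill(2000, automation):.2f}")
```

Under these assumptions, the near-fully automated system leaves its operator stuck close to novice level indefinitely, which is precisely the deskilling dynamic described above.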

Automation is often seen as a way to avoid ‘irrational’ or sloppy human errors. By deskilling the human operator, this justification becomes a self-fulfilling prophecy. Because automation makes it harder for the human operator to achieve expertise, the proportion of apparently irrational errors increases. Failures are inevitably taken as evidence of human failure, upon which the system is made even more automated, thus further exacerbating the problem of deskilling.

The delayed deskilling of the human operators also means that the transition to a near-automated system is almost impossible to reverse. By definition, simply reverting to the old less-automated, tool-like system actually makes things worse as the second-generation human operators have no experience with using these tools. Compared to carving out an increased role for the now-deskilled human operator, more automation always looks like the best option. If we eventually get to the dream of perfectly autonomous robotic systems, then the deskilling may be just a temporary blip. But what if we never get to the perfectly autonomous robotic system?

Note: Apart from ‘People Make Poor Monitors For Computers’, ‘The Control Revolution And Its Discontents’ also touches upon similar topics but within the broader context of how this move to near-perfectly algorithmic systems fits into the ‘Control Revolution’.


  1. ‘Human Error’ by James Reason (1990), pg 180. ↩

  2. ‘Robot Futures’ by Illah Reza Nourbakhsh (2013), pg 76. ↩


Written by Ashwin Parameswaran

May 9th, 2013 at 5:35 pm

The Control Revolution And Its Discontents

with 20 comments

One of the key narratives on this blog is how the Great Moderation and the neo-liberal era have signified the death of truly disruptive innovation in much of the economy. When macroeconomic policy stabilises the macroeconomic system, every economic actor is incentivised to take on more macroeconomic systemic risks and shed idiosyncratic, microeconomic risks. Those that figured out this reality early on and/or had privileged access to the programs used to implement this macroeconomic stability, such as banks and financialised corporates, were the big winners – a process that is largely responsible for the rise in inequality during this period. In such an environment the pace of disruptive product innovation slows but the pace of low-risk process innovation aimed at cost reduction and improved efficiency flourishes. Therefore we get the worst of all worlds – the Great Stagnation combined with widespread technological unemployment.

This narrative naturally raises the question: when was the last time we had a truly disruptive Schumpeterian era of creative destruction? In a previous post looking at the evolution of the post-WW2 developed economic world, I argued that the so-called Golden Age was anything but Schumpeterian – as Alexander Field has argued, much of the economic growth till the 70s was built on the basis of disruptive innovation that occurred in the 1930s. So we may not have been truly Schumpeterian for at least 70 years. But what about the period from at least the mid 19th century till the Great Depression? Even a cursory reading of economic history gives us pause for thought – after all, wasn’t a significant part of this period supposed to be the Gilded Age of cartels and monopolies, which sounds anything but disruptive?

I am now of the opinion that we have never really had any long periods of constant disruptive innovation – this is not a sign of failure but simply a reality of how complex adaptive systems across domains manage the tension between efficiency, robustness, evolvability and diversity. What we have had is a subverted control revolution where repeated attempts to achieve and hold onto an efficient equilibrium fail. Creative destruction occurs despite our best efforts to stamp it out. In a sense, disruption is foreign to the essence of the industrial and post-industrial period of the last two centuries, the overriding philosophy of which is automation and algorithmisation aimed at efficiency and control. And much of our current trouble is a function of the fact that we have almost perfected the control project.

The operative word and the source of our problems is “almost”. Too many people look at the transition from the Industrial Revolution to the Algorithmic Revolution as a sea-change in perspective. But in reality, the current wave of reducing everything to a combination of “data & algorithm” and tackling every problem with more data and better algorithms is the logical end-point of the control revolution that started in the 19th century. The difference between Ford and Zara is overrated – Ford was simply the first step in a long process that focused on systematising each element of the industrial process (production, distribution, consumption) but also, crucially, on putting in place a feedback loop between each element. In some sense, Zara simply follows a much more complex and malleable algorithm than Ford did, but this algorithm is still one that is fundamentally equilibrating (not disruptive) and focused on introducing order and legibility into a fundamentally opaque environment via a process that reduces human involvement and discretion by replacing intuitive judgements with rules and algorithms. Exploratory/disruptive innovation, on the other hand, is a disequilibrating force that is created by entrepreneurs and functions outside this feedback/control loop. Both processes are important – the longer period of the gradual shedding of diversity and homogenisation in the name of efficiency as well as the periodic “collapse” that shakes up the system and eventually puts it on the path to a new equilibrium.

Of course, control has been an aim of western civilisation for a lot longer, but it was only in the 19th century that the tools of control were good enough for this desire to be implemented in any meaningful sense. And even more crucially, as James Beniger has argued, it was only in the last 150 years that the need for large-scale control arose. And now the tools and technologies in our hands to control and stabilise the economy are more powerful than they’ve ever been, likely too powerful.

If we had perfect information and everything could be algorithmised right now, i.e. if the control revolution had been perfected, then the problem would disappear. Indeed it is arguable that the need for disruption in the innovation process would no longer exist. If we get to a world where radical uncertainty has been eliminated, then the problem of systemic fragility is moot and irrelevant. It is easy to rebut the stabilisation and control project by claiming that we cannot achieve this perfect world.

But even if the techno-utopian project can achieve all that it claims it can, the path matters. We need to make it there in one piece. The current “algorithmic revolution” is best viewed as a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is simply one where many of these algorithmic and automated systems can essentially run themselves, with human beings simply performing the role of supervisors who only need to intervene in extraordinary circumstances. Therefore, it would seem logical that the same process of increased productivity that has occurred during the modern era of automation will continue during the creation of the “vast, automatic and invisible” ‘second economy’. However there are many signs that this may not be the case. What has made things better till now and has been genuine “progress” may make things worse in higher doses, and the process of deterioration can be quite dramatic.

The Uncanny Valley on the Path towards “Perfection”

In 1970, Masahiro Mori coined the term ‘uncanny valley’ to denote the phenomenon that “as robots appear more humanlike, our sense of their familiarity increases until we come to a valley”. When robots are almost but not quite human-like, they invoke a feeling of revulsion rather than empathy. As Karl MacDorman notes, “Mori cautioned robot designers not to make the second peak their goal — that is, total human likeness — but rather the first peak of humanoid appearance to avoid the risk of their robots falling into the uncanny valley.”

A similar valley exists in the path of increased automation and algorithmisation. Much of the discussion in this section of the post builds upon concepts I explored via a detailed case study in a previous post titled ‘People Make Poor Monitors for Computers’.

The 21st century version of the control project, i.e. the algorithmic project, consists of two components:
1. More data – ‘Big Data’.
2. Better and more comprehensive algorithms.

The process therefore goes hand in hand with increased complexity and, crucially, poorer and less intuitive feedback for the human operator. This results in increased fragility and a system prone to catastrophic breakdowns. The typical solution chosen is further algorithmisation, i.e. an improved algorithm and more data, and, if necessary, increased slack and redundancy. This solution exacerbates the problem of feedback and temporarily pushes the catastrophic scenario further out into the tail, but it does not eliminate it. Behavioural adaptation by human agents to the slack and the “better” algorithm can make a catastrophic event as likely as it was before but with a higher magnitude. But what is even more disturbing is that this cycle of increasing fragility can occur even without any such adaptation. This is the essence of the fallacy of the ‘defence in depth’ philosophy that lies at the core of most fault-tolerant algorithmic designs and that I discussed in my earlier post: the increased “safety” of the automated system allows the build-up of human errors without any feedback from deteriorating system performance.
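The ‘defence in depth’ fallacy can be illustrated with a toy Monte Carlo sketch (my own illustration with invented numbers, not anything from the original post): when redundancy masks individual faults, they generate no feedback and are never repaired, so visible failures become rarer but each one involves more coincident faults.

```python
# Toy sketch (illustrative assumptions only): masked faults accumulate silently
# until the defences are exhausted and everything surfaces at once.
import random

def simulate(periods, p_fault, redundancy):
    latent, failures, faults_released = 0, 0, 0
    for _ in range(periods):
        if random.random() < p_fault:
            latent += 1                     # new fault; invisible while masked
        if latent > redundancy:             # defences exhausted: all latent faults surface
            failures += 1
            faults_released += latent
            latent = 0
    return failures, (faults_released / failures if failures else 0.0)

random.seed(1)
for r in (0, 1, 3):
    n, size = simulate(100_000, 0.05, r)
    print(f"redundancy={r}: visible failures={n}, faults per failure={size:.1f}")
```

The total number of faults is the same in every case; the defences only change how they surface: less often, but in larger and harder-to-diagnose bundles.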

A rule of thumb for getting around this problem is to use slack only in those domains where failure is catastrophic and to prioritise feedback when failure is not critical and cannot kill you. But in an uncertain environment, this rule is very difficult to apply. How do you really know that a particular disturbance will not kill you? Moreover, the loop of automation -> complexity -> redundancy endogenously turns a non-catastrophic event into one with catastrophic consequences.

Once it has gone beyond a certain threshold, this trajectory is almost impossible to reverse without undergoing an interim collapse. The easy short-term fix is always to make a patch to the algorithm, get more data and build in some slack if needed. An orderly rollback is almost impossible due to the deskilling of the human workforce and the risk of collapse as other components in the system have adapted to the new reality. Even simply reverting to the old, more tool-like system makes things a lot worse because the human operators are no longer experts at using those tools – the process of algorithmisation has deskilled the human operator. Moreover, the endogenous nature of this buildup of complexity eventually makes the system fundamentally illegible to the human operator – a phenomenon that is ironic given that the fundamental aim of the control revolution is to increase legibility.

The Sweet Spot Before the Uncanny Valley: Near-Optimal Yet Resilient

Although it is easy to imagine an inefficient and dramatically sub-optimal system that is robust, complex adaptive systems manage to operate in configurations that are near-optimal yet resilient. Efficiency is not only important due to the obvious reality that resources are scarce but also because slack at the individual and corporate level is a significant cause of unemployment. Such near-optimal robustness in both natural and economic systems is not achieved with simplistically diverse agent compositions or with significant redundancies or slack at the agent level.

Diversity and redundancy carry a cost in terms of reduced efficiency. Precisely for this reason, real-world economic systems appear to exhibit nowhere near the diversity that would seem to ensure system resilience. Rick Bookstaber noted recently that capitalist competition, if anything, seems to lead to a reduction in diversity. As Youngme Moon’s excellent book ‘Different’ lays out, competition in most markets seems to result in less diversity, not more. We may have a choice of 100 brands of toothpaste but most of us would struggle to meaningfully differentiate between them.

Similarly, almost all biological and ecological complex adaptive systems are a lot less diverse and contain less pure redundancy than conventional wisdom would expect. Resilient biological systems tend to preserve degeneracy rather than simple redundancy and resilient ecological systems tend to contain weak links rather than naive ‘law of large numbers’ diversity. The key to achieving resilience with near-optimal configurations is to tackle disturbances and generate novelty/innovation with an emergent systemic response that reconfigures the system rather than simply a localised response. Degeneracy and weak links are key to such a configuration. The equivalent in economic systems is a constant threat of new firm entry.

The viewpoint which emphasises weak links and degeneracy also implies that it is not the keystone species and the large firms that determine resilience but the presence of smaller players ready to reorganise and pick up the slack when an unexpected event occurs. Such a focus is further complicated by the fact that in a stable environment, the system may become less and less resilient with no visible consequences – weak links may be eliminated, barriers to entry may progressively increase etc., with no damage done to system performance in the stable equilibrium phase. Yet this loss of resilience can prove fatal when the environment changes and can leave the system unable to generate novelty/disruptive innovation. This highlights the folly of statements such as ‘what’s good for GM is good for America’. We need to focus not just on the keystone species, but on the fringes of the ecosystem.

 THE UNCANNY VALLEY AND THE SWEET SPOT

The Business Cycle in the Uncanny Valley – Deterioration of the Median as well as the Tail

Many commentators have pointed out that the process of automation has coincided with a deskilling of the human workforce. For example, below is a simplified version of the relation between mechanisation and the skill required of the human operator that James Bright documented in 1958 (via Harry Braverman’s ‘Labor and Monopoly Capital’). But till now, it has been largely true that although human performance has suffered, the performance of the system has become vastly better. If the problem were just a drop in human performance while the system got better, our problem would be less acute.

AUTOMATION AND DESKILLING OF THE HUMAN OPERATOR

But what is at stake is a deterioration in system performance – it is not only a matter of being exposed to more catastrophic setbacks. Eventually mean/median system performance deteriorates as more and more pure slack and redundancy needs to be built in at all levels to make up for the irreversibly fragile nature of the system. The business cycle is an oscillation between efficient fragility and robust inefficiency. Over the course of successive cycles, both poles of this oscillation get worse, which leads to median/mean system performance falling rapidly at the same time that the tails deteriorate due to the increased illegibility of the automated system to the human operator.

THE UNCANNY VALLEY BUSINESS CYCLE

The Visible Hand and the Invisible Foot, Not the Invisible Hand

The conventional economic view of the economy is one of a primarily market-based equilibrium punctuated by occasional shocks. Even the long arc of innovation is viewed as a sort of benign discovery of novelty without any disruptive consequences. The radical disequilibrium view (which I have been guilty of espousing in the past) is one of constant micro-fragility and creative destruction. However, the history of economic evolution in the modern era has been quite different – neither market-based equilibrium nor constant disequilibrium, but a series of off-market attempts to stabilise relations outside the sphere of the market combined with occasional phase transitions that bring about dramatic change. The presence of rents is a constant and the control revolution has for the most part succeeded in preserving the rents of incumbents, barring the occasional spectacular failure. It is these occasional “failures” that have given us results that in some respect resemble those that would have been created by a market over the long run.

As Bruce Wilder puts it (sourced from 1, 2, 3 and mashed up together by me):

The main problem with the standard analysis of the “market economy”, as well as many variants, is that we do not live in a “market economy”. Except for financial markets and a few related commodity markets, markets are rare beasts in the modern economy. The actual economy is dominated by formal, hierarchical, administrative organization and transactions are governed by incomplete contracts, explicit and implied. “Markets” are, at best, metaphors…..
Over half of the American labor force works for organizations employing over 100 people. Actual markets in the American economy are extremely rare and unusual beasts. An economics of markets ought to be regarded as generally useful as a biology of cephalopods, amid the living world of bones and shells. But, somehow the idealized, metaphoric market is substituted as an analytic mask, laid across a vast variety of economic relations and relationships, obscuring every important feature of what actually is…..
The elaborate theory of market price gives us an abstract ideal of allocative efficiency, in the absence of any firm or household behaving strategically (aka perfect competition). In real life, allocative efficiency is far less important than achieving technical efficiency, and, of course, everyone behaves strategically.
In a world of genuine uncertainty and limitations to knowledge, incentives in the distribution of income are tied directly to the distribution of risk. Economic rents are pervasive, but potentially beneficial, in that they provide a means of stable structure, around which investments can be made and production processes managed to achieve technical efficiency.
In the imaginary world of complete information of Econ 101, where markets are the dominant form of economic organizations, and allocative efficiency is the focus of attention, firms are able to maximize their profits, because they know what “maximum” means. They are unconstrained by anything.
In the actual, uncertain world, with limited information and knowledge, only constrained maximization is possible. All firms, instead of being profit-maximizers (not possible in a world of uncertainty), are rent-seekers, responding to instituted constraints: the institutional rules of the game, so to speak. Economic rents are what they have to lose in this game, and protecting those rents, orients their behavior within the institutional constraints…..
In most of our economic interactions, price is not a variable optimally digesting information and resolving conflict, it is a strategic instrument, held fixed as part of a scheme of administrative control and information discovery……The actual, decentralized “market” economy is not coordinated primarily by market prices—it is coordinated by rules. The dominant relationships among actors is not one of market exchange at price, but of contract: implicit or explicit, incomplete and contingent.

James Beniger’s work is the definitive document on how the essence of the ‘control revolution’ has been an attempt to take economic activity out of the sphere of direct influence of the market. But that is not all – the long process of algorithmisation over the last 150 years has also, wherever possible, replaced implicit rules/contracts and principal-agent relationships with explicit processes and rules. Beniger also notes that after a certain point, the increasing complexity of the system is an endogenous phenomenon i.e. further iterations are aimed at controlling the control process itself. As I have illustrated above, after a certain threshold, the increasing complexity, fragility and deterioration in performance become a self-fulfilling positive feedback process.

Although our current system bears very little resemblance to the market economy of the textbook, there was a brief period during the transition from the traditional economy to the control economy, during the early part of the 19th century, when this was the case: 26% of all imports into the United States in 1827 were sold at auction. But the displacement of traditional controls (familial ties) by the invisible hand of market controls was merely a transitional phase, soon to be displaced by the visible hand of the control revolution.

The Soviet Project, Western Capitalism and High Modernity

Communism and Capitalism are both pillars of the high-modernist control project. The signature of modernity is not markets, but technocratic control projects. Capitalism has simply done it in a manner that is more easily and more regularly subverted. It is the occasional failure of the control revolution that is the source of the capitalist economy’s long-run success. Conversely, the failure of the Soviet Project was due to its all-too-successful adherence to and implementation of the high-modernist ideal. The significance of the threat from crony capitalism is a function of the fact that, by forming a coalition and partnership of the corporate and state control projects, it enables the implementation of the control revolution to be that much more effective.

The Hayekian argument of dispersed knowledge and its importance in seeking equilibrium is not as important as it seems in explaining why the Soviet project failed. As Joseph Berliner has illustrated, the Soviet economy did not fail to reach local equilibria. Where it failed so spectacularly was in extracting itself out of these equilibria. The dispersed knowledge argument is open to the riposte that better implementation of the control revolution will eventually overcome these problems – indeed much of the current techno-utopian version of the control revolution is based on this assumption. It is a weak argument for free enterprise; a much stronger argument is the need to maintain a system that retains the ability to reinvent itself and find a new, hitherto unknown trajectory via the destruction of the incumbents combined with the emergence of the new. Where the Soviet experiment failed is that it eliminated the possibility of failure, the threat that Berliner called the ‘invisible foot’. The success of the free enterprise system has been built not upon the positive incentive of the invisible hand but upon the negative incentive of the invisible foot countering the visible hand of the control revolution. It is this threat, and occasional realisation, of failure and disorder that is the key to maintaining system resilience and evolvability.

 

 

Notes:

  • Borrowing from Beniger, control here simply means “purposive influence towards a predetermined goal”. Similarly, equilibrium in this context is best defined as a state in which economic agents are not forced to change their routines, theories and policies.
  • On the uncanny valley, I wrote a similar post on why perfect memory does not lead to perfect human intelligence. Even if a computer benefits from more data and better memory, we may not. And the evidence suggests that the deterioration in human performance is steepest in the zone close to “perfection”.
  • An argument similar to my assertion on the misconception of a free enterprise economy as a market economy can be made about the nature of democracy. Rather than as a vehicle that enables the regular expression of the political will of the electorate, democracy may be more accurately thought of as the ability to effect a dramatic change when the incumbent system of plutocratic/technocratic rule diverges too much from popular opinion. As always, stability and prevention of disturbances can cause the eventual collapse to be more catastrophic than it needs to be.
  • Although James Beniger’s ‘Control Revolution’ is the definitive reference, Antoine Bousquet’s book ‘The Scientific Way of Warfare’ on the similar revolution in military warfare is equally good. Bousquet’s book highlights the fact that the military is often the pioneer of the key projects of the control revolution and it also highlights just how similar the latest phase of this evolution is to early phases – the common desire for control combined with its constant subversion by reality. Most commentators assume that the threat to the project is external – by constantly evolving guerrilla warfare for example. But the analysis of the uncanny valley suggests that an equally great threat is endogenous – of increasing complexity and illegibility of the control project itself. Bousquet also explains how the control revolution is a child of the modern era and the culmination of the philosophy of the Enlightenment.
  • Much of the “innovation” of the control revolution was not technological but institutional – limited liability, macroeconomic stabilisation via central banks etc.
  • For more on the role of degeneracy in biological systems and how it enables near-optimal resilience, this paper by James Whitacre and Axel Bender is excellent.

Written by Ashwin Parameswaran

February 21st, 2012 at 5:38 pm

People Make Poor Monitors for Computers

with 55 comments

In the early hours of June 1st 2009, Air France Flight 447 crashed into the Atlantic Ocean. Till the black boxes of AF447 were recovered in April 2011, the exact circumstances of the crash remained a mystery. The most widely accepted explanation for the disaster attributes a large part of the blame to human error when faced with a partial but not fatal systems failure. Yet a small but vocal faction blames the disaster and others like it on the increasingly automated nature of modern passenger airplanes.

This debate bears an uncanny resemblance to a similar debate as to the causes of the financial crisis – many commentators blame the persistently irrational nature of human judgement for the recurrence of financial crises. Others such as Amar Bhide blame the unwise deference to imperfect financial models over human judgement. In my opinion, both perspectives miss the true dynamic. These disasters are not driven by human error or systems error alone but by fatal flaws in the interaction between human intelligence and complex, near fully-automated systems.

In a recent article drawing upon the black box transcripts, Jeff Wise attributes the crash primarily to a “simple but persistent mistake on the part of one of the pilots”. According to Wise, the co-pilot reacted to the persistent stall warning by “pulling back on the stick, the exact opposite of what he must do to recover from the stall”.

But there are many hints that the story is nowhere near as simple. As Peter Garrison notes:

every pilot knows that to recover from a stall you must get the nose down. But because a fully developed stall in a large transport is considered highly unlikely, and because in IFR air traffic vertical separation, and therefore control of altitude, is important, transport pilots have not been trained to put the nose down when they hear the stall warning — which heralds, after all, not a fully developed stall, but merely an approaching one. Instead, they have been trained to increase power and to “fly out of the stall” without losing altitude. Perhaps that is what the pilot flying AF447 intended. But the airplane was already too deeply stalled, and at too high an altitude, to recover with power alone.

The patterns of the AF447 disaster are not unique. As Chris Sorensen observes, over 50 commercial aircraft have crashed in “loss-of-control” accidents in the last five years, a trend for which there is no shortage of explanations:

Some argue that the sheer complexity of modern flight systems, though designed to improve safety and reliability, can overwhelm even the most experienced pilots when something actually goes wrong. Others say an increasing reliance on automated flight may be dulling pilots’ sense of flying a plane, leaving them ill-equipped to take over in an emergency. Still others question whether pilot-training programs have lagged behind the industry’s rapid technological advances.

But simply invoking terms such as “automation addiction” or blaming disasters on irrational behaviour during times of intense stress does not get at the crux of the issue.

People Make Poor Monitors for Computers

Airplane automation systems are not the first to discover the truth in the comment made by David Jenkins that “computers make great monitors for people, but people make poor monitors for computers.” As James Reason observes in his seminal book ‘Human Error’:

We have thus traced a progression from where the human is the prime mover and the computer the slave to one in which the roles are very largely reversed. For most of the time, the operator’s task is reduced to that of monitoring the system to ensure that it continues to function within normal limits. The advantages of such a system are obvious; the operator’s workload is substantially reduced, and the [system] performs tasks that the human can specify but cannot actually do. However, the main reason for the human operator’s continued presence is to use his still unique powers of knowledge-based reasoning to cope with system emergencies. And this is a task peculiarly ill-suited to the particular strengths and weaknesses of human cognition…..

most operator errors arise from a mismatch between the properties of the system as a whole and the characteristics of human information processing. System designers have unwittingly created a work situation in which many of the normally adaptive characteristics of human cognition (its natural heuristics and biases) are transformed into dangerous liabilities.

As Jeff Wise notes, it is impossible to stall an Airbus in most conditions. AF447, however, went into a state known as ‘alternate law’, which most pilots have never experienced and in which the airplane could be stalled:

“You can’t stall the airplane in normal law,” says Godfrey Camilleri, a flight instructor who teaches Airbus 330 systems to US Airways pilots….But once the computer lost its airspeed data, it disconnected the autopilot and switched from normal law to “alternate law,” a regime with far fewer restrictions on what a pilot can do. “Once you’re in alternate law, you can stall the airplane,” Camilleri says….It’s quite possible that Bonin had never flown an airplane in alternate law, or understood its lack of restrictions. According to Camilleri, not one of US Airway’s 17 Airbus 330s has ever been in alternate law. Therefore, Bonin may have assumed that the stall warning was spurious because he didn’t realize that the plane could remove its own restrictions against stalling and, indeed, had done so.

This inability of the human operator to fill in the gaps in a near-fully automated system was identified by Lisanne Bainbridge as one of the ironies of automation which James Reason summarised:

the same designer who seeks to eliminate human beings still leaves the operator “to do the tasks which the designer cannot think how to automate” (Bainbridge,1987, p.272). In an automated plant, operators are required to monitor that the automatic system is functioning properly. But it is well known that even highly motivated operators cannot maintain effective vigilance for anything more than quite short periods; thus, they are demonstrably ill-suited to carry out this residual task of monitoring for rare, abnormal events. In order to aid them, designers need to provide automatic alarm signals. But who decides when these automatic alarms have failed or been switched off?

As Robert Charette notes, the same is true for airplane automation:

operators are increasingly left out of the loop, at least until something unexpected happens. Then the operators need to get involved quickly and flawlessly, says Raja Parasuraman, professor of psychology at George Mason University in Fairfax, Va., who has been studying the issue of increasingly reliable automation and how that affects human performance, and therefore overall system performance. ”There will always be a set of circumstances that was not expected, that the automation either was not designed to handle or other things that just cannot be predicted,” explains Parasuraman. So as system reliability approaches—but doesn’t quite reach—100 percent, ”the more difficult it is to detect the error and recover from it,” he says…..In many ways, operators are being asked to be omniscient systems administrators who are able to jump into the middle of a situation that a complex automated system can’t or wasn’t designed to handle, quickly diagnose the problem, and then find a satisfactory and safe solution.

Stored Routines Are Not Effective in Rare Situations

As James Reason puts it:

the main reason why humans are retained in systems that are primarily controlled by intelligent computers is to handle ‘non-design’ emergencies. In short, operators are there because system designers cannot foresee all possible scenarios of failure and hence are not able to provide automatic safety devices for every contingency. In addition to their cosmetic value, human beings owe their inclusion in hazardous systems to their unique, knowledge-based ability to carry out ‘on-line’ problem solving in novel situations. Ironically, and notwithstanding the Apollo 13 astronauts and others demonstrating inspired improvisation, they are not especially good at it; at least not in the conditions that usually prevail during systems emergencies. One reason for this is that stressed human beings are strongly disposed to employ the effortless, parallel, preprogrammed operations of highly specialised, low-level processors and their associated heuristics. These stored routines are shaped by personal history and reflect the recurring patterns of past experience……

Why do we have operators in complex systems? To cope with emergencies. What will they actually use to deal with these problems? Stored routines based on previous interactions with a specific environment. What, for the most part, is their experience within the control room? Monitoring and occasionally tweaking the plant while it performs within safe operating limits. So how can they perform adequately when they are called upon to reenter the control loop? The evidence is that this task has become so alien and the system so complex that, on a significant number of occasions, they perform badly.

Wise again identifies this problem in the case of AF447:

While Bonin’s behavior is irrational, it is not inexplicable. Intense psychological stress tends to shut down the part of the brain responsible for innovative, creative thought. Instead, we tend to revert to the familiar and the well-rehearsed. Though pilots are required to practice hand-flying their aircraft during all phases of flight as part of recurrent training, in their daily routine they do most of their hand-flying at low altitude—while taking off, landing, and maneuvering. It’s not surprising, then, that amid the frightening disorientation of the thunderstorm, Bonin reverted to flying the plane as if it had been close to the ground, even though this response was totally ill-suited to the situation.

Deskilling From Automation

As James Reason observes:

Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills. One of the consequences of automation, therefore, is that operators become de-skilled in precisely those activities that justify their marginalised existence. But when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions. Duncan (1987, p. 266) makes the same point: “The more reliable the plant, the less opportunity there will be for the operator to practise direct intervention, and the more difficult will be the demands of the remaining tasks requiring operator intervention.”

Opacity and Too Much Information of Uncertain Reliability

Wise captures this problem and its interaction with a human who has very little experience in managing the crisis scenario:

Over the decades, airliners have been built with increasingly automated flight-control functions. These have the potential to remove a great deal of uncertainty and danger from aviation. But they also remove important information from the attention of the flight crew. While the airplane’s avionics track crucial parameters such as location, speed, and heading, the human beings can pay attention to something else. But when trouble suddenly springs up and the computer decides that it can no longer cope—on a dark night, perhaps, in turbulence, far from land—the humans might find themselves with a very incomplete notion of what’s going on. They’ll wonder: What instruments are reliable, and which can’t be trusted? What’s the most pressing threat? What’s going on? Unfortunately, the vast majority of pilots will have little experience in finding the answers.

A similar scenario occurred in the case of the Qantas-owned A380 which took off from Singapore in November 2010:

Shortly after takeoff from Singapore, one of the hulking A380’s four engines exploded and sent pieces of the engine cowling raining down on an Indonesian island. The blast also damaged several of the A380’s key systems, causing the unsuspecting flight crew to be bombarded with no less than 54 different warnings and error messages—so many that co-pilot Matt Hicks later said that, at one point, he held his thumb over a button that muted the cascade of audible alarms, which threatened to distract Capt. Richard De Crespigny and the rest of the feverishly working flight crew. Luckily for passengers, Qantas Flight 32 had an extra two pilots in the cockpit as part of a training exercise, all of whom pitched in to complete the nearly 60 checklists required to troubleshoot the various systems. The wounded plane limped back to Singapore Changi Airport, where it made an emergency landing.

Again James Reason captures the essence of the problem:

One of the consequences of the developments outlined above is that complex, tightly-coupled and highly defended systems have become increasingly opaque to the people who manage, maintain and operate them. This opacity has two aspects: not knowing what is happening and not understanding what the system can do. As we have seen, automation has wrought a fundamental change in the roles people play within certain high-risk technologies. Instead of having ‘hands on’ contact with the process, people have been promoted “to higher-level supervisory tasks and to long-term maintenance and planning tasks” (Rasmussen, 1988). In all cases, these are far removed from the immediate processing. What direct information they have is filtered through the computer-based interface. And, as many accidents have demonstrated, they often cannot find what they need to know while, at the same time, being deluged with information they do not want nor know how to interpret.

Absence of Intuitive Feedback

Among others, Hubert and Stuart Dreyfus have shown that human expertise relies on an intuitive and tacit understanding of the situation rather than a rule-bound and algorithmic understanding. The development of intuitive expertise depends upon the availability of clear and intuitive feedback which complex, automated systems are often unable to provide.

In AF447, when the co-pilot did push forward on the stick (the “correct” response), the behaviour of the stall warning was exactly the opposite of what he would have intuitively expected:

At one point the pilot briefly pushed the stick forward. Then, in a grotesque miscue unforeseen by the designers of the fly-by-wire software, the stall warning, which had been silenced, as designed, by very low indicated airspeed, came to life. The pilot, probably inferring that whatever he had just done must have been wrong, returned the stick to its climb position and kept it there for the remainder of the flight.

Absence of feedback prevents effective learning but the wrong feedback can have catastrophic consequences.

The Fallacy of Defence in Depth

In complex automated systems, the redundancies and safeguards built into the system also contribute to its opacity. By protecting system performance against single faults, redundancies allow the latent buildup of multiple faults. Jens Rasmussen called this ‘the fallacy of defence in depth’ which James Reason elaborates upon:

the system very often does not respond actively to single faults. Consequently, many errors and faults made by the staff and maintenance personnel do not directly reveal themselves by functional response from the system. Humans can operate with an extremely high level of reliability in a dynamic environment when slips and mistakes have immediately visible effects and can be corrected……Violation of safety preconditions during work on the system will probably not result in an immediate functional response, and latent effects of erroneous acts can therefore be left in the system. When such errors are allowed to be present in a system over a longer period of time, the probability of coincidence of the multiple faults necessary for release of an accident is drastically increased. Analyses of major accidents typically show that the basic safety of the system has eroded due to latent errors.

This is exactly what occurred on Malaysia Airlines Flight 124 in August 2005:

The fault-tolerant ADIRU was designed to operate with a failed accelerometer (it has six). The redundant design of the ADIRU also meant that it wasn’t mandatory to replace the unit when an accelerometer failed. However, when the second accelerometer failed, a latent software anomaly allowed inputs from the first faulty accelerometer to be used, resulting in the erroneous feed of acceleration information into the flight control systems. The anomaly, which lay hidden for a decade, wasn’t found in testing because the ADIRU’s designers had never considered that such an event might occur.

Again, defence-in-depth systems are uniquely unsuited to human expertise as Gary Klein notes:

In a massively defended system, if an accident sneaks through all the defenses, the operators will find it far more difficult to diagnose and correct it. That is because they must deal with all of the defenses, along with the accident itself…..A unit designed to reduce small errors helped to create a large one.

Two Approaches to Airplane Automation: Airbus and Boeing

Although both Airbus and Boeing have adopted fly-by-wire technology, there are fundamental differences in their respective approaches. Whereas Boeing’s system enforces soft limits that the pilot can override at his discretion, Airbus’ fly-by-wire system has built-in hard limits that the pilot cannot completely override.
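A stylised way to see the difference is as two envelope-protection functions: one clamps the commanded value at the limit no matter what, the other resists beyond the limit but yields to a sustained pilot override. The limit value and the code below are purely illustrative and bear no relation to either manufacturer’s actual control laws:

```python
# Stylised envelope protection (illustrative only, not either manufacturer's code).
MAX_BANK_DEG = 67.0   # hypothetical bank-angle limit

def hard_limit(commanded_bank: float, pilot_override: bool = False) -> float:
    """Airbus-style hard limit: the command is clamped regardless of any override."""
    return max(-MAX_BANK_DEG, min(MAX_BANK_DEG, commanded_bank))

def soft_limit(commanded_bank: float, pilot_override: bool = False) -> float:
    """Boeing-style soft limit: resist beyond the limit, but defer to the pilot."""
    if abs(commanded_bank) <= MAX_BANK_DEG or pilot_override:
        return commanded_bank
    return max(-MAX_BANK_DEG, min(MAX_BANK_DEG, commanded_bank))

print(hard_limit(80.0, pilot_override=True))   # 67.0: the envelope wins
print(soft_limit(80.0, pilot_override=True))   # 80.0: the pilot wins
```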

As Simon Calder notes, pilots have raised concerns in the past about Airbus’ systems being “overly sophisticated” as opposed to Boeing’s “rudimentary but robust” system. But this does not imply that the Airbus approach is inferior. It is instructive to analyse Airbus’ response to pilot demands for a manual override switch that allows the pilot to take complete control:

If we have a button, then the pilot has to be trained on how to use the button, and there are no supporting data on which to base procedures or training…..The hard control limits in the Airbus design provide a consistent “feel” for the aircraft, from the 120-passenger A319 to the 350-passenger A340. That consistency itself builds proficiency and confidence……You don’t need engineering test pilot skills to fly this airplane.

David Evans captures the essence of this philosophy as aimed at minimising the “potential for human error, to keep average pilots within the limits of their average training and skills”.

It is easy to criticise Airbus’ approach but the hard constraints clearly demand less from the pilot. In the hands of an expert pilot, Boeing’s system may outperform. But if the pilot is a novice, Airbus’ system almost certainly delivers superior results. Moreover, as I discussed earlier in the post, the transition to an almost fully automated system by itself reduces the probability that the human operator can achieve intuitive expertise. In other words, the transition to near-autonomous systems creates a pool of human operators who appear to commit frequent “irrational” errors, which makes the transition almost impossible to reverse.

 *          *         *

People Make Poor Monitors for Some Financial Models

In an earlier post, I analysed Amar Bhide’s argument that a significant causal agent in the financial crisis was the replacement of discretion with models in many areas of finance – for example, banks’ mortgage lending decisions. In his excellent book, ‘A Call for Judgement’, he expands on this argument and, amongst other technologies, lays some of the blame for this over-mechanisation of finance on the ubiquitous Black-Scholes-Merton (BSM) formula. Although I agree with much of his book, this thesis is too simplistic.

There is no doubt that BSM has many limitations – amongst the most severe being the assumption of continuous asset price movements, a known and flat volatility surface, and an asset price distribution free of fat tails. But the systemic impact of all these limitations is grossly overstated:

  • BSM and similar models have never been used as “valuation” methods on a large scale in derivatives markets but as tools that back out an implied volatility and generate useful hedge ratios, taking market prices for options as given (see the sketch after this list). In other words, volatility plays the role of the “wrong number in the wrong formula to get the right price”.
  • When “simple” BSM-like models are used to price more exotic derivatives, they have a modest role to play. As Emanuel Derman puts it, practitioners use models as “interpolating formulas that take you from known prices of liquid securities to the unknown values of illiquid securities”.
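To make the first point concrete, the sketch below shows the standard practitioner workflow in miniature: take a quoted option price as given, back out the implied volatility by inverting the Black-Scholes formula, and read off the hedge ratio (delta). The formula is the textbook one; the market data are made up:

```python
# Minimal sketch of the practitioner workflow: market price in, implied
# volatility and hedge ratio (delta) out. Illustrative numbers only.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def implied_vol(market_price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Back out the volatility that reproduces the quoted price (bisection)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < market_price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

S, K, T, r = 100.0, 100.0, 1.0, 0.02
quoted = 9.50                                  # taken from the market, not the model
sigma = implied_vol(quoted, S, K, T, r)        # the "wrong number"
d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
delta = N(d1)                                  # the hedge ratio the trader actually uses
print(round(sigma, 4), round(delta, 4))
```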

Nevertheless, this does not imply that financial modelling choices have no role to play in determining system resilience. But the role was more subtle and had less to do with the imperfections of the models themselves than with the imperfections of how complex models used to price complex products could be used by human traders.

Since the discovery of the volatility smile, traders have known that the interpolation process to price exotic options requires something more than a simple BSM model. One would assume that traders would want to use a model that was as accurate and comprehensive as possible. But this has rarely been the case. Supposedly inferior local volatility models still flourish and even in some of the most complex domains of exotic derivatives, models are still chosen based on their intuitive similarities to a BSM-like approach where the free parameters can be thought of as volatility or correlation, e.g. the Libor Market Model.

The choice of intuitive understanding over model accuracy is not unwarranted. As all market practitioners know, there is no such thing as a perfect derivatives pricing model. Paul Wilmott hit the nail on the head when he observed that “the many improvements on Black-Scholes are rarely improvements, the best that can be said for many of them is that they are just better at hiding their faults. Black-Scholes also has its faults, but at least you can see them”.

However, as markets have evolved, maintaining this balance between intuitive understanding and accuracy has become increasingly difficult:

  • Intuitive yet imperfect models require experienced and expert traders. Scaling up trading volumes of exotic derivatives, however, requires that pricing and trading systems be pushed out to novice traders as well as non-specialists such as salespeople.
  • With the increased complexity of derivative products, preserving an intuitive yet sufficiently accurate model becomes an almost impossible task.
  • Product complexity combined with the inevitable discretion available to traders when they use simpler models presents significant control challenges and an increased potential for fraud.

In this manner, the same paradoxical evolution that has been observed in nuclear plants and airplane automation is now being experienced in finance. The need to scale up and accommodate complex products necessitates the introduction of complex, unintuitive models to which human intuitive expertise is unable to add any value. In such a system, a novice is often as good as a more experienced operator. The ability of these models to tackle most scenarios on ‘auto-pilot’ results in a deskilled and novice-heavy human component in the system which is ill-equipped to tackle the inevitable occasion when the model fails. The failure is inevitably taken as evidence of human error, upon which the system is made even more automated and yet more safeguards and redundancies are built in. This exacerbates the problem of absent feedback when small errors occur. The buildup of latent errors again increases and failures become even more catastrophic.

 *          *         *

My focus on airplane automation and financial models is simply illustrative. There are ample signs of this incompatibility between human monitors and near-fully automated systems in other domains as well. For example, Andrew Hill observes:

In developed economies, Lynda Gratton writes in her new book The Shift, “when the tasks are more complex and require innovation or problem solving, substitution [by machines or computers] has not taken place”. This creates a paradox: far from making manufacturers easier to manage, automation can make managers’ jobs more complicated. As companies assign more tasks to machines, they need people who are better at overseeing the more sophisticated workforce and doing the jobs that machines cannot….

The insight that greater process efficiency adds to the pressure on managers is not new. Even Frederick Winslow Taylor – these days more often caricatured as a dinosaur for his time-and-motion studies – pointed out in his century-old The Principles of Scientific Management that imposing a more mechanistic regime on workers would oblige managers to take on “other types of duties which involve new and heavy burdens”…..

There is no doubt Foxconn and its peers will be able to automate their labour-intensive processes. They are already doing so. The big question is how easily they will find and develop managers able to oversee the highly skilled workforce that will march with their robot armies.

This process of integrating human intelligence with artificial intelligence is simply a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is important in that, for the first time, many of these algorithmic and automated systems can essentially run themselves, with human beings performing the role of supervisors who only need to intervene in extraordinary circumstances. Although it seems logical that the same process of increased productivity that has occurred during the modern ‘Control Revolution’ will continue during the creation of the “vast, automatic and invisible” ‘second economy’, the incompatibility of human cognition with near-fully automated systems suggests that it may only do so by taking on an increased risk of rare but catastrophic failure.


Written by Ashwin Parameswaran

December 29th, 2011 at 11:58 pm

Innovation, Stagnation and Unemployment

with 18 comments

All economists assert that wants are unlimited. From this follows the view that technological unemployment is impossible in the long run. Yet there are a growing number of commentators (such as Brian Arthur) who insist that increased productivity from automation and improvements in artificial intelligence have a part to play in the current unemployment crisis. At the same time, a growing chorus laments the absence of innovation – Tyler Cowen’s thesis that the recent past has been a ‘Great Stagnation’ is compelling.

But don’t the two assertions contradict each other? Can we have an increase in technological unemployment as well as an innovation deficit? Is the concept of technological unemployment itself valid? Is there anything about the current phase of labour-displacing technological innovation that is different from the past 150 years? To answer these questions, we need a deeper understanding of the dynamics of innovation in a capitalist economy, i.e. how exactly have innovation and productivity growth proceeded in a manner consistent with full employment in the past? In the process, I also hope to connect the long-run structural dynamic with the Minskyian business cycle dynamic. It is common to view the structural dynamic of technological change as a sort of ‘deus ex machina’ – if not independent of the business cycle, then certainly unconnected with it. I hope to convince some of you that our choices regarding business cycle stabilisation have a direct bearing on the structural dynamic of innovation. I have touched upon many of these topics in a scattered fashion in previous posts; this post is an attempt to present them coherently, with all my assumptions explicitly laid out in relation to established macroeconomic theory.

Micro-Foundations

Imperfectly competitive markets are the norm in most modern economies. In instances where economies of scale or network effects dominate, a market may even be oligopolistic or monopolistic (e.g. Google, Microsoft). This assumption is of course nothing new to conventional macroeconomic theory. Where my analysis differs is in viewing the imperfectly competitive process as one that is permanently in disequilibrium. Rents or “abnormal” profits are a persistent feature of the economy at the level of the firm and are not competed away even in the long run. The primary objective of incumbent rent-earners is to build a moat around their existing rents whereas the primary objective of competition from new entrants is not to drive rents down to zero, but to displace the incumbent rent-earner. It is not the absence of rents but the continuous threat to the survival of the incumbent rent-earner that defines a truly vibrant capitalist economy, i.e. each niche must be continually contested by new entrants. This does not imply, even if the market for labour is perfectly competitive, that an abnormal share of GDP goes to “capital”. Most new entrants fail and suffer economic losses in their bid to capture economic rents and even a dominant incumbent may lose a significant proportion of past earned rents in futile attempts to defend its competitive position before its eventual demise.

This emphasis on disequilibrium points to the fact that the “optimum” state for a dynamically competitive capitalist economy is one of constant competitive discomfort and disorder. This perspective leads to a dramatically different policy emphasis from conventional theory, which universally focuses on increasing positive incentives to economic players and relying on the invisible hand to guide the economy to a better equilibrium. Both Schumpeter and Marx understood the importance of this competitive discomfort for the constant innovative dynamism of a capitalist economy – my point is simply that a universal discomfort of capital is also important to maintain distributive justice in a capitalist economy. In fact, it is the only way to do so without sacrificing the innovative dynamism of the economy.

Competition in monopolistically competitive markets manifests itself through two distinct forms of innovation: exploitation and exploration. Exploitation usually takes the form of what James Utterback identified as process innovation with an emphasis on “real or potential cost reduction, improved product quality, and wider availability, and movement towards more highly integrated and continuous production processes.” As Utterback noted, such innovation is almost always driven by the incumbent firms. Exploitation is an act of optimisation under a known distribution i.e. it falls under the domain of homo economicus. In the language of fitness landscapes, exploitative process innovation is best viewed as competition around a local peak. On the other hand, exploratory product innovation (analogous to what Utterback identified as product innovation) occurs under conditions of significant irreducible uncertainty. Exploration is aimed at finding a significantly higher peak on the fitness landscape and as Utterback noted, is almost always driven by new entrants (For a more detailed explanation of incumbent preference for exploitation and organisational rigidity, see my earlier post).

An Investment Theory of the Business Cycle

Soon after publishing the ‘General Theory’, Keynes summarised his thesis as follows: “given the psychology of the public, the level of output and employment as a whole depends on the amount of investment. I put it in this way, not because this is the only factor on which aggregate output depends, but because it is usual in a complex system to regard as the causa causans that factor which is most prone to sudden and wide fluctuation.” In Keynes‘ view, the investment decision was undertaken in a condition of irreducible uncertainty, “influenced by our views of the future about which we know so little”. Just how critical the level of investment is in maintaining full employment is highlighted by GLS Shackle in his interpretation of Keynes’ theory: “In a money-using society which wishes to save some of the income it receives in payment for its productive efforts, it is not possible for the whole (daily or annual) product to be sold unless some of it is sold to investors and not to consumers. Investors are people who put their money on time-to-come. But they do not have to be investors. They can instead be liquidity-preferrers; they can sweep up their chips from the table and withdraw. If they do, they will give no employment to those who (in face of society’s propensity to save) can only be employed in making investment goods, things whose stream of usefulness will only come out over the years to come.”

If we accept this thesis, then it is no surprise that the post–2008 recovery has been quite so anaemic. Investment spending has remained low throughout the developed world, nowhere more so than in the United Kingdom. What makes this low level of investment even more surprising is the strength of the rebound in corporate profits and balance sheets – corporate leverage in the United States is as low as it has been for two decades and the proportion of cash in total assets as high as it has been for almost half a century. The United States has also experienced an unusual increase in labour productivity during the recession, which has exacerbated the disconnect between the recovery in GDP and employment. Some of these unusual patterns have been with us for much longer than the 2008 financial crisis. For example, the disconnect between GDP and employment in the United States has been obvious since at least 1990, and the 2001 recession and its jobless recovery too saw an unusual rise in labour productivity. The labour market has been slack for at least a decade. It is hard to differ from Paul Krugman’s intuition that the character of post–1980 business cycles has changed. Europe and Japan are not immune from these “structural” patterns either – the ‘corporate savings glut’ has been a problem in the United Kingdom since at least 2002, and Post-Keynesian economists have been pointing out the relationship between ‘capital accumulation’ and unemployment for a while, even attributing the persistently high unemployment in Europe to a lack of investment. Japan’s condition for the last decade is better described as a ‘corporate savings trap’ rather than a ‘liquidity trap’. Even in Greece, that poster child for fiscal profligacy, the recession is accompanied by a collapse in private sector investment.

A Theory of Business Investment

Business investments can typically either operate upon the scale of operations (e.g. capacity, product mix) or they can change the fundamental character of operations (e.g. changes in process, product). The degree of irreducible uncertainty in capacity and product mix decisions has reduced dramatically in the last half-century. The ability of firms to react quickly and effectively to changes in market conditions has improved dramatically with improvements in production processes and information technology – Zara being a well-researched example. Investments that change the very nature of business operations are what we typically identify as innovations. However, not all innovation decisions are subject to irreducible uncertainty either. In a seminal article, James March distinguished between “the exploration of new possibilities and the exploitation of old certainties. Exploration includes things captured by terms such as search, variation, risk taking, experimentation, play, flexibility, discovery, innovation. Exploitation includes such things as refinement, choice, production, efficiency, selection, implementation, execution.” Exploratory innovation operates under conditions of irreducible uncertainty whereas exploitation is an act of optimisation under a known distribution.

Investments in scaling up operations are most easily influenced by monetary policy initiatives which reduce interest rates and raise asset prices or direct fiscal policy initiatives which operate via the multiplier effect. In recent times, especially in the United States and United Kingdom, the reduction in rates has also directly facilitated the levering up of the consumer balance sheet and a reduction in the interest servicing burden of consumer debt already taken on. The resulting boost to consumer spending and demand also stimulates businesses to invest in expanding capacity. Exploitative innovation requires the presence of price competition within the industry, i.e. monopolies or oligopolies have little incentive to make their operations more efficient beyond the price point where demand for their product is essentially inelastic. This sounds like an exceptional case but is in fact very common in critical industries such as finance and healthcare. Exploratory innovation requires not only competition amongst incumbent firms but competition from a constant and robust stream of new entrants into the industry. I outlined the rationale for this in a previous post:

Let us assume a scenario where the entry of new firms has slowed to a trickle, the sector is dominated by a few dominant incumbents and the S-curve of growth is about to enter its maturity/decline phase. To trigger off a new S-curve of growth, the incumbents need to explore. However, almost by definition, the odds that any given act of exploration will be successful is small. Moreover, the positive payoff from any exploratory search almost certainly lies far in the future. For an improbable shot at moving from a position of comfort to one of dominance in the distant future, an incumbent firm needs to divert resources from optimising and efficiency-increasing initiatives that will deliver predictable profits in the near future. Of course if a significant proportion of its competitors adopt an exploratory strategy, even an incumbent firm will be forced to follow suit for fear of loss of market share. But this critical mass of exploratory incumbents never comes about. In essence, the state where almost all incumbents are content to focus their energies on exploitation is a Nash equilibrium.
On the other hand, the incentives of any new entrant are almost entirely skewed in favour of exploratory strategies. Even an improbable shot at glory is enough to outweigh the minor consequences of failure. It cannot be emphasised enough that this argument does not depend upon the irrationality of the entrant. The same incremental payoff that represents a minor improvement for the incumbent is a life-changing event for the entrepreneur. When there exists a critical mass of exploratory new entrants, the dominant incumbents are compelled to follow suit and the Nash equilibrium of the industry shifts towards the appropriate mix of exploitation and exploration.
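The asymmetry in the quoted argument can be made concrete with a toy calculation. All the numbers below are invented; the only point is that an identical expected payoff is a rounding error relative to an incumbent’s existing rents and a transformation relative to an entrepreneur’s outside option:

```python
# Toy numbers (invented) to illustrate the asymmetry in the argument above.
p_success = 0.05          # probability an exploratory bet pays off
prize = 50.0              # payoff of a successful exploration (same for both players)
explore_cost = 1.0        # resources diverted from safe, exploitative activity

incumbent_rents = 100.0   # what the incumbent already earns by exploiting
entrant_outside = 1.0     # the entrepreneur's outside option

# Expected net gain from exploring is identical in absolute terms...
expected_gain = p_success * prize - explore_cost   # 1.5

# ...but looks very different relative to each player's baseline.
print(expected_gain / incumbent_rents)   # ~1.5% uplift: easy to forgo
print(expected_gain / entrant_outside)   # ~150% uplift: worth betting the firm on
```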

A Theory of Employment

My fundamental assertion is that a constant and high level of uncertain, exploratory investment is required to maintain a sustainable and resilient state of full employment. And as I mentioned earlier, exploratory investment driven by product innovation requires a constant threat from new entrants.

Long-run increases in aggregate demand require product innovation. As Rick Szostak notes:

While in the short run government spending and investment have a role to play, in the long run it is per capita consumption that must rise in order for increases in per capita output to be sustained…..the reason that we consume many times more than our great-grandparents is not to be found for the most part in our consumption of greater quantities of the same items which they purchased…The bulk of the increase in consumption expenditures, however, has gone towards goods and services those not-too-distant forebears had never heard of, or could not dream of affording….Would we as a society of consumers/workers have striven as hard to achieve our present incomes if our consumption bundle had only deepened rather than widened? Hardly. It should be clear to all that the tremendous increase in per capita consumption in the past century would not have been possible if not for the introduction of a wide range of different products. Consumers do not consume a composite good X. Rather, they consume a variety of goods, and at some point run into a steeply declining marginal utility from each. As writers as diverse as Galbraith and Marshall have noted, if declining marginal utility exists with respect to each good it holds over the whole basket of goods as well…..The simple fact is that, in the absence of the creation of new goods, aggregate demand can be highly inelastic, and thus falling prices will have little effect on output.

Therefore, when cost-cutting and process optimisation in an industry enable a product to be sold at a lower cost, the economy may not be able to reorganise back to full employment simply through increased demand for that particular product. In the early stages of a product when demand is sufficiently elastic, process innovation can increase employment. But as the product ages, process improvements have a steadily negative effect on employment.
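Szostak’s point about inelastic aggregate demand can be illustrated with a toy utility function. The square-root form below is an arbitrary choice made purely for illustration; what matters is only that it exhibits declining marginal utility in each good:

```python
# Toy illustration: diminishing returns to "more of the same" versus widening
# the basket with a new good. Square-root utility is an arbitrary choice.
from math import sqrt

def utility(quantities):
    return sum(sqrt(q) for q in quantities)

same_basket = [10.0, 10.0, 10.0]          # three existing goods
print(utility(same_basket))                # ~9.49

# Lower prices let us buy 30% more of the same goods: utility barely moves.
print(utility([13.0, 13.0, 13.0]))         # ~10.82 (+14% utility for +30% quantity)

# Spending the same extra quantity on a brand-new good taps a fresh, steep
# part of the curve instead.
print(utility(same_basket + [9.0]))        # ~12.49
```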

Eventually, a successful reorganisation back to full employment entails creating demand for new products. If such new products were simply an addition to the set of products that we consumed, disruption would be minimal. But almost any significant new product that arises from exploratory investment also destroys an old product. The tablet cannibalises the netbook, the smartphone cannibalises the camera etc. This of course is the destruction in Schumpeter’s creative destruction. It is precisely because of this cannibalistic nature of exploratory innovation that established incumbents rarely engage in it, unless compelled to do so by the force of new entrants. Burton Klein put it well: “firms involved in such competition must compare two risks: the risk of being unsuccessful when promoting a discovery or bringing about an innovation versus the risk of having a market stolen away by a competitor: the greater the risk that a firm’s rivals take, the greater must be the risks to which [it] must subject itself for its own survival.” Even when new firms enter a market at a healthy pace, it is rare that incumbent firms are successful at bringing about disruptive exploratory changes. When the pace of dynamic competition is slow, incumbents can choose to simply maintain slack and wait for any promising new technology to emerge which they can buy up rather than risking investment in some uncertain new technology.

We need exploratory investment because this expansion of the economy into its ‘adjacent possible’ does not derive its thrust from the consumer but from the entrepreneur. In other words, new wants are not demanded by the consumers but are instead created by entrepreneurs such as Steve Jobs. In the absence of dynamic competition from new entrants, wants remain limited.

In essence, this framework incorporates technological innovation into a distinctly “Chapter 12” Keynesian view of the business cycle. Although my views are far removed from macroeconomic orthodoxy, they are not quite so radical that they have no precedents whatsoever. My views can be seen as a simple extension of Burton Klein’s seminal work outlined in his books ‘Dynamic Economics’ and ‘Prices, wages, and business cycles: a dynamic theory’. But the closest parallels to this explanation can be found in Rick Szostak’s book ‘Technological innovation and the Great Depression’. Szostak uses an almost identical rationale to explain unemployment during the Great Depression, “how an abundance of labor-saving production technology coupled with a virtual absence of new product innovation could affect consumption, investment and the functioning of the labor market in such a way that a large and sustained contraction in employment would result.”

As I have hinted at in a previous post, this is not a conventional “structural” explanation of unemployment. Szostak explains the difference: “An alternative technological argument would be that the skills required of the workforce changed more rapidly in the interwar period than did the skills possessed by the workforce. Thus, there were enough jobs to go around; workers simply were not suited to them, and a painful decade of adjustment was required…I argue that in fact there simply were not enough jobs of any kind available.” In other words, this is a partly technological explanation for the shortfall in aggregate demand.

The Invisible Foot and New Firm Entry

The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy:

Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels….But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.

For disruptive innovation to persist, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition”. Burton Klein’s great contribution, along with Gunnar Eliasson’s, was to highlight the critical importance of the entry of new firms in maintaining the efficacy of the invisible foot. Klein believed that

the degree of risk taking is determined by the robustness of dynamic competition, which mainly depends on the rate of entry of new firms. If entry into an industry is fairly steady, the game is likely to have the flavour of a highly competitive sport. When some firms in an industry concentrate on making significant advances that will bear fruit within several years, others must be concerned with making their long-run profits as large as possible, if they hope to survive. But after entry has been closed for a number of years, a tightly organised oligopoly will probably emerge in which firms will endeavour to make their environments highly predictable in order to make their short-run profits as large as possible….Because of new entries, a relatively concentrated industry can remain highly dynamic. But, when entry is absent for some years, and expectations are premised on the future absence of entry, a relatively concentrated industry is likely to evolve into a tight oligopoly. In particular, when entry is long absent, managers are likely to be more and more narrowly selected; and they will probably engage in such parallel behaviour with respect to products and prices that it might seem that the entire industry is commanded by a single general!

This argument does not depend on incumbent firms leaving money on the table – on the contrary, they may redouble their attempts at cost reduction via process innovation in times of deficient demand. Rick Szostak documents how “despite the availability of a massive amount of inexpensive labour, process innovation would continue in the 1930s. Output per man-hour in manufacturing rose by 25% in the 1930s…..national output was higher in 1939 than in 1929, while employment was over two million less.”

Macroeconomic Policy and Exploratory Product Innovation

Monetary policy has been the preferred cure for insufficient aggregate demand throughout and since the Great Moderation. The argument goes that lower real rates, inflation and higher asset prices will increase investment via Tobin’s Q and increase consumption via the wealth effect and reduction in rewards to savings, all bound together in the virtuous cycle of the multiplier. If monetary policy is insufficient, fiscal policy may be deployed with a focus on either directly increasing aggregate demand or providing businesses with supply-side incentives such as tax cuts.

There is a common underlying theme to all of the above policy options – they focus on the question “how do we make businesses want to invest?” i.e. on positively incentivising incumbent business and startups and trusting that the invisible hand will do the rest. In the context of exploratory investments, the appropriate question is instead “how do we make businesses have to invest?” i.e. on compelling incumbent firms to invest in speculative projects in order to defend their rents or lose out to new entrants if they fail to do so. But the problem isn’t just that these policies are ineffectual. Many of the policies that focus on positive incentives weaken the competitive discomfort from the invisible foot by helping to entrench the competitive position of incumbent corporates and reducing their incentive to engage in exploratory investment. It is in this context that interventions such as central bank purchase of assets and fiscal stimulus measures that dole out contracts to the favoured do permanent harm to the economy.

The division that matters from the perspective of maintaining the appropriate level of exploratory investment and product innovation is not monetary vs fiscal but the division between existing assets and economic interests and new firms/entrepreneurs. Almost all monetary policy initiatives focus on purchasing existing assets from incumbent firms or reducing real rates for incumbent banks and their clients. A significant proportion of fiscal policy does the same. The implicit assumption is, as Nick Rowe notes, that there is “high substitutability between old and new investment projects, so the previous owners of the old investment projects will go looking for new ones with their new cash”. This assumption does not hold in the case of exploratory investments – asset-holders will likely chase after a replacement asset but this asset will likely be an existing investment project, not a new one. The result of the intervention will be an increase in prices of such assets but it will not feed into any “real” new investment activity. In other words, the Tobin’s q effect is negligible for exploratory investments in the short run and in fact negative in the long run as the accumulated effect of rents derived from monetary and fiscal intervention reduces the need for incumbent firms to engage in such speculative investment.
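For reference, Tobin’s q is conventionally defined as the ratio of the market value of installed capital to its replacement cost, with new investment worthwhile when q exceeds one:

```latex
q \;=\; \frac{\text{market value of installed capital}}{\text{replacement cost of that capital}},
\qquad \text{invest in new capital when } q > 1 .
```

The argument above is that asset purchases may raise the market value of existing projects without that higher q translating into genuinely new, exploratory investment.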

A Brief History of the Post-WW2 United States Macroeconomy

In this section, I’m going to use the above framework to make sense of the evolution of the macroeconomy in the United States after WW2. The framework is relevant for post–70s Europe and Japan as well, which is why the ‘investment deficit problem’ afflicts almost the entire developed world today. But the details differ quite significantly, especially with regard to the distributional choices made in different countries.

The Golden Age

The 50s and the 60s are best characterised as a period of “order for all”, marked by, as Bill Lazonick put it, “oligopolistic competition, career employment with one company, and regulated financial markets”. The ‘Golden Age’ delivered prosperity for a few reasons:

  • As Minsky noted, the financial sector had only just begun the process of adapting to and circumventing regulations designed to constrain and control it. As a result, the Fed had as much control over credit creation and bank policies as it would ever have.
  • The pace of both product and process innovation had slowed down significantly in the real economy, especially in manufacturing. Much of the productivity growth came from product innovations that had already been made prior to WW2. As Alexander Field explains (on the slowdown in manufacturing TFP): “Through marketing and planned obsolescence, the disruptive force of technological change – what Joseph Schumpeter called creative destruction – had largely been domesticated, at least for a time. Whereas large corporations had funded research leading to a large number of important innovations during the 1930s, many critics now argued that these behemoths had become obstacles to transformative innovation, too concerned about the prospect of devaluing rent-yielding income streams from existing technologies. Disruptions to the rank order of the largest U.S. industrial corporations during this quarter century were remarkably few. And the overall rate of TFP growth within manufacturing fell by more than a percentage point compared with the 1930s and more than 3.5 percentage points compared with the 1920s.”
  • Apart from the fact that the economy had to catch up to earlier product innovation, the dominant position of the US in the global economy post WW2 limited the impact from foreign competition.

It was this peculiar confluence of factors that enabled a system of “order and stability for all” without triggering a complete collapse in productivity or financial instability – a system where both labour and capital were equally strong and protected and shared in the rents available to all.

Stagflation

The 70s are best described as the time when this ordered, stabilised system could not be sustained any longer.

  • By the late 60s, the financial sector had adapted to the regulatory environment. Innovations such as the Fed Funds market and the Eurodollar market gradually came into being, such that credit creation and bank lending were increasingly difficult for the Fed to control. Reserves were no longer a binding constraint on bank operations.
  • The absence of real competition, either on the basis of price or from new entrants, meant that both process and product innovation were low, just as during the Golden Age. The difference was that there was no more low-hanging fruit to pick from past product innovations, and a secular slowdown in productivity therefore took hold.
  • The rest of world had caught up and foreign competition began to intensify.

As Burton Klein noted, “competition provides a deterrent to wage and price increases because firms that allow wages to increase more rapidly than productivity face penalties in the form of reduced profits and reduced employment”. In the absence of adequate competition, demand is inelastic and there is little pressure to reduce costs. As the level of price/cost competition reduces, more and more unemployment is required to keep inflation under control. Even worse, as Klein noted, it only takes the absence of competition in a few key sectors for the disease to afflict the entire economy. Controlling overall inflation in the macroeconomy when a few key sectors are sheltered from competitive discomfort requires monetary action that will extract a disproportionate amount of pain from the remainder of the economy. Stagflation is the inevitable consequence in a stabilised economy suffering from progressive competitive sclerosis.

The “Solution”

By the late 70s, the pressures and conflicts of the system of “order for all” meant that change was inevitable. The result was what is commonly known as the neoliberal revolution. There are many different interpretations of this transition. To right-wing commentators, neoliberalism signified a much-needed transition towards a free-market economy. Most left-wing commentators lament the resultant supremacy of capital over labour and rising inequality. For some, the neoliberal era started with Paul Volcker having the courage to inflict the required pain to break the back of inflationary forces and continued with central banks learning the lessons of the past which gave us the Great Moderation.

All these explanations are relevant but in my opinion, they are simply a subset of a larger and simpler explanation. The prior economic regime was a system where both the invisible hand and the invisible foot were shackled – firms were protected but their profit motive was also shackled by the protection provided to labour. The neoliberal transition unshackled the invisible hand (the carrot of the profit motive) without ensuring that all key sectors of the economy were equally subject to the invisible foot (the stick of failure and losses and new firm entry). Instead of tackling the root problem of progressive competitive and democratic sclerosis and cronyism, the neoliberal era provided a stop-gap solution. “Order for all” became “order for the classes and disorder for the masses”. As many commentators have noted, the reality of neoliberalism is not consistent with the theory of classical liberalism. Minsky captured the hypocrisy well: “Conservatives call for the freeing of markets even as their corporate clients lobby for legislation that would institutionalize and legitimize their market power; businessmen and bankers recoil in horror at the prospect of easing entry into their various domains even as technological changes and institutional evolution make the traditional demarcations of types of business obsolete. In truth, corporate America pays lip service to free enterprise and extols the tenets of Adam Smith, while striving to sustain and legitimize the very thing that Smith abhorred – state-mandated market power.”

The critical component of this doctrine is the emphasis on macroeconomic and financial sector stabilisation implemented primarily through monetary policy focused on the banking and asset price channels of policy transmission:
  • Any significant fall in asset prices (especially equity prices) has been met with a strong stimulus from the Fed, i.e. the ‘Greenspan Put’. In his plea for increased quantitative easing via purchase of agency MBS, Joe Gagnon captured the logic of this policy: “This avalanche of money would surely push up stock prices, push down bond yields, support real estate prices, and push up the value of foreign currencies. All of these financial developments would stimulate US economic activity.” In other words, prop up asset prices and the real economy will mend itself.
  • Similarly, Fed and Treasury policy has ensured that none of the large banks can fail. In particular, bank creditors have been shielded from any losses. The argument is that allowing banks to fail will cripple the flow of credit to the real economy and result in a deflationary collapse that cannot be offset by conventional monetary policy alone. This is the logic for why banks were allowed access to a panoply of Federal Reserve liquidity facilities at the height of the crisis. In other words, prop up the banks and the real economy will mend itself.

In this increasingly financialised economy, “the increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation.” This tilt towards exploitation/cost-reduction without exploration kept inflation in check but it also implied a prolonged period of sub-par wage growth and a constant inability to maintain full employment unless the consumer or the government levered up. For the neo-liberal revolution to sustain a ‘corporate welfare state’ in a democratic system, the absence of wage growth necessitated an increase in household leverage for consumption growth to be maintained. The monetary policy doctrine of the Great Moderation exacerbated the problem of competitive sclerosis and the investment deficit but it also provided the palliative medicine that postponed the day of reckoning. The unshackling of the financial sector was a necessary condition for this cure to work its way through the economy for as long as it did.

It is this focus on the carrot of higher profits that also triggered the widespread adoption of high-powered incentives such as stock options and bonuses to align manager and stockholder interests. When the risk of being displaced by innovative new entrants is low, high-powered managerial incentives help to tilt the firm towards process innovation, cost reduction, optimisation of leverage etc. From the stockholders’ and managers’ perspective, the focus on short-term profits is a feature, not a bug.

The Dénouement

So long as unemployment and consumption could be propped up by increasing leverage from the consumer and/or the state, the long-run shortage in exploratory product innovation and the stagnation in wages could be swept under the rug and economic growth could be maintained. But there is every sign that the household sector has reached a state of peak debt and the financial system has reached its point of peak elasticity. The policy that worked so well during the Great Moderation is now simply focused on preventing the collapse of the cronyist and financialised economy. The system has become so fragile that Minsky’s vision is more correct than ever – an economy at full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation. Instead, the goal of full employment seems to have been abandoned in order to postpone the inevitable collapse. This only replaces an economic fragility with a deeper social fragility.

The aim of full employment is made even harder with the acceleration of process innovation due to advances in artificial intelligence and computerisation. Process innovation gives us technological unemployment while at the same time the absence of exploratory product innovation leaves us stuck in the Great Stagnation.

 

The solution preferred by the left is to somehow recreate the golden age of the 50s and the 60s, i.e. order for all. Apart from the impossibility of retrieving the docile financial system of that age (which Minsky understood), the solution of micro-stability for all delivers an environment of permanent innovative stagnation. The Schumpeterian solution is to transform the system into one of disorder for all, masses and classes alike. Micro-fragility is the key to macro-resilience but this fragility must be felt by all economic agents, labour and capital alike. In order to end the stagnation and achieve sustainable full employment, we need to allow incumbent banks and financialised corporations to collapse and dismantle the barriers to entry of new firms that pervade the economy (e.g. occupational licensing, the patent system). But this does not imply that the macroeconomy should suffer a deflationary contraction. Deflation can be prevented in a simple and effective manner with a system of direct transfers to individuals, as Steve Waldman has outlined. This solution reverses the flow of rents that have exacerbated inequality over the past few decades, as well as tackling the cronyism and demosclerosis that are crippling innovation and preventing full employment.


Written by Ashwin Parameswaran

November 2nd, 2011 at 7:29 pm

Advances in Technology and Artificial Intelligence: Implications for Education and Employment

with 10 comments

In a recent article, Paul Krugman pointed out the fallacies in the widely held belief that more education for all will lead to better jobs, lower unemployment and reduced inequality in the economy. The underlying thesis in Krugman’s argument (drawn from Autor, Levy and Murnane) is fairly straightforward and compelling: advances in computerisation do not increase the demand for all “skilled” labour. Instead they reduce the demand for routine tasks, including many tasks that we currently perceive as skilled and that require significant formal education for a human being to carry out effectively.

This post is my take on what advances in technology, in particular artificial intelligence, imply for the nature of employment and education in our economy. In a nutshell, advances in artificial intelligence and robotics mean that the type of education and employment that has been dominant throughout the past century is now almost obsolete. The routine jobs of 20th century manufacturing and services that were so amenable to creating mass employment are increasingly a thing of the past. This does not imply that college education is irrelevant. But it does imply that our current educational system, which is geared towards imparting routine and systematic skills and knowledge, needs a radical overhaul.

As Autor et al note, routine human tasks have gradually been replaced by machinery and technology since at least the advent of the Industrial Revolution. What has changed in the last twenty years with the advent of computerisation is that the sphere of human activities that can be replaced by technology has broadened significantly. But there are still some significant holes. The skills that Autor et al identify as complementary rather than substitutable by computerisation are those that have proved most challenging for AI scientists to replicate. The inability to automate many tasks that require human sensory and motor skills is an example of what AI researchers call Moravec’s Paradox. Hans Moravec identified that it is much easier to engineer apparently complex computational tasks such as the ability to play chess than it is to engineer the sensorimotor ability of a one-year old child. In a sense, computers find it harder to mimic some of our animalistic skills and relatively easy to mimic many of our abilities that we have long thought of as separating us from other animals. Moravec’s paradox explains why many manual jobs such as driving a car have so far resisted automation. At the same time AI has also found it hard to engineer the ability to perform some key non-routine cognitive tasks such as the ability to generate creative and novel solutions under conditions of significant irreducible uncertainty.

One of the popular misconceptions about the limits of AI/technology is the notion that the engineered alternative must mimic the human skillset completely in order to replace it. In many tasks the human method may not be the only way or even the best way to achieve the task. For example, the Roomba and subsumption architectures do not need to operate like a human being to get the job done. Similarly, a chess program can compete with a human player even though the brute-force method of the computer has very little in common with the pattern-recognising, intuitive method of the grandmaster. Moreover, automating and replacing human intervention frequently involves a redesign of the operating environment in which the task is performed to reduce uncertainty, so that the underlying task can be transformed into a routine and automatable one. Herbert Simon identified this long ago when he noted: “If we want an organism or mechanism to behave effectively in a complex and changing environment, we can design into it adaptive mechanisms that allow it to respond flexibly to the demands the environment places on it. Alternatively, we can try to simplify and stabilize the environment. We can adapt organism to environment or environment to organism”. To hazard a guess, the advent of the “car that drives itself” will probably involve a significant redesign of our roads and their rules.

This redesign of the work environment to reduce uncertainty lies at the heart of the Taylorist/Fordist logic that brought us the assembly line production system and has now been applied to many white-collar office jobs. Of course this uncertainty is not eliminated. As Richard Langlois notes, it is “pushed up the hierarchy to be dealt with by adaptable and less-specialized humans” or in many cases, it can even be pushed out of the organisation itself. Either way, what is indisputable is that for the vast majority of employees, whether on an assembly line at Foxconn or in a call center in India, the job content is strictly codified and routine. Ironically, this very process of transforming a job description into one amenable to mass employment means that the job is that much more likely to be automated in the future as the sphere of activities thwarted by Moravec’s paradox shrinks. For example, we may prefer competent customer service from our bank but have long since reconciled ourselves to sub-standard customer service as the price we pay for cheap banking. Once we have replaced the “tacit knowledge” of the “expert” customer service agent with an inexperienced agent who needs to be provided with clear rules, we are that much closer to replacing the agent in the process altogether.
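The final step in that progression is almost mechanical. Once the “expert” agent’s know-how has been reduced to an explicit script, the script already is a program. The toy example below is an entirely invented scenario (including the form name) that shows how little is left for the human to do:

```python
# Invented toy example: once a customer-service role has been codified into
# explicit rules, the rules are already a program and the human is optional.
SCRIPT = {
    "card lost": "Block the card and order a replacement.",
    "balance query": "Read the balance from the account screen.",
    "disputed charge": "Open a dispute case and send form DX-12.",   # hypothetical form name
}

def handle(request: str) -> str:
    # The codified role: look up the rule; anything off-script gets escalated,
    # i.e. "pushed up the hierarchy" to a less specialised, more adaptable human.
    return SCRIPT.get(request, "Escalate to supervisor.")

print(handle("card lost"))
print(handle("my account was hacked from abroad"))   # off-script: escalate
```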

The implication of my long-winded argument is that even Moravec’s paradox will not shield otherwise close-to-routine activities from automation in the long run. That leaves us with employment opportunities necessarily being concentrated in significantly non-routine tasks (cognitive or otherwise) that are hard to replicate effectively through computational means. It is easy to understand why the generation of novel and creative solutions is difficult to replicate in a systematic manner but this is not the only class of activities that falls under this umbrella. Also relevant are many activities that require what Hubert and Stuart Dreyfus call expert know-how. In their study of skill acquisition and training that was to form the basis of their influential critique of AI, they note that as one moves from being a novice at an activity to being an expert, the role of rules and algorithms in guiding our actions diminishes to be replaced with an intuitive tacit understanding. As Hubert Dreyfus notes, “a chess grandmaster not only sees the issues in a position almost immediately, but the right response just pops into his or her head.”

The irony of course is that the Taylorist logic of the last century has been focused so precisely on eliminating the need for such expert know-how, in the process driving our educational system to de-emphasise the same. What we need is not so much more education as a radically different kind of education. Frank Levy himself made this very point in an article a few years ago but the need to overhaul our industrial-age education system has been most eloquently championed by Sir Ken Robinson [1,2]. To say that our educational system needs to focus on “creativity” is not to claim that we all need to become artists and scientists. Creativity here is defined as simply the ability to explore effectively rather than follow an algorithmic routine, a role that many of our current methods of “teaching” are not set up to achieve. It applies as much to the intuitive, unpredictable nature of biomedical research detailed by James Austin as it does to the job of an expert motorcycle mechanic that Matthew Crawford describes so eloquently. The need to move beyond a simple, algorithmic level of expertise is not one driven by sentiment but increasingly by necessity as the scope of tasks that can be performed by AI agents expands. A corollary of this line of thought is that jobs that can provide “mass” employment will likely be increasingly hard to find. This does not mean that full employment is impossible, simply that any job that is routine enough to employ a large number of people doing a very similar role is likely to be automated sooner or later.

 


Written by Ashwin Parameswaran

March 15th, 2011 at 1:43 pm