macroresilience

resilience, not stability

Archive for the ‘Rationality’ Category

The Control Revolution And Its Discontents

with 20 comments

One of the key narratives on this blog is how the Great Moderation and the neo-liberal era have signified the death of truly disruptive innovation in much of the economy. When macroeconomic policy stabilises the macroeconomic system, every economic actor is incentivised to take on more macroeconomic systemic risks and shed idiosyncratic, microeconomic risks. Those that figured out this reality early on and/or had privileged access to the programs used to implement this macroeconomic stability, such as banks and financialised corporates, were the big winners – a process that is largely responsible for the rise in inequality during this period. In such an environment the pace of disruptive product innovation slows but the pace of low-risk process innovation aimed at cost-reduction and improving efficiency flourishes. Therefore we get the worst of all worlds – the Great Stagnation combined with widespread technological unemployment.

This narrative naturally raises the question: when was the last time we had a truly disruptive Schumpeterian era of creative destruction? In a previous post looking at the evolution of the post-WW2 developed economic world, I argued that the so-called Golden Age was anything but Schumpeterian – as Alexander Field has argued, much of the economic growth till the 70s was built on the basis of disruptive innovation that occurred in the 1930s. So we may not have been truly Schumpeterian for at least 70 years. But what about the period from at least the mid 19th century till the Great Depression? Even a cursory reading of economic history gives us pause for thought – after all, wasn’t a significant part of this period supposed to be the Gilded Age of cartels and monopolies, which sounds anything but disruptive?

I am now of the opinion that we have never really had any long periods of constant disruptive innovation – this is not a sign of failure but simply a reality of how complex adaptive systems across domains manage the tension between efficiency, robustness, evolvability and diversity. What we have had is a subverted control revolution where repeated attempts to achieve and hold onto an efficient equilibrium fail. Creative destruction occurs despite our best efforts to stamp it out. In a sense, disruption is alien to the essence of the industrial and post-industrial period of the last two centuries, the overriding philosophy of which is automation and algorithmisation aimed at efficiency and control. And many of our current troubles are a function of the fact that we have almost perfected the control project.

The operative word and the source of our problems is “almost”. Too many people look at the transition from the Industrial Revolution to the Algorithmic Revolution as a sea-change in perspective. But in reality, the current wave of reducing everything to a combination of “data & algorithm” and tackling every problem with more data and better algorithms is the logical end-point of the control revolution that started in the 19th century. The difference between Ford and Zara is overrated – Ford was simply the first step in a long process that focused not only on systematising each element of the industrial process (production, distribution, consumption) but also, crucially, on putting in place a feedback loop between each element. In some sense, Zara simply follows a much more complex and malleable algorithm than Ford did, but this algorithm is still one that is fundamentally equilibrating (not disruptive) and focused on introducing order and legibility into a fundamentally opaque environment via a process that reduces human involvement and discretion by replacing intuitive judgements with rules and algorithms. Exploratory/disruptive innovation, on the other hand, is a disequilibrating force that is created by entrepreneurs and functions outside this feedback/control loop. Both processes are important – the longer period of the gradual shedding of diversity and homogenisation in the name of efficiency as well as the periodic “collapse” that shakes up the system and puts it eventually on the path to a new equilibrium.

Of course, control has been an aim of western civilisation for a lot longer, but it was only in the 19th century that the tools of control were good enough for this desire to be implemented in any meaningful sense. And even more crucially, as James Beniger has argued, it was only in the last 150 years that the need for large-scale control arose. And now the tools and technologies in our hands to control and stabilise the economy are more powerful than they’ve ever been – likely too powerful.

If we had perfect information and everything could be algorithmised right now, i.e. if the control revolution had been perfected, then the problem would disappear. Indeed, it is arguable that in such a world the need for disruption in the innovation process would no longer exist. If we get to a world where radical uncertainty has been eliminated, then the problem of systemic fragility is moot and irrelevant. It is easy to rebut the stabilisation and control project by claiming that we cannot achieve this perfect world.

But even if the techno-utopian project can achieve all that it claims it can, the path matters. We need to make it there in one piece. The current “algorithmic revolution” is best viewed as a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is simply one where many of these algorithmic and automated systems can essentially run themselves, with human beings performing the role of supervisors who only need to intervene in extraordinary circumstances. Therefore, it would seem logical that the same increase in productivity that has occurred during the modern era of automation will continue during the creation of the “vast, automatic and invisible” ‘second economy’. However, there are many signs that this may not be the case. What has made things better till now and has been genuine “progress” may make things worse in higher doses, and the process of deterioration can be quite dramatic.

The Uncanny Valley on the Path towards “Perfection”

In 1970, Masahiro Mori coined the term ‘uncanny valley’ to denote the phenomenon that “as robots appear more humanlike, our sense of their familiarity increases until we come to a valley”. When robots are almost but not quite human-like, they invoke a feeling of revulsion rather than empathy. As Karl MacDorman notes, “Mori cautioned robot designers not to make the second peak their goal — that is, total human likeness — but rather the first peak of humanoid appearance to avoid the risk of their robots falling into the uncanny valley.”

A similar valley exists in the path of increased automation and algorithmisation. Much of the discussion in this section of the post builds upon concepts I explored via a detailed case study in a previous post titled ‘People Make Poor Monitors for Computers’.

The 21st century version of the control project i.e. the algorithmic project consists of two components:
1. More Data – ‘Big Data’.
2. Better and more comprehensive algorithms.

The process therefore goes hand in hand with increased complexity and, crucially, poorer and less intuitive feedback for the human operator. This results in increased fragility and a system prone to catastrophic breakdowns. The typical solution chosen is further algorithmisation – an improved algorithm and more data – and, if necessary, increased slack and redundancy. This solution exacerbates the problem of feedback and temporarily pushes the catastrophic scenario further out into the tail, but it does not eliminate it. Behavioural adaptation by human agents to the slack and the “better” algorithm can make a catastrophic event as likely as it was before but with a higher magnitude. But what is even more disturbing is that this cycle of increasing fragility can occur even without any such adaptation. This is the essence of the fallacy of the ‘defence in depth’ philosophy that lies at the core of most fault-tolerant algorithmic designs, and that I discussed in my earlier post: the increased “safety” of the automated system allows the build-up of human errors without any feedback from deteriorating system performance.
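
To make this dynamic concrete, the sketch below is a deliberately crude toy simulation of the defence-in-depth fallacy (all probabilities and parameters are arbitrary and purely illustrative, not drawn from any real system). Latent errors accumulate silently behind an automated safety layer; a thicker layer makes breakdowns rarer but allows a larger hidden build-up, so the breakdowns that do occur are worse:

```python
# Toy simulation of the 'defence in depth' fallacy described above.
# All parameters are arbitrary and purely illustrative.
import random

def run(redundancy, periods=100_000, p_latent=0.05, seed=1):
    rng = random.Random(seed)
    latent = 0                 # masked human/process errors (no feedback on these)
    failures, total_damage = 0, 0.0
    for _ in range(periods):
        if rng.random() < p_latent:
            latent += 1        # error accumulates silently behind the safety layer
        shock = rng.expovariate(1.0)        # size of this period's disturbance
        if shock > redundancy:              # safety layer overwhelmed
            failures += 1
            total_damage += shock + latent  # damage scales with the hidden buildup
            latent = 0                      # the collapse "resets" the system
    return failures, total_damage / max(failures, 1)

for r in (1.0, 3.0, 5.0):
    n, avg = run(r)
    print(f"redundancy={r}: failures={n}, average damage per failure={avg:.1f}")
```

Raising the redundancy parameter trades frequency for magnitude: failures become rarer while the average damage per failure grows, with no feedback from system performance in between.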

A rule of thumb for getting around this problem is to use slack only in those domains where failure is catastrophic and to prioritise feedback where failure is not critical and cannot kill you. But in an uncertain environment, this rule is very difficult to apply. How do you really know that a particular disturbance will not kill you? Moreover, the loop of automation -> complexity -> redundancy endogenously turns a non-catastrophic event into one with catastrophic consequences.

Once it has gone beyond a certain threshold, this trajectory is almost impossible to reverse without an interim collapse. The easy short-term fix is always to patch the algorithm, get more data and build in some slack if needed. An orderly rollback is almost impossible due to the deskilling of the human workforce and the risk of collapse caused by other components in the system having adapted to the new reality. Even simply reverting to the old, more tool-like system makes things a lot worse because the human operators are no longer experts at using those tools – the process of algorithmisation has deskilled them. Moreover, the endogenous nature of this buildup of complexity eventually makes the system fundamentally illegible to the human operator – ironic, given that the fundamental aim of the control revolution is to increase legibility.

The Sweet Spot Before the Uncanny Valley: Near-Optimal Yet Resilient

Although it is easy to imagine an inefficient and dramatically sub-optimal system that is robust, complex adaptive systems manage to operate at near-optimal efficiency while remaining resilient. Efficiency is important not only because of the obvious reality that resources are scarce but also because slack at the individual and corporate level is a significant cause of unemployment. Such near-optimal robustness in both natural and economic systems is not achieved with simplistically diverse agent compositions or with significant redundancies or slack at the agent level.

Diversity and redundancy carry a cost in terms of reduced efficiency. Precisely for this reason, real-world economic systems appear to exhibit nowhere near the diversity that would seem to ensure system resilience. Rick Bookstaber recently noted that capitalist competition, if anything, seems to lead to a reduction in diversity. As Youngme Moon’s excellent book ‘Different’ lays out, competition in most markets seems to result in less diversity, not more. We may have a choice of 100 brands of toothpaste but most of us would struggle to meaningfully differentiate between them.

Similarly, almost all biological and ecological complex adaptive systems are a lot less diverse and contain less pure redundancy than conventional wisdom would expect. Resilient biological systems tend to preserve degeneracy rather than simple redundancy, and resilient ecological systems tend to contain weak links rather than naive ‘law of large numbers’ diversity. The key to achieving resilience with near-optimal configurations is to tackle disturbances and generate novelty/innovation with an emergent systemic response that reconfigures the system, rather than simply a localised response. Degeneracy and weak links are key to such a configuration. The equivalent in economic systems is a constant threat of new firm entry.
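
The contrast between pure redundancy and degeneracy can be made concrete with a minimal sketch (the component types and functions below are invented purely for illustration): identical spare copies protect against the loss of any one copy but share a single failure mode, whereas structurally different components with overlapping capabilities can still cover for each other when a shock targets one type:

```python
# Toy contrast between pure redundancy and degeneracy (illustrative only).
# Redundant system: five identical copies of the same component type.
# Degenerate system: five structurally different components whose
# capabilities overlap on the one function we care about.

FUNCTION = "process_input"

redundant = [{"type": "A", "functions": {FUNCTION}} for _ in range(5)]

degenerate = [
    {"type": "A", "functions": {FUNCTION}},
    {"type": "B", "functions": {FUNCTION, "store"}},
    {"type": "C", "functions": {FUNCTION, "signal"}},
    {"type": "D", "functions": {"store", FUNCTION}},
    {"type": "E", "functions": {"signal", FUNCTION}},
]

def still_functional(system, knocked_out_type):
    """A type-specific shock removes every component of one type;
    the system survives if any remaining component covers FUNCTION."""
    survivors = [c for c in system if c["type"] != knocked_out_type]
    return any(FUNCTION in c["functions"] for c in survivors)

print("redundant system survives shock to type A: ", still_functional(redundant, "A"))
print("degenerate system survives shock to type A:", still_functional(degenerate, "A"))
```

The degenerate configuration survives by reconfiguring around the shock – the emergent, systemic response described above – rather than by holding idle spare capacity.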

The viewpoint which emphasises weak links and degeneracy also implies that it is not the keystone species and the large firms that determine resilience but the presence of smaller players ready to reorganise and pick up the slack when an unexpected event occurs. Such a focus is further complicated by the fact that in a stable environment the system may become less and less resilient with no visible consequences – weak links may be eliminated, barriers to entry may progressively increase etc., with no damage done to system performance in the stable equilibrium phase. Yet this loss of resilience can prove fatal when the environment changes and can leave the system unable to generate novelty/disruptive innovation. This highlights the folly of statements such as ‘what’s good for GM is good for America’. We need to focus not just on the keystone species, but on the fringes of the ecosystem.

[Figure: The Uncanny Valley and the Sweet Spot]

The Business Cycle in the Uncanny Valley – Deterioration of the Median as well as the Tail

Many commentators have pointed out that the process of automation has coincided with a deskilling of the human workforce. For example, below is a simplified version of the relation between mechanisation and the skill required of the human operator that James Bright documented in 1958 (via Harry Braverman’s ‘Labor and Monopoly Capital’). But till now, it has been largely true that although human performance has suffered, the performance of the system has gotten vastly better. If the problem were just a drop in human performance while the system got better, our problem would be less acute.

[Figure: Automation and the Deskilling of the Human Operator]

But what is at stake is a deterioration in system performance – it is not only a matter of being exposed to more catastrophic setbacks. Eventually mean/median system performance deteriorates as more and more pure slack and redundancy needs to be built in at all levels to make up for the irreversibly fragile nature of the system. The business cycle is an oscillation between efficient fragility and robust inefficiency. Over the course of successive cycles, both poles of this oscillation get worse, so median/mean system performance falls rapidly at the same time that the tails deteriorate due to the increased illegibility of the automated system to the human operator.

[Figure: The Uncanny Valley Business Cycle]

The Visible Hand and the Invisible Foot, Not the Invisible Hand

The conventional economic view of the economy is one of a primarily market-based equilibrium punctuated by occasional shocks. Even the long arc of innovation is viewed as a sort of benign discovery of novelty without any disruptive consequences. The radical disequilibrium view (which I have been guilty of espousing in the past) is one of constant micro-fragility and creative destruction. However, the history of economic evolution in the modern era has been quite different – neither market-based equilibrium nor constant disequilibrium, but a series of off-market attempts to stabilise relations outside the sphere of the market combined with occasional phase transitions that bring about dramatic change. The presence of rents is a constant, and the control revolution has for the most part succeeded in preserving the rents of incumbents, barring the occasional spectacular failure. It is these occasional “failures” that have given us results that in some respects resemble those that a market would have produced over the long run.

As Bruce Wilder puts it (sourced from 1, 2, 3 and mashed up together by me):

The main problem with the standard analysis of the “market economy”, as well as many variants, is that we do not live in a “market economy”. Except for financial markets and a few related commodity markets, markets are rare beasts in the modern economy. The actual economy is dominated by formal, hierarchical, administrative organization and transactions are governed by incomplete contracts, explicit and implied. “Markets” are, at best, metaphors…..
Over half of the American labor force works for organizations employing over 100 people. Actual markets in the American economy are extremely rare and unusual beasts. An economics of markets ought to be regarded as generally useful as a biology of cephalopods, amid the living world of bones and shells. But, somehow the idealized, metaphoric market is substituted as an analytic mask, laid across a vast variety of economic relations and relationships, obscuring every important feature of what actually is…..
The elaborate theory of market price gives us an abstract ideal of allocative efficiency, in the absence of any firm or household behaving strategically (aka perfect competition). In real life, allocative efficiency is far less important than achieving technical efficiency, and, of course, everyone behaves strategically.
In a world of genuine uncertainty and limitations to knowledge, incentives in the distribution of income are tied directly to the distribution of risk. Economic rents are pervasive, but potentially beneficial, in that they provide a means of stable structure, around which investments can be made and production processes managed to achieve technical efficiency.
In the imaginary world of complete information of Econ 101, where markets are the dominant form of economic organizations, and allocative efficiency is the focus of attention, firms are able to maximize their profits, because they know what “maximum” means. They are unconstrained by anything.
In the actual, uncertain world, with limited information and knowledge, only constrained maximization is possible. All firms, instead of being profit-maximizers (not possible in a world of uncertainty), are rent-seekers, responding to instituted constraints: the institutional rules of the game, so to speak. Economic rents are what they have to lose in this game, and protecting those rents, orients their behavior within the institutional constraints…..
In most of our economic interactions, price is not a variable optimally digesting information and resolving conflict, it is a strategic instrument, held fixed as part of a scheme of administrative control and information discovery……The actual, decentralized “market” economy is not coordinated primarily by market prices—it is coordinated by rules. The dominant relationships among actors is not one of market exchange at price, but of contract: implicit or explicit, incomplete and contingent.

James Beniger’s work is the definitive document on how the essence of the ‘control revolution’ has been an attempt to take economic activity out of the sphere of direct influence of the market. But that is not all – the long process of algorithmisation over the last 150 years has also, wherever possible, replaced implicit rules/contracts and principal-agent relationships with explicit processes and rules. Beniger also notes that after a certain point, the increasing complexity of the system is an endogenous phenomenon, i.e. further iterations are aimed at controlling the control process itself. As I have illustrated above, beyond a certain threshold the increasing complexity, fragility and deterioration in performance become a self-reinforcing positive feedback process.

Although our current system bears very little resemblance to the market economy of the textbook, there was a brief period in the early part of the 19th century, during the transition from the traditional economy to the control economy, when this was the case. 26% of all imports into the United States in 1827 were sold at auction. But the displacement of traditional controls (familial ties) by the invisible hand of market controls was merely a transitional phase, soon to be displaced by the visible hand of the control revolution.

The Soviet Project, Western Capitalism and High Modernity

Communism and Capitalism are both pillars of the high-modernist control project. The signature of modernity is not markets, but technocratic control projects. Capitalism has simply implemented the project in a manner that is more easily and more regularly subverted. It is the occasional failure of the control revolution that is the source of the capitalist economy’s long-run success. Conversely, the failure of the Soviet Project was due to its all-too-successful adherence to and implementation of the high-modernist ideal. The threat from crony capitalism is so significant because, by forging a coalition and partnership between the corporate and state control projects, it makes the implementation of the control revolution that much more effective.

The Hayekian argument about dispersed knowledge and its importance in seeking equilibrium is not as important as it seems in explaining why the Soviet project failed. As Joseph Berliner has illustrated, the Soviet economy did not fail to reach local equilibria. Where it failed so spectacularly was in extracting itself out of these equilibria. The dispersed knowledge argument is open to the riposte that a better implementation of the control revolution will eventually overcome these problems – indeed much of the current techno-utopian version of the control revolution is based on this assumption. It is a weak argument for free enterprise; a much stronger argument is the need to maintain a system that retains the ability to reinvent itself and find a new, hitherto unknown trajectory via the destruction of the incumbents combined with the emergence of the new. Where the Soviet experiment failed is that it eliminated the possibility of failure – what Berliner called the ‘invisible foot’. The success of the free enterprise system has been built not upon the positive incentive of the invisible hand but upon the negative incentive of the invisible foot countering the visible hand of the control revolution. It is this threat, and occasional realisation, of failure and disorder that is the key to maintaining system resilience and evolvability.


Notes:

  • Borrowing from Beniger, control here simply means “purposive influence towards a predetermined goal”. Similarly, equilibrium in this context is best defined as a state in which economic agents are not forced to change their routines, theories and policies.
  • On the uncanny valley, I wrote a similar post on why perfect memory does not lead to perfect human intelligence. Even if a computer benefits from more data and better memory, we may not. And the evidence suggests that the deterioration in human performance is steepest in the zone close to “perfection”.
  • An argument similar to my assertion on the misconception of a free enterprise economy as a market economy can be made about the nature of democracy. Rather than as a vehicle that enables the regular expression of the political will of the electorate, democracy may be more accurately thought of as the ability to effect a dramatic change when the incumbent system of plutocratic/technocratic rule diverges too much from popular opinion. As always, stability and prevention of disturbances can cause the eventual collapse to be more catastrophic than it needs to be.
  • Although James Beniger’s ‘Control Revolution’ is the definitive reference, Antoine Bousquet’s book ‘The Scientific Way of Warfare’ on the similar revolution in military warfare is equally good. Bousquet’s book highlights the fact that the military is often the pioneer of the key projects of the control revolution and it also highlights just how similar the latest phase of this evolution is to early phases – the common desire for control combined with its constant subversion by reality. Most commentators assume that the threat to the project is external – by constantly evolving guerrilla warfare for example. But the analysis of the uncanny valley suggests that an equally great threat is endogenous – of increasing complexity and illegibility of the control project itself. Bousquet also explains how the control revolution is a child of the modern era and the culmination of the philosophy of the Enlightenment.
  • Much of the “innovation” of the control revolution was not technological but institutional – limited liability, macroeconomic stabilisation via central banks etc.
  • For more on the role of degeneracy in biological systems and how it enables near-optimal resilience, this paper by James Whitacre and Axel Bender is excellent.

Written by Ashwin Parameswaran

February 21st, 2012 at 5:38 pm

The Pathology of Stabilisation in Complex Adaptive Systems

with 72 comments

The core insight of the resilience-stability tradeoff is that stability leads to loss of resilience. Therefore stabilisation too leads to increased systemic fragility. But there is a lot more to it. In comparing economic crises to forest fires and river floods, I have highlighted the common patterns in the process of system fragilisation, which eventually leaves the system “manager” in a situation where there are no good options left.

Drawing upon the work of Mancur Olson, I have explored how the buildup of special interests means that stability is self-reinforcing. Once rent-seeking has achieved sufficient scale, “distributional coalitions have the incentive and..the power to prevent changes that would deprive them of their enlarged share of the social output”. But what if we “solve” the Olsonian problem? Would that mitigate the problem of increased stabilisation and fragility? In this post, I will argue that the cycle of fragility and collapse has much deeper roots than any particular form of democracy.

In this analysis, I am going to move away from ecological analogies and instead turn to an example from modern medicine. In particular, I am going to compare the experience and history of psychiatric medication in the second half of the twentieth century to some of the issues we have already looked at in macroeconomic and ecological stabilisation. I hope to convince you that the uncanny similarities in the patterns observed in stabilised systems across such diverse domains are not a coincidence. In fact, the human body provides us with a much closer parallel to economic systems than even ecological systems do with respect to the final stages of stabilisation. Most ecological systems collapse sooner simply because the limit on the resources that will be spent, in escalating fashion, to preserve stability is much lower. For example, there are limits to the resources that will be deployed to prevent a forest fire, no matter how catastrophic. On the other hand, the resources that will be deployed to prevent the collapse of any system that is integral to human beings are much larger.

Even by the standards of this blog, this will be a controversial article. In my discussion of psychiatric medicine I am relying primarily on Robert Whitaker’s excellent but controversial and much-disputed book ‘Anatomy of an Epidemic’. Nevertheless, I want to emphasise that my ultimate conclusions are much less incendiary than those of Whitaker. In the same way that I want to move beyond an explanation of the economic crisis that relies on evil bankers, crony capitalists and self-interested technocrats, I am trying to move beyond an explanation that blames evil pharma and misguided doctors for the crisis in mental health. I am not trying to imply that fraud and rent-seeking do not have a role to play. I am arguing that even if we eliminated them, the aim of a resilient economic and social system would not be realised.

THE PUZZLE

The puzzle of the history of macroeconomic stabilisation post-WW2 can be summarised as follows. Clearly every separate event of macroeconomic stabilisation works. Most monetary and fiscal interventions result in a rise in the financial markets, NGDP expectations and economic performance in the short run. Yet,

  • we are in the middle of a ‘great stagnation’ and have been for a few decades.
  • the frequency of crises seems to have risen dramatically in the last fifty years culminating in the environment since 2008 which is best described as a perpetual crisis.
  • each recovery seems to be weaker than the previous one and requires an increased injection of stimulus to achieve results that were easily achieved by a simple rate cut not that long ago.

Similarly, the history of mental health post-WW2 has been a puzzle, summarised by Whitaker as follows:

The puzzle can now be precisely summed up. On the one hand, we know that many people are helped by psychiatric medications. We know that many people stabilize well on them and will personally attest to how the drugs have helped them lead normal lives. Furthermore, as Satcher noted in his 1999 report, the scientific literature does document that psychiatric medications, at least over the short term, are “effective.” Psychiatrists and other physicians who prescribe the drugs will attest to that fact, and many parents of children taking psychiatric drugs will swear by the drugs as well. All of that makes for a powerful consensus: Psychiatric drugs work and help people lead relatively normal lives. And yet, at the same time, we are stuck with these disturbing facts: The number of disabled mentally ill has risen dramatically since 1955, and during the past two decades, a period when the prescribing of psychiatric medications has exploded, the number of adults and children disabled by mental illness has risen at a mind-boggling rate.

Whitaker then asks the obvious but heretical question – “Could our drug-based paradigm of care, in some unforeseen way, be fueling this modern-day plague?” and answers the question in the affirmative. But what are the precise mechanisms and patterns that underlie this deterioration?

Adaptive Response to Intervention and Drug Dependence

The fundamental reason why interventions fail in complex adaptive systems is the adaptive response triggered by the intervention, which subverts its aim. Moreover, once the system is artificially stabilised and system agents have adapted to this new stability, the system cannot cope with any abrupt withdrawal of the stabilising force. For example, Whitaker notes that

Neuroleptics put a brake on dopamine transmission, and in response the brain puts down the dopamine accelerator (the extra D2 receptors). If the drug is abruptly withdrawn, the brake on dopamine is suddenly released while the accelerator is still pressed to the floor. The system is now wildly out of balance, and just as a car might careen out of control, so too the dopaminergic pathways in the brain……In short, initial exposure to neuroleptics put patients onto a path where they would likely need the drugs for life.

Whitaker makes the same observation for benzodiazepines and antidepressants:

benzodiazepines….work by perturbing a neurotransmitter system, and in response, the brain undergoes compensatory adaptations, and as a result of this change, the person becomes vulnerable to relapse upon drug withdrawal. That difficulty in turn may lead some to take the drugs indefinitely.

(antidepressants) perturb neurotransmitter systems in the brain. This leads to compensatory processes that oppose the initial acute effects of a drug…. When drug treatment ends, these processes may operate unopposed, resulting in appearance of withdrawal symptoms and increased vulnerability to relapse.

Similarly, when a central bank protects incumbent banks against liquidity risk, the banks choose to hold progressively more illiquid portfolios. When central banks provide incumbent banks with cheap funding in times of crisis to prevent failure and creditor losses, the banks choose to take on more leverage. This is similar to what John Adams has termed the ‘risk thermostat’ – the system readjusts to get back to its preferred risk profile. The protection once provided is almost impossible to withdraw without causing systemic havoc as agents adapt to the new stabilised reality and lose the ability to survive in an unstabilised environment.
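
A stylised numerical sketch of the risk thermostat (the numbers here are arbitrary and purely illustrative) shows why the protection becomes almost impossible to withdraw: perceived risk stays pinned at the agent’s target, while the exposure that would be revealed by a withdrawal of support grows with the strength of the backstop:

```python
# Stylised sketch of the 'risk thermostat' idea (illustrative numbers only):
# the agent targets a fixed level of *perceived* risk, so any backstop that
# lowers perceived risk per unit of leverage is answered with more leverage.

TARGET_PERCEIVED_RISK = 10.0
TRUE_RISK_PER_UNIT = 2.0     # risk per unit of leverage if support is withdrawn

def chosen_leverage(perceived_risk_per_unit):
    # Lever up until perceived risk is back at the target ("thermostat").
    return TARGET_PERCEIVED_RISK / perceived_risk_per_unit

for backstop, perceived in [("none", 2.0), ("partial", 1.0), ("strong", 0.5)]:
    lev = chosen_leverage(perceived)
    print(f"backstop={backstop:7s} leverage={lev:5.1f} "
          f"perceived risk={lev * perceived:5.1f} "
          f"exposure if support withdrawn={lev * TRUE_RISK_PER_UNIT:5.1f}")
```

The stronger the backstop, the larger the gap between the risk the agent perceives and the risk it would face the moment support is removed.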

Of course, in economic systems, when agents actively intend to arbitrage such commitments by central banks, this is simply a form of moral hazard. But such an adaptation can easily occur via the natural selective forces at work in an economy – those who fail to take advantage of the Greenspan/Bernanke put simply go bust or get fired. In the brain, the adaptation simply reflects homeostatic mechanisms selected for by the process of evolution.

Transformation into a Pathological State, Loss of Core Functionality and Deterioration of the Baseline State

I have argued in many posts that the successive cycles of Minskyian stabilisation have a role to play in the deterioration in the structural performance of the real economy, which has manifested itself as ‘The Great Stagnation’. The same conclusion holds for many other complex adaptive systems, and our brain is no different. Stabilisation kills much of what makes human beings creative. Innovation and creativity are fundamentally disequilibrium processes, so it is no surprise that an environment of stability does not foster them. Whitaker interviews a patient on antidepressants who said: “I didn’t have mood swings after that, but instead of having a baseline of functioning normally, I was depressed. I was in a state of depression the entire time I was on the medication.”

He also notes disturbing research on the damage done to children who were treated for ADHD with Ritalin:

when researchers looked at whether Ritalin at least helped hyperactive children fare well academically, to get good grades and thus succeed as students, they found that it wasn’t so. Being able to focus intently on a math test, it turned out, didn’t translate into long-term academic achievement. This drug, Sroufe explained in 1973, enhances performance on “repetitive, routinized tasks that require sustained attention,” but “reasoning, problem solving and learning do not seem to be [positively] affected.”……Carol Whalen, a psychologist from the University of California at Irvine, noted in 1997 that “especially worrisome has been the suggestion that the unsalutary effects [of Ritalin] occur in the realm of complex, high-order cognitive functions such as flexible problem-solving or divergent thinking.”

Progressive Increase in Required Dosage

In economic systems, this steady structural deterioration means that increasing amounts of stimulus need to be applied in successive cycles of stabilisation to achieve the same levels of growth. Whitaker too identifies a similar tendency:

Over time, Chouinard and Jones noted, the dopaminergic pathways tended to become permanently dysfunctional. They became irreversibly stuck in a hyperactive state, and soon the patient’s tongue was slipping rhythmically in and out of his mouth (tardive dyskinesia) and psychotic symptoms were worsening (tardive psychosis). Doctors would then need to prescribe higher doses of antipsychotics to tamp down those tardive symptoms.

At this point, some of you may raise the following objection: so what if the new state is pathological? Maybe capitalism, with its inherent instability, is itself pathological. And once the safety nets of the Greenspan/Bernanke put, lender-of-last-resort programs and too-big-to-fail bailouts are put in place, why would we need or want to remove them? If we simply medicate the economy ad infinitum, can we not avoid collapse ad infinitum?

This argument however is flawed.

  • The ability of economic players to reorganise to maximise the rents extracted from central banking and state commitments far exceeds the resources available to the state and the central bank. The key reason for this is the purely financial nature of this commitment. For example, if the state decided to print money and support the price of corn at twice its natural market price, then it could conceivably do so forever. Sooner or later, rent extractors will run up against natural resource limits – for example, limits on arable land. But when the state commits to support a credit-money-dominant financial system and asset prices, then the economic system can and will generate financial “assets” without limit to take advantage of this commitment. The only defence that the CB and the state possess is regulations aimed at maintaining financial markets in an incomplete, underdeveloped state where economic agents do not possess the tools to game the system. Unfortunately, as Minsky and many others have documented, the pace of financial innovation over the last half-century has meant that banks and financialised corporates have all the tools they need to circumvent regulations and maximise rent extraction.
  • Even in a modern state that can print its own fiat currency, the ability to maintain financial commitments is subordinate to the need to control inflation. But doesn’t the complete absence of inflationary pressures in the current environment prove that we are nowhere close to any such limits? Not quite – as I have argued before, current macroeconomic policy is defined by an abandonment of the full employment target in order to mitigate any risk of inflation whatsoever. The inflationary risk caused by rent extraction from the stabilisation commitment is being counterbalanced by a “reserve army of labour”. The reason for giving up the full employment target is simple – as Minsky identified, once the economy has gone through successive cycles of stabilisation, it is prone to ‘rapid cycling’.

Rapid Cycling and Transformation of an Episodic Illness into a Chronic Illness

Minsky noted that

A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch.

In other words, an economy that attempts to achieve full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation – somewhat similar to a broken shower that only runs either too hot or too cold. The abandonment of the full employment target enables the system to postpone this point of rapid cycling.
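
The broken-shower image can be illustrated with a toy feedback loop (a simple analogy of my own, not a model taken from Minsky; the gains are arbitrary): a stabiliser that corrects the observed deviation from target with a modest gain damps shocks smoothly, but past a threshold the same policy overshoots in both directions and each overshoot invites a still larger correction:

```python
# Toy analogy for the 'broken shower': a stabiliser corrects the deviation
# observed at the start of each period with gain k. Modest gain damps shocks;
# past a threshold the same policy makes the system swing between
# "too hot" and "too cold" with growing amplitude.

def deviation_path(gain, periods=8, initial_deviation=1.0):
    d = initial_deviation
    path = []
    for _ in range(periods):
        d -= gain * d          # correction applied to the observed deviation
        path.append(round(d, 2))
    return path

for k in (0.5, 1.5, 2.2):
    print(f"gain={k}: {deviation_path(k)}")
# gain=0.5 -> deviation decays smoothly toward zero
# gain=1.5 -> overshoots, then oscillates with shrinking amplitude
# gain=2.2 -> each correction overshoots more than the last: rapid cycling
```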

The structural malformation of the economic system due to the application of increasing levels of stimulus to the task of stabilisation means that the economy has lost the ability to generate the endogenous growth and innovation that it could generate before it was so actively stabilised. The system has now been homogenised and is entirely dependent upon constant stimulus. The phenomenon of ‘rapid cycling’ also explains something I noted in an earlier post: the apparently schizophrenic nature of the markets, which turn from risk-on to risk-off at the drop of a hat. It is the lack of diversity that causes this, as the vast majority of agents change their behaviour based on the presence or absence of stabilising interventions.

Whitaker again notes the connection between medication and rapid cycling in many instances:

As early as 1965, before lithium had made its triumphant entry into American psychiatry, German psychiatrists were puzzling over the change they were seeing in their manic-depressive patients. Patients treated with antidepressants were relapsing frequently, the drugs “transforming the illness from an episodic course with free intervals to a chronic course with continuous illness,” they wrote. The German physicians also noted that in some patients, “the drugs produced a destabilization in which, for the first time, hypomania was followed by continual cycling between hypomania and depression.”

(stimulants) cause children to cycle through arousal and dysphoric states on a daily basis. When a child takes the drug, dopamine levels in the synapse increase, and this produces an aroused state. The child may show increased energy, an intensified focus, and hyperalertness. The child may become anxious, irritable, aggressive, hostile, and unable to sleep. More extreme arousal symptoms include obsessive-compulsive and hypomanic behaviors. But when the drug exits the brain, dopamine levels in the synapse sharply drop, and this may lead to such dysphoric symptoms as fatigue, lethargy, apathy, social withdrawal, and depression. Parents regularly talk of this daily “crash.”

THE PATIENT WANTS STABILITY TOO

At this point, I seem to be arguing that stabilisation is all just a con-game designed to enrich evil bankers, evil pharma and the like. But such an explanation underestimates just how deep-seated the temptation and the need to stabilise really are. The most critical thing it misses is that the “patient” in complex adaptive systems is as eager to choose stability over resilience as the doctor is.

The Short-Term vs The Long-Term

As Daniel Carlat notes, the reality is that on the whole, psychiatric drugs “work” at least in the short term. Similarly, each individual act of macroeconomic stabilisation such as a lender-of-last-resort intervention, quantitative easing or a rate cut clearly has a positive impact on the short-term performance of both asset markets and the economy.

Whitaker too acknowledges this:

Those are the dueling visions of the psychopharmacology era. If you think of the drugs as “anti-disease” agents and focus on short-term outcomes, the young lady springs into sight. If you think of the drugs as “chemical imbalancers” and focus on long-term outcomes, the old hag appears. You can see either image, depending on where you direct your gaze.

The critical point here is that, just as with forest fires and macroeconomies, initial stabilisation can be achieved easily and with very little medication. The results may even seem miraculous. But this initial period does not last. From one of many cases Whitaker quotes:

at first, “it was like a miracle,” she says. Andrew’s fears abated, he learned to tie his shoes, and his teachers praised his improved behavior. But after a few months, the drug no longer seemed to work so well, and whenever its effects wore off, there would be this “rebound effect.” Andrew would “behave like a wild man, out of control.” A doctor increased his dosage, only then it seemed that Andrew was like a “zombie,” his sense of humor reemerging only when the drug’s effects wore off. Next, Andrew needed to take clonidine in order to fall asleep at night. The drug treatment didn’t really seem to be helping, and so Ritalin gave way to other stimulants, including Adderall, Concerta, and dextroamphetamine. “It was always more drugs,” his mother says.

Medication Seen as Revealing Structural Flaws

One would think that the functional and structural deterioration that follows constant medication would cause both the patient and the doctor to reconsider the benefits of stabilisation. But this deterioration too can be interpreted in many different ways. Whitaker gives an example where the stabilised state is seen to be beneficial by revealing hitherto undiagnosed structural problems:

in 1982, Michael Strober and Gabrielle Carlson at the UCLA Neuropsychiatric Institute put a new twist into the juvenile bipolar story. Twelve of the sixty adolescents they had treated with antidepressants had turned “bipolar” over the course of three years, which—one might think—suggested that the drugs had caused the mania. Instead, Strober and Carlson reasoned that their study had shown that antidepressants could be used as a diagnostic tool. It wasn’t that antidepressants were causing some children to go manic, but rather the drugs were unmasking bipolar illness, as only children with the disease would suffer this reaction to an anti-depressant. “Our data imply that biologic differences between latent depressive subtypes are already present and detectable during the period of early adolescence, and that pharmacologic challenge can serve as one reliable aid in delimiting specific affective syndromes in juveniles,” they said.

Drug Withdrawal as Proof That It Works

The symptoms of drug withdrawal can also be interpreted to mean that the drug was necessary and that the patient is fundamentally ill. The reduction in withdrawal symptoms when the patient goes back on the drug provides further “proof” that the drug works. Withdrawal symptoms can also be interpreted as proof that the patient needs to be treated for a longer period. Again, quoting from Whitaker:

Chouinard and Jones’s work also revealed that both psychiatrists and their patients would regularly suffer from a clinical delusion: They would see the return of psychotic symptoms upon drug withdrawal as proof that the antipsychotic was necessary and that it “worked.” The relapsed patient would then go back on the drug and often the psychosis would abate, which would be further proof that it worked. Both doctor and patient would experience this to be “true,” and yet, in fact, the reason that the psychosis abated with the return of the drug was that the brake on dopamine transmission was being reapplied, which countered the stuck dopamine accelerator. As Chouinard and Jones explained: “The need for continued neuroleptic treatment may itself be drug-induced.”

while they acknowledged that some alprazolam patients fared poorly when the drug was withdrawn, they reasoned that it had been used for too short a period and the withdrawal done too abruptly. “We recommend that patients with panic disorder be treated for a longer period, at least six months,” they said.

Similarly, macroeconomic crises can and frequently are interpreted as a need for better and more stabilisation. The initial positive impact of each intervention and the negative impact of reducing stimulus only reinforces this belief.

SCIENCE AND STABILISATION

A typical complaint against Whitaker’s argument is that his thesis is unproven. I would argue that within the confines of conventional “scientific” data analysis, his thesis and others directly opposed to it are essentially unprovable. To take an example from economics, is the current rush towards “safe” assets a sign that we need to produce more “safe” assets? Or is it a sign that our fragile economic system is addicted to the need for an ever-increasing supply of “safe” assets and what we need is a world in which no assets are safe and all market participants are fully aware of this fact?

In complex adaptive systems it can also be argued that the modern scientific method that relies on empirical testing of theoretical hypotheses against the data is itself fundamentally biased towards stabilisation and against resilience. The same story that I trace out below for the history of mental health can be traced out for economics and many other fields.

Desire to Become a ‘Real’ Science

Whitaker traces out how the theory attributing mental disorders to chemical imbalances was embraced because it enabled psychiatrists to become “real” doctors. The following passage captures the mood of the profession in the 80s:

Since the days of Sigmund Freud the practice of psychiatry has been more art than science. Surrounded by an aura of witchcraft, proceeding on impression and hunch, often ineffective, it was the bumbling and sometimes humorous stepchild of modern science. But for a decade and more, research psychiatrists have been working quietly in laboratories, dissecting the brains of mice and men and teasing out the chemical formulas that unlock the secrets of the mind. Now, in the 1980s, their work is paying off. They are rapidly identifying the interlocking molecules that produce human thought and emotion…. As a result, psychiatry today stands on the threshold of becoming an exact science, as precise and quantifiable as molecular genetics.

Search for the Magic Bullet despite Complexity of Problem

In the language of medicine, a ‘magic bullet’ is a drug that counters the root cause of the disease without adversely affecting any other part of the patient. The chemical-imbalance theory took a ‘magic bullet’ approach which reduced the complexity of our mental system to “a simple disease mechanism, one easy to grasp. In depression, the problem was that the serotonergic neurons released too little serotonin into the synaptic gap, and thus the serotonergic pathways in the brain were “underactive”. Antidepressants brought serotonin levels in the synaptic gap up to normal, and that allowed these pathways to transmit messages at a proper pace.”

Search for Scientific Method and Objective Criteria

Whitaker traces out the push towards making psychiatry an objective science with a defined method and its implications:

Congress had created the NIMH with the thought that it would transform psychiatry into a more modern, scientific discipline…..Psychiatrists and nurses would use “rating scales” to measure numerically the characteristic symptoms of the disease that was to be studied. Did a drug for schizophrenia reduce the patient’s “anxiety”? His or her “grandiosity”? “Hostility”? “Suspiciousness”? “Unusual thought content”? “Uncooperativeness”? The severity of all of those symptoms would be measured on a numerical scale and a total “symptom” score tabulated, and a drug would be deemed effective if it reduced the total score significantly more than a placebo did within a six-week period. At least in theory, psychiatry now had a way to conduct trials of psychiatric drugs that would produce an “objective” result. Yet the adoption of this assessment put psychiatry on a very particular path: The field would now see short-term reduction of symptoms as evidence of a drug’s efficacy. Much as a physician in internal medicine would prescribe an antibiotic for a bacterial infection, a psychiatrist would prescribe a pill that knocked down a “target symptom” of a “discrete disease.” The six-week “clinical trial” would prove that this was the right thing to do. However, this tool wouldn’t provide any insight into how patients were faring over the long term.

It cannot be emphasised enough that even increasing the period of the scientific trial is not enough to give us definitive answers. The argument that structural flaws are being uncovered, or that withdrawal proves that the drug works, cannot be definitively refuted. Moreover, at every point in time after medication is started, the short-term impact of staying on or increasing the level of medication is better than the alternative of going off the medication. The deeper issue is that in such a system, statistical analysis that tries to determine the efficacy of the intervention cannot deal with the fact that the nature of the intervention itself is to shift the distribution of outcomes into the tail – and to keep doing so as long as the level of medication keeps increasing.
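
A toy Monte Carlo makes the problem with short trials concrete (both outcome distributions below are invented purely for illustration): a regime that buys a smoother, slightly better short run at the cost of a rare large collapse wins almost every short comparison, even though its long-run average is worse:

```python
# Toy Monte Carlo (all distributions invented for illustration): why a short
# trial favours the stabilised regime even when stabilisation shifts the bad
# outcomes into a rare, much larger tail event.
import random

rng = random.Random(42)

def unstabilised(periods):
    # Noisy, with frequent small setbacks but no large crashes.
    return [1.0 + rng.gauss(0, 0.4) for _ in range(periods)]

def stabilised(periods):
    # Smoother and better most of the time, with a rare large collapse.
    return [-40.0 if rng.random() < 0.01 else 1.3 + rng.gauss(0, 0.1)
            for _ in range(periods)]

def mean(xs):
    return sum(xs) / len(xs)

# Short "trial": 6 periods, repeated 1,000 times.
wins = sum(mean(stabilised(6)) > mean(unstabilised(6)) for _ in range(1000))
print(f"stabilised regime wins {wins} of 1000 short trials")

# Long horizon: the tail events finally show up in the average.
print("long-run mean, unstabilised:", round(mean(unstabilised(100_000)), 3))
print("long-run mean, stabilised:  ", round(mean(stabilised(100_000)), 3))
```

The six-period comparison mirrors the six-week trial described above; only a horizon long enough to sample the tail reveals the trade-off.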

The Control Agenda and High Modernism

The desire for stability and the control agenda are not simply a consequence of the growth of Olsonian special interests in the economy. The title of this post is inspired by Holling and Meffe’s classic paper on this topic in ecology. Their paper highlights that stabilisation is embedded within the command-and-control approach, which is itself inherent to the high-modernist worldview that James Scott has criticised.

Holling and Meffe also recognise that it is a simplistic application of “scientific” methods that underpins this command-and-control philosophy:

much of present ecological theory uses the equilibrium definition of resilience, even though that definition reinforces the pathology of equilibrium-centered command and control. That is because much of that theory draws predominantly from traditions of deductive mathematical theory (Pimm 1984) in which simplified, untouched ecological systems are imagined, or from traditions of engineering in which the motive is to design systems with a single operating objective (Waide & Webster 1976; De Angelis et al. 1980; O’Neill et al. 1986), or from small-scale quadrat experiments in nature (Tilman & Downing 1994) in which long-term, large-scale successional or episodic transformations are not of concern. That makes the mathematics more tractable, it accommodates the engineer’s goal to develop optimal designs, and it provides the ecologist with a rationale for utilizing manageable, small sized, and short-term experiments, all reasonable goals. But these traditional concepts and techniques make the world appear more simple, tractable, and manageable than it really is. They carry an implicit assumption that there is global stability – that there is only one equilibrium steady-state, or, if other operating states exist, they should be avoided with safeguards and regulatory controls. They transfer the command-and-control myopia of exploitive development to similarly myopic demands for environmental regulations and prohibitions.

Those who emphasize ecosystem resilience, on the other hand, come from traditions of applied mathematics and applied resource ecology at the scale of ecosystems, such as the dynamics and management of freshwater systems (Fiering 1982), forests (Clark et al. 1979), fisheries (Walters 1986), semiarid grasslands (Walker et al. 1969), and interacting populations in nature (Dublin et al. 1990; Sinclair et al. 1990). Because these studies are rooted in inductive rather than deductive theory formation and in experience with the effects of large-scale management disturbances, the reality of flips from one stable state to another cannot be avoided (Holling 1986).


My aim in this last section is not to argue against the scientific method but simply to state that we have adopted too narrow a definition of what constitutes a scientific endeavour. Even this is not a coincidence. High modernism has its roots firmly planted in Enlightenment rationality and philosophical viewpoints that lie at the core of our idea of progress. In many uncertain domains, genuine progress and stabilisation that leads to fragility cannot be distinguished from each other. These are topics that I hope to explore in future posts.


Written by Ashwin Parameswaran

December 14th, 2011 at 10:51 am

The Great Divide between Academics and Practitioners

with 3 comments

In an excellent article, Mark Thoma highlights the great divide between academics and practitioners in economics. He also identifies the fundamental reason for this divide – practitioners typically rely on intuition and rough heuristics whereas academics rely on rigorous theoretical constructs.

In Herbert Simon’s terminology, practitioners are satisficers, not optimisers. In a recent post, I outlined my preferred framework for analysing monetary policy as an attempt to influence the real rate curve. Clearly, this viewpoint is not rigorous, but for my purposes transforming this framework into a theoretically watertight construct is not worth the effort. The aim of the framework is simple: to give me a quick and dirty but useful way to process information and market data. It is also almost certain that in some scenarios this framework will break down – but instead of thinking about all such scenarios in advance, I simply trust that my experience and my gut instinct will warn me when such a scenario occurs.

To most academics, the process I have outlined above would seem to be a distinctly unscientific and “irrational” method. But as I have argued before, heuristics and intuition are rational responses in an uncertain environment where time and resources are scarce. Herbert Simon and Gerd Gigerenzer have both done excellent work on the role of heuristics but the most relevant research on the role of intuition has been undertaken by Gary Klein and other researchers in the field of ‘Naturalistic Decision Making’ (NDM). NDM originated from Klein’s work in analysing the decision-making of firefighters – as Klein explains in this recent interview, expert firefighters follow a process that is far removed from the conventional definition of rational choice. They “build up a repertoire of patterns so that they can immediately identify, classify, and categorize situations, and have a rapid impulse about what to do. Not just what to do, but they’re framing the situation, and their frame is telling them what are the important cues. That’s why they’re always looking, or usually looking, in the right place. They know what to ignore, and what they have to watch carefully.” There’s nothing magical about this:

“Intuition is about expertise and tacit knowledge. I’ll contrast tacit knowledge with explicit knowledge. Explicit knowledge is knowledge of factual material. I can tell people facts, I can tell them over the phone, and they’ll know things. I can say I was born in the Bronx, and now you know where I was born. That’s an example of explicit knowledge, it’s factual information.

But there are other forms of knowledge. There’s knowledge about routines. Routines you can think of as a set of steps. But there’s also tacit knowledge, and expertise about when to start each step, and when it’s finished, when you’re done and ready to start the next one, and whether the steps are working or not. So even for routines, some expertise is needed.
There are other aspects of tacit knowledge that are about intuition, like our ability to make perceptual discriminations, so as we get experience, we can see things that we couldn’t see before….

Judgments based on intuition seem mysterious because intuition doesn’t involve explicit knowledge. It doesn’t involve declarative knowledge about facts. Therefore, we can’t explicitly trace the origins of our intuitive judgments. They come from other parts of our knowing. They come from our tacit knowledge and so they feel magical. Intuitions sometimes feel like we have ESP, but it isn’t magical, it’s really a consequence of the experience we’ve built up.”

Larry Summers is correct in noting that the solution is not for practitioners to become academics. It is for more academics to rigorously analyse the intuitive and heuristic-based methods and explanations that practitioners use. The real gap lies in the paucity of applied economists and in the misguided tendency to equate applied work with data-crunching rather than practical knowledge. As Daniel Kahneman explains in his introduction to Gary Klein’s interview:

“In the US, the word “applied” tends to diminish anything academic it touches. Add the word to the name of any academic discipline, from mathematics and statistics to psychology, and you find lowered status. The attitude changed briefly during World War II, when the best academic psychologists rolled up their sleeves to contribute to the war effort. I believe it was not an accident that the 15 years following the war were among the most productive in the history of the discipline. Old methods were discarded, old methodological taboos were dropped, and common sense prevailed over stale theoretical disputes. However, the word “applied” did not retain its positive aura for very long. It is a pity.

Gary Klein is a living example of how useful applied psychology can be when it is done well. Klein is first and mainly a keen observer. He looks at people who are good at their job as they exercise their skills, sometimes in life-and-death situations, and he reports what he sees in clear and eloquent prose.  When you read his descriptions of real experts at work, you feel that it is the job of theorists to accommodate what he has seen – instead of doing what we often do, which is to scan the “real world” (when we think of it at all) for illustrations of our theoretical notions.”

The divide that Mark Thoma identifies is not restricted to economics – in the age of ‘Big Data’, all academic disciplines are moving away from the sort of work that requires researchers to get their hands dirty. In an excellent post, Jennifer Jacquet explains how field ecologists like Robert Paine are a dying breed, replaced by ecologists more at home on a computer than in the field. This is not a criticism of mathematical ecology, simply an assertion that the kind of insights Bob Paine derived from spending “45 years knee-deep in kelp and invertebrates on Washington State’s coast” are valuable and cannot be replicated by other means. The presumption that data and theory can substitute for experience on the ground is symptomatic of a broader downgrading of tacit and contextual knowledge in academic economics, reflected most notably in the neglect of economic history and institutional detail. One of the most striking deficiencies in economic theory exposed during the crisis was the disconnect between monetary economics and the institutional reality of the new regime of shadow banking and derivatives. Hyman Minsky’s theories are relevant not because of their theoretical elegance but because of their firm grounding in the institutional evolution of the post-war monetary and banking system, a topic that he researched in great detail.

An example of how this balance between the theoretical and applied fields can be restored is provided by the collaborative work between Daniel Kahneman and Gary Klein. Kahneman and Klein have spent their entire careers tackling the same field (the psychology of decision-making) with diametrically opposed approaches – Kahneman focuses on controlled lab experiments, comparisons of decision-making performance to an objective optimum, and a generally skeptical stance towards human cognition. Klein focuses on research in real-world organisations, analysis of actual performance through more subjective variables and a generally admiring stance on human cognition. Yet they were able to collaborate and find common ground, the results of which are summarised in a fascinating paper. Economics could do with more applied researchers like Gary Klein as well as more theoretical researchers like Daniel Kahneman who are open to applied practical insights.


Written by Ashwin Parameswaran

July 27th, 2011 at 9:43 am

Posted in Rationality

Advances in Technology and Artificial Intelligence: Implications for Education and Employment

with 10 comments

In a recent article, Paul Krugman pointed out the fallacies in the widely held belief that more education for all will lead to better jobs, lower unemployment and reduced inequality in the economy. The underlying thesis in Krugman’s argument (drawn from Autor, Levy and Murnane) is straightforward and compelling: advances in computerisation do not increase the demand for all “skilled” labour. Instead they reduce the demand for routine tasks, including many tasks that we currently perceive as skilled and that require significant formal education for a human being to carry out effectively.

This post is my take on what advances in technology, in particular artificial intelligence, imply for the nature of employment and education in our economy. In a nutshell, advances in artificial intelligence and robotics mean that the type of education and employment that has been dominant throughout the past century is now almost obsolete. The routine jobs of 20th-century manufacturing and services that were so amenable to creating mass employment are increasingly a thing of the past. This does not imply that college education is irrelevant. But it does imply that our current educational system, which is geared towards imparting routine and systematic skills and knowledge, needs a radical overhaul.

As Autor et al note, routine human tasks have gradually been replaced by machinery and technology since at least the advent of the Industrial Revolution. What has changed in the last twenty years with the advent of computerisation is that the sphere of human activities that can be replaced by technology has broadened significantly. But there are still some significant holes. The skills that Autor et al identify as complementary to rather than substitutable by computerisation are those that have proved most challenging for AI researchers to replicate. The inability to automate many tasks that require human sensory and motor skills is an example of what AI researchers call Moravec’s Paradox. Hans Moravec observed that it is much easier to engineer apparently complex computational tasks, such as the ability to play chess, than it is to engineer the sensorimotor ability of a one-year-old child. In a sense, computers find it hard to mimic some of our animalistic skills and relatively easy to mimic many of the abilities that we have long thought of as separating us from other animals. Moravec’s paradox explains why many manual jobs such as driving a car have so far resisted automation. At the same time, AI has also found it hard to engineer the ability to perform some key non-routine cognitive tasks, such as the ability to generate creative and novel solutions under conditions of significant irreducible uncertainty.

One of the popular misconceptions about the limits of AI/technology is the notion that the engineered alternative must mimic the human skillset completely in order to replace it. In many tasks the human method may not be the only way, or even the best way, to achieve the task. For example, the Roomba and other robots built on subsumption architectures do not need to operate like a human being to get the job done. Similarly, a chess program can compete with a human player even though the brute-force method of the computer has very little in common with the pattern-recognising, intuitive method of the grandmaster. Moreover, automating and replacing human intervention frequently involves a redesign of the operating environment in which the task is performed so as to reduce uncertainty, so that the underlying task can be transformed into a routine and automatable one. Herbert Simon identified this long ago when he noted: “If we want an organism or mechanism to behave effectively in a complex and changing environment, we can design into it adaptive mechanisms that allow it to respond flexibly to the demands the environment places on it. Alternatively, we can try to simplify and stabilize the environment. We can adapt organism to environment or environment to organism”. To hazard a guess, the advent of the “car that drives itself” will probably involve a significant redesign of the layout and rules of our roads.

This redesign of the work environment to reduce uncertainty lies at the heart of the Taylorist/Fordist logic that brought us the assembly-line production system and has now been applied to many white-collar office jobs. Of course this uncertainty is not eliminated. As Richard Langlois notes, it is “pushed up the hierarchy to be dealt with by adaptable and less-specialized humans”, or in many cases it can even be pushed out of the organisation itself. Either way, what is indisputable is that for the vast majority of employees, whether on an assembly line at Foxconn or in a call center in India, the job content is strictly codified and routine. Ironically, this very process of transforming a job into one amenable to mass employment means that the job is that much more likely to be automated in the future, as the sphere of activities thwarted by Moravec’s paradox shrinks. For example, we may prefer competent customer service from our bank but have long since reconciled ourselves to sub-standard customer service as the price we pay for cheap banking. Once we have replaced the “tacit knowledge” of the “expert” customer service agent with an inexperienced agent who needs to be provided with clear rules, we are that much closer to replacing the agent in the process altogether.

The implication of my long-winded argument is that even Moravec’s paradox will not shield otherwise close-to-routine activities from automation in the long run. That leaves employment opportunities concentrated in significantly non-routine tasks (cognitive or otherwise) that are hard to replicate effectively through computational means. It is easy to understand why the generation of novel and creative solutions is difficult to replicate in a systematic manner, but this is not the only class of activities that falls under this umbrella. Also relevant are many activities that require what Hubert and Stuart Dreyfus call expert know-how. In their study of skill acquisition and training, which was to form the basis of their influential critique of AI, they note that as one moves from being a novice at an activity to being an expert, the role of rules and algorithms in guiding our actions diminishes and is replaced by an intuitive, tacit understanding. As Hubert Dreyfus notes, “a chess grandmaster not only sees the issues in a position almost immediately, but the right response just pops into his or her head.”

The irony of course is that the Taylorist logic of the last century has focused precisely on eliminating the need for such expert know-how, in the process driving our educational system to de-emphasise it. What we need is not so much more education as a radically different kind of education. Frank Levy himself made this very point in an article a few years ago, but the need to overhaul our industrial-age education system has been most eloquently championed by Sir Ken Robinson [1,2]. To say that our educational system needs to focus on “creativity” is not to claim that we all need to become artists and scientists. Creativity here is defined simply as the ability to explore effectively rather than follow an algorithmic routine, an ability that many of our current methods of “teaching” are not set up to cultivate. It applies as much to the intuitive, unpredictable nature of biomedical research detailed by James Austin as it does to the job of an expert motorcycle mechanic that Matthew Crawford describes so eloquently. The need to move beyond a simple, algorithmic level of expertise is driven not by sentiment but increasingly by necessity, as the scope of tasks that can be performed by AI agents expands. A corollary of this line of thought is that jobs that can provide “mass” employment will likely be increasingly hard to find. This does not mean that full employment is impossible, simply that any job routine enough to employ a large number of people in a very similar role is likely to be automated sooner or later.

 


Written by Ashwin Parameswaran

March 15th, 2011 at 1:43 pm

Evolvability, Robustness and Resilience in Complex Adaptive Systems

with 14 comments

In a previous post, I asserted that “the existence of irreducible uncertainty is sufficient to justify an evolutionary approach for any social system, whether it be an organization or a macro-economy.” This is not a controversial statement – Nelson and Winter introduced their seminal work on evolutionary economics as follows: “Our evolutionary theory of economic change…is not an interpretation of economic reality as a reflection of supposedly constant “given data” but a scheme that may help an observer who is sufficiently knowledgeable regarding the facts of the present to see a little further through the mist that obscures the future.”

In microeconomics, irreducible uncertainty implies a world of bounded rationality in which many heuristics are not signs of irrationality but rational and effective tools of decision-making. But it is the implications of human action under uncertainty for macro-economic outcomes that are the focus of this blog – in previous posts (1,2) I have elaborated upon the resilience-stability tradeoff and its parallels in economics and ecology. This post focuses on another issue critical to the functioning of all complex adaptive systems: the relationship between evolvability and robustness.

Evolvability and Robustness Defined

Hiroaki Kitano defines robustness as follows: “Robustness is a property that allows a system to maintain its functions despite external and internal perturbations….A system must be robust to function in unpredictable environments using unreliable components.” Kitano makes it explicit that robustness is concerned with the maintenance of functionality rather than specific components: “Robustness is often misunderstood to mean staying unchanged regardless of stimuli or mutations, so that the structure and components of the system, and therefore the mode of operation, is unaffected. In fact, robustness is the maintenance of specific functionalities of the system against perturbations, and it often requires the system to change its mode of operation in a flexible way. In other words, robustness allows changes in the structure and components of the system owing to perturbations, but specific functions are maintained.”

Evolvability is defined as the ability of the system to generate novelty and innovate, thus enabling the system to “adapt in ways that exploit new resources or allow them to persist under unprecedented environmental regime shifts” (Whitacre 2010). At first glance, evolvability and robustness appear to be incompatible: the generation of novelty involves a leap into the dark, an exploration rather than an act of “rational choice”, and the search for a beneficial innovation carries with it a significant risk of failure. It is worth noting that in social systems this dilemma vanishes in the absence of irreducible uncertainty. If all adaptations are merely a realignment to a known systemic configuration (“known” in either a deterministic or a probabilistic sense), then an inability to adapt needs other explanations, such as organisational rigidity.

Evolvability, Robustness and Resilience

Although it is typical to equate resilience with robustness, resilient complex adaptive systems also need to possess the ability to innovate and generate novelty. As Allen and Holling put it: “Novelty and innovation are required to keep existing complex systems resilient and to create new structures and dynamics following system crashes”. Evolvability also enables the system to undergo fundamental transformational change – it could be argued that such innovations are even more important in a modern capitalist economic system than they are in the biological or ecological arena. The rest of this post will focus on elaborating upon how macro-economic systems can be both robust and evolvable at the same time – the apparent conflict between evolvability and robustness arises from a fallacy of composition where macro-resilience is assumed to arise from micro-resilience, when in fact it arises from the very absence of micro-resilience.

EVOLVABILITY, ROBUSTNESS AND RESILIENCE IN MACRO-ECONOMIC SYSTEMS

The pre-eminent reference on how a macro-economic system can be both robust and evolvable at the same time is the work of Burton Klein in his books “Dynamic Economics” and “Prices, Wages and Business Cycles: A Dynamic Theory”. But as with so many other topics in evolutionary economics, no one has summarised it better than Brian Loasby: “Any economic system which is to remain viable over a long period must be able to cope with unexpected change. It must be able to revise or replace policies which have worked well. Yet this ability is problematic. Two kinds of remedy may be tried, at two different system levels. One is to try to sensitize those working within a particular research programme to its limitations and to possible alternatives, thus following Menger’s principle of creating private reserves against unknown but imaginable dangers, and thereby enhancing the capacity for internal adaptation….But reserves have costs; and it may be better, from a system-wide perspective, to accept the vulnerability of a sub-system in order to exploit its efficiency, while relying on the reserves which are the natural product of a variety of sub-systems….
Research programmes, we should recall, are imperfectly specified, and two groups starting with the same research programme are likely to become progressively differentiated by their experience, if there are no strong pressures to keep them closely aligned. The long-run equilibrium of the larger system might therefore be preserved by substitution between sub-systems as circumstances change. External selection may achieve the same overall purpose as internal adaptation – but only if the system has generated adequate variety from which the selection may be made. An obvious corollary which has been emphasised by Klein (1977) is that attempts to preserve sub-system stability may wreck the larger system. That should not be a threatening notion to economists; it also happens to be exemplified by Marshall’s conception of the long-period equilibrium of the industry as a population equilibrium, which is sustained by continued change in the membership of that population. The tendency of variation is not only a chief cause of progress; it is also an aid to stability in a changing environment (Eliasson, 1991). The homogeneity which is conducive to the attainment of conventional welfare optima is a threat to the resilience which an economy needs.”

Uncertainty can be tackled at the micro-level by maintaining reserves and slack (liquidity, retained profits), but this comes at the price of slack at the macro-level in the form of lost output and employment. Note that this is essentially a Keynesian conclusion, similar to how individually rational saving decisions can lead to collectively sub-optimal outcomes. From a systemic perspective, it is preferable to substitute this micro-resilience with a diverse set of micro-fragilities. But how do we induce the loss of slack at the firm level? And how do we ensure that this loss of micro-resilience occurs in a sufficiently diverse manner?

The “Invisible Foot”

The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy: “Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels….But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.” To maintain an evolvable macro-economy, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition.”

Entry of New Firms and the Invisible Foot

Burton Klein’s great contribution, along with other dynamic economists of the time (notably Gunnar Eliasson), was to highlight the critical importance of the entry of new firms in maintaining the efficacy of the invisible foot. Klein believed that “the degree of risk taking is determined by the robustness of dynamic competition, which mainly depends on the rate of entry of new firms. If entry into an industry is fairly steady, the game is likely to have the flavour of a highly competitive sport. When some firms in an industry concentrate on making significant advances that will bear fruit within several years, others must be concerned with making their long-run profits as large as possible, if they hope to survive. But after entry has been closed for a number of years, a tightly organised oligopoly will probably emerge in which firms will endeavour to make their environments highly predictable in order to make their short-run profits as large as possible….Because of new entries, a relatively concentrated industry can remain highly dynamic. But, when entry is absent for some years, and expectations are premised on the future absence of entry, a relatively concentrated industry is likely to evolve into a tight oligopoly. In particular, when entry is long absent, managers are likely to be more and more narrowly selected; and they will probably engage in such parallel behaviour with respect to products and prices that it might seem that the entire industry is commanded by a single general!”

Again, it can’t be emphasised enough that this argument does not depend on incumbent firms leaving money on the table – on the contrary, they may redouble their attempts at static optimisation. From the perspective of each individual firm, innovation is an incredibly risky process, even though the result of such dynamic competition at the level of the industry or the macro-economy may be reasonably predictable. Of course, firms can and do mitigate this risk in various ways, but the argument only claims that no single firm, however dominant, can replicate in-house the “risk-free” innovation dynamics of a vibrant industry.

Micro-Fragility as the Hidden Hand of Macro-Resilience

In an environment free of irreducible uncertainty, evolvability suffers, leading to reduced macro-resilience. “If firms could predict each others’ advances they would not have to insure themselves against uncertainty by taking risks. And no smooth progress would occur” (Klein 1977). Conversely, “because firms cannot predict each other’s discoveries, they undertake different approaches towards achieving the same goal. And because not all of the approaches will turn out to be equally successful, the pursuit of parallel paths provides the options required for smooth progress.”
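To make the fallacy of composition concrete, here is a minimal toy simulation (my own illustrative sketch, not drawn from Klein or Eliasson; the 20% success probability and firm counts are arbitrary assumptions). Each firm’s innovation attempt is individually fragile – it usually fails – yet an industry of firms pursuing independent parallel paths delivers breakthroughs far more reliably than a homogeneous industry in which every firm bets on the same approach.

```python
# Toy Monte Carlo: reliability of industry-level progress when firms pursue
# independent parallel paths vs. when all firms bet on the same approach.
# The per-approach success probability (20%) is an arbitrary assumption.
import random

random.seed(42)

def breakthrough_rate(n_firms: int, diverse: bool,
                      p_success: float = 0.2, trials: int = 10_000) -> float:
    """Fraction of trials in which at least one firm achieves a breakthrough."""
    hits = 0
    for _ in range(trials):
        if diverse:
            # Each firm explores its own approach; failures are independent.
            success = any(random.random() < p_success for _ in range(n_firms))
        else:
            # Homogeneous industry: one shared approach, so the industry
            # succeeds or fails as a single unit.
            success = random.random() < p_success
        hits += success
    return hits / trials

for n in (1, 5, 20):
    print(f"{n:2d} firms  diverse: {breakthrough_rate(n, True):.2f}  "
          f"homogeneous: {breakthrough_rate(n, False):.2f}")
```

The point of the sketch is only directional: aggregate smoothness comes from the diversity of individually risky bets, not from making each bet safe.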

The Aftermath of the Minsky Moment: A Problem of Micro-Resilience

Within the context of the current crisis, the pre-Minsky-moment system was a homogeneous system with no slack, which enabled the attainment of “conventional welfare optima” but at the cost of being incredibly fragile and unevolvable. The logical evolution of such a system after the Minsky moment is of course still a homogeneous system, but one with significant firm-level slack built in, which is equally unsatisfactory. In such a situation, the kind of macro-economic intervention matters as much as its force. For example, in an ideal world, monetary policy aimed at reducing the borrowing rates of incumbent banks and corporates would flow through into reduced borrowing rates for new firms. In a dynamically uncompetitive world, such a policy only serves the interests of the incumbents.

The “Invisible Foot” and Employment

Vivek Wadhwa argues that startups are the main source of net job growth in the US economy and Mark Thoma links to research that confirms this. Even if one disagrees, the “invisible foot” argument implies that if the old guard is to contribute to employment, it must be forced to give up its “slack” by the strength of dynamic competition – and dynamic competition is maintained by preserving conditions that encourage the entry of new firms.

MICRO-EVOLVABILITY AND MACRO-RESILIENCE IN BIOLOGY AND ECOLOGY

Note: The aim of this section is not to draw any false precise equivalences between economic resilience and ecological or biological resilience but simply to highlight the commonality of the micro-macro fallacy of composition across complex adaptive systems – a detailed comparison will hopefully be the subject of a future post. I have tried to keep the section on biological resilience as brief and simple as possible but an understanding of the genotype-phenotype distinction and neutral networks is essential to make sense of it.

Biology: Genotypic Variation and Phenotypic Robustness

In the specific context of biology, evolvability can be defined as “the capacity to generate heritable, selectable phenotypic variation. This capacity may have two components: (i) to reduce the potential lethality of mutations and (ii) to reduce the number of mutations needed to produce phenotypically novel traits” (Kirschner and Gerhart 1998). The apparent conflict between evolvability and robustness can be reconciled by distinguishing between genotypic and phenotypic robustness and evolvability. James Whitacre summarises Andreas Wagner’s work on RNA genotypes and their structure phenotypes as follows: “this conflict is unresolvable only when robustness is conferred in both the genotype and the phenotype. On the other hand, if the phenotype is robustly maintained in the presence of genetic mutations, then a number of cryptic genetic changes may be possible and their accumulation over time might expose a broad range of distinct phenotypes, e.g. by movement across a neutral network. In this way, robustness of the phenotype might actually enhance access to heritable phenotypic variation and thereby improve long-term evolvability.”
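A crude way to see Wagner’s point is with a toy model (my own sketch, not Wagner’s RNA model; the hash-based genotype–phenotype map, genotype length and phenotype count below are arbitrary stand-ins). Mutations that leave the phenotype unchanged let genotypes drift apart silently, and the drifted population can then reach many more novel phenotypes with a single further mutation than the original genotype could on its own.

```python
# Toy illustration of cryptic variation on a neutral network. Genotypes are
# bit strings; phenotype() is an arbitrary many-to-one map standing in for
# the RNA sequence-to-structure map. All parameters are illustrative only.
import random

random.seed(1)
L, K = 20, 100        # genotype length, number of possible phenotypes

def phenotype(g):
    return hash(g) % K

def neighbours(g):
    """All genotypes one point-mutation away from g."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L)]

start = tuple(random.randint(0, 1) for _ in range(L))
target = phenotype(start)

# Neutral walk: accept only mutations that leave the phenotype unchanged,
# accumulating phenotypically silent ("cryptic") genetic variation.
current, network = start, {start}
for _ in range(5000):
    candidate = random.choice(neighbours(current))
    if phenotype(candidate) == target:
        current = candidate
        network.add(current)

def accessible(genotypes):
    """Distinct phenotypes reachable by one further mutation."""
    return {phenotype(n) for g in genotypes for n in neighbours(g)}

print("novel phenotypes one mutation away from the start genotype:",
      len(accessible({start})))
print("one mutation away from the drifted population of",
      len(network), "genotypes:", len(accessible(network)))
```

The phenotype stays fixed throughout the walk – that is the robustness – while the growing genotypic spread quietly expands the set of phenotypes within reach, which is the enhanced evolvability.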

Ecology: Species-Level Variability and Functional Stability

The notion of micro-variability being consistent with and even being responsible for macro-resilience is an old one in ecology as Simon Levin and Jane Lubchenco summarise here: “That the robustness of an ensemble may rest upon the high turnover of the units that make it up is a familiar notion in community ecology. MacArthur and Wilson (1967), in their foundational work on island biogeography, contrasted the constancy and robustness of the number of species on an island with the ephemeral nature of species composition. Similarly, Tilman and colleagues (1996) found that the robustness of total yield in high-diversity assemblages arises not in spite of, but primarily because of, the high variability of individual population densities.”

The concept is also entirely consistent with the “Panarchy” thesis which views an ecosystem as a nested hierarchy of adaptive cycles: “Adaptive cycles are nested in a hierarchy across time and space which helps explain how adaptive systems can, for brief moments, generate novel recombinations that are tested during longer periods of capital accumulation and storage. These windows of experimentation open briefly, but the results do not trigger cascading instabilities of the whole because of the stabilizing nature of nested hierarchies. In essence, larger and slower components of the hierarchy provide the memory of the past and of the distant to allow recovery of smaller and faster adaptive cycles.”

Misc. Notes

1. It must be emphasised that micro-fragility is a necessary but not a sufficient condition for an evolvable and robust macro-system. The role of not just redundancy but degeneracy is critical, as is the size of the population.

2. Many commentators use resilience and robustness interchangeably. I draw a distinction primarily because my definitions of robustness and evolvability are borrowed from biology and my definition of resilience is borrowed from ecology which in my opinion defines a robust and evolvable system as a resilient one.


Written by Ashwin Parameswaran

August 30th, 2010 at 8:38 am

Heuristics and Robustness in Asset Allocation: The 1/N Rule, “Hard” Constraints and Fractional Kelly Strategies

with 9 comments

Harry Markowitz received the Nobel Prize in Economics in 1990 for his work on mean-variance optimisation, which provided the foundations for Modern Portfolio Theory (MPT). Yet as Gerd Gigerenzer notes, when it came to investing his own money, Markowitz relied on a simple heuristic, the “1/N Rule”, which simply allocates equally across all N funds under consideration. At first glance, this may seem to be an incredibly irrational strategy. Yet there is compelling empirical evidence backing even a heuristic as simple as the 1/N Rule. Gigerenzer points to a study conducted by DeMiguel, Garlappi and Uppal (DMU) which, after comparing many asset-allocation strategies including Markowitz mean-variance optimisation, concludes that “there is no single model that consistently delivers a Sharpe ratio or a CEQ return that is higher than that of the 1/N portfolio, which also has a very low turnover.”

Before exploring exactly what the DMU study and Gigerenzer’s work imply, it is worth emphasising what they do not imply. First, as both DMU and Gigerenzer stress, the purpose here is not to argue for the superiority of the 1/N Rule over all other asset-allocation strategies. The aim is just to illustrate how simple heuristics can outperform apparently complex optimisation strategies under certain circumstances. Second, the 1/N Rule does not apply when allocating across securities with excessive idiosyncratic risk, e.g. single stocks. In the DMU study, for example, the N assets are equity portfolios constructed on the basis of industry classification, countries, firm characteristics etc.

So in what circumstances does the 1/N Rule outperform? Gigerenzer provides a few answers here, as do DMU in the above-mentioned study, but in my opinion they all come down to “the predictive uncertainty of the problem”. When faced with significant irreducible uncertainty, the robustness of an approach matters more for its future performance than its optimality. As Gigerenzer notes, this is not about computational intractability – indeed, a more uncertain environment requires a simpler approach, not a more complex one. In his words: “The optimization models performed better than the simple heuristic in data fitting but worse in predicting the future.”
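Gigerenzer’s in-sample/out-of-sample point can be illustrated with a small Monte Carlo sketch (my own, not the DMU methodology; the return parameters, estimation window and the crude weight normalisation are all illustrative assumptions). When assets are statistically similar, a plug-in mean-variance optimiser fits noise in the estimation window and tends to deliver a lower out-of-sample Sharpe ratio than the 1/N portfolio.

```python
# Monte Carlo sketch: out-of-sample Sharpe of plug-in mean-variance weights
# vs. the 1/N rule when assets share the same true mean and volatility.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_in, n_out, n_trials = 10, 120, 120, 500     # months of data
true_mu = np.full(n_assets, 0.05 / 12)                   # identical true means
true_cov = np.full((n_assets, n_assets), 0.5) * (0.15**2 / 12)
np.fill_diagonal(true_cov, 0.15**2 / 12)                 # 15% vol, 0.5 correlation

def sharpe(weights, returns):
    p = returns @ weights
    return p.mean() / p.std() * np.sqrt(12)              # annualised

mv_scores, ew_scores = [], []
for _ in range(n_trials):
    in_sample = rng.multivariate_normal(true_mu, true_cov, n_in)
    out_sample = rng.multivariate_normal(true_mu, true_cov, n_out)
    # Plug-in mean-variance (tangency) direction from estimated moments.
    w_mv = np.linalg.solve(np.cov(in_sample.T), in_sample.mean(axis=0))
    w_mv /= np.abs(w_mv).sum()                           # crude normalisation
    w_ew = np.full(n_assets, 1 / n_assets)               # the 1/N rule
    mv_scores.append(sharpe(w_mv, out_sample))
    ew_scores.append(sharpe(w_ew, out_sample))

print("mean out-of-sample Sharpe, mean-variance:", round(np.mean(mv_scores), 3))
print("mean out-of-sample Sharpe, 1/N:          ", round(np.mean(ew_scores), 3))
```

In a world where true expected returns differed substantially and were estimable, the ranking could reverse – which is exactly the dependence on predictive uncertainty that the post is pointing at.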

Again, it is worth reiterating that neither study implies that we should abandon all attempts at asset allocation – the DMU study essentially evaluates the 1/N Rule and all other strategies based on their risk-adjusted returns as defined under MPT, i.e. by their Sharpe ratio. Given that most active asset management implies a certain absence of faith in the canonical assumptions underlying MPT, some strategies could outperform if evaluated differently. Nevertheless, the fundamental conclusion regarding the importance of a robust approach holds, and robustness in asset allocation can be achieved in other ways. For example, when allocating across 20 asset categories, any preferred asset-allocation algorithm could be used with a constraint that the maximum allocation to any category cannot exceed 10%. Such “hard limits” are commonly used by fund managers and, although they may have no justifying rationale under MPT, this does not mean that they are “irrational”.
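As a sketch of how such a hard limit might be applied mechanically (my own illustration; the Dirichlet-generated “preferred” allocation and the 10% cap are assumptions mirroring the example above), one can rescale any preferred allocation so that no category exceeds the cap while the portfolio stays fully invested:

```python
# A minimal sketch of a "hard limit": rescale a preferred allocation so that
# w_i = min(c * p_i, cap) sums to 1, i.e. no category exceeds the cap while
# the portfolio stays fully invested. The preferred weights are illustrative.
import numpy as np

def one_over_n(n_assets: int) -> np.ndarray:
    """The 1/N rule: equal weights across all N funds or categories."""
    return np.full(n_assets, 1.0 / n_assets)

def capped_allocation(preferred, cap: float = 0.10) -> np.ndarray:
    """Cap any single weight at `cap` by scaling the uncapped weights up.

    Assumes strictly positive preferred weights and cap * N >= 1 (otherwise a
    fully invested portfolio that respects the cap is impossible)."""
    p = np.asarray(preferred, dtype=float)
    p = p / p.sum()
    if cap * len(p) < 1.0:
        raise ValueError("cap too tight for a fully invested portfolio")
    lo, hi = 1.0, cap / p.min()          # capped sum is <= 1 at lo, >= 1 at hi
    for _ in range(100):                 # bisect on the scale factor c
        c = 0.5 * (lo + hi)
        if np.minimum(c * p, cap).sum() < 1.0:
            lo = c
        else:
            hi = c
    w = np.minimum(hi * p, cap)
    return w / w.sum()

rng = np.random.default_rng(0)
preferred = rng.dirichlet(np.ones(20) * 0.5)     # a lumpy hypothetical allocation
print(one_over_n(20)[:5])                        # 5% in each of 20 categories
print(capped_allocation(preferred).max())        # no category above 10%
```

The cap trades fidelity to the preferred view for robustness against the possibility that the view is wrong – the same logic that motivates the fractional Kelly sizing discussed below.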

The need to favour robustness over optimisation when faced with uncertainty is also one of the reasons why the Kelly Criterion is so often implemented in practice as a “Fractional Kelly” strategy. The Kelly Criterion is used to determine the optimal size of sequential bets/investments that maximises the expected growth rate of the portfolio. It depends crucially upon the estimate of the “edge” that the trader possesses. In an uncertain environment, this estimate is less reliable and, as Ed Thorp explains here, the edge will most likely be overestimated. In Ed Thorp’s words: “Estimates….in the stock market have many uncertainties and, in cases of forecast excess return, are more likely to be too high than too low. The tendency is to regress towards the mean….The economic situation can change for companies, industries, or the economy as a whole. Systems that worked may be partly or entirely based on data mining….Systems that do work attract capital, which tends to push exceptional [edge] down towards average values.”
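A worked sketch for a simple repeated even-money bet makes the point (the probabilities and the size of the overestimated edge are illustrative assumptions, not Thorp’s numbers). The Kelly fraction for a bet paying b-to-1 with win probability p is f* = (bp − q)/b; sizing off an optimistic estimate of p can push the true expected log growth negative, whereas a half-Kelly bet stays positive:

```python
# Worked sketch of the Kelly criterion and a "fractional Kelly" variant for a
# repeated even-money bet, with an overestimated edge. Numbers are illustrative.
from math import log

def kelly_fraction(p_win: float, odds: float) -> float:
    """Optimal bet fraction f* = (b*p - q)/b for a bet paying `odds` to 1."""
    return (odds * p_win - (1.0 - p_win)) / odds

def expected_log_growth(f: float, p_win: float, odds: float) -> float:
    """Per-bet expected log growth of wealth when betting a fraction f."""
    return p_win * log(1.0 + f * odds) + (1.0 - p_win) * log(1.0 - f)

estimated_p, true_p, odds = 0.55, 0.52, 1.0   # trader overestimates the edge
f_full = kelly_fraction(estimated_p, odds)    # sized off the optimistic estimate
f_half = 0.5 * f_full                         # fractional (half) Kelly

for label, f in [("full Kelly", f_full), ("half Kelly", f_half)]:
    print(label, round(f, 3),
          "growth under true odds:", round(expected_log_growth(f, true_p, odds), 5))
```

With the edge overestimated by just three percentage points, full Kelly turns a winning game into a losing one while the fractional bet keeps growth positive – robustness purchased at the cost of theoretical optimality.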


Written by Ashwin Parameswaran

July 8th, 2010 at 5:31 am