macroresilience

resilience, not stability

Archive for December, 2011

People Make Poor Monitors for Computers


In the early hours of June 1st 2009, Air France Flight 447 crashed into the Atlantic Ocean. Until the black boxes of AF447 were recovered in April 2011, the exact circumstances of the crash remained a mystery. The most widely accepted explanation for the disaster attributes a large part of the blame to human error in the face of a partial but not fatal systems failure. Yet a small but vocal faction blames the disaster, and others like it, on the increasingly automated nature of modern passenger airplanes.

This debate bears an uncanny resemblance to the debate over the causes of the financial crisis – many commentators blame the persistently irrational nature of human judgement for the recurrence of financial crises. Others, such as Amar Bhide, blame the unwise deference to imperfect financial models over human judgement. In my opinion, both perspectives miss the true dynamic. These disasters are not driven by human error or systems error alone but by fatal flaws in the interaction between human intelligence and complex, near fully-automated systems.

In a recent article drawing upon the black box transcripts, Jeff Wise attributes the crash primarily to a “simple but persistent mistake on the part of one of the pilots”. According to Wise, the co-pilot reacted to the persistent stall warning by “pulling back on the stick, the exact opposite of what he must do to recover from the stall”.

But there are many hints that the story is nowhere near as simple. As Peter Garrison notes:

every pilot knows that to recover from a stall you must get the nose down. But because a fully developed stall in a large transport is considered highly unlikely, and because in IFR air traffic vertical separation, and therefore control of altitude, is important, transport pilots have not been trained to put the nose down when they hear the stall warning — which heralds, after all, not a fully developed stall, but merely an approaching one. Instead, they have been trained to increase power and to “fly out of the stall” without losing altitude. Perhaps that is what the pilot flying AF447 intended. But the airplane was already too deeply stalled, and at too high an altitude, to recover with power alone.

The patterns of the AF447 disaster are not unique. As Chris Sorensen observes, over 50 commercial aircraft have crashed in “loss-of-control” accidents in the last five years, a trend for which there is no shortage of explanations:

Some argue that the sheer complexity of modern flight systems, though designed to improve safety and reliability, can overwhelm even the most experienced pilots when something actually goes wrong. Others say an increasing reliance on automated flight may be dulling pilots’ sense of flying a plane, leaving them ill-equipped to take over in an emergency. Still others question whether pilot-training programs have lagged behind the industry’s rapid technological advances.

But simply invoking terms such as “automation addiction” or blaming disasters on irrational behaviour during times of intense stress does not get at the crux of the issue.

People Make Poor Monitors for Computers

Airplane automation systems are not the first to discover the truth in the comment made by David Jenkins that “computers make great monitors for people, but people make poor monitors for computers.” As James Reason observes in his seminal book ‘Human Error’:

We have thus traced a progression from where the human is the prime mover and the computer the slave to one in which the roles are very largely reversed. For most of the time, the operator’s task is reduced to that of monitoring the system to ensure that it continues to function within normal limits. The advantages of such a system are obvious; the operator’s workload is substantially reduced, and the [system] performs tasks that the human can specify but cannot actually do. However, the main reason for the human operator’s continued presence is to use his still unique powers of knowledge-based reasoning to cope with system emergencies. And this is a task peculiarly ill-suited to the particular strengths and weaknesses of human cognition…..

most operator errors arise from a mismatch between the properties of the system as a whole and the characteristics of human information processing. System designers have unwittingly created a work situation in which many of the normally adaptive characteristics of human cognition (its natural heuristics and biases) are transformed into dangerous liabilities.

As Jeff Wise notes, it is impossible to stall an Airbus in most conditions. AF447, however, went into a state known as ‘alternate law’, which most pilots have never experienced and in which the airplane can be stalled:

“You can’t stall the airplane in normal law,” says Godfrey Camilleri, a flight instructor who teaches Airbus 330 systems to US Airways pilots….But once the computer lost its airspeed data, it disconnected the autopilot and switched from normal law to “alternate law,” a regime with far fewer restrictions on what a pilot can do. “Once you’re in alternate law, you can stall the airplane,” Camilleri says….It’s quite possible that Bonin had never flown an airplane in alternate law, or understood its lack of restrictions. According to Camilleri, not one of US Airway’s 17 Airbus 330s has ever been in alternate law. Therefore, Bonin may have assumed that the stall warning was spurious because he didn’t realize that the plane could remove its own restrictions against stalling and, indeed, had done so.

This inability of the human operator to fill in the gaps in a near-fully automated system was identified by Lisanne Bainbridge as one of the ironies of automation which James Reason summarised:

the same designer who seeks to eliminate human beings still leaves the operator “to do the tasks which the designer cannot think how to automate” (Bainbridge, 1987, p. 272). In an automated plant, operators are required to monitor that the automatic system is functioning properly. But it is well known that even highly motivated operators cannot maintain effective vigilance for anything more than quite short periods; thus, they are demonstrably ill-suited to carry out this residual task of monitoring for rare, abnormal events. In order to aid them, designers need to provide automatic alarm signals. But who decides when these automatic alarms have failed or been switched off?

As Robert Charette notes, the same is true for airplane automation:

operators are increasingly left out of the loop, at least until something unexpected happens. Then the operators need to get involved quickly and flawlessly, says Raja Parasuraman, professor of psychology at George Mason University in Fairfax, Va., who has been studying the issue of increasingly reliable automation and how that affects human performance, and therefore overall system performance. ”There will always be a set of circumstances that was not expected, that the automation either was not designed to handle or other things that just cannot be predicted,” explains Parasuraman. So as system reliability approaches—but doesn’t quite reach—100 percent, ”the more difficult it is to detect the error and recover from it,” he says…..In many ways, operators are being asked to be omniscient systems administrators who are able to jump into the middle of a situation that a complex automated system can’t or wasn’t designed to handle, quickly diagnose the problem, and then find a satisfactory and safe solution.

Stored Routines Are Not Effective in Rare Situations

As James Reason puts it:

the main reason why humans are retained in systems that are primarily controlled by intelligent computers is to handle ‘non-design’ emergencies. In short, operators are there because system designers cannot foresee all possible scenarios of failure and hence are not able to provide automatic safety devices for every contingency. In addition to their cosmetic value, human beings owe their inclusion in hazardous systems to their unique, knowledge-based ability to carry out ‘on-line’ problem solving in novel situations. Ironically, and notwithstanding the Apollo 13 astronauts and others demonstrating inspired improvisation, they are not especially good at it; at least not in the conditions that usually prevail during systems emergencies. One reason for this is that stressed human beings are strongly disposed to employ the effortless, parallel, preprogrammed operations of highly specialised, low-level processors and their associated heuristics. These stored routines are shaped by personal history and reflect the recurring patterns of past experience……

Why do we have operators in complex systems? To cope with emergencies. What will they actually use to deal with these problems? Stored routines based on previous interactions with a specific environment. What, for the most part, is their experience within the control room? Monitoring and occasionally tweaking the plant while it performs within safe operating limits. So how can they perform adequately when they are called upon to reenter the control loop? The evidence is that this task has become so alien and the system so complex that, on a significant number of occasions, they perform badly.

Wise again identifies this problem in the case of AF447:

While Bonin’s behavior is irrational, it is not inexplicable. Intense psychological stress tends to shut down the part of the brain responsible for innovative, creative thought. Instead, we tend to revert to the familiar and the well-rehearsed. Though pilots are required to practice hand-flying their aircraft during all phases of flight as part of recurrent training, in their daily routine they do most of their hand-flying at low altitude—while taking off, landing, and maneuvering. It’s not surprising, then, that amid the frightening disorientation of the thunderstorm, Bonin reverted to flying the plane as if it had been close to the ground, even though this response was totally ill-suited to the situation.

Deskilling From Automation

As James Reason observes:

Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills. One of the consequences of automation, therefore, is that operators become de-skilled in precisely those activities that justify their marginalised existence. But when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions. Duncan (1987, p. 266) makes the same point: “The more reliable the plant, the less opportunity there will be for the operator to practise direct intervention, and the more difficult will be the demands of the remaining tasks requiring operator intervention.”

Opacity and Too Much Information of Uncertain Reliability

Wise captures this problem and its interaction with a human who has very little experience in managing the crisis scenario:

Over the decades, airliners have been built with increasingly automated flight-control functions. These have the potential to remove a great deal of uncertainty and danger from aviation. But they also remove important information from the attention of the flight crew. While the airplane’s avionics track crucial parameters such as location, speed, and heading, the human beings can pay attention to something else. But when trouble suddenly springs up and the computer decides that it can no longer cope—on a dark night, perhaps, in turbulence, far from land—the humans might find themselves with a very incomplete notion of what’s going on. They’ll wonder: What instruments are reliable, and which can’t be trusted? What’s the most pressing threat? What’s going on? Unfortunately, the vast majority of pilots will have little experience in finding the answers.

A similar scenario occurred in the case of the Qantas-owned A380 which took off from Singapore in November 2010:

Shortly after takeoff from Singapore, one of the hulking A380’s four engines exploded and sent pieces of the engine cowling raining down on an Indonesian island. The blast also damaged several of the A380’s key systems, causing the unsuspecting flight crew to be bombarded with no less than 54 different warnings and error messages—so many that co-pilot Matt Hicks later said that, at one point, he held his thumb over a button that muted the cascade of audible alarms, which threatened to distract Capt. Richard De Crespigny and the rest of the feverishly working flight crew. Luckily for passengers, Qantas Flight 32 had an extra two pilots in the cockpit as part of a training exercise, all of whom pitched in to complete the nearly 60 checklists required to troubleshoot the various systems. The wounded plane limped back to Singapore Changi Airport, where it made an emergency landing.

Again James Reason captures the essence of the problem:

One of the consequences of the developments outlined above is that complex, tightly-coupled and highly defended systems have become increasingly opaque to the people who manage, maintain and operate them. This opacity has two aspects: not knowing what is happening and not understanding what the system can do. As we have seen, automation has wrought a fundamental change in the roles people play within certain high-risk technologies. Instead of having ‘hands on’ contact with the process, people have been promoted “to higher-level supervisory tasks and to long-term maintenance and planning tasks” (Rasmussen, 1988). In all cases, these are far removed from the immediate processing. What direct information they have is filtered through the computer-based interface. And, as many accidents have demonstrated, they often cannot find what they need to know while, at the same time, being deluged with information they do not want nor know how to interpret.

Absence of Intuitive Feedback

Among others, Hubert and Stuart Dreyfus have shown that human expertise relies on an intuitive and tacit understanding of the situation rather than a rule-bound and algorithmic understanding. The development of intuitive expertise depends upon the availability of clear and intuitive feedback which complex, automated systems are often unable to provide.

In AF447, when the co-pilot did push forward on the stick (the “correct” response), the behaviour of the stall warning was exactly the opposite of what he would have intuitively expected:

At one point the pilot briefly pushed the stick forward. Then, in a grotesque miscue unforeseen by the designers of the fly-by-wire software, the stall warning, which had been silenced, as designed, by very low indicated airspeed, came to life. The pilot, probably inferring that whatever he had just done must have been wrong, returned the stick to its climb position and kept it there for the remainder of the flight.

Absence of feedback prevents effective learning but the wrong feedback can have catastrophic consequences.

The Fallacy of Defence in Depth

In complex automated systems, the redundancies and safeguards built into the system also contribute to its opacity. By protecting system performance against single faults, redundancies allow the latent buildup of multiple faults. Jens Rasmussen called this ‘the fallacy of defence in depth’ which James Reason elaborates upon:

the system very often does not respond actively to single faults. Consequently, many errors and faults made by the staff and maintenance personnel do not directly reveal themselves by functional response from the system. Humans can operate with an extremely high level of reliability in a dynamic environment when slips and mistakes have immediately visible effects and can be corrected……Violation of safety preconditions during work on the system will probably not result in an immediate functional response, and latent effects of erroneous acts can therefore be left in the system. When such errors are allowed to be present in a system over a longer period of time, the probability of coincidence of the multiple faults necessary for release of an accident is drastically increased. Analyses of major accidents typically show that the basic safety of the system has eroded due to latent errors.

This is exactly what occurred on Malaysia Airlines Flight 124 in August 2005:

The fault-tolerant ADIRU was designed to operate with a failed accelerometer (it has six). The redundant design of the ADIRU also meant that it wasn’t mandatory to replace the unit when an accelerometer failed. However, when the second accelerometer failed, a latent software anomaly allowed inputs from the first faulty accelerometer to be used, resulting in the erroneous feed of acceleration information into the flight control systems. The anomaly, which lay hidden for a decade, wasn’t found in testing because the ADIRU’s designers had never considered that such an event might occur.

Again, defence-in-depth systems are uniquely unsuited to human expertise, as Gary Klein notes:

In a massively defended system, if an accident sneaks through all the defenses, the operators will find it far more difficult to diagnose and correct it. That is because they must deal with all of the defenses, along with the accident itself…..A unit designed to reduce small errors helped to create a large one.
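The latent-fault dynamic that Rasmussen and Reason describe lends itself to a toy illustration. The sketch below is my own construction in Python, with arbitrary failure rates, not something drawn from the sources quoted above: a triple-redundant channel whose single failures produce no visible response accumulates latent faults until they coincide, whereas the same channel with immediately visible and immediately repaired failures almost never loses all three channels at once.

```python
import random

def simulate(masked: bool, p_fail=0.02, channels=3, periods=300, runs=5_000):
    """Fraction of runs in which every redundant channel fails simultaneously.

    masked=True  : single failures produce no functional response, so they are
                   never repaired and latent faults accumulate (defence in depth).
    masked=False : any failure is immediately visible and repaired, so an accident
                   requires all channels to fail within a single period.
    """
    accidents = 0
    for _ in range(runs):
        failed = [False] * channels
        for _ in range(periods):
            failed = [f or (random.random() < p_fail) for f in failed]
            if all(failed):
                accidents += 1
                break
            if not masked:
                failed = [False] * channels   # visible faults get fixed at once
    return accidents / runs

if __name__ == "__main__":
    random.seed(0)
    print("latent (masked) faults   :", simulate(masked=True))
    print("visible, repaired faults :", simulate(masked=False))
```

The absolute numbers are meaningless; the point is the gap of several orders of magnitude between the two regimes, which is exactly Rasmussen’s “drastically increased” probability of coincident faults.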

Two Approaches to Airplane Automation: Airbus and Boeing

Although both Airbus and Boeing have adopted fly-by-wire technology, there are fundamental differences in their respective approaches. Whereas Boeing’s system enforces soft limits that can be overridden at the discretion of the pilot, Airbus’ fly-by-wire system has built-in hard limits that cannot be completely overridden at the pilot’s discretion.

As Simon Calder notes, pilots have raised concerns in the past about Airbus’ systems being “overly sophisticated” as opposed to Boeing’s “rudimentary but robust” system. But this does not imply that the Airbus approach is inferior. It is instructive to analyse Airbus’ response to pilot demands for a manual override switch that allows the pilot to take complete control:

If we have a button, then the pilot has to be trained on how to use the button, and there are no supporting data on which to base procedures or training…..The hard control limits in the Airbus design provide a consistent “feel” for the aircraft, from the 120-passenger A319 to the 350-passenger A340. That consistency itself builds proficiency and confidence……You don’t need engineering test pilot skills to fly this airplane.

David Evans captures the essence of this philosophy as aimed at minimising the “potential for human error, to keep average pilots within the limits of their average training and skills”.

It is easy to criticise Airbus’ approach, but the hard constraints clearly demand less from the pilot. In the hands of an expert pilot, Boeing’s system may outperform. But if the pilot is a novice, Airbus’ system almost certainly delivers superior results. Moreover, as I discussed earlier in the post, the transition to an almost fully automated system by itself reduces the probability that the human operator can achieve intuitive expertise. In other words, the transition to near-autonomous systems creates a pool of human operators who appear to commit frequent “irrational” errors, and the transition is therefore almost impossible to reverse.

 *          *         *

People Make Poor Monitors for Some Financial Models

In an earlier post, I analysed Amar Bhide’s argument that a significant causal agent in the financial crisis was the replacement of discretion with models in many areas of finance – for example, banks’ mortgage lending decisions. In his excellent book, ‘A Call for Judgement’, he expands on this argument and, amongst other technologies, lays some of the blame for this over-mechanisation of finance on the ubiquitous Black-Scholes-Merton (BSM) formula. Although I agree with much of his book, this thesis is too simplistic.

There is no doubt that BSM has many limitations – amongst the most severe being the assumption of continuous asset price movements, a known and flat volatility surface, and an asset price distribution free of fat tails. But the systemic impact of all these limitations is grossly overstated:

  • BSM and similar models have never been used as “valuation” methods on a large scale in derivatives markets but as tools that back out an implied volatility and generate useful hedge ratios by taking market prices for options as given (a minimal sketch of this workflow follows the list). In other words, volatility plays the role of the “wrong number in the wrong formula to get the right price”.
  • When “simple” BSM-like models are used to price more exotic derivatives, they have a modest role to play. As Emanuel Derman puts it, practitioners use models as “interpolating formulas that take you from known prices of liquid securities to the unknown values of illiquid securities”.
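To make the first bullet above concrete, here is a minimal sketch of that workflow in Python. The option quotes and parameters are invented for illustration; the point is only the direction of the calculation: market prices go in, implied volatilities come out, and those volatilities are then used, in Derman’s sense, as an interpolation grid for quoting a less liquid strike.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Back out the volatility that reproduces a quoted option price (bisection)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    S, r, T = 100.0, 0.02, 0.5                       # spot, rate, maturity (illustrative)
    quotes = {90.0: 12.80, 100.0: 6.10, 110.0: 2.40} # hypothetical market prices by strike
    smile = {K: implied_vol(p, S, K, T, r) for K, p in quotes.items()}
    for K, v in smile.items():
        print(f"strike {K:6.1f}  implied vol {v:.4f}")

    # Derman's "interpolating formula" in action: to quote an illiquid strike,
    # interpolate in implied vol between liquid strikes and push the result
    # back through the same formula.
    K_new = 105.0
    v_new = smile[100.0] + (K_new - 100.0) / 10.0 * (smile[110.0] - smile[100.0])
    print(f"strike {K_new:6.1f}  interpolated vol {v_new:.4f}  "
          f"price {bsm_call(S, K_new, T, r, v_new):.2f}")
```

Nothing in this workflow treats the BSM distributional assumptions as a description of reality, which is why their flaws, real as they are, carry less systemic weight than Bhide’s argument implies.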

Nevertheless, this does not imply that financial modelling choices have no role to play in determining system resilience. But the role is more subtle and has less to do with the imperfections of the models themselves than with how complex models used to price complex products are actually used by human traders.

Since the discovery of the volatility smile, traders have known that the interpolation process used to price exotic options requires something more than a simple BSM model. One would assume that traders would want to use a model that is as accurate and comprehensive as possible. But this has rarely been the case. Supposedly inferior local volatility models still flourish, and even in some of the most complex domains of exotic derivatives, models are still chosen based on their intuitive similarity to a BSM-like approach in which the free parameters can be thought of as volatilities or correlations, e.g. the Libor Market Model.

The choice of intuitive understanding over model accuracy is not unwarranted. As all market practitioners know, there is no such thing as a perfect derivatives pricing model. Paul Wilmott hit the nail on the head when he observed that “the many improvements on Black-Scholes are rarely improvements, the best that can be said for many of them is that they are just better at hiding their faults. Black-Scholes also has its faults, but at least you can see them”.

However, as markets have evolved, maintaining this balance between intuitive understanding and accuracy has become increasingly difficult:

  • Intuitive yet imperfect models require experienced and expert traders. Scaling up trading volumes of exotic derivatives, however, requires that pricing and trading systems be pushed out to novice traders as well as to non-specialists such as salespeople.
  • With the increased complexity of derivative products, preserving an intuitive yet sufficiently accurate model becomes an almost impossible task.
  • Product complexity combined with the inevitable discretion available to traders when they use simpler models presents significant control challenges and an increased potential for fraud.

In this manner, the same paradoxical evolution that has been observed in nuclear plants and airplane automation is now being experienced in finance. The need to scale up and accommodate complex products necessitates the introduction of complex, unintuitive models to which human intuitive expertise can add little value. In such a system, a novice is often as good as a more experienced operator. The ability of these models to tackle most scenarios on ‘auto-pilot’ results in a deskilled and novice-heavy human component in the system which is ill-equipped to tackle the inevitable occasion when the model fails. The failure is inevitably taken as evidence of human error, whereupon the system is made even more automated and more safeguards and redundancies are built into it. This exacerbates the absence of feedback when small errors occur, the buildup of latent errors again increases, and failures become even more catastrophic.

 *          *         *

My focus on airplane automation and financial models is simply illustrative. There are ample signs of this incompatibility between human monitors and near-fully automated systems in other domains as well. For example, Andrew Hill observes:

In developed economies, Lynda Gratton writes in her new book The Shift, “when the tasks are more complex and require innovation or problem solving, substitution [by machines or computers] has not taken place”. This creates a paradox: far from making manufacturers easier to manage, automation can make managers’ jobs more complicated. As companies assign more tasks to machines, they need people who are better at overseeing the more sophisticated workforce and doing the jobs that machines cannot….

The insight that greater process efficiency adds to the pressure on managers is not new. Even Frederick Winslow Taylor – these days more often caricatured as a dinosaur for his time-and-motion studies – pointed out in his century-old The Principles of Scientific Management that imposing a more mechanistic regime on workers would oblige managers to take on “other types of duties which involve new and heavy burdens”…..

There is no doubt Foxconn and its peers will be able to automate their labour-intensive processes. They are already doing so. The big question is how easily they will find and develop managers able to oversee the highly skilled workforce that will march with their robot armies.

This process of integrating human intelligence with artificial intelligence is simply a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is important in that, for the first time, many of these algorithmic and automated systems can essentially run themselves, with human beings performing the role of supervisors who only need to intervene in extraordinary circumstances. Although it seems logical that the same process of increased productivity that occurred during the modern ‘Control Revolution’ will continue during the creation of the “vast, automatic and invisible” ‘second economy’, the incompatibility of human cognition with near-fully automated systems suggests that it may only do so by taking on an increased risk of rare but catastrophic failure.


Written by Ashwin Parameswaran

December 29th, 2011 at 11:58 pm

The Pathology of Stabilisation in Complex Adaptive Systems


The core insight of the resilience-stability tradeoff is that stability leads to loss of resilience. Therefore stabilisation too leads to increased systemic fragility. But there is a lot more to it. In comparing economic crises to forest fires and river floods, I have highlighted the common patterns in the process of system fragilisation, which eventually leaves the system “manager” in a situation where there are no good options left.

Drawing upon the work of Mancur Olson, I have explored how the buildup of special interests means that stability is self-reinforcing. Once rent-seeking has achieved sufficient scale, “distributional coalitions have the incentive and…the power to prevent changes that would deprive them of their enlarged share of the social output”. But what if we “solve” the Olsonian problem? Would that mitigate the problem of increased stabilisation and fragility? In this post, I will argue that the cycle of fragility and collapse has much deeper roots than any particular form of democracy.

In this analysis, I am going to move away from ecological analogies and instead turn to an example from modern medicine. In particular, I am going to compare the experience and history of psychiatric medication in the second half of the twentieth century to some of the issues we have already looked at in macroeconomic and ecological stabilisation. I hope to convince you that the uncanny similarities in the patterns observed in stabilised systems across such diverse domains are not a coincidence. In fact, the human body provides us with a much closer parallel to economic systems than even ecological systems do with respect to the final stages of stabilisation. Most ecological systems collapse sooner simply because the resources that will be spent, in an escalating fashion, to preserve their stability are much more limited. For example, there are limits to the resources that will be deployed to prevent a forest fire, no matter how catastrophic. On the other hand, the resources that will be deployed to prevent collapse of any system that is integral to human beings are much larger.

Even by the standards of this blog, this will be a controversial article. In my discussion of psychiatric medicine I am relying primarily on Robert Whitaker’s excellent but controversial and much-disputed book ‘Anatomy of an Epidemic’. Nevertheless, I want to emphasise that my ultimate conclusions are much less incendiary than those of Whitaker. In the same way that I want to move beyond an explanation of the economic crisis that relies on evil bankers, crony capitalists and self-interested technocrats, I am trying to move beyond an explanation that blames evil pharma and misguided doctors for the crisis in mental health. I am not trying to imply that fraud and rent-seeking do not have a role to play. I am arguing that even if we eliminated them, the aim of a resilient economic and social system would not be realised.

THE PUZZLE

The puzzle of the history of macroeconomic stabilisation post-WW2 can be summarised as follows. Clearly, every separate act of macroeconomic stabilisation works. Most monetary and fiscal interventions result in a rise in the financial markets, NGDP expectations and economic performance in the short run. Yet,

  • we are in the middle of a ‘great stagnation’ and have been for a few decades.
  • the frequency of crises seems to have risen dramatically in the last fifty years culminating in the environment since 2008 which is best described as a perpetual crisis.
  • each recovery seems to be weaker than the previous one and requires an increased injection of stimulus to achieve results that were easily achieved by a simple rate cut not that long ago.

The post-WW2 history of mental health presents a similar puzzle, which Whitaker summarises as follows:

The puzzle can now be precisely summed up. On the one hand, we know that many people are helped by psychiatric medications. We know that many people stabilize well on them and will personally attest to how the drugs have helped them lead normal lives. Furthermore, as Satcher noted in his 1999 report, the scientific literature does document that psychiatric medications, at least over the short term, are “effective.” Psychiatrists and other physicians who prescribe the drugs will attest to that fact, and many parents of children taking psychiatric drugs will swear by the drugs as well. All of that makes for a powerful consensus: Psychiatric drugs work and help people lead relatively normal lives. And yet, at the same time, we are stuck with these disturbing facts: The number of disabled mentally ill has risen dramatically since 1955, and during the past two decades, a period when the prescribing of psychiatric medications has exploded, the number of adults and children disabled by mental illness has risen at a mind-boggling rate.

Whitaker then asks the obvious but heretical question – “Could our drug-based paradigm of care, in some unforeseen way, be fueling this modern-day plague?” and answers the question in the affirmative. But what are the precise mechanisms and patterns that underlie this deterioration?

Adaptive Response to Intervention and Drug Dependence

The fundamental reason why interventions fail in complex adaptive systems is the adaptive response triggered by the intervention, which subverts the intervention’s aim. Moreover, once the system has been artificially stabilised and system agents have adapted to this new stability, the system cannot cope with any abrupt withdrawal of the stabilising force. For example, Whitaker notes that

Neuroleptics put a brake on dopamine transmission, and in response the brain puts down the dopamine accelerator (the extra D2 receptors). If the drug is abruptly withdrawn, the brake on dopamine is suddenly released while the accelerator is still pressed to the floor. The system is now wildly out of balance, and just as a car might careen out of control, so too the dopaminergic pathways in the brain……In short, initial exposure to neuroleptics put patients onto a path where they would likely need the drugs for life.

Whitaker makes the same observation for benzodiazepines and antidepressants:

benzodiazepines….work by perturbing a neurotransmitter system, and in response, the brain undergoes compensatory adaptations, and as a result of this change, the person becomes vulnerable to relapse upon drug withdrawal. That difficulty in turn may lead some to take the drugs indefinitely.

(antidepressants) perturb neurotransmitter systems in the brain. This leads to compensatory processes that oppose the initial acute effects of a drug…. When drug treatment ends, these processes may operate unopposed, resulting in appearance of withdrawal symptoms and increased vulnerability to relapse.

Similarly, when a central bank protects incumbent banks against liquidity risk, the banks choose to hold progressively more illiquid portfolios. When central banks provide incumbent banks with cheap funding in times of crisis to prevent failure and creditor losses, the banks choose to take on more leverage. This is similar to what John Adams has termed the ‘risk thermostat’ – the system readjusts to get back to its preferred risk profile. The protection once provided is almost impossible to withdraw without causing systemic havoc as agents adapt to the new stabilised reality and lose the ability to survive in an unstabilised environment.

Of course, in economic systems when agents actively intend to arbitrage such commitments by central banks, it is simply a form of moral hazard. But such an adaptation can easily occur via the natural selective forces at work in an economy – those who fail to take advantage of the Greenspan/Bernanke put simply go bust or get fired. In our brain the adaptation simply reflects homeostatic mechanisms selected for by the process of evolution.
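Adams’s thermostat can be written down in a few lines. The sketch below is my own toy model with made-up parameters, not anything taken from Adams or Whitaker: an agent levers up until its perceived probability of ruin is back at a fixed target, so a backstop that is believed to absorb part of any loss translates directly into higher leverage, and withdrawing the backstop leaves the agent carrying far more risk than it ever intended to hold.

```python
from statistics import NormalDist

N = NormalDist()          # standard normal
mu, sigma = 0.02, 0.10    # annual asset return: mean and volatility (illustrative)
p_target = 0.001          # the risk the agent is "comfortable" with: 0.1% ruin probability

def leverage_for(p_ruin, backstop=0.0):
    """Leverage at which perceived ruin probability equals p_ruin.

    Ruin = asset loss exceeds the equity cushion 1/L plus whatever loss
    the agent believes the backstop will absorb."""
    threshold = mu + sigma * N.inv_cdf(p_ruin)     # return below which ruin occurs
    cushion = -threshold - backstop                # required equity cushion 1/L
    return 1.0 / cushion

def ruin_prob(leverage, backstop=0.0):
    """Actual ruin probability at a given leverage and backstop."""
    return N.cdf((-(1.0 / leverage) - backstop - mu) / sigma)

L_unprotected = leverage_for(p_target)
L_protected = leverage_for(p_target, backstop=0.15)   # CB believed to absorb first 15% of losses

print(f"leverage with no backstop          : {L_unprotected:.1f}x")
print(f"leverage once backstop is believed : {L_protected:.1f}x")
print(f"ruin risk if backstop is withdrawn : {ruin_prob(L_protected):.1%} "
      f"(vs the {p_target:.1%} the agent intended)")
```

The same arithmetic runs in reverse for drug withdrawal: once the adaptation has taken place, removing the stabiliser does not restore the original state; it exposes a system that is now calibrated to the stabiliser’s presence.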

Transformation into a Pathological State, Loss of Core Functionality and Deterioration of the Baseline State

I have argued in many posts that the successive cycles of Minskyian stabilisation have a role to play in the deterioration in the structural performance of the real economy which has manifested itself as ‘The Great Stagnation’. The same conclusion holds for many other complex adaptive systems and our brain is no different. Stabilisation kills much of what makes human beings creative. Innovation and creativity are fundamentally disequilibrium processes so it is no surprise that an environment of stability does not foster them. Whitaker interviews a patient on antidepressants who said: “I didn’t have mood swings after that, but instead of having a baseline of functioning normally, I was depressed. I was in a state of depression the entire time I was on the medication.”

He also notes disturbing research on the damage done to children who were treated for ADHD with Ritalin:

when researchers looked at whether Ritalin at least helped hyperactive children fare well academically, to get good grades and thus succeed as students, they found that it wasn’t so. Being able to focus intently on a math test, it turned out, didn’t translate into long-term academic achievement. This drug, Sroufe explained in 1973, enhances performance on “repetitive, routinized tasks that require sustained attention,” but “reasoning, problem solving and learning do not seem to be [positively] affected.”……Carol Whalen, a psychologist from the University of California at Irvine, noted in 1997 that “especially worrisome has been the suggestion that the unsalutary effects [of Ritalin] occur in the realm of complex, high-order cognitive functions such as flexible problem-solving or divergent thinking.”

Progressive Increase in Required Dosage

In economic systems, this steady structural deterioration means that increasing amounts of stimulus need to be applied in successive cycles of stabilisation to achieve the same levels of growth. Whitaker too identifies a similar tendency:

Over time, Chouinard and Jones noted, the dopaminergic pathways tended to become permanently dysfunctional. They became irreversibly stuck in a hyperactive state, and soon the patient’s tongue was slipping rhythmically in and out of his mouth (tardive dyskinesia) and psychotic symptoms were worsening (tardive psychosis). Doctors would then need to prescribe higher doses of antipsychotics to tamp down those tardive symptoms.

At this point, some of you may raise the following objection: so what if the new state is pathological? Maybe capitalism with its inherent instability is itself pathological. And once the safety nets of the Greenspan/Bernanke put, lender-of-last-resort programs and too-big-to-fail bailouts are put in place why would we need or want to remove them? If we simply medicate the economy ad infinitum, can we not avoid collapse ad infinitum?

This argument however is flawed.

  • The ability of economic players to reorganise to maximise the rents extracted from central banking and state commitments far exceeds the resources available to the state and the central bank. The key reason for this is the purely financial nature of the commitment. For example, if the state decided to print money and support the price of corn at twice its natural market price, then it could conceivably do so forever. Sooner or later, rent extractors would run up against natural resource limits – for example, limits on arable land. But when the state commits to support a credit-money-dominant financial system and asset prices, the economic system can and will generate financial “assets” without limit to take advantage of this commitment. The only defence that the CB and the state possess is regulation aimed at maintaining financial markets in an incomplete, underdeveloped state where economic agents do not possess the tools to game the system. Unfortunately, as Minsky and many others have documented, the pace of financial innovation over the last half-century has meant that banks and financialised corporates have all the tools they need to circumvent regulations and maximise rent extraction.
  • Even in a modern state that can print its own fiat currency, the ability to maintain financial commitments is subordinate to the need to control inflation. But doesn’t the complete absence of inflationary pressures in the current environment prove that we are nowhere close to any such limits? Not quite – as I have argued before, current macroeconomic policy is defined by an abandonment of the full employment target in order to mitigate any risk of inflation whatsoever. The inflationary risk caused by rent extraction from the stabilisation commitment is being counterbalanced by a “reserve army of labour”. The reason for giving up full employment is simple – as Minsky identified, once the economy has gone through successive cycles of stabilisation, it is prone to ‘rapid cycling’.

Rapid Cycling and Transformation of an Episodic Illness into a Chronic Illness

Minsky noted that

A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch.

In other words, an economy that attempts to achieve full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation – somewhat similar to a broken shower that only runs either too hot or too cold. The abandonment of the full employment target enables the system to postpone this point of rapid cycling.

The structural malformation of the economic system due to the application of increasing levels of stimulus to the task of stabilisation means that the economy has lost the ability to generate the endogenous growth and innovation that it could before it was so actively stabilised. The system has now been homogenised and is entirely dependent upon constant stimulus. ‘Rapid cycling’ also explains something I noted in an earlier post: the apparently schizophrenic nature of the markets, turning from risk-on to risk-off at the drop of a hat. It is the lack of diversity that causes this, as the vast majority of agents change their behaviour based on the absence or presence of stabilising interventions.

Whitaker again notes the connection between medication and rapid cycling in many instances:

As early as 1965, before lithium had made its triumphant entry into American psychiatry, German psychiatrists were puzzling over the change they were seeing in their manic-depressive patients. Patients treated with antidepressants were relapsing frequently, the drugs “transforming the illness from an episodic course with free intervals to a chronic course with continuous illness,” they wrote. The German physicians also noted that in some patients, “the drugs produced a destabilization in which, for the first time, hypomania was followed by continual cycling between hypomania and depression.”

(stimulants) cause children to cycle through arousal and dysphoric states on a daily basis. When a child takes the drug, dopamine levels in the synapse increase, and this produces an aroused state. The child may show increased energy, an intensified focus, and hyperalertness. The child may become anxious, irritable, aggressive, hostile, and unable to sleep. More extreme arousal symptoms include obsessive-compulsive and hypomanic behaviors. But when the drug exits the brain, dopamine levels in the synapse sharply drop, and this may lead to such dysphoric symptoms as fatigue, lethargy, apathy, social withdrawal, and depression. Parents regularly talk of this daily “crash.”

THE PATIENT WANTS STABILITY TOO

At this point, I seem to be arguing that stabilisation is all just a con-game designed to enrich evil bankers, evil pharma etc. But such an explanation underestimates just how deep-seated the temptation and need to stabilise really is. The most critical component that it misses out on is the fact that the “patient” in complex adaptive systems is as eager to choose stability over resilience as the doctor is.

The Short-Term vs The Long-Term

As Daniel Carlat notes, the reality is that on the whole, psychiatric drugs “work” at least in the short term. Similarly, each individual act of macroeconomic stabilisation such as a lender-of-last-resort intervention, quantitative easing or a rate cut clearly has a positive impact on the short-term performance of both asset markets and the economy.

Whitaker too acknowledges this:

Those are the dueling visions of the psychopharmacology era. If you think of the drugs as “anti-disease” agents and focus on short-term outcomes, the young lady springs into sight. If you think of the drugs as “chemical imbalancers” and focus on long-term outcomes, the old hag appears. You can see either image, depending on where you direct your gaze.

The critical point here is that, just as with forest fires and macroeconomies, initial stabilisation can be achieved easily and with very little medication. The results may even seem miraculous. But this initial period does not last. From one of many cases Whitaker quotes:

at first, “it was like a miracle,” she says. Andrew’s fears abated, he learned to tie his shoes, and his teachers praised his improved behavior. But after a few months, the drug no longer seemed to work so well, and whenever its effects wore off, there would be this “rebound effect.” Andrew would “behave like a wild man, out of control.” A doctor increased his dosage, only then it seemed that Andrew was like a “zombie,” his sense of humor reemerging only when the drug’s effects wore off. Next, Andrew needed to take clonidine in order to fall asleep at night. The drug treatment didn’t really seem to be helping, and so Ritalin gave way to other stimulants, including Adderall, Concerta, and dextroamphetamine. “It was always more drugs,” his mother says.

Medication Seen as Revealing Structural Flaws

One would think that the functional and structural deterioration that follows constant medication would cause both the patient and the doctor to reconsider the benefits of stabilisation. But this deterioration too can be interpreted in many different ways. Whitaker gives an example where the stabilised state is seen to be beneficial by revealing hitherto undiagnosed structural problems:

in 1982, Michael Strober and Gabrielle Carlson at the UCLA Neuropsychiatric Institute put a new twist into the juvenile bipolar story. Twelve of the sixty adolescents they had treated with antidepressants had turned “bipolar” over the course of three years, which—one might think—suggested that the drugs had caused the mania. Instead, Strober and Carlson reasoned that their study had shown that antidepressants could be used as a diagnostic tool. It wasn’t that antidepressants were causing some children to go manic, but rather the drugs were unmasking bipolar illness, as only children with the disease would suffer this reaction to an anti-depressant. “Our data imply that biologic differences between latent depressive subtypes are already present and detectable during the period of early adolescence, and that pharmacologic challenge can serve as one reliable aid in delimiting specific affective syndromes in juveniles,” they said.

Drug Withdrawal as Proof That It Works

The symptoms of drug withdrawal can also be interpreted to mean that the drug was necessary and that the patient is fundamentally ill. The reduction in withdrawal symptoms when the patient goes back on the drug provides further “proof” that the drug works. Withdrawal symptoms can also be interpreted as proof that the patient needs to be treated for a longer period. Again, quoting from Whitaker:

Chouinard and Jones’s work also revealed that both psychiatrists and their patients would regularly suffer from a clinical delusion: They would see the return of psychotic symptoms upon drug withdrawal as proof that the antipsychotic was necessary and that it “worked.” The relapsed patient would then go back on the drug and often the psychosis would abate, which would be further proof that it worked. Both doctor and patient would experience this to be “true,” and yet, in fact, the reason that the psychosis abated with the return of the drug was that the brake on dopamine transmission was being reapplied, which countered the stuck dopamine accelerator. As Chouinard and Jones explained: “The need for continued neuroleptic treatment may itself be drug-induced.”

while they acknowledged that some alprazolam patients fared poorly when the drug was withdrawn, they reasoned that it had been used for too short a period and the withdrawal done too abruptly. “We recommend that patients with panic disorder be treated for a longer period, at least six months,” they said.

Similarly, macroeconomic crises can be and frequently are interpreted as a need for better and more stabilisation. The initial positive impact of each intervention and the negative impact of reducing stimulus only reinforce this belief.

SCIENCE AND STABILISATION

A typical complaint against Whitaker’s argument is that his thesis is unproven. I would argue that within the confines of conventional “scientific” data analysis, his thesis and others directly opposed to it are essentially unprovable. To take an example from economics, is the current rush towards “safe” assets a sign that we need to produce more “safe” assets? Or is it a sign that our fragile economic system is addicted to the need for an ever-increasing supply of “safe” assets and what we need is a world in which no assets are safe and all market participants are fully aware of this fact?

In complex adaptive systems it can also be argued that the modern scientific method that relies on empirical testing of theoretical hypotheses against the data is itself fundamentally biased towards stabilisation and against resilience. The same story that I trace out below for the history of mental health can be traced out for economics and many other fields.

Desire to Become a ‘Real’ Science

Whitaker traces out how the theory attributing mental disorders to chemical imbalances was embraced because it enabled psychiatrists to become “real” doctors; the following passage captures the mood of the profession in the 1980s:

Since the days of Sigmund Freud the practice of psychiatry has been more art than science. Surrounded by an aura of witchcraft, proceeding on impression and hunch, often ineffective, it was the bumbling and sometimes humorous stepchild of modern science. But for a decade and more, research psychiatrists have been working quietly in laboratories, dissecting the brains of mice and men and teasing out the chemical formulas that unlock the secrets of the mind. Now, in the 1980s, their work is paying off. They are rapidly identifying the interlocking molecules that produce human thought and emotion…. As a result, psychiatry today stands on the threshold of becoming an exact science, as precise and quantifiable as molecular genetics.

Search for the Magic Bullet despite Complexity of Problem

In the language of medicine, a ‘magic bullet’ is a drug that counters the root cause of the disease without adversely affecting any other part of the patient. The chemical-imbalance theory took a ‘magic bullet’ approach which reduced the complexity of our mental system to “a simple disease mechanism, one easy to grasp. In depression, the problem was that the serotonergic neurons released too little serotonin into the synaptic gap, and thus the serotonergic pathways in the brain were “underactive”. Antidepressants brought serotonin levels in the synaptic gap up to normal, and that allowed these pathways to transmit messages at a proper pace.”

Search for Scientific Method and Objective Criteria

Whitaker traces out the push towards making psychiatry an objective science with a defined method and its implications:

Congress had created the NIMH with the thought that it would transform psychiatry into a more modern, scientific discipline…..Psychiatrists and nurses would use “rating scales” to measure numerically the characteristic symptoms of the disease that was to be studied. Did a drug for schizophrenia reduce the patient’s “anxiety”? His or her “grandiosity”? “Hostility”? “Suspiciousness”? “Unusual thought content”? “Uncooperativeness”? The severity of all of those symptoms would be measured on a numerical scale and a total “symptom” score tabulated, and a drug would be deemed effective if it reduced the total score significantly more than a placebo did within a six-week period. At least in theory, psychiatry now had a way to conduct trials of psychiatric drugs that would produce an “objective” result. Yet the adoption of this assessment put psychiatry on a very particular path: The field would now see short-term reduction of symptoms as evidence of a drug’s efficacy. Much as a physician in internal medicine would prescribe an antibiotic for a bacterial infection, a psychiatrist would prescribe a pill that knocked down a “target symptom” of a “discrete disease.” The six-week “clinical trial” would prove that this was the right thing to do. However, this tool wouldn’t provide any insight into how patients were faring over the long term.

It cannot be emphasised enough that even increasing the period of the scientific trial is not enough to give us definitive answers. The argument that structural flaws are being uncovered, or that withdrawal proves that the drug works, cannot be definitively refuted. Moreover, at every point of time after medication is started, the short-term impact of staying on or increasing the level of medication is better than the alternative of going off the medication. The deeper issue is that in such a system, statistical analysis that tries to determine the efficacy of the intervention cannot deal with the fact that the nature of the intervention itself is to shift the distribution of outcomes into the tail, and to continue to do so as long as the level of medication keeps increasing.
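This last point can be illustrated with a deliberately artificial simulation. The data-generating process below is invented purely to exhibit the statistical problem and is not a model of any actual drug: the treated arm looks clearly better over a six-week trial window and slightly better on average in the long run, while the long-run damage shows up only in the tail, which is exactly where a short, mean-focused trial never looks.

```python
import random, statistics

random.seed(1)

def short_term_score(treated: bool) -> float:
    """Six-week symptom score (lower is better). Purely artificial numbers."""
    base = random.gauss(50, 10)
    return base - (12 if treated else 0) + random.gauss(0, 5)

def long_term_outcome(treated: bool) -> float:
    """Long-run outcome (lower is better). The treated arm is slightly better
    on average but carries a small probability of a very bad tail outcome."""
    if treated and random.random() < 0.05:      # rare severe deterioration
        return random.gauss(120, 10)
    return random.gauss(45 if treated else 50, 10)

n = 200                                          # a typical trial-sized arm
treated_6wk  = [short_term_score(True)  for _ in range(n)]
control_6wk  = [short_term_score(False) for _ in range(n)]
treated_long = [long_term_outcome(True)  for _ in range(5000)]
control_long = [long_term_outcome(False) for _ in range(5000)]

print("6-week mean score  treated vs control:",
      round(statistics.mean(treated_6wk), 1), "vs", round(statistics.mean(control_6wk), 1))
print("long-run mean      treated vs control:",
      round(statistics.mean(treated_long), 1), "vs", round(statistics.mean(control_long), 1))
print("long-run 99th pct  treated vs control:",
      round(statistics.quantiles(treated_long, n=100)[98], 1), "vs",
      round(statistics.quantiles(control_long, n=100)[98], 1))
```

A six-week symptom-score comparison of the kind described above would endorse this hypothetical treatment every time; the shift of the outcome distribution into the tail is invisible to it.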

The Control Agenda and High Modernism

The desire for stability and the control agenda are not simply a consequence of the growth of Olsonian special interests in the economy. The title of this post is inspired by Holling and Meffe’s classic paper on this topic in ecology. Their paper highlights that stabilisation is embedded within the command-and-control approach, which itself is inherent to the high-modernist worldview that James Scott has criticised.

Holling and Meffe also recognise that it is a simplistic application of “scientific” methods that underpins this command-and-control philosophy:

much of present ecological theory uses the equilibrium definition of resilience, even though that definition reinforces the pathology of equilibrium-centered command and control. That is because much of that theory draws predominantly from traditions of deductive mathematical theory (Pimm 1984) in which simplified, untouched ecological systems are imagined, or from traditions of engineering in which the motive is to design systems with a single operating objective (Waide & Webster 1976; De Angelis et. al. 1980; O’Neill et al. 1986), or from small-scale quadrant experiments in nature (Tilman & Downing 1994) in which long-term, large-scale successional or episodic transformations are not of concern. That makes the mathematics more tractable, it accommodates the engineer’s goal to develop optimal designs, and it provides the ecologist with a rationale for utilizing manageable, small sized, and short-term experiments, all reasonable goals. But these traditional concepts and techniques make the world appear more simple, tractable, and manageable than it really is. They carry an implicit assumption that there is global stability – that there is only one equilibrium steady-state, or, if other operating states exist, they should be avoided with safeguards and regulatory controls. They transfer the command-and-control myopia of exploitive development to similarly myopic demands for environmental regulations and prohibitions.

Those who emphasize ecosystem resilience, on the other hand, come from traditions of applied mathematics and applied resource ecology at the scale of ecosystems, such as the dynamics and management of freshwater systems (Fiering 1982), forests (Clark et al. 1979), fisheries (Walters 1986), semiarid grasslands (Walker et al. 1969), and interacting populations in nature (Dublin et al. 1990; Sinclair et al. 1990). Because these studies are rooted in inductive rather than deductive theory formation and in experience with the effects of large-scale management disturbances, the reality of flips from one stable state to another cannot be avoided (Holling 1986).

 

My aim in this last section is not to argue against the scientific method but simply to state that we have adopted too narrow a definition of what constitutes a scientific endeavour. Even this is not a coincidence. High modernism has its roots firmly planted in Enlightenment rationality and philosophical viewpoints that lie at the core of our idea of progress. In many uncertain domains, genuine progress and stabilisation that leads to fragility cannot be distinguished from each other. These are topics that I hope to explore in future posts.


Written by Ashwin Parameswaran

December 14th, 2011 at 10:51 am

A Simple Solution to the Eurozone Sovereign Funding Crisis


In response to the sovereign funding crisis sweeping across the Eurozone, the ECB decided to “conduct two longer-term refinancing operations (LTROs) with a maturity of 36 months”. Combined with the commitment of Eurozone members to exclude the possibility of any more haircuts on private-sector holders of Euro sovereign bonds, the aim of the current exercise is clear. As Nicolas Sarkozy put it rather bluntly,

Italian banks will be able to borrow [from the ECB] at 1 per cent, while the Italian state is borrowing at 6–7 per cent. It doesn’t take a finance specialist to see that the Italian state will be able to ask Italian banks to finance part of the government debt at a much lower rate.

In other words, the ECB will not finance fiscal deficits directly but will be more than happy to do so via the Eurozone banking system. But this plan still has a few critical flaws:

  • As Sony Kapoor notes, “By doing this, you are strengthening the link between banks and sovereigns, which has proven so dangerous in this crisis. Even if useful in the short term, it would seriously increase the vulnerability of both banks and sovereigns to future shocks.” In other words, if the promise to exclude the possibility of inflicting losses on sovereign debt-holders is broken at any point in the future, then sovereign default will coincide with the decimation of Europe’s incumbent banks.
  • European banks are desperately capital-constrained, as the latest EBA estimates of their capital shortfall show. In such a condition, banks will almost certainly take on increased sovereign debt exposures only at the expense of lending to the private sector and households, which can only exacerbate the recession in the Eurozone.
  • Sarkozy’s comment also hints at the deep unfairness of the current proposal. If default and haircuts are not on the table, then allowing banks to finance their sovereign debt holdings at a lower rate than the yield they earn on the sovereign bonds (at the same tenor) is simply a transfer of wealth from the Eurozone taxpayer to the banks (a back-of-the-envelope sketch of the size of this carry follows this list). Such a privilege could only be justified if banking were a “perfectly competitive” sector, which it is far from being even in a boom economy. In the midst of an economic crisis in which so many banks are tottering, it is further still from the ideal of perfect competition.
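
For a sense of the magnitudes involved, here is a minimal back-of-the-envelope sketch in Python. Only the 1 per cent LTRO rate, the 6–7 per cent Italian yield and the 36-month maturity come from the post; the €10bn position size is an illustrative assumption of mine.

# Back-of-the-envelope carry on an LTRO-funded sovereign bond position.
# Only the funding rate, bond yield and horizon come from the text above;
# the position size is purely illustrative.

position = 10_000_000_000   # EUR 10bn of Italian bonds (assumed, for illustration)
funding_rate = 0.01         # 3-year LTRO refinancing rate (~1%)
bond_yield = 0.065          # mid-point of the 6-7% Italian yields quoted above
years = 3                   # matching the 36-month LTRO maturity

# Simple (non-compounded) annual carry: yield earned minus funding cost.
annual_carry = position * (bond_yield - funding_rate)
total_carry = annual_carry * years

print(f"Annual carry: EUR {annual_carry:,.0f}")   # EUR 550,000,000 per year
print(f"3-year carry: EUR {total_carry:,.0f}")    # EUR 1,650,000,000 over the LTRO

If default really is off the table, this spread is close to a risk-free rent, and under the current proposal it accrues only to whoever is permitted to fund at the 1 per cent rate.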

There is a simple solution that tackles all three of the above problems – extend the generous terms of refinancing sovereign debt to the entire populace of the Eurozone, such that the market for the “support of sovereign debt” is transformed into something close to perfectly competitive. In practice, this simply requires undertaking a program of fast-track banking licenses for new banks with low minimum size requirements, on the condition that they restrict their activities to a narrow mandate of buying sovereign debt. This plan can correct all the flaws of the current proposal:

  • Instead of being concentrated within the incumbent failing banks, the sovereign debt exposure of the Eurozone would be spread in a diversified manner across the population. This will also help make the “no more haircuts” commitment more time-consistent: a wider base of sovereign debt holders reduces the possibility that the commitment will be reversed by democratic means. The only argument against the plan is that a new bank concentrated in sovereign debt would be too risky, but that objection assumes there is still default risk on Eurozone sovereign debt, i.e. that the commitment is not credible.
  • The plan effectively injects new capital into the banking sector, allowing incumbent bank capital to be deployed towards lending to the private sector and households. If sovereign debt spreads collapse, the plan will also shore up the financial position of the incumbent banks, freeing up further capital to be deployed.
  • The plan is fair. If the current crisis is indeed just a problem of high interest rates fuelling an increased risk of default, then interest rates will rapidly fall to a level much closer to the refinancing rate. To the extent that rates stay elevated and spreads do not converge, they will provide a much more accurate reflection of the real risk of default (a rough numerical illustration follows this list). No one will earn a supra-normal rate of return.
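
As a rough illustration of that last point, the standard first-order decomposition of a credit spread is: spread ≈ annual default probability × loss-given-default. The sketch below simply inverts that relation; the 40 per cent recovery assumption and the 2 per cent “post-convergence” yield are illustrative assumptions of mine, while the 1 per cent refinancing rate and ~6.5 per cent yield echo the figures quoted earlier.

# Implied annual default probability from a sovereign spread over the refi rate,
# using the first-order approximation: spread ~= default probability * loss-given-default.
# The recovery assumption and the "post-convergence" yield are illustrative.

def implied_default_prob(bond_yield, funding_rate=0.01, recovery=0.40):
    """Annualised default probability implied by the spread over the refinancing rate."""
    spread = bond_yield - funding_rate
    loss_given_default = 1.0 - recovery
    return spread / loss_given_default

# At ~6.5% yields against a 1% refinancing rate, the spread prices in
# roughly a 9% annual default probability.
print(f"{implied_default_prob(0.065):.1%}")

# If open competition for the carry pulls yields down to, say, 2%,
# the residual spread prices in under 2% annual default risk.
print(f"{implied_default_prob(0.02):.1%}")

With open access to the refinancing rate, any spread wider than the market’s genuine assessment of default risk would be competed away, which is why the residual spread becomes an honest signal rather than a rent.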

On this blog, I have criticised the indiscriminate provision of “liquidity” backstops by central banks on many occasions. I have also asserted that key economic functions must be preserved, not the incumbent entities that provide such functions. In times of crisis, central banking interventions are only fair when they are effectively accessible to the masses. At this critical juncture, the socially just policy may also be the only option that can save the single currency project.


Written by Ashwin Parameswaran

December 10th, 2011 at 12:57 am

The Great Recession, Business Investment and Crony Capitalism

with 8 comments

Paul Krugman points out that since 1985, business investment has been purely a demand story, i.e. “a depressed economy led to low business investment” and vice versa. As he explains, “The Great Recession, in particular, was led by housing and consumption, with business investment clearly responding rather than leading”. But this does not imply that low business investment has no causal role to play in the conditions that led to the Great Recession, or that increased business investment has no role to play in the recovery.

As Steve Roth notes, business investment has been anaemic throughout the neo-liberal era. JW Mason reminds us that the neo-liberal transition also coincided with a dramatically increased financialisation of the real economy. Throughout my series of posts on crony capitalism, I have argued that the structural and cyclical problems of the developed world are inextricably intertwined. The anaemic trend in business investment is the reason why the developed world has been in a ‘great stagnation’ for so long. This ‘investment deficit’ manifests itself as the ‘corporate savings glut’ and an increasingly financialised economy. The cause of the investment deficit is an increasingly financialised, cronyist, demosclerotic system where incumbent corporates do not face competitive pressure to engage in risky exploratory investment.

Business investment can either scale up existing operations (e.g. capacity, product mix) or change the fundamental character of operations (e.g. changes in process or product). Investments in scaling up operations are most easily influenced by monetary policy initiatives which reduce interest rates and raise asset prices, or by direct fiscal policy initiatives which operate via the multiplier effect. Investments in process innovation require the presence of price competition within the industry. Investments in exploratory product innovation require not only competition amongst incumbent firms but also competition from a constant and robust stream of new entrants into the industry.

In an economy where new entrants are stymied by an ever-growing ‘License Raj’ that costs the US economy an estimated $100 billion per year, a web of regulations that exist primarily to protect incumbent large corporates and a dysfunctional patent regime, it is not surprising that exploratory business investment has fallen so dramatically. A less cronyist and more dynamically competitive economy, without the implicit asset-price protection of the Greenspan/Bernanke put, will generate lower aggregate profits but more investment. Incumbents need to be compelled to take on risky ventures by the threat of extinction and obsolescence. Increased investment in risky exploratory ventures will not only drag the economy out of the ‘Great Stagnation’ but will also result in a reduced share of GDP flowing to corporate profits and an increased share flowing towards wages. In turn, this enables the economy to achieve a sustainable state of full employment and even a higher level of sustainable consumption, without households having to resort to increased leverage as they did during the Great Moderation.

Alexander Field has illustrated how even the growth of the Golden Age of the 50s and 60s was built upon the foundations of pre-WW2 innovation. If this thesis is correct, the ‘Great Stagnation’ was inevitable, and its conventional dating in fact understates how long ago the innovation deficit started. The Great Moderation, far from being the cure, was simply a palliative that postponed the inevitable end-point of the evolution of the macroeconomy through successive cycles of Minskyian stabilisation. As I noted in a previous post:

The neoliberal transition unshackled the invisible hand (the carrot of the profit motive) without ensuring that all key sectors of the economy were equally subject to the invisible foot (the stick of failure and losses and new firm entry)….“Order for all” became “order for the classes and disorder for the masses”….In this increasingly financialised economy, the increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation. This tilt towards exploitation/cost-reduction without exploration kept inflation in check but it also implied a prolonged period of sub-par wage growth and a constant inability to maintain full employment unless the consumer or the government levered up. For the neo-liberal revolution to sustain a ‘corporate welfare state’ in a democratic system, the absence of wage growth necessitated an increase in household leverage for consumption growth to be maintained. 

When commentators such as James Livingston claim that tax cuts for businesses will not solve our problems and that we need a redistribution of income away from profits towards wages to trigger increased aggregate demand via aggregate consumption, I agree with them. But I disagree with the conclusion that the secular decline in business investment is inevitable, acceptable and unrelated to the current cyclical downturn. The fact that business investment during the Great Moderation only increased when consumption demand went up is a symptom of the corporatist nature of the economy. When the household sector has reached a state of peak debt and the financial system has reached its point of peak elasticity, running increased fiscal deficits without permitting the corporatist superstructure to collapse simply takes us to the end-state that Minsky himself envisioned: an economy that attempts to achieve full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation – somewhat similar to a broken shower that runs only too hot or too cold. The only way in which the corporatist status quo can postpone collapse is to abandon the goal of full employment, which is exactly the path that the developed world has taken. This merely replaces economic fragility with a deeper social fragility.

Stability for all is synonymous with an environment of permanent innovative stagnation. The Schumpeterian solution is to transform the system into one of instability for all. Micro-fragility is the key to macro-resilience, but this fragility must be felt by all economic agents, labour and capital alike. In order to end the stagnation and achieve sustainable full employment, we need to allow incumbent banks and financialised corporations to collapse and to dismantle the barriers to entry of new firms that pervade the economy. The risk of a deflationary contraction from allowing such a collapse can be prevented in a simple and effective manner with a system of direct transfers to individuals, as Steve Waldman has outlined. This solution also reverses the flow of rents that have exacerbated inequality over the past few decades.

Note: I went through a much longer version of the same argument, with an emphasis on the relationship between employment and technology as applied to US economic history, in a previous post. The above logic explains my disagreements with conventional Keynesian theory and my affinity with Post-Keynesian theory. Minsky viewed his theory as an ‘investment theory of the cycle and a financial theory of investment’, and my views are simply a neo-Schumpeterian take on the same underlying framework.


Written by Ashwin Parameswaran

December 7th, 2011 at 5:44 pm

Posted in Cronyism, Resilience