macroresilience

resilience, not stability

The Pathology of Stabilisation in Complex Adaptive Systems


The core insight of the resilience-stability tradeoff is that stability leads to a loss of resilience. It follows that stabilisation, too, leads to increased systemic fragility. But there is a lot more to it. In comparing economic crises to forest fires and river floods, I have highlighted the common patterns in the process of system fragilisation, which eventually leaves the system “manager” in a situation where there are no good options left.

Drawing upon the work of Mancur Olson, I have explored how the buildup of special interests means that stability is self-reinforcing. Once rent-seeking has achieved sufficient scale, “distributional coalitions have the incentive and…the power to prevent changes that would deprive them of their enlarged share of the social output”. But what if we “solve” the Olsonian problem? Would that mitigate the problem of increased stabilisation and fragility? In this post, I will argue that the cycle of fragility and collapse has much deeper roots than any particular form of democracy.

In this analysis, I am going to move away from ecological analogies and instead turn to an example from modern medicine. In particular, I am going to compare the experience and history of psychiatric medication in the second half of the twentieth century to some of the issues we have already looked at in macroeconomic and ecological stabilisation. I hope to convince you that the uncanny similarities in the patterns observed in stabilised systems across such diverse domains are not a coincidence. In fact, with respect to the final stages of stabilisation, the human body provides a much closer parallel to economic systems than ecological systems do. Most ecological systems collapse sooner simply because the limits on the resources that will be spent, in an escalating fashion, to preserve their stability are much lower. For example, there are limits to the resources that will be deployed to prevent a forest fire, no matter how catastrophic. On the other hand, the resources that will be deployed to prevent the collapse of any system that is integral to human beings are much larger.

Even by the standards of this blog, this will be a controversial article. In my discussion of psychiatric medicine I am relying primarily on Robert Whitaker’s excellent but controversial and much-disputed book ‘Anatomy of an Epidemic’. Nevertheless, I want to emphasise that my ultimate conclusions are much less incendiary than Whitaker’s. In the same way that I want to move beyond an explanation of the economic crisis that relies on evil bankers, crony capitalists and self-interested technocrats, I am trying to move beyond an explanation that blames evil pharma and misguided doctors for the crisis in mental health. I am not trying to imply that fraud and rent-seeking do not have a role to play. I am arguing that even if we eliminated them, the aim of a resilient economic and social system would still not be realised.

THE PUZZLE

The puzzle of the history of macroeconomic stabilisation post-WW2 can be summarised as follows. Clearly, every separate act of macroeconomic stabilisation works: most monetary and fiscal interventions produce a rise in financial markets, NGDP expectations and economic performance in the short run. Yet,

  • we are in the middle of a ‘great stagnation’ and have been for a few decades.
  • the frequency of crises seems to have risen dramatically in the last fifty years, culminating in the environment since 2008, which is best described as a perpetual crisis.
  • each recovery seems to be weaker than the previous one and requires an increased injection of stimulus to achieve results that were easily achieved by a simple rate cut not that long ago.

Similarly, the history of mental health post-WW2 too has been a puzzle and is summarised by Whitaker as follows:

The puzzle can now be precisely summed up. On the one hand, we know that many people are helped by psychiatric medications. We know that many people stabilize well on them and will personally attest to how the drugs have helped them lead normal lives. Furthermore, as Satcher noted in his 1999 report, the scientific literature does document that psychiatric medications, at least over the short term, are “effective.” Psychiatrists and other physicians who prescribe the drugs will attest to that fact, and many parents of children taking psychiatric drugs will swear by the drugs as well. All of that makes for a powerful consensus: Psychiatric drugs work and help people lead relatively normal lives. And yet, at the same time, we are stuck with these disturbing facts: The number of disabled mentally ill has risen dramatically since 1955, and during the past two decades, a period when the prescribing of psychiatric medications has exploded, the number of adults and children disabled by mental illness has risen at a mind-boggling rate.

Whitaker then asks the obvious but heretical question – “Could our drug-based paradigm of care, in some unforeseen way, be fueling this modern-day plague?” and answers the question in the affirmative. But what are the precise mechanisms and patterns that underlie this deterioration?

Adaptive Response to Intervention and Drug Dependence

The fundamental reason why interventions fail in complex adaptive systems is the adaptive response they trigger, which subverts their aim. Moreover, once the system is artificially stabilised and system agents have adapted to this new stability, the system cannot cope with any abrupt withdrawal of the stabilising force. For example, Whitaker notes that

Neuroleptics put a brake on dopamine transmission, and in response the brain puts down the dopamine accelerator (the extra D2 receptors). If the drug is abruptly withdrawn, the brake on dopamine is suddenly released while the accelerator is still pressed to the floor. The system is now wildly out of balance, and just as a car might careen out of control, so too the dopaminergic pathways in the brain… In short, initial exposure to neuroleptics put patients onto a path where they would likely need the drugs for life.

Whitaker makes the same observation for benzodiazepines and antidepressants:

benzodiazepines….work by perturbing a neurotransmitter system, and in response, the brain undergoes compensatory adaptations, and as a result of this change, the person becomes vulnerable to relapse upon drug withdrawal. That difficulty in turn may lead some to take the drugs indefinitely.

(antidepressants) perturb neurotransmitter systems in the brain. This leads to compensatory processes that oppose the initial acute effects of a drug…. When drug treatment ends, these processes may operate unopposed, resulting in appearance of withdrawal symptoms and increased vulnerability to relapse.

Similarly, when a central bank protects incumbent banks against liquidity risk, the banks choose to hold progressively more illiquid portfolios. When central banks provide incumbent banks with cheap funding in times of crisis to prevent failure and creditor losses, the banks choose to take on more leverage. This is similar to what John Adams has termed the ‘risk thermostat’ – the system readjusts to get back to its preferred risk profile. The protection once provided is almost impossible to withdraw without causing systemic havoc as agents adapt to the new stabilised reality and lose the ability to survive in an unstabilised environment.

Of course, when agents in economic systems actively set out to arbitrage such central bank commitments, this is simply a form of moral hazard. But the same adaptation can easily occur via the natural selective forces at work in an economy – those who fail to take advantage of the Greenspan/Bernanke put simply go bust or get fired. In the brain, the adaptation simply reflects homeostatic mechanisms selected for by evolution.
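To make the ‘risk thermostat’ mechanism concrete, here is a minimal sketch in Python (every name and number in it – the target risk level, the volatility figure, the notion of ‘backstop strength’ – is an illustrative assumption of mine, not anything from Adams or from an empirical calibration). An agent targets a fixed level of perceived risk; the stronger the backstop, the more leverage is needed to bring perceived risk back up to that target, and the worse the exposure if the backstop is ever withdrawn.

```python
# Illustrative sketch of the "risk thermostat" idea (all numbers are invented
# assumptions): the agent picks leverage so that *perceived* risk stays at a
# fixed target, so a stronger backstop is fully offset by higher leverage --
# and abrupt withdrawal of the backstop leaves the agent far riskier than
# before any protection existed.

TARGET_RISK = 0.10          # risk level the agent is comfortable with
ASSET_VOLATILITY = 0.10     # underlying volatility of the asset

def preferred_leverage(backstop_strength: float) -> float:
    """Leverage chosen so that perceived risk equals the target.

    perceived_risk = ASSET_VOLATILITY * leverage * (1 - backstop_strength)
    """
    return TARGET_RISK / (ASSET_VOLATILITY * (1.0 - backstop_strength))

for backstop in (0.0, 0.25, 0.5, 0.75):
    leverage = preferred_leverage(backstop)
    unprotected_risk = ASSET_VOLATILITY * leverage   # risk if backstop removed
    print(f"backstop={backstop:.2f}  chosen leverage={leverage:.2f}  "
          f"risk if backstop withdrawn={unprotected_risk:.2f}")
```

The point of the sketch is only that the adaptation is endogenous: nothing in it requires bad faith on the agent’s part, merely a stable preference for a given level of perceived risk.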

Transformation into a Pathological State, Loss of Core Functionality and Deterioration of the Baseline State

I have argued in many posts that successive cycles of Minskyian stabilisation have a role to play in the deterioration of the structural performance of the real economy, which has manifested itself as ‘The Great Stagnation’. The same conclusion holds for many other complex adaptive systems, and our brain is no different. Stabilisation kills much of what makes human beings creative. Innovation and creativity are fundamentally disequilibrium processes, so it is no surprise that an environment of stability does not foster them. Whitaker interviews a patient on antidepressants who said: “I didn’t have mood swings after that, but instead of having a baseline of functioning normally, I was depressed. I was in a state of depression the entire time I was on the medication.”

He also notes disturbing research on the damage done to children who were treated for ADHD with Ritalin:

when researchers looked at whether Ritalin at least helped hyperactive children fare well academically, to get good grades and thus succeed as students, they found that it wasn’t so. Being able to focus intently on a math test, it turned out, didn’t translate into long-term academic achievement. This drug, Sroufe explained in 1973, enhances performance on “repetitive, routinized tasks that require sustained attention,” but “reasoning, problem solving and learning do not seem to be [positively] affected.”……Carol Whalen, a psychologist from the University of California at Irvine, noted in 1997 that “especially worrisome has been the suggestion that the unsalutary effects [of Ritalin] occur in the realm of complex, high-order cognitive functions such as flexible problem-solving or divergent thinking.”

Progressive Increase in Required Dosage

In economic systems, this steady structural deterioration means that increasing amounts of stimulus need to be applied in successive cycles of stabilisation to achieve the same levels of growth. Whitaker too identifies a similar tendency:

Over time, Chouinard and Jones noted, the dopaminergic pathways tended to become permanently dysfunctional. They became irreversibly stuck in a hyperactive state, and soon the patient’s tongue was slipping rhythmically in and out of his mouth (tardive dyskinesia) and psychotic symptoms were worsening (tardive psychosis). Doctors would then need to prescribe higher doses of antipsychotics to tamp down those tardive symptoms.
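The same ratchet can be written down as a toy tolerance model (a sketch under assumed dynamics; the adaptation speed and the target effect are arbitrary illustrative numbers, not a model of dopamine receptors or of monetary transmission): the system adapts against whatever level of stimulus it is exposed to, so holding the effect constant requires an ever-larger dose, and withdrawing the dose leaves the accumulated adaptation unopposed.

```python
# Toy "tolerance" model (illustrative assumptions only): the realised effect
# is the dose minus the system's accumulated adaptation, and the adaptation
# drifts towards the standing dose each period. Keeping the effect constant
# therefore requires an ever-increasing dose, and abrupt withdrawal leaves
# the adaptation unopposed (the "withdrawal symptom").

TARGET_EFFECT = 1.0
ADAPTATION_SPEED = 0.5   # fraction of the gap closed each period (assumption)

adaptation = 0.0
for period in range(1, 9):
    dose = TARGET_EFFECT + adaptation              # dose needed to hit the target
    effect = dose - adaptation                     # realised effect (== target)
    adaptation += ADAPTATION_SPEED * (dose - adaptation)   # the system adapts
    print(f"period {period}: dose={dose:.2f}  effect={effect:.2f}  "
          f"adaptation={adaptation:.2f}")

print(f"effect if the dose were abruptly withdrawn now: {0.0 - adaptation:.2f}")
```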

At this point, some of you may raise the following objection: so what if the new state is pathological? Maybe capitalism, with its inherent instability, is itself pathological. And once the safety nets of the Greenspan/Bernanke put, lender-of-last-resort programs and too-big-to-fail bailouts are put in place, why would we need or want to remove them? If we simply medicate the economy ad infinitum, can we not avoid collapse ad infinitum?

This argument, however, is flawed for at least two reasons.

  • The ability of economic players to reorganise to maximise the rents extracted from central banking and state commitments far exceeds the resources available to the state and the central bank. The key reason for this is the purely financial nature of the commitment. For example, if the state decided to print money and support the price of corn at twice its natural market price, then it could conceivably do so forever. Sooner or later, rent extractors will run up against natural resource limits – for example, limits on arable land. But when the state commits to support a credit-money-dominant financial system and asset prices, then the economic system can and will generate financial “assets” without limit to take advantage of this commitment. The only defence that the central bank and the state possess is regulation aimed at maintaining financial markets in an incomplete, underdeveloped state where economic agents do not possess the tools to game the system. Unfortunately, as Minsky and many others have documented, the pace of financial innovation over the last half-century has meant that banks and financialised corporates have all the tools they need to circumvent regulations and maximise rent extraction.
  • Even in a modern state that can print its own fiat currency, the ability to maintain financial commitments is subordinate to the need to control inflation. But doesn’t the complete absence of inflationary pressures in the current environment prove that we are nowhere close to any such limits? Not quite – as I have argued before, current macroeconomic policy is defined by the abandonment of the full-employment target in order to mitigate any risk of inflation whatsoever. The inflationary risk caused by rent extraction from the stabilisation commitment is being counterbalanced by a “reserve army of labour”. The reason for giving up full employment is simple – as Minsky identified, once the economy has gone through successive cycles of stabilisation, it is prone to ‘rapid cycling’.

Rapid Cycling and Transformation of an Episodic Illness into a Chronic Illness

Minsky noted that

A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch.

In other words, an economy that attempts to achieve full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation – somewhat similar to a broken shower that only runs either too hot or too cold. The abandonment of the full employment target enables the system to postpone this point of rapid cycling.
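The broken-shower image can itself be made concrete with a toy control loop (the target, gains and starting value below are arbitrary illustrative numbers, not estimates of anything): a policymaker who corrects a modest fraction of the latest gap converges quietly, while one who over-corrects flips the sign of the gap every period, so the system cycles between ‘too hot’ and ‘too cold’.

```python
# Toy "broken shower" control loop (all numbers are illustrative assumptions):
# each period the policymaker corrects a fraction `gain` of the latest gap
# between the outcome and its target. A timid response converges smoothly; an
# over-aggressive response overshoots and flips the gap's sign every period,
# so the system rapid-cycles between overheating and contraction.

TARGET = 2.0   # e.g. a desired inflation or growth rate

def simulate(gain: float, start: float = 4.0, periods: int = 10) -> list:
    outcome, path = start, []
    for _ in range(periods):
        error = outcome - TARGET
        outcome -= gain * error          # correction applied to last period's gap
        path.append(round(outcome, 2))
    return path

print("timid policy      (gain=0.5):", simulate(0.5))
print("aggressive policy (gain=1.8):", simulate(1.8))
```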

The structural malformation caused by the application of increasing levels of stimulus to the task of stabilisation means that the economy has lost the ability to generate the endogenous growth and innovation that it could before it was so actively stabilised. The system has been homogenised and is now entirely dependent upon constant stimulus. ‘Rapid cycling’ also explains something I noted in an earlier post: the apparently schizophrenic nature of the markets, which turn from risk-on to risk-off at the drop of a hat. It is the lack of diversity that causes this, as the vast majority of agents change their behaviour based on the presence or absence of stabilising interventions.

Whitaker again notes the connection between medication and rapid cycling in many instances:

As early as 1965, before lithium had made its triumphant entry into American psychiatry, German psychiatrists were puzzling over the change they were seeing in their manic-depressive patients. Patients treated with antidepressants were relapsing frequently, the drugs “transforming the illness from an episodic course with free intervals to a chronic course with continuous illness,” they wrote. The German physicians also noted that in some patients, “the drugs produced a destabilization in which, for the first time, hypomania was followed by continual cycling between hypomania and depression.”

(stimulants) cause children to cycle through arousal and dysphoric states on a daily basis. When a child takes the drug, dopamine levels in the synapse increase, and this produces an aroused state. The child may show increased energy, an intensified focus, and hyperalertness. The child may become anxious, irritable, aggressive, hostile, and unable to sleep. More extreme arousal symptoms include obsessive-compulsive and hypomanic behaviors. But when the drug exits the brain, dopamine levels in the synapse sharply drop, and this may lead to such dysphoric symptoms as fatigue, lethargy, apathy, social withdrawal, and depression. Parents regularly talk of this daily “crash.”

THE PATIENT WANTS STABILITY TOO

At this point, I may seem to be arguing that stabilisation is all just a con-game designed to enrich evil bankers, evil pharma and the like. But such an explanation underestimates just how deep-seated the temptation and the need to stabilise really are. The most critical thing it misses is the fact that the “patient” in complex adaptive systems is as eager to choose stability over resilience as the doctor is.

The Short-Term vs The Long-Term

As Daniel Carlat notes, the reality is that on the whole, psychiatric drugs “work” at least in the short term. Similarly, each individual act of macroeconomic stabilisation such as a lender-of-last-resort intervention, quantitative easing or a rate cut clearly has a positive impact on the short-term performance of both asset markets and the economy.

Whitaker too acknowledges this:

Those are the dueling visions of the psychopharmacology era. If you think of the drugs as “anti-disease” agents and focus on short-term outcomes, the young lady springs into sight. If you think of the drugs as “chemical imbalancers” and focus on long-term outcomes, the old hag appears. You can see either image, depending on where you direct your gaze.

The critical point here is that, just as with forest fires and macroeconomies, the initial stabilisation can be achieved easily and with very little medication. The results may even seem miraculous. But this initial period does not last. From one of many cases Whitaker quotes:

at first, “it was like a miracle,” she says. Andrew’s fears abated, he learned to tie his shoes, and his teachers praised his improved behavior. But after a few months, the drug no longer seemed to work so well, and whenever its effects wore off, there would be this “rebound effect.” Andrew would “behave like a wild man, out of control.” A doctor increased his dosage, only then it seemed that Andrew was like a “zombie,” his sense of humor reemerging only when the drug’s effects wore off. Next, Andrew needed to take clonidine in order to fall asleep at night. The drug treatment didn’t really seem to be helping, and so Ritalin gave way to other stimulants, including Adderall, Concerta, and dextroamphetamine. “It was always more drugs,” his mother says.

Medication Seen as Revealing Structural Flaws

One would think that the functional and structural deterioration that follows constant medication would cause both the patient and the doctor to reconsider the benefits of stabilisation. But this deterioration too can be interpreted in many different ways. Whitaker gives an example where the stabilised state is seen as beneficial because it reveals hitherto undiagnosed structural problems:

in 1982, Michael Strober and Gabrielle Carlson at the UCLA Neuropsychiatric Institute put a new twist into the juvenile bipolar story. Twelve of the sixty adolescents they had treated with antidepressants had turned “bipolar” over the course of three years, which—one might think—suggested that the drugs had caused the mania. Instead, Strober and Carlson reasoned that their study had shown that antidepressants could be used as a diagnostic tool. It wasn’t that antidepressants were causing some children to go manic, but rather the drugs were unmasking bipolar illness, as only children with the disease would suffer this reaction to an anti-depressant. “Our data imply that biologic differences between latent depressive subtypes are already present and detectable during the period of early adolescence, and that pharmacologic challenge can serve as one reliable aid in delimiting specific affective syndromes in juveniles,” they said.

Drug Withdrawal as Proof That It Works

The symptoms of drug withdrawal can also be interpreted to mean that the drug was necessary and that the patient is fundamentally ill. The reduction in withdrawal symptoms when the patient goes back on the drug provides further “proof” that it works. Withdrawal symptoms can also be interpreted as proof that the patient needs to be treated for a longer period. Again, quoting from Whitaker:

Chouinard and Jones’s work also revealed that both psychiatrists and their patients would regularly suffer from a clinical delusion: They would see the return of psychotic symptoms upon drug withdrawal as proof that the antipsychotic was necessary and that it “worked.” The relapsed patient would then go back on the drug and often the psychosis would abate, which would be further proof that it worked. Both doctor and patient would experience this to be “true,” and yet, in fact, the reason that the psychosis abated with the return of the drug was that the brake on dopamine transmission was being reapplied, which countered the stuck dopamine accelerator. As Chouinard and Jones explained: “The need for continued neuroleptic treatment may itself be drug-induced.”

while they acknowledged that some alprazolam patients fared poorly when the drug was withdrawn, they reasoned that it had been used for too short a period and the withdrawal done too abruptly. “We recommend that patients with panic disorder be treated for a longer period, at least six months,” they said.

Similarly, macroeconomic crises can be, and frequently are, interpreted as evidence of the need for better and more stabilisation. The initial positive impact of each intervention and the negative impact of reducing stimulus only reinforce this belief.

SCIENCE AND STABILISATION

A typical complaint against Whitaker’s argument is that his thesis is unproven. I would argue that within the confines of conventional “scientific” data analysis, his thesis and others directly opposed to it are essentially unprovable. To take an example from economics, is the current rush towards “safe” assets a sign that we need to produce more “safe” assets? Or is it a sign that our fragile economic system is addicted to the need for an ever-increasing supply of “safe” assets and what we need is a world in which no assets are safe and all market participants are fully aware of this fact?

In complex adaptive systems it can also be argued that the modern scientific method that relies on empirical testing of theoretical hypotheses against the data is itself fundamentally biased towards stabilisation and against resilience. The same story that I trace out below for the history of mental health can be traced out for economics and many other fields.

Desire to Become a ‘Real’ Science

Whitaker traces out how the theory attributing mental disorders to chemical imbalances was embraced because it enabled psychiatrists to become “real” doctors, and he captures the mood of the profession in the 1980s:

Since the days of Sigmund Freud the practice of psychiatry has been more art than science. Surrounded by an aura of witchcraft, proceeding on impression and hunch, often ineffective, it was the bumbling and sometimes humorous stepchild of modern science. But for a decade and more, research psychiatrists have been working quietly in laboratories, dissecting the brains of mice and men and teasing out the chemical formulas that unlock the secrets of the mind. Now, in the 1980s, their work is paying off. They are rapidly identifying the interlocking molecules that produce human thought and emotion…. As a result, psychiatry today stands on the threshold of becoming an exact science, as precise and quantifiable as molecular genetics.

Search for the Magic Bullet despite Complexity of Problem

In the language of medicine, a ‘magic bullet’ is a drug that counters the root cause of the disease without adversely affecting any other part of the patient. The chemical-imbalance theory took a ‘magic bullet’ approach which reduced the complexity of our mental system to “a simple disease mechanism, one easy to grasp. In depression, the problem was that the serotonergic neurons released too little serotonin into the synaptic gap, and thus the serotonergic pathways in the brain were “underactive”. Antidepressants brought serotonin levels in the synaptic gap up to normal, and that allowed these pathways to transmit messages at a proper pace.”

Search for Scientific Method and Objective Criteria

Whitaker traces out the push towards making psychiatry an objective science with a defined method and its implications:

Congress had created the NIMH with the thought that it would transform psychiatry into a more modern, scientific discipline…..Psychiatrists and nurses would use “rating scales” to measure numerically the characteristic symptoms of the disease that was to be studied. Did a drug for schizophrenia reduce the patient’s “anxiety”? His or her “grandiosity”? “Hostility”? “Suspiciousness”? “Unusual thought content”? “Uncooperativeness”? The severity of all of those symptoms would be measured on a numerical scale and a total “symptom” score tabulated, and a drug would be deemed effective if it reduced the total score significantly more than a placebo did within a six-week period. At least in theory, psychiatry now had a way to conduct trials of psychiatric drugs that would produce an “objective” result. Yet the adoption of this assessment put psychiatry on a very particular path: The field would now see short-term reduction of symptoms as evidence of a drug’s efficacy. Much as a physician in internal medicine would prescribe an antibiotic for a bacterial infection, a psychiatrist would prescribe a pill that knocked down a “target symptom” of a “discrete disease.” The six-week “clinical trial” would prove that this was the right thing to do. However, this tool wouldn’t provide any insight into how patients were faring over the long term.

It cannot be overemphasised that even lengthening the trial period is not enough to give us definitive answers. The argument that structural flaws are being uncovered, or that withdrawal proves the drug works, cannot be definitively refuted. Moreover, at every point in time after medication is started, the short-term impact of staying on or increasing the level of medication is better than the alternative of going off it. The deeper issue is that, in such a system, statistical analysis aimed at determining the efficacy of the intervention cannot deal with the fact that the nature of the intervention itself is to shift the distribution of outcomes into the tail – and to keep doing so as long as the level of medication keeps increasing.
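A stylised Monte Carlo sketch makes the point about short-horizon trials and tail risk concrete (the outcome model and every number in it are invented for illustration, not drawn from any trial or macroeconomic dataset): an intervention that reliably improves the short-run average while adding a small probability of severe long-run deterioration will pass a short, mean-based test with flying colours, because the damage lives in a tail that the short trial never samples.

```python
# Stylised Monte Carlo sketch (invented numbers throughout): the intervention
# gives a clear short-run benefit, but with a small probability it produces a
# severe long-run deterioration. Comparing short-run averages certifies it as
# "effective"; the harm sits in the left tail of the long-run distribution.

import random

random.seed(0)
N = 100_000

def short_run(treated: bool) -> float:
    return random.gauss(0.0, 1.0) + (1.0 if treated else 0.0)

def long_run(treated: bool) -> float:
    base = random.gauss(0.0, 1.0)
    if treated and random.random() < 0.10:      # rare but severe tail event
        return base - 8.0
    return base + (0.5 if treated else 0.0)

for label, outcome in (("short run", short_run), ("long run", long_run)):
    treated = [outcome(True) for _ in range(N)]
    control = [outcome(False) for _ in range(N)]
    mean_gap = sum(treated) / N - sum(control) / N
    tail_treated = sum(x < -4.0 for x in treated) / N
    tail_control = sum(x < -4.0 for x in control) / N
    print(f"{label}: mean benefit of treatment={mean_gap:+.2f}  "
          f"P(outcome < -4): treated={tail_treated:.3f}, control={tail_control:.3f}")
```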

The Control Agenda and High Modernism

The desire for stability and the control agenda are not simply a consequence of the growth of Olsonian special interests in the economy. The title of this post is inspired by Holling and Meffe’s classic paper on this topic in ecology. Their paper highlights that stabilisation is embedded within the command-and-control approach, which is itself inherent to the high-modernist approach that James Scott has criticised.

Holling and Meffe also recognise that it is a simplistic application of “scientific” methods that underpins this command-and-control philosophy:

much of present ecological theory uses the equilibrium definition of resilience, even though that definition reinforces the pathology of equilibrium-centered command and control. That is because much of that theory draws predominantly from traditions of deductive mathematical theory (Pimm 1984) in which simplified, untouched ecological systems are imagined, or from traditions of engineering in which the motive is to design systems with a single operating objective (Waide & Webster 1976; De Angelis et. al. 1980; O’Neill et al. 1986), or from small-scale quadrant experiments in nature (Tilman & Downing 1994) in which long-term, large-scale successional or episodic transformations are not of concern. That makes the mathematics more tractable, it accommodates the engineer’s goal to develop optimal designs, and it provides the ecologist with a rationale for utilizing manageable, small sized, and short-term experiments, all reasonable goals. But these traditional concepts and techniques make the world appear more simple, tractable, and manageable than it really is. They carry an implicit assumption that there is global stability – that there is only one equilibrium steady-state, or, if other operating states exist, they should be avoided with safeguards and regulatory controls. They transfer the command-and-control myopia of exploitive development to similarly myopic demands for environmental regulations and prohibitions.

Those who emphasize ecosystem resilience, on the other hand, come from traditions of applied mathematics and applied resource ecology at the scale of ecosystems, such as the dynamics and management of freshwater systems (Fiering 1982), forests (Clark et al. 1979), fisheries (Walters 1986), semiarid grasslands (Walker et al. 1969), and interacting populations in nature (Dublin et al. 1990; Sinclair et al. 1990). Because these studies are rooted in inductive rather than deductive theory formation and in experience with the effects of large-scale management disturbances, the reality of flips from one stable state to another cannot be avoided (Holling 1986).

 

My aim in this last section is not to argue against the scientific method but simply to point out that we have adopted too narrow a definition of what constitutes a scientific endeavour. Even this is not a coincidence. High modernism has its roots firmly planted in Enlightenment rationality and the philosophical viewpoints that lie at the core of our idea of progress. In many uncertain domains, genuine progress cannot be distinguished from stabilisation that leads to fragility. These are topics that I hope to explore in future posts.


Written by Ashwin Parameswaran

December 14th, 2011 at 10:51 am

A Simple Solution to the Eurozone Sovereign Funding Crisis


In response to the sovereign funding crisis sweeping across the Eurozone, the ECB decided to “conduct two longer-term refinancing operations (LTROs) with a maturity of 36 months”. Combined with the commitment of the Eurozone members to exclude the possibility of any more haircuts on private-sector holders of Euro sovereign bonds, the aim of the current exercise is clear. As Nicolas Sarkozy put it rather bluntly,

Italian banks will be able to borrow [from the ECB] at 1 per cent, while the Italian state is borrowing at 6–7 per cent. It doesn’t take a finance specialist to see that the Italian state will be able to ask Italian banks to finance part of the government debt at a much lower rate.

In other words, the ECB will not finance fiscal deficits directly but will be more than happy to do so via the Eurozone banking system. But this plan still has a few critical flaws:

  • As Sony Kapoor notes, “By doing this, you are strengthening the link between banks and sovereigns, which has proven so dangerous in this crisis. Even if useful in the short term, it would seriously increase the vulnerability of both banks and sovereigns to future shocks.” In other words, if the promise to exclude the possibility of inflicting losses on sovereign debt-holders is broken at any point in the future, then sovereign default will coincide with a complete decimation of the incumbent banks in Europe.
  • European banks are desperately capital-constrained, as the latest EBA estimates of their capital shortfall show. In such a condition, banks will almost certainly take on increased sovereign debt exposures only at the expense of lending to the private sector and households. This can only exacerbate the recession in the Eurozone.
  • Sarkozy’s comment also hints at the deep unfairness of the current proposal. If default and haircuts are not on the table, then allowing banks to finance their sovereign debt holdings at a lower rate than the yield they earn on the sovereign bonds (at the same tenor) is simply a transfer of wealth from the Eurozone taxpayer to the banks (a back-of-the-envelope calculation of this carry follows the list). Such a privilege could only be justified if banking were a “perfectly competitive” sector, which it is far from being even in a boom economy. In the midst of an economic crisis, when so many banks are tottering, it is even further away from the ideal of perfect competition.
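To see the size of the carry involved, here is the back-of-the-envelope calculation (the rates are the ones quoted by Sarkozy above; the notional and the assumption that the position is held to the LTRO’s maturity are purely illustrative):

```python
# Back-of-the-envelope LTRO carry (rates taken from Sarkozy's quote above; the
# notional is an arbitrary round number for illustration): a bank funds
# sovereign bonds at the ECB refinancing rate and earns the sovereign yield.
# If haircuts are genuinely ruled out, the spread is a near-riskless transfer
# from the Eurozone taxpayer to the incumbent banks.

notional = 100e9          # EUR 100bn of sovereign bonds (assumed figure)
funding_rate = 0.01       # ~1% three-year LTRO funding
sovereign_yield = 0.065   # ~6-7% Italian sovereign yield
years = 3                 # LTRO maturity

annual_carry = notional * (sovereign_yield - funding_rate)
print(f"annual carry : EUR {annual_carry / 1e9:.1f}bn")
print(f"carry over the 3-year LTRO : EUR {annual_carry * years / 1e9:.1f}bn")
```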

There is a simple solution that tackles all three of the above problems – extend the generous terms of refinancing sovereign debt to the entire populace of the Eurozone, such that the market for the “support of sovereign debt” is transformed into something close to perfectly competitive. In practice, this simply requires a program of fast-track banking licenses for new banks, with low minimum size requirements, on the condition that they restrict their activities to the narrow mandate of buying sovereign debt. This plan corrects all the flaws of the current proposal:

  • Instead of being concentrated within the incumbent failing banks, the sovereign debt exposure of the Eurozone would be spread in a diversified manner across the population. This will also help make the “no more haircuts” commitment more time-consistent: the wider base of sovereign debt holders reduces the possibility that the commitment will be reversed by democratic means. The only argument against this plan is that such concentrated new banks would be too risky – but that assumes there is still default risk on Eurozone sovereign debt, i.e. that the commitment is not credible.
  • The plan effectively injects new capital into the banking sector allowing incumbent bank capital to be deployed towards lending to the private sector and households. If sovereign debt spreads collapse, then the plan will also shore up the financial position of the incumbent banks thus injecting further capital available to be deployed.
  • The plan is fair. If the current crisis is indeed just a problem of high interest rates fuelling an increased risk of default, then interest rates will rapidly fall to a level much closer to the refinancing rate. To the extent that rates stay elevated and spreads do not converge, it will provide a much more accurate reflection of the real risk of default. No one will earn a supra-normal rate of return.

On this blog, I have criticised the indiscriminate provision of “liquidity” backstops by central banks on many occasions. I have also asserted that key economic functions must be preserved, not the incumbent entities that provide such functions. In times of crisis, central banking interventions are only fair when they are effectively accessible to the masses. At this critical juncture, the socially just policy may also be the only option that can save the single currency project.


Written by Ashwin Parameswaran

December 10th, 2011 at 12:57 am

The Great Recession, Business Investment and Crony Capitalism


Paul Krugman points out that since 1985, business investment has been purely a demand story i.e. “a depressed economy led to low business investment” and vice versa. As he explains “The Great Recession, in particular, was led by housing and consumption, with business investment clearly responding rather than leading”. But this does not imply that low business investment does not have a causal role to play in the conditions that led to the Great Recession, or that increased business investment does not have a role to play in the recovery.

As Steve Roth notes, business investment has been anaemic throughout the neo-liberal era. JW Mason reminds us that the neo-liberal transition also coincided with a dramatically increased financialisation of the real economy. Throughout my series of posts on crony capitalism, I have argued that the structural and cyclical problems of the developed world are inextricably intertwined. The anaemic trend in business investment is the reason why the developed world has been in a ‘great stagnation’ for so long. This ‘investment deficit’ manifests itself as the ‘corporate savings glut’ and an increasingly financialised economy. The cause of the investment deficit is an increasingly financialised, cronyist, demosclerotic system where incumbent corporates do not face competitive pressure to engage in risky exploratory investment.

Business investment can either operate upon the scale of operations (e.g. capacity, product mix) or change the fundamental character of operations (e.g. changes in process or product). Investments in scaling up operations are most easily influenced by monetary policy initiatives, which reduce interest rates and raise asset prices, or by direct fiscal policy initiatives, which operate via the multiplier effect. Investments in process innovation require the presence of price competition within the industry. Investments in exploratory product innovation require not only competition amongst incumbent firms but competition from a constant and robust stream of new entrants into the industry.

In an economy where new entrants are stymied by an ever-growing ‘License Raj’ that costs the US economy an estimated $100 billion per year, a web of regulations that exist primarily to protect incumbent large corporates and a dysfunctional patent regime, it is not surprising that exploratory business investment has fallen so dramatically. A less cronyist and more dynamically competitive economy, without the implicit asset-price protection of the Greenspan/Bernanke put, will have lower profits in aggregate but more investment. Incumbents need to be compelled to take on risky ventures by the threat of extinction and obsolescence. Increased investment in risky exploratory ventures will not only drag the economy out of the ‘Great Stagnation’ but will also result in a reduced share of GDP flowing to corporate profits and an increased share flowing towards wages. In turn, this enables the economy to achieve a sustainable state of full employment, and even a higher level of sustainable consumption, without households having to resort to increased leverage as they did during the Great Moderation.

Alexander Field has illustrated how even the growth of the Golden Age of the 50s and 60s was built upon the foundations of pre-WW2 innovation. If this thesis is correct, the ‘Great Stagnation’ was inevitable, and in fact it understates how long ago the innovation deficit started. The Great Moderation, far from being the cure, was simply a palliative that postponed the inevitable end-point of the evolution of the macroeconomy through successive cycles of Minskyian stabilisation. As I noted in a previous post:

The neoliberal transition unshackled the invisible hand (the carrot of the profit motive) without ensuring that all key sectors of the economy were equally subject to the invisible foot (the stick of failure and losses and new firm entry)….“Order for all” became “order for the classes and disorder for the masses”….In this increasingly financialised economy, the increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation. This tilt towards exploitation/cost-reduction without exploration kept inflation in check but it also implied a prolonged period of sub-par wage growth and a constant inability to maintain full employment unless the consumer or the government levered up. For the neo-liberal revolution to sustain a ‘corporate welfare state’ in a democratic system, the absence of wage growth necessitated an increase in household leverage for consumption growth to be maintained. 

When commentators such as James Livingston claim that tax cuts for businesses will not solve our problems and that we need a redistribution of income away from profits towards wages to trigger increased aggregate demand via aggregate consumption, I agree with them. But I disagree with the conclusion that the secular decline in business investment is inevitable, acceptable and unrelated to the current cyclical downturn. The fact that business investment during the Great Moderation only increased when consumption demand went up is a symptom of the corporatist nature of the economy. When the household sector has reached a state of peak debt and the financial system has reached its point of peak elasticity, running increased fiscal deficits without permitting the corporatist superstructure to collapse simply takes us to the end-state that Minsky himself envisioned: an economy that attempts to achieve full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation – somewhat similar to a broken shower that only runs either too hot or too cold. The only way in which the corporatist status quo can postpone collapse is to abandon the goal of full employment, which is exactly the path that the developed world has taken. This merely substitutes a deeper social fragility for economic fragility.

Stability for all is synonymous with an environment of permanent innovative stagnation. The Schumpeterian solution is to transform the system into one of instability for all. Micro-fragility is the key to macro-resilience, but this fragility must be felt by all economic agents, labour and capital alike. In order to end the stagnation and achieve sustainable full employment, we need to allow incumbent banks and financialised corporations to collapse and to dismantle the barriers to entry of new firms that pervade the economy. The risk of a deflationary contraction from allowing such a collapse can be prevented in a simple and effective manner with a system of direct transfers to individuals, as Steve Waldman has outlined. This solution also reverses the flow of rents that have exacerbated inequality over the past few decades.

Note: I went through a much longer version of the same argument, with an emphasis on the relationship between employment and technology adapted to US economic history, in a previous post. The above logic explains my disagreements with conventional Keynesian theory and my affinity with Post-Keynesian theory. Minsky viewed his theory as an ‘investment theory of the cycle and a financial theory of investment’, and my views are simply a neo-Schumpeterian take on the same underlying framework.


Written by Ashwin Parameswaran

December 7th, 2011 at 5:44 pm

Posted in Cronyism, Resilience

Debunking the ‘Savings Glut’ Thesis


Some excellent recent research debunking the savings glut thesis: Borio and Disyatat, Hyun-Song Shin, Thomas Palley.

The Borio-Disyatat paper is especially recommended. It explains best why the savings glut thesis itself is a product of a faulty ‘Loanable Funds’ view of money. Much more appropriate is the credit/financing view of money that Borio and Disyatat take. The best explanation of this credit view is Chapter 3 (’Credit and Capital’) in Joseph Schumpeter’s book ‘Theory of Economic Development’. As Agnès Festré notes, Hayek had a very similar theory of credit but a very different opinion as to its implications:

both Hayek and Schumpeter make use of the mechanism of forced saving in their analyses of the cyclical upswing in order to describe the real effects of credit creation. In Schumpeter’s framework, the relevant redistribution of purchasing power is from traditional producers to innovators with banks playing a crucial complementary role in meeting demand for finance by innovating firms. The dynamic process thus set into motion then leads to a new quasi-equilibrium position characterised by higher productivity and an improved utilisation of resources. For Hayek, however, forced saving is equivalent to a redistribution from consumers to investing producers as credit not backed by voluntary savings is channelled towards investment activities, in the course of which more roundabout methods of production are being implemented. In this setting, expansion does not lead to a new equilibrium position but is equivalent to a deviation from the equilibrium path, that is to an economically harmful distortion of the relative (intertemporal) price system. The eventual return to equilibrium then takes place via an inevitable economic crisis.

Schumpeter viewed this elasticity of credit as the ‘differentia specifica’ of capitalism. Although this view, combined with his vision of the banker as a ‘capitalist par excellence’, may have been true in an unstabilised financial system, it is not accurate in the stabilised financial system that his student Hyman Minsky identified as the reality of the modern capitalist economy. Successive rounds of stabilisation mean that the modern banker is more focused on seeking out bets that will be validated by central bank interventions than on funding disruptive entrepreneurial activity. Moreover, we live in a world where maturity transformation is no longer required to meet our investment needs. The evolution and malformation of the financial system mean that Hayek’s analysis is more relevant now than it probably was during his own lifetime.


Written by Ashwin Parameswaran

November 22nd, 2011 at 5:49 am

The Euro and the Resilience-Stability Tradeoff


In complex adaptive systems, stability does not equate to resilience. In fact, stability tends to breed loss of resilience and fragility, or as Minsky put it, “stability is destabilising”. Although Minsky’s work has been somewhat neglected in economics, the principle of the resilience-stability tradeoff is common knowledge in ecology, especially since Buzz Holling’s pioneering work on the subject. If stability leads to fragility, then it follows that stabilisation too leads to increased system fragility. As Holling and Meffe put it in another landmark paper on the subject, titled ‘Command and Control and the Pathology of Natural Resource Management’, “when the range of natural variation in a system is reduced, the system loses resilience.” Often, the goal of increased stability is synonymous with a goal of increased efficiency, but “the goal of producing a maximum sustained yield may result in a more stable system of reduced resilience”.

The entire long arc of post-WW2 macroeconomic policy in the developed world can be described as a flawed exercise in macroeconomic stabilisation. But there is no better example of this principle than the Euro currency project, as the graph below (from Pictet via FT Alphaville) illustrates.

Instead of a moderately volatile mix of different currencies and interest rates, we now have a mostly stable currency union prone to the occasional risk of systemic collapse. If this were all there was to it, then it is not clear that the Euro is such a bad idea. After all, simply shifting the volatility out to the tails is not by itself a bad outcome. But the resilience-stability tradeoff is more than just a simple transformation of the distribution. Economic agents adapt to a prolonged period of stability in such a manner that the system cannot “withstand even modest adverse shocks”. “Normal” disturbances that were easily absorbed prior to the period of stabilisation are now sufficient to cause a catastrophic transition. Izabella Kaminska laments the fact that sovereign spreads for many Eurozone countries (vs 10Y Bunds) now exceed pre-Euro levels. But the real problem isn’t so much that spreads have blown out but that they have blown out after a prolonged period of stability.
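The difference between ‘moderately volatile’ and ‘mostly stable but prone to occasional collapse’ can be illustrated with a toy comparison of two shock processes of broadly similar overall variance (the labels, distributions and numbers are all invented for illustration): the stabilised process looks calmer on almost every day, but its variability has been pushed into a rare, very large break.

```python
# Toy illustration (all numbers and labels invented) of shifting volatility
# into the tails: two shock processes with broadly similar overall variance,
# one with moderate day-to-day movement, the other almost perfectly calm but
# subject to a rare, very large break.

import random
import statistics

random.seed(1)
N = 50_000

moderate = [random.gauss(0.0, 2.0) for _ in range(N)]   # constant, moderate volatility
stabilised = [
    -30.0 if random.random() < 1 / 200 else random.gauss(0.0, 0.5)  # calm, with rare breaks
    for _ in range(N)
]

for name, path in (("moderately volatile", moderate), ("stabilised", stabilised)):
    quiet_share = sum(abs(x) < 1.0 for x in path) / N
    print(f"{name:>20}: stdev={statistics.pstdev(path):.2f}  "
          f"share of quiet periods={quiet_share:.2f}  worst move={min(path):.1f}")
```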

One way to analyse this evolution is via Axel Leijonhufvud’s ‘corridor hypothesis’. Leijonhufvud postulated that a macroeconomy will adapt well to small shocks, but “outside of a certain zone or ‘corridor’ around its long-run growth path, it will only very sluggishly react to sufficiently large, infrequent shocks.” In Leijonhufvud’s own view, the driver of this “demand failure” outside the corridor was the “exhaustion of liquid buffers reinforced by dysfunctional revisions of permanent income expectations”. Stability reduces the width of the corridor to the point where even a small shock is enough to push the system outside it – the primary driver of this process is a progressive reduction of liquid buffers in the good times, such that even a small shock will exhaust them.

In my earlier posts comparing the dilemmas of managing a stabilised economic system to those of a fire-suppressed forest and a flood-suppressed river system, I claimed that managing a stabilised and fragile system is akin to choosing between the frying pan and the fire. Minsky too recognised that the stabilisation program would eventually run into a cul-de-sac where “the choice seems to become whether to accomodate to an increasing inflation or to induce a debt-deflation process that can lead to a serious depression”. What he could not possibly have foreseen is that instead of turning towards the safety valve of inflation, the developed world would instead choose to abandon the goal of full employment. This of course simply chooses social fragility over macroeconomic fragility. The consequences of the abandonment of the Euro project pale in comparison to the forces that may be unleashed by the combination of social fragility via unemployment and a perceived democratic deficit.


Written by Ashwin Parameswaran

November 14th, 2011 at 12:17 pm

Posted in Uncategorized

Rent-Seeking, The Progressive Agenda and Cash Transfers


In my posts on the subject of cronyism and rent-seeking, I have drawn heavily on the work of Mancur Olson. My views are also influenced by my experience of cronyism in India and by comparing it to the Olsonian competitive sclerosis that afflicts most developed economies today. Although there are significant differences between cronyism in the developing and developed worlds, there is also very significant common ground. In some respects, the rent-extraction apparatus in the developed world is just a more sophisticated version of the open corruption and looting that is common in many developing economies. This post explores some of this common ground.

Mancur Olson predicted the inexorable rise of rent-seeking in a stable economy. But he also thought that once rent-seeking activities extracted too high a proportion of a nation’s GDP, the normal course of democracy and public anger might rein them in: small rent-seekers can fly under the radar, but big rent-seekers are ultimately cut back to size. But is this necessarily true? Although there is some truth to the assertion, Olson was likely too optimistic about the existence of such limits, and this post tries to explain why. After all, it can easily be argued that rents extracted by banks already swallow up a significant proportion of GDP, and there is no shortage of corrupt public programs that consume significant proportions of the public budget in the developing world. In a nutshell, my argument is that rent-extraction can avoid these limits by aligning itself with the progressive agenda – the very programs that purport to help the masses become the source of rents for the classes.

A transparent example of this phenomenon is the experience of the Mahatma Gandhi National Rural Employment Guarantee – a public program that guarantees 100 days of work for unskilled rural labourers in India. In a little more than half a decade since inception, it accounts for 3% of public spending and economists estimate that anywhere from a quarter to two-thirds of the expenditure does not reach those whom it is intended to help. So how does a program such as this not only survive but thrive? The answer is simple – despite the corruption, the scheme does disburse significant benefits to a large rural electorate. When faced with the choice of either tolerating a corrupt program or cancelling the program, the rural poor clearly prefer the status quo.

A rather more sophisticated example of this phenomenon is the endless black hole of losses that are Freddie Mac and Fannie Mae – $175 billion and counting. The press focuses on the comparatively small bonus payments to Freddie and Fannie executives but ignores their much larger role in the back door bailout of the banking sector. Again the reason why this goes relatively uncriticised is simple – despite the significant contribution made by Fannie and Freddie to the rents extracted by the “1%”, their operations also put money into the pockets of a vast cross-section of homeowners. Simply shutting them down would almost certainly constitute an act of political suicide.


The masses become the shield for the very programs that enable a select few to extract significant rents from the system. The same programs that are supposed to be part of the liberal social agenda, like Fannie and Freddie, become the weapons through which the cronyist corporate structure perpetuates itself, while the broad-based support for these programs makes them incredibly resilient and hard to reform once they have taken root.

Those who cherish the progressive agenda tend to argue that better implementation and regulation can solve the problem of rent extraction. But there is another option – complex programs with egalitarian aims should be replaced with direct cash transfers wherever feasible. This case has been argued persuasively in a recent book as an effective way to help the poor in developing countries, and the approach is already being implemented in India. There is no reason why it cannot be implemented in the developed world as well.


Written by Ashwin Parameswaran

November 7th, 2011 at 2:25 am

Innovation, Stagnation and Unemployment


All economists assert that wants are unlimited. From this follows the view that technological unemployment is impossible in the long run. Yet there are a growing number of commentators (such as Brian Arthur) who insist that increased productivity from automation and improvements in artificial intelligence has a part to play in the current unemployment crisis. At the same time, a growing chorus laments the absence of innovation – Tyler Cowen’s thesis that the recent past has been a ‘Great Stagnation’ is compelling.

But don’t the two assertions contradict each other? Can we have an increase in technological unemployment as well as an innovation deficit? Is the concept of technological unemployment itself valid? Is there anything about the current phase of labour-displacing technological innovation that is different from the past 150 years? To answer these questions, we need a deeper understanding of the dynamics of innovation in a capitalist economy, i.e. how exactly have innovation and productivity growth proceeded in a manner consistent with full employment in the past? In the process, I also hope to connect the long-run structural dynamic with the Minskyian business cycle dynamic. It is common to view the structural dynamic of technological change as a sort of ‘deus ex machina’ – if not independent, then certainly as a phenomenon that is unconnected with the business cycle. I hope to convince some of you that our choices regarding business cycle stabilisation have a direct bearing on the structural dynamic of innovation. I have touched upon many of these topics in a scattered fashion in previous posts; this post is an attempt to present them in a coherent fashion, with all my assumptions explicitly laid out in relation to established macroeconomic theory.

Micro-Foundations

Imperfectly competitive markets are the norm in most modern economies. In instances where economies of scale or network effects dominate, a market may even be oligopolistic or monopolistic (e.g. Google, Microsoft). This assumption is of course nothing new to conventional macroeconomic theory. Where my analysis differs is in viewing the imperfectly competitive process as one that is permanently in disequilibrium. Rents or “abnormal” profits are a persistent feature of the economy at the level of the firm and are not competed away even in the long run. The primary objective of incumbent rent-earners is to build a moat around their existing rents, whereas the primary objective of competition from new entrants is not to drive rents down to zero but to displace the incumbent rent-earner. It is not the absence of rents but the continuous threat to the survival of the incumbent rent-earner that defines a truly vibrant capitalist economy, i.e. each niche must be continually contested by new entrants. This does not imply, even if the market for labour is perfectly competitive, that an abnormal share of GDP goes to “capital”. Most new entrants fail and suffer economic losses in their bid to capture economic rents, and even a dominant incumbent may lose a significant proportion of past earned rents in futile attempts to defend its competitive position before its eventual demise.

This emphasis on disequilibrium points to the fact that the “optimum” state for a dynamically competitive capitalist economy is one of constant competitive discomfort and disorder. This perspective leads to a dramatically different policy emphasis from conventional theory, which universally focuses on increasing positive incentives to economic players and relying on the invisible hand to guide the economy to a better equilibrium. Both Schumpeter and Marx understood the importance of this competitive discomfort for the constant innovative dynamism of a capitalist economy – my point is simply that a universal discomfort of capital is also important to maintain distributive justice in a capitalist economy. In fact, it is the only way to do so without sacrificing the economy’s innovative dynamism.

Competition in monopolistically competitive markets manifests itself through two distinct forms of innovation: exploitation and exploration. Exploitation usually takes the form of what James Utterback identified as process innovation, with an emphasis on “real or potential cost reduction, improved product quality, and wider availability, and movement towards more highly integrated and continuous production processes.” As Utterback noted, such innovation is almost always driven by the incumbent firms. Exploitation is an act of optimisation under a known distribution i.e. it falls under the domain of homo economicus. In the language of fitness landscapes, exploitative process innovation is best viewed as competition around a local peak. Exploratory innovation, on the other hand (analogous to what Utterback identified as product innovation), occurs under conditions of significant irreducible uncertainty. Exploration is aimed at finding a significantly higher peak on the fitness landscape and, as Utterback noted, is almost always driven by new entrants (for a more detailed explanation of incumbent preference for exploitation and organisational rigidity, see my earlier post).
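The fitness-landscape metaphor can be made concrete with a toy sketch. What follows is purely my own illustration – the landscape, step sizes and number of jumps are arbitrary assumptions, not anything from the post. Exploitation is modelled as small hill-climbing steps around the current peak; exploration as long, uncertain jumps that are then refined locally.

    import math
    import random

    def fitness(x):
        # A rugged landscape: local ripples on top of one broad global peak near x = 8.
        return math.sin(x) + 0.3 * math.sin(5 * x) - 0.02 * (x - 8) ** 2

    def exploit(x, steps=200, step_size=0.05, rng=None):
        # Process-innovation analogue: small local moves, accepted only if they improve fitness.
        rng = rng or random.Random(0)
        for _ in range(steps):
            candidate = x + rng.uniform(-step_size, step_size)
            if fitness(candidate) > fitness(x):
                x = candidate
        return x

    def explore(start=2.0, jumps=20, rng=None):
        # Product-innovation analogue: long uncertain jumps, each refined locally afterwards.
        rng = rng or random.Random(1)
        best = start
        for _ in range(jumps):
            candidate = exploit(rng.uniform(0, 16), rng=rng)
            if fitness(candidate) > fitness(best):
                best = candidate
        return best

    incumbent = exploit(2.0)   # polishes the peak it already occupies
    entrant = explore(2.0)     # may land on, and then refine, a much higher peak
    print(f"exploitation: fitness {fitness(incumbent):.2f} at x = {incumbent:.2f}")
    print(f"exploration:  fitness {fitness(entrant):.2f} at x = {entrant:.2f}")

The hill-climber stays near the peak closest to its starting point, while the explorer usually finds a substantially higher one – which is the point of the metaphor.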

An Investment Theory of the Business Cycle

Soon after publishing the ‘General Theory’, Keynes summarised his thesis as follows: “given the psychology of the public, the level of output and employment as a whole depends on the amount of investment. I put it in this way, not because this is the only factor on which aggregate output depends, but because it is usual in a complex system to regard as the causa causans that factor which is most prone to sudden and wide fluctuation.” In Keynes’ view, the investment decision was undertaken in a condition of irreducible uncertainty, “influenced by our views of the future about which we know so little”. Just how critical the level of investment is in maintaining full employment is highlighted by GLS Shackle in his interpretation of Keynes’ theory: “In a money-using society which wishes to save some of the income it receives in payment for its productive efforts, it is not possible for the whole (daily or annual) product to be sold unless some of it is sold to investors and not to consumers. Investors are people who put their money on time-to-come. But they do not have to be investors. They can instead be liquidity-preferrers; they can sweep up their chips from the table and withdraw. If they do, they will give no employment to those who (in face of society’s propensity to save) can only be employed in making investment goods, things whose stream of usefulness will only come out over the years to come.”

If we accept this thesis, then it is no surprise that the post–2008 recovery has been quite so anaemic. Investment spending has remained low throughout the developed world, nowhere more so than in the United Kingdom. What makes this low level of investment even more surprising is the strength of the rebound in corporate profits and balance sheets – corporate leverage in the United States is as low as it has been for two decades and the proportion of cash in total assets as high as it has been for almost half a century. Notably, the United States has also experienced an unusual increase in labour productivity during the recession, which has exacerbated the disconnect between the recovery in GDP and employment. Some of these unusual patterns have been with us for much longer than the 2008 financial crisis. For example, the disconnect between GDP and employment in the United States has been obvious since at least 1990, and the 2003 recession too saw an unusual rise in labour productivity. The labour market has been slack for at least a decade. It is hard to differ from Paul Krugman’s intuition that the character of post–1980 business cycles has changed. Europe and Japan are not immune from these “structural” patterns either – the ‘corporate savings glut’ has been a problem in the United Kingdom since at least 2002, and Post-Keynesian economists have been pointing out the relationship between ‘capital accumulation’ and unemployment for a while, even attributing the persistently high unemployment in Europe to a lack of investment. Japan’s condition for the last decade is better described as a ‘corporate savings trap’ rather than a ‘liquidity trap’. Even in Greece, that poster child for fiscal profligacy, the recession is accompanied by a collapse in private sector investment.

A Theory of Business Investment

Business investments typically either operate upon the scale of operations (e.g. capacity, product mix) or change the fundamental character of operations (e.g. changes in process, product). The degree of irreducible uncertainty in capacity and product-mix decisions has fallen dramatically in the last half-century. The ability of firms to react quickly and effectively to changes in market conditions has improved dramatically with improvements in production processes and information technology – Zara being a well-researched example. Investments that change the very nature of business operations are what we typically identify as innovations. However, not all innovation decisions are subject to irreducible uncertainty either. In a seminal article, James March distinguished between “the exploration of new possibilities and the exploitation of old certainties. Exploration includes things captured by terms such as search, variation, risk taking, experimentation, play, flexibility, discovery, innovation. Exploitation includes such things as refinement, choice, production, efficiency, selection, implementation, execution.” Exploratory innovation operates under conditions of irreducible uncertainty whereas exploitation is an act of optimisation under a known distribution.

Investments in scaling up operations are most easily influenced by monetary policy initiatives which reduce interest rates and raise asset prices, or by direct fiscal policy initiatives which operate via the multiplier effect. In recent times, especially in the United States and United Kingdom, the reduction in rates has also directly facilitated the levering up of the consumer balance sheet and a reduction in the burden of servicing previously incurred consumer debt. The resulting boost to consumer spending and demand also stimulates businesses to invest in expanding capacity. Exploitative innovation requires the presence of price competition within the industry i.e. monopolies or oligopolies have little incentive to make their operations more efficient once demand for their product is essentially inelastic at the prevailing price. This sounds like an exceptional case but is in fact very common in critical industries such as finance and healthcare. Exploratory innovation requires not only competition amongst incumbent firms but competition from a constant and robust stream of new entrants into the industry. I outlined the rationale for this in a previous post:

Let us assume a scenario where the entry of new firms has slowed to a trickle, the sector is dominated by a few dominant incumbents and the S-curve of growth is about to enter its maturity/decline phase. To trigger off a new S-curve of growth, the incumbents need to explore. However, almost by definition, the odds that any given act of exploration will be successful are small. Moreover, the positive payoff from any exploratory search almost certainly lies far in the future. For an improbable shot at moving from a position of comfort to one of dominance in the distant future, an incumbent firm needs to divert resources from optimising and efficiency-increasing initiatives that will deliver predictable profits in the near future. Of course, if a significant proportion of its competitors adopt an exploratory strategy, even an incumbent firm will be forced to follow suit for fear of loss of market share. But this critical mass of exploratory incumbents never comes about. In essence, the state where almost all incumbents are content to focus their energies on exploitation is a Nash equilibrium.
On the other hand, the incentives of any new entrant are almost entirely skewed in favour of exploratory strategies. Even an improbable shot at glory is enough to outweigh the minor consequences of failure. It cannot be emphasised enough that this argument does not depend upon the irrationality of the entrant. The same incremental payoff that represents a minor improvement for the incumbent is a life-changing event for the entrepreneur. When there exists a critical mass of exploratory new entrants, the dominant incumbents are compelled to follow suit and the Nash equilibrium of the industry shifts towards the appropriate mix of exploitation and exploration.
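To make the asymmetry in that argument concrete, here is a toy payoff comparison. The probabilities and payoffs are my own arbitrary numbers, purely for illustration: each player simply compares the expected value of the same exploratory bet against its own alternative.

    # Toy numbers, purely illustrative: the same exploratory bet evaluated by an
    # incumbent (with safe rents to protect) and by a new entrant (with little to lose).
    P_SUCCESS = 0.05      # exploratory bets rarely pay off
    PRIZE = 500.0         # present value of a successful exploratory innovation
    COST = 10.0           # resources sunk into the speculative project

    def explore_ev():
        return P_SUCCESS * PRIZE - COST

    def best_choice(alternative_payoff):
        return "explore" if explore_ev() > alternative_payoff else "exploit"

    # With no competitive threat, exploitation yields the incumbent a safe 50 per period.
    print("incumbent, no exploratory rivals:", best_choice(alternative_payoff=50.0))  # exploit
    # The entrant's outside option is roughly nothing.
    print("new entrant:                     ", best_choice(alternative_payoff=0.0))   # explore
    # Once exploratory rivals erode its market share, the incumbent's 'safe' payoff collapses.
    print("incumbent, under threat:         ", best_choice(alternative_payoff=5.0))   # explore

The same expected payoff that is a rounding error next to the incumbent’s safe rents is transformative for the entrant, and the incumbent only switches once exploratory rivals have already eroded the value of standing still – which is the shift in the Nash equilibrium described above.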

A Theory of Employment

My fundamental assertion is that a constant and high level of uncertain, exploratory investment is required to maintain a sustainable and resilient state of full employment. And as I mentioned earlier, exploratory investment driven by product innovation requires a constant threat from new entrants.

Long-run increases in aggregate demand require product innovation. As Rick Szostak notes:

While in the short run government spending and investment have a role to play, in the long run it is per capita consumption that must rise in order for increases in per capita output to be sustained…..the reason that we consume many times more than our great-grandparents is not to be found for the most part in our consumption of greater quantities of the same items which they purchased…The bulk of the increase in consumption expenditures, however, has gone towards goods and services those not-too-distant forebears had never heard of, or could not dream of affording….Would we as a society of consumers/workers have striven as hard to achieve our present incomes if our consumption bundle had only deepened rather than widened? Hardly. It should be clear to all that the tremendous increase in per capita consumption in the past century would not have been possible if not for the introduction of a wide range of different products. Consumers do not consume a composite good X. Rather, they consume a variety of goods, and at some point run into a steeply declining marginal utility from each. As writers as diverse as Galbraith and Marshall have noted, if declining marginal utility exists with respect to each good it holds over the whole basket of goods as well…..The simple fact is that, in the absence of the creation of new goods, aggregate demand can be highly inelastic, and thus falling prices will have little effect on output.

Therefore, when cost-cutting and process optimisation in an industry enable a product to be sold at a lower price, the economy may not be able to reorganise back to full employment simply through increased demand for that particular product. In the early stages of a product, when demand is sufficiently elastic, process innovation can increase employment. But as the product matures, process improvements have a steadily negative effect on employment.
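A toy piece of arithmetic (my own illustration, with deliberately crude assumptions such as one-for-one pass-through of the cost saving into price and a constant demand elasticity) makes the elasticity point explicit:

    def employment_after_process_innovation(elasticity, productivity_gain=0.25,
                                            workers_before=100.0):
        # Assume the unit-cost reduction is fully passed through to price.
        price_fall = productivity_gain / (1 + productivity_gain)   # ~20% lower price
        quantity_growth = elasticity * price_fall                  # demand response
        output_per_worker = 1 + productivity_gain
        return workers_before * (1 + quantity_growth) / output_per_worker

    # Young product, demand still elastic: employment rises (here from 100 to ~112).
    print(round(employment_after_process_innovation(elasticity=2.0)))
    # Mature product, demand inelastic: the same innovation cuts employment (to ~88).
    print(round(employment_after_process_innovation(elasticity=0.5)))

Once demand for the mature product is inelastic, every further gain in output per worker shrinks the workforce needed to serve it.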

Eventually, a successful reorganisation back to full employment entails creating demand for new products. If such new products were simply an addition to the set of products that we consumed, disruption would be minimal. But almost any significant new product that arises from exploratory investment also destroys an old product. The tablet cannibalises the netbook, the smartphone cannibalises the camera etc. This of course is the destruction in Schumpeter’s creative destruction. It is precisely because of this cannibalistic nature of exploratory innovation that established incumbents rarely engage in it, unless compelled to do so by the force of new entrants. Burton Klein put it well: “firms involved in such competition must compare two risks: the risk of being unsuccessful when promoting a discovery or bringing about an innovation versus the risk of having a market stolen away by a competitor: the greater the risk that a firm’s rivals take, the greater must be the risks to which it must subject itself for its own survival.” Even when new firms enter a market at a healthy pace, it is rare that incumbent firms are successful at bringing about disruptive exploratory changes. When the pace of dynamic competition is slow, incumbents can choose to simply maintain slack and wait for a promising new technology to emerge which they can buy up, rather than risking investment in some uncertain new technology.

We need exploratory investment because this expansion of the economy into its ‘adjacent possible’ does not derive its thrust from the consumer but from the entrepreneur. In other words, new wants are not demanded by the consumers but are instead created by entrepreneurs such as Steve Jobs. In the absence of dynamic competition from new entrants, wants remain limited.

In essence, this framework incorporates technological innovation into a distinctly “Chapter 12” Keynesian view of the business cycle. Although my views are far removed from macroeconomic orthodoxy, they are not quite so radical that they have no precedents whatsoever. My views can be seen as a simple extension of Burton Klein’s seminal work outlined in his books ‘Dynamic Economics’ and ‘Prices, wages, and business cycles: a dynamic theory’. But the closest parallels to this explanation can be found in Rick Szostak’s book ‘Technological innovation and the Great Depression’. Szostak uses an almost identical rationale to explain unemployment during the Great Depression, “how an abundance of labor-saving production technology coupled with a virtual absence of new product innovation could affect consumption, investment and the functioning of the labor market in such a way that a large and sustained contraction in employment would result.”

As I have hinted at in a previous post, this is not a conventional “structural” explanation of unemployment. Szostak explains the difference: “An alternative technological argument would be that the skills required of the workforce changed more rapidly in the interwar period than did the skills possessed by the workforce. Thus, there were enough jobs to go around; workers simply were not suited to them, and a painful decade of adjustment was required…I argue that in fact there simply were not enough jobs of any kind available.” In other words, this is a partly technological explanation for the shortfall in aggregate demand.

The Invisible Foot and New Firm Entry

The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy:

Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels….But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.

For disruptive innovation to persist, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition”. Burton Klein’s great contribution, along with Gunnar Eliasson’s, was to highlight the critical importance of the entry of new firms in maintaining the efficacy of the invisible foot. Klein believed that

the degree of risk taking is determined by the robustness of dynamic competition, which mainly depends on the rate of entry of new firms. If entry into an industry is fairly steady, the game is likely to have the flavour of a highly competitive sport. When some firms in an industry concentrate on making significant advances that will bear fruit within several years, others must be concerned with making their long-run profits as large as possible, if they hope to survive. But after entry has been closed for a number of years, a tightly organised oligopoly will probably emerge in which firms will endeavour to make their environments highly predictable in order to make their short-run profits as large as possible….Because of new entries, a relatively concentrated industry can remain highly dynamic. But, when entry is absent for some years, and expectations are premised on the future absence of entry, a relatively concentrated industry is likely to evolve into a tight oligopoly. In particular, when entry is long absent, managers are likely to be more and more narrowly selected; and they will probably engage in such parallel behaviour with respect to products and prices that it might seem that the entire industry is commanded by a single general!

This argument does not depend on incumbent firms leaving money on the table – on the contrary, they may redouble their attempts at cost reduction via process innovation in times of deficient demand. Rick Szostak documents how “despite the availability of a massive amount of inexpensive labour, process innovation would continue in the 1930s. Output per man-hour in manufacturing rose by 25% in the 1930s…..national output was higher in 1939 than in 1929, while employment was over two million less.”

Macroeconomic Policy and Exploratory Product Innovation

Monetary policy has been the preferred cure for insufficient aggregate demand throughout and since the Great Moderation. The argument goes that lower real rates, higher inflation and higher asset prices will increase investment via Tobin’s q and increase consumption via the wealth effect and the reduced reward to saving, all bound together in the virtuous cycle of the multiplier. If monetary policy is insufficient, fiscal policy may be deployed with a focus on either directly increasing aggregate demand or providing businesses with supply-side incentives such as tax cuts.
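For reference, the textbook definition of Tobin’s q (a standard formulation, not specific to this post) is

    q = \frac{\text{market value of installed capital}}{\text{replacement cost of that capital}}, \qquad \text{with new investment encouraged whenever } q > 1 .

The transmission story is that easier money raises the numerator, pushing q above one and thereby inducing firms to add capital; the discussion below questions how well this mechanism works for exploratory investment.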

There is a common underlying theme to all of the above policy options – they focus on the question “how do we make businesses want to invest?” i.e. on positively incentivising incumbent businesses and startups and trusting that the invisible hand will do the rest. In the context of exploratory investments, the appropriate question is instead “how do we make businesses have to invest?” i.e. how do we compel incumbent firms to invest in speculative projects in order to defend their rents, or lose out to new entrants if they fail to do so. But the problem isn’t just that these policies are ineffectual. Many of the policies that focus on positive incentives weaken the competitive discomfort from the invisible foot by helping to entrench the competitive position of incumbent corporates and reducing their incentive to engage in exploratory investment. It is in this context that interventions such as central bank purchases of assets and fiscal stimulus measures that dole out contracts to the favoured do permanent harm to the economy.

The division that matters from the perspective of maintaining the appropriate level of exploratory investment and product innovation is not monetary vs fiscal but the division between existing assets and economic interests on the one hand and new firms/entrepreneurs on the other. Almost all monetary policy initiatives focus on purchasing existing assets from incumbent firms or reducing real rates for incumbent banks and their clients. A significant proportion of fiscal policy does the same. The implicit assumption is, as Nick Rowe notes, that there is “high substitutability between old and new investment projects, so the previous owners of the old investment projects will go looking for new ones with their new cash”. This assumption does not hold in the case of exploratory investments – asset-holders will chase after a replacement asset, but that replacement will most likely be an existing investment project, not a new one. The result of the intervention will be an increase in the prices of such assets, but it will not feed into any “real” new investment activity. In other words, the Tobin’s q effect is negligible for exploratory investments in the short run and in fact negative in the long run, as the accumulated rents derived from monetary and fiscal intervention reduce the need for incumbent firms to engage in such speculative investment.

A Brief History of the Post-WW2 United States Macroeconomy

In this section, I’m going to use the above framework to make sense of the evolution of the macroeconomy in the United States after WW2. The framework is relevant for post–70s Europe and Japan as well, which is why the ‘investment deficit problem’ afflicts almost the entire developed world today. But the details differ quite significantly, especially with regard to the distributional choices made in different countries.

The Golden Age

The 50s and the 60s are best characterised as a period of “order for all” – as Bill Lazonick put it, one of “oligopolistic competition, career employment with one company, and regulated financial markets”. The ‘Golden Age’ delivered prosperity for a few reasons:

  • As Minsky noted, the financial sector had only just begun the process of adapting to and circumventing regulations designed to constrain and control it. As a result, the Fed had as much control over credit creation and bank policies as it would ever have.
  • The pace of both product and process innovation had slowed down significantly in the real economy, especially in manufacturing. Much of the productivity growth came from product innovations that had already been made prior to WW2. As Alexander Field explains (on the slowdown in manufacturing TFP): “Through marketing and planned obsolescence, the disruptive force of technological change – what Joseph Schumpeter called creative destruction – had largely been domesticated, at least for a time. Whereas large corporations had funded research leading to a large number of important innovations during the 1930s, many critics now argued that these behemoths had become obstacles to transformative innovation, too concerned about the prospect of devaluing rent-yielding income streams from existing technologies. Disruptions to the rank order of the largest U.S. industrial corporations during this quarter century were remarkably few. And the overall rate of TFP growth within manufacturing fell by more than a percentage point compared with the 1930s and more than 3.5 percentage points compared with the 1920s.”
  • Apart from the fact that the economy had to catch up to earlier product innovation, the dominant position of the US in the global economy post WW2 limited the impact from foreign competition.

It was this peculiar confluence of factors that enabled a system of “order and stability for all” without triggering a complete collapse in productivity or financial instability – a system where both labour and capital were equally strong and protected and shared in the rents available to all.

Stagflation

The 70s are best described as the time when this ordered, stabilised system could not be sustained any longer.

  • By the late 60s, the financial sector had adapted to the regulatory environment. Innovations such as the Fed Funds market and the Eurodollar market had gradually come into being, so that credit creation and bank lending were increasingly difficult for the Fed to control. Reserves were no longer a binding constraint on bank operations.
  • The absence of real competition, either on the basis of price or from new entrants, meant that both process and product innovation were low, just as during the Golden Age. The difference was that there were no more low-hanging fruit to pick from past product innovations. Therefore, a secular slowdown in productivity took hold.
  • The rest of the world had caught up and foreign competition began to intensify.

As Burton Klein noted, “competition provides a deterrent to wage and price increases because firms that allow wages to increase more rapidly than productivity face penalties in the form of reduced profits and reduced employment”. In the absence of adequate competition, demand is inelastic and there is little pressure to reduce costs. As the level of price/cost competition reduces, more and more unemployment is required to keep inflation under control. Even worse, as Klein noted, it only takes the absence of competition in a few key sectors for the disease to afflict the entire economy. Controlling overall inflation in the macroeconomy when a few key sectors are sheltered from competitive discomfort requires monetary action that will extract a disproportionate amount of pain from the remainder of the economy. Stagflation is the inevitable consequence in a stabilised economy suffering from progressive competitive sclerosis.

The “Solution”

By the late 70s, the pressures and conflicts of the system of “order for all” meant that change was inevitable. The result was what is commonly known as the neoliberal revolution. There are many different interpretations of this transition. To right-wing commentators, neoliberalism signified a much-needed transition towards a free-market economy. Most left-wing commentators lament the resultant supremacy of capital over labour and rising inequality. For some, the neoliberal era started with Paul Volcker having the courage to inflict the required pain to break the back of inflationary forces and continued with central banks learning the lessons of the past which gave us the Great Moderation.

All these explanations are relevant but in my opinion, they are simply a subset of a larger and simpler explanation. The prior economic regime was a system where both the invisible hand and the invisible foot were shackled – firms were protected but their profit motive was also shackled by the protection provided to labour. The neoliberal transition unshackled the invisible hand (the carrot of the profit motive) without ensuring that all key sectors of the economy were equally subject to the invisible foot (the stick of failure and losses and new firm entry). Instead of tackling the root problem of progressive competitive and democratic sclerosis and cronyism, the neoliberal era provided a stop-gap solution. “Order for all” became “order for the classes and disorder for the masses”. As many commentators have noted, the reality of neoliberalism is not consistent with the theory of classical liberalism. Minsky captured the hypocrisy well: “Conservatives call for the freeing of markets even as their corporate clients lobby for legislation that would institutionalize and legitimize their market power; businessmen and bankers recoil in horror at the prospect of easing entry into their various domains even as technological changes and institutional evolution make the traditional demarcations of types of business obsolete. In truth, corporate America pays lip service to free enterprise and extols the tenets of Adam Smith, while striving to sustain and legitimize the very thing that Smith abhorred – state-mandated market power.”

The critical component of this doctrine is the emphasis on macroeconomic and financial sector stabilisation, implemented primarily through monetary policy focused on the banking and asset price channels of policy transmission:

  • Any significant fall in asset prices (especially equity prices) has been met with a strong stimulus from the Fed i.e. the ‘Greenspan Put’. In his plea for increased quantitative easing via purchase of agency MBS, Joe Gagnon captured the logic of this policy: “This avalanche of money would surely push up stock prices, push down bond yields, support real estate prices, and push up the value of foreign currencies. All of these financial developments would stimulate US economic activity.” In other words, prop up asset prices and the real economy will mend itself.
  • Similarly, Fed and Treasury policy has ensured that none of the large banks can fail. In particular, bank creditors have been shielded from any losses. The argument is that allowing banks to fail will cripple the flow of credit to the real economy and result in a deflationary collapse that cannot be offset by conventional monetary policy alone. This is the logic for why banks were allowed access to a panoply of Federal Reserve liquidity facilities at the height of the crisis. In other words, prop up the banks and the real economy will mend itself.

In this increasingly financialised economy, “the increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation.” This tilt towards exploitation/cost-reduction without exploration kept inflation in check but it also implied a prolonged period of sub-par wage growth and a constant inability to maintain full employment unless the consumer or the government levered up. For the neo-liberal revolution to sustain a ‘corporate welfare state’ in a democratic system, the absence of wage growth necessitated an increase in household leverage for consumption growth to be maintained. The monetary policy doctrine of the Great Moderation exacerbated the problem of competitive sclerosis and the investment deficit but it also provided the palliative medicine that postponed the day of reckoning. The unshackling of the financial sector was a necessary condition for this cure to work its way through the economy for as long as it did.

It is this focus on the carrot of higher profits that also triggered the widespread adoption of high-powered incentives such as stock options and bonuses to align manager and stockholder incentives. When the risk of being displaced by innovative new entrants is low, high-powered managerial incentives tilt the firm towards process innovation, cost reduction, optimisation of leverage and the like. From the stockholders’ and managers’ perspective, the focus on short-term profits is a feature, not a bug.

The Dénouement

So long as unemployment and consumption could be propped up by increasing leverage from the consumer and/or the state, the long-run shortage of exploratory product innovation and the stagnation in wages could be swept under the rug and economic growth could be maintained. But there is every sign that the household sector has reached a state of peak debt and the financial system has reached its point of peak elasticity. The policy that worked so well during the Great Moderation is now simply focused on preventing the collapse of the cronyist and financialised economy. The system has become so fragile that Minsky’s vision is more correct than ever – an economy at full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation. Instead, the goal of full employment seems to have been abandoned in order to postpone the inevitable collapse. This only replaces an economic fragility with a deeper social fragility.

The aim of full employment is made even harder to achieve by the acceleration of process innovation due to advances in artificial intelligence and computerisation. Process innovation gives us technological unemployment while, at the same time, the absence of exploratory product innovation leaves us stuck in the Great Stagnation.

 

The solution preferred by the left is to somehow recreate the golden age of the 50s and the 60s i.e. order for all. Apart from the impossibility of retrieving the docile financial system of that age (which Minsky understood), the solution of micro-stability for all implies an environment of permanent innovative stagnation. The Schumpeterian solution is to transform the system into one of disorder for all, masses and classes alike. Micro-fragility is the key to macro-resilience, but this fragility must be felt by all economic agents, labour and capital alike. In order to end the stagnation and achieve sustainable full employment, we need to allow incumbent banks and financialised corporations to collapse and to dismantle the barriers to entry of new firms that pervade the economy (e.g. occupational licensing, the patent system). But this does not imply that the macroeconomy must suffer a deflationary contraction. Deflation can be prevented in a simple and effective manner with a system of direct transfers to individuals, as Steve Waldman has outlined. This solution reverses the flow of rents that has exacerbated inequality over the past few decades, as well as tackling the cronyism and demosclerosis that are crippling innovation and preventing full employment.


Written by Ashwin Parameswaran

November 2nd, 2011 at 7:29 pm

The Case For Allowing Banks to Fail

with 55 comments

In many previous posts on this blog, I outlined why allowing the incumbent banks to fail when they become insolvent is a pre-requisite for achieving macroeconomic resilience. In my previous post I outlined how allowing such failure can be managed without causing a deflationary economic collapse in the process. Nevertheless, there are many who believe that a no-bailouts policy is tantamount to ‘financial romanticism’. In criticising the no-bailouts approach, Krugman deploys three arguments:

Policy makers will intervene anyway

It is undeniably true that policy makers will almost certainly move to stabilise the banking sector in times of economic distress. The aim of my ‘program’ was simply to sketch out a possible alternative that could be deployed rapidly during a crisis. Although I have some sympathy for policy makers asked to stabilise the economy during the largest financial crisis since the Great Depression, it is worth noting that the same policy of implicit and explicit support has been extended to failing banks at almost every point since WW2 – even in many instances when the fallout would have been much smaller. It is this prolonged stabilisation that has left us with such a fragile financial system.

Are guarantees and safety net plus regulation the only feasible strategy?

I have no disagreement with the argument that “bank regulation is important even in the absence of bailouts”. There are many industries which are regulated simply for the purpose of protecting their customers, and banking is no different. However, I disagree strongly with the notion that regulation can prevent the abuse of these guarantees. The history of banking is one of repeated circumvention of regulations by banks, a process that has only accelerated with the increased completeness of markets. Just because deregulation may have accelerated the extraction of the moral hazard subsidy (which it almost certainly did) does not imply that re-regulation can solve the problem. Banks now have at their disposal the ability to engineer synthetic exposures tailored to maximise rent extraction – the ‘synthetic CDO super-senior tranche’ that was at the heart of the losses in the investment banks in 2008 was one such invention. It is the completeness of this menu of options that banks possess to game regulations that distinguishes banking from other regulated industries. Minsky was well aware of the impact of financial innovation on the resilience of the financial system, which is why he understood that the so-called golden age of the 50s and the 60s was “an accident of history, which was due to the financial residue of World War 2 following fast upon a great depression”.

Maturity Transformation and the Diamond-Dybvig framework

The core rationale of the Diamond-Dybvig framework is that banks are susceptible to self-fulfilling runs due to their unstable balance sheet comprising long-maturity illiquid assets and on-demand liquid liabilities i.e. deposits. The implicit rationale is that maturity transformation has a beneficial impact. As William Dudley explains it, “the need for maturity transformation arises from the fact that the preferred habitat of borrowers tends toward longer-term maturities used to finance long-lived assets such as a house or a manufacturing plant, compared with the preferred habitat of investors, who generally have a preference to be able to access their funds quickly. Financial intermediaries act to span these preferences, earning profits by engaging in maturity transformation—borrowing shorter-term in order to finance longer-term lending”.

But what if there is no maturity mismatch for banks to intermediate? In a previous post I have argued that “structural changes in the economy have drastically reduced and even possibly eliminated the need for society to promote and subsidise maturity transformation.” The primary change in this regard is the growth of assets invested in pension funds and life insurers. Through these vehicles, households provide capital that strongly prefers long-maturity investments, matching the long-tenor liabilities of the vehicles themselves.

But how significant is this phenomenon and what does it mean for the economy-wide mismatch? In a recent research report, Patrick Artus at Natixis dug out the relevant numbers which I have summarised below:

In both the United States and Europe, household long-term savings (which includes pensions) is more than sufficient to meet the long-term borrowing needs of both the corporate and the household sector. In the case of the United States which issues its own currency, the need for maturity transformation can simply be eliminated by adjusting the government debt maturity profile accordingly. It is worth noting that even a significant proportion of the government debt in the above table is of a fairly short maturity.

The boom that ended in 2008 was characterised by an expansion in the volume of long-term credit investments, but as Lord Adair Turner observed, in the United Kingdom “only a small proportion of those ended up in the balance sheets of long term hold-to-maturity investors such as pension funds or insurance companies. Instead the majority of UK residential mortgage-backed securities (RMBS) in particular were held by investing institutions, such as SIVs and mutual funds, behind which stood – at the end of the chain – short-term investors.” As Minsky might have predicted, maturity transformation was simply a tool to enter into a levered carry trade at the taxpayers’ expense.

In a world where maturity transformation does not even improve the efficiency of the economic system, Diamond-Dybvig and much of the rationale for our current banking and monetary system simply do not hold. The implication of this is not that we must ban maturity transformation. As Rajiv Sethi points out, even non-banking firms engage in maturity transformation and any attempt to stamp it out is futile. However, it is crucial that firms (banks or otherwise) that engage in maturity transformation are allowed to fail when they run into trouble.


Written by Ashwin Parameswaran

October 10th, 2011 at 3:53 pm

Posted in Uncategorized

A Simple Policy Program for Macroeconomic Resilience

with 49 comments

The core logic behind my critique of macroeconomic stabilisation is that stability (and stabilisation) breeds systemic fragility. But this does not imply an opposition to all macroeconomic intervention, especially in a scenario when past stabilisation has left the macroeconomy in a fragile state. It simply insists on restricting our interventions to actions that preserve the essential adaptive character and creative destruction of our economic system.

A resilient framework of macroeconomic interventions must satisfy the following conditions:

  • a focus on mitigating the most damaging consequences of disturbances on the macroeconomy rather than stamping out the disturbance at its source.
  • a focus on discretionary interventions targeted at individuals rather than corporate limited-liability entities and limited to times of systemic crises.
  • emphasis on maintaining general economic capacities and competences rather than protecting the specific incumbent entities that provide an economic function at any given point of time.

In theory, monetary and fiscal policy interventions can easily fulfil all these criteria. In practice, however, the history of both interventions is characterised by a systematic violation of all of them. The long history of propping up insolvent financial institutions via the TBTF guarantee and central bank ‘liquidity facilities’, combined with the doling out of fiscal favours to incumbent corporates, has left us with a fragile and unequal economic system. As Michael Lewis puts it, we have “socialism for the capitalists and capitalism for everybody else”, and the system shows no signs of changing despite the abysmal results so far. To paraphrase Robert Reich, behind every potential “resolution” of a debt crisis lies yet another bailout for the banks.

The pro-bailout proponents argue that there is no other option. According to them, allowing the banks to fail will bring about a certain economic collapse. In this post, I will argue against the notion that bank bailouts are inevitable and unavoidable, and lay out a coherent and simple alternative policy program to get us out of the mess that we’re currently in without having to undergo a systemic collapse.

My policy proposal has three legs all of which need to be implemented simultaneously:

  • Allow Failure: Allow insolvent banks and financialised corporations to fail.
  • The Helicopter Drop: Institute a system of direct transfers to individuals (a helicopter drop) to mitigate the deflationary fallout from bank failure.
  • Entry of New Banks: Allow fast-track approvals of new banks to restore banking capacity in the economy.

The argument against allowing bank and corporate failure is that it will trigger off a catastrophic deflationary collapse in the economy while at the same time crippling the lending capacity available to businesses and households. The helicopter drop of direct transfers helps prevent a deflationary collapse and the entry of new banks helps maintain lending capacity thus negating both concerns.

The Helicopter Drop

In order to promote system resilience and minimise moral hazard, any system of direct transfers must be directed only at individuals, and it must be a discretionary policy tool utilised only to mitigate the fallout of systemic crises. The discretionary element is crucial, as tail-risk protection directed at individuals has minimal moral hazard implications so long as it is uncertain even to the slightest degree. Transfers must not be directed to corporate entities – even uncertain tail-risk protection provided to corporates will eventually be gamed. The critical difference between individuals and corporates in this regard is the ability of stockholders and creditors to spread their bets across corporate entities and ensure that failure of any one bet has only a limited impact on the individual investors’ finances. In an individual’s case, the risk of failure is by definition concentrated, and the uncertain nature of the transfer will ensure that moral hazard implications are minimal. This conception of transfers as a macro-intervention tool is very different from ideas that assume constant, regular transfers or a steady safety net, such as an income guarantee, job guarantee or social credit.
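As a purely illustrative sketch of that structure (and nothing more), the snippet below hard-codes the two constraints: only individuals are eligible, and nothing is paid in normal times. The trigger here is a hypothetical indicator with an arbitrary threshold; in practice the decision would be a discretionary judgement, which is precisely what keeps the protection uncertain.

    from dataclasses import dataclass

    @dataclass
    class CrisisIndicator:
        ngdp_shortfall: float      # hypothetical: fractional shortfall of NGDP from trend
        large_bank_failures: int   # hypothetical: count of large-bank failures

    def transfer_per_person(indicator, threshold=0.05, base_transfer=1000.0):
        """Per-person transfer for the period: zero in normal times, positive only
        when the (hypothetical) systemic indicator breaches its threshold.
        Corporate entities are never eligible."""
        systemic_crisis = (indicator.ngdp_shortfall > threshold
                           or indicator.large_bank_failures > 0)
        if not systemic_crisis:
            return 0.0
        # Scale the transfer with the depth of the shortfall (one arbitrary choice).
        return base_transfer * (1 + indicator.ngdp_shortfall / threshold)

    print(transfer_per_person(CrisisIndicator(0.02, 0)))   # normal times: 0.0
    print(transfer_per_person(CrisisIndicator(0.08, 2)))   # systemic crisis: 2600.0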

Entry of New Banks

I have discussed in a previous post why the entry of new banks allows us to preserve bank lending capacity without bailing out the incumbent banks. A similar idea has been laid out by David Merkel as a more resilient way to undertake TARP-like interventions. The fundamental principle is quite simple – system resilience refers to the ability to retain the same function while adapting to a disturbance. It does not imply that the function must be provided by the same incumbent entities. In fact, we are already beginning to see an expansion in non-bank credit as the era of low borrowing costs due to the implicit guarantee to bank creditors comes to an end. New banks unencumbered by the need to make up their past losses will be much better positioned to meet the credit demand from the real economy. The process of new firm entry in banking can be encouraged in many ways:

  • Fast-track approvals
  • Reduced capital requirements
  • TARP-like seed capital participation as David Merkel has laid out.

 

 

Many commentators have criticised the ‘Occupy Wall Street’ movement for not having an agenda and a list of demands. But as Michael Lewis points out, their protests are not without merit. The slogan ‘We are the 99 percent’ captures the essence of the problem, which is the explosion of the share of national income captured by the richest 1% of the population. If this inequality had been perceived to be fair, or if it had occurred at a time of prosperity for the masses, it is unlikely that there would have been any protest at all. But as I have pointed out, the rise in income captured by the richest 1% is primarily driven by the rents captured by and through the financial sector. The same doctrine of macroeconomic stabilisation that acted as the source of these rents has also transformed the economy into a financialised and cronyist system unable to sustain a broad-based and sustainable recovery. Simply allowing the failure of insolvent banks and financialised corporations and putting an end to the flow of rents towards the banks will go a long way towards reducing the level of inequality in the economy. At the same time, the entry of new firms will restore the economy’s competitive and innovative dynamism.


Written by Ashwin Parameswaran

October 5th, 2011 at 5:38 pm

Macroeconomic Stabilisation and Financialisation in The Real Economy

with 35 comments

The argument against stabilisation is akin to a broader, more profound form of the moral hazard argument. But the ecological ‘systems’ approach is much more widely applicable than the conventional moral hazard argument for a couple of reasons:

  • The essence of the Minskyian explanation is not that economic agents get fooled by the period of stability or that they are irrational. It is that there are sufficient selective forces (especially amongst principal-agent relationships) in the modern economy that the moral hazard outcome can be achieved even without any active intentionality on the part of economic agents to game the system.
  • The micro-prudential consequences of stabilisation and moral hazard are dwarfed by their macro-prudential systemic consequences. The composition of agents changes and becomes less diverse as those firms and agents that try to follow more resilient or less leveraged strategies will be outcompeted and weeded out – this loss of diversity is exacerbated by banks’ adaptation to the intervention strategies preferred by central banks in order to minimise their losses. And most critically, the suppression of disturbances increases the connectivity and reduces the ‘patchiness’ and modularity of the macroeconomic system. In the absence of disturbances, connectivity builds up within the network, both within and between scales. Increased within-scale connectivity increases the severity of disturbances and increased between-scale connectivity increases the probability that a disturbance at a lower level will propagate up to higher levels and cause systemic collapse (a toy sketch of this last point follows below).
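The connectivity point can be illustrated with a toy contagion simulation. It is entirely my own construction – the network size, link probabilities and contagion threshold are arbitrary assumptions – but it captures the mechanism: the same local disturbance that stays contained in a modular, ‘patchy’ network propagates system-wide once between-module connectivity has built up.

    import random

    def make_network(n_modules=5, module_size=10, p_within=0.8, p_between=0.02, seed=42):
        # Random network of 'patches': dense links within a module, sparse links between them.
        rng = random.Random(seed)
        n = n_modules * module_size
        neighbours = {i: set() for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                same_module = (i // module_size) == (j // module_size)
                if rng.random() < (p_within if same_module else p_between):
                    neighbours[i].add(j)
                    neighbours[j].add(i)
        return neighbours

    def cascade_size(neighbours, module_size=10, contagion_threshold=3):
        # Local disturbance: one whole module fails; a node then fails once
        # `contagion_threshold` or more of its neighbours have failed.
        failed = set(range(module_size))
        spreading = True
        while spreading:
            spreading = False
            for node, nbrs in neighbours.items():
                if node not in failed and len(nbrs & failed) >= contagion_threshold:
                    failed.add(node)
                    spreading = True
        return len(failed)

    patchy = make_network(p_between=0.02)    # modular: few links between modules
    coupled = make_network(p_between=0.30)   # connectivity has built up between modules
    print("patchy network, nodes failed: ", cascade_size(patchy))    # disturbance stays local
    print("coupled network, nodes failed:", cascade_size(coupled))   # disturbance goes systemic

The mechanism, not the particular numbers, is the point: modularity acts as a firewall, and the slow accumulation of cross-links during the quiet period is what turns a local failure into a systemic one.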

Macro-stabilisation therefore breeds fragility in the financial sector. But what about the real economy? One could argue that in the long run, it is creative destruction in the real economy that drives economic growth and surely macro-stabilisation does not impede the pace of long-run innovation? Moreover, even if non-financial economic agents were ‘Ponzi borrowers’, wouldn’t real economic shocks be sufficient to deliver the “disturbances” consistent with macroeconomic resilience? Unfortunately, the assumption that nominal income stabilisation has no real impact is too simplistic. Macroeconomic stabilisation is one of the key drivers of the process of financialisation through which it transmits financial fragility throughout the real economy and hampers the process of exploratory innovation and creative destruction.

Financialisation is a term with many definitions. Since my focus is on financialisation in the corporate domain (rather than in the household sector), Greta Krippner’s definition of financialisation as a “pattern of accumulation in which profit making occurs increasingly through financial channels rather than through trade and commodity production” is closest to the mark. But from a resilience perspective, it is more accurate to define financialisation as a “pattern of accumulation in which risk-taking occurs increasingly through financial channels rather than through trade and commodity production”.

In the long run, creating any source of stability in a capitalist economy incentivises economic agents to realign themselves to exploit that source of security and thereby reduce their own risk. Just as banks adapt to the intervention strategies preferred by central banks by taking on more “macro” risks, macro-stabilisation incentivises real economy firms to shed idiosyncratic micro-risks and take on financial risks instead. Suppressing nominal volatility encourages economic agents to shed real risks and take on nominal risks. In the presence of the Greenspan/Bernanke put, a strategy focused on “macro” asset price risks and leverage outcompetes strategies focused on “risky” innovation. Just as banks that exploit the guarantees offered by central banks outcompete those that don’t, real economy firms that realign themselves to become more bank-like outcompete those that choose not to.
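A toy expected-return comparison makes the incentive explicit. The numbers below are my own arbitrary assumptions, purely for illustration, and the comparison ignores the risk of ruin that would normally make a levered macro bet unattractive – which is exactly the risk the put removes.

    import random

    rng = random.Random(0)
    # Hypothetical aggregate market returns: modest mean, plenty of downside.
    market = [rng.gauss(0.01, 0.15) for _ in range(100_000)]

    def levered_macro(returns, leverage=3.0, put_floor=None):
        # With a put, drawdowns below the floor are absorbed by the stabiliser.
        floored = [max(r, put_floor) if put_floor is not None else r for r in returns]
        return leverage * sum(floored) / len(floored)

    def exploratory_innovation(n=100_000, p_success=0.2, payoff=0.60, loss=-0.10):
        # Idiosyncratic exploratory bet: mostly fails, occasionally pays off; nobody cushions it.
        return sum(payoff if rng.random() < p_success else loss for _ in range(n)) / n

    print(f"levered macro, no put:   {levered_macro(market):+.1%}")
    print(f"levered macro, with put: {levered_macro(market, put_floor=-0.05):+.1%}")
    print(f"exploratory innovation:  {exploratory_innovation():+.1%}")

Without the put, the exploratory bet holds its own against the levered macro strategy; with a floor under aggregate asset prices, the levered, bank-like strategy dominates, and firms that adopt it outgrow those that don’t.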

The poster child for this dynamic is the transformation of General Electric during the Jack Welch Era, when “GE’s no-growth, blue-chip industrial businesses were run for profits and to maintain the AAA credit rating which was then used to expand GE Capital.” Again, the financialised strategy outcompetes all others and drives out “real economy” firms. As Doug Rushkoff observed, “the closer to the creation of value you get under this scheme, the farther you are from the money”. General Electric’s strategy is an excellent example of how financialisation is not just a matter of levering up the balance sheet. It could just as easily be focused on aggressively extending leverage to one’s clients, a strategy that is just as adept at delivering low-risk profits in an environment where the central bank is focused on avoiding even the smallest snap-back in an elastic, over-extended monetary system. When central bankers are focused on preventing significant pullbacks in equity prices (the Greenspan/Bernanke put), then real-economy firms are incentivised to take on more systematic risk and reduce their idiosyncratic risk exposure.

Some Post-Keynesian and Marxian economists also claim that this process of financialisation is responsible for the reluctance of corporates to invest in innovation. As Bill Lazonick puts it, “the financialization of corporate resource allocation undermines investment in innovation”. This ‘investment deficit’ has in turn led to the secular downturn in productivity growth across the Western world since the 1970s, a phenomenon that Tyler Cowen has dubbed ‘The Great Stagnation’. This thesis, appealing though it is, is too simplistic. The increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation. The collapse in high-risk, exploratory innovation is exacerbated by the rise in the influence of special interests that accompanies any extended period of stability, a dynamic that I discussed in an earlier post.

The easiest way to explain the above dynamic is to take a slightly provocative example. Let us assume that the Fed decides to make the ‘Bernanke Put’ more explicit by either managing a floor on equity prices or buying a significant amount of equities outright. The initial result may be positive, but in the long run firms will simply align their risk profile to that of the broader market. The end result will be a homogenous corporate sector free of any disruptive innovation – a state of perfect equilibrium but also a state of rigor mortis.


Written by Ashwin Parameswaran

October 3rd, 2011 at 4:16 pm