macroresilience

resilience, not stability

Archive for the ‘Complex Adaptive Systems’ Category

Minsky and Hayek: Connections

with one comment

As Tyler Cowen argues, there are many similarities between Hayek’s and Minsky’s views on business cycles. At heart, both describe the “fundamental impossibility in maintaining orderly credit relations over time”.

Minsky saw Keynes’ theory as an ‘investment theory of the business cycle’ and his contribution as being a ‘financial theory of investment’. This financial theory was based on the credit/financing-focused endogenous theory of money of Joseph Schumpeter, whom Minsky studied under. Schumpeter’s views are best described in Chapter 3 (’Credit and Capital’) of his book ‘Theory of Economic Development’. The gist of this view is that “investment, and expenditures more generally, require financing, not saving” (Borio and Disyatat).

Schumpeter viewed the ability of banks to create money ex nihilo as the differentia specifica of capitalism. He saw bankers as ‘capitalists par excellence’ and viewed this ‘elastic’ nature of credit as an unambiguously positive phenomenon. Many people see Schumpeter’s view of money and banking as the antithesis of the Austrian view. But as Agnes Festre has highlighted, Hayek had a very similar view on the empirical reality of the credit process. Hayek, however, saw this elasticity of the monetary supply as a negative phenomenon. The similarity between Hayek and Minsky comes from the fact that Minsky also focused on the downside of an elastic monetary system in which overextension of credit was inevitably brought back to a halt by the violent snapback of the Minsky Moment.

Where Hayek and Minsky differed was that Minsky favoured a comprehensive stabilisation of the financial and monetary system through fiscal and monetary intervention after the Minsky moment. Hayek only supported the prevention of secondary deflationary spirals. Minsky supported aggressive and early monetary interventions (e.g. lender-of-last-resort programs) as well as fiscal stimulus. However, although Minsky supported stabilisation he was well aware of the damaging long-run consequences of stabilising the economic system. He understood that such a system would inevitably deteriorate into crony capitalism if fundamental reforms did not follow the stabilisation. Minsky supported a “policy strategy that emphasizes high consumption, constraints upon income inequality, and limitations upon permissible liability structures”. He also advocated “an industrial-organization strategy that limits the power of institutionalized giant firms”. Minsky was under no illusions that a stabilised capitalist economy could carry on with business as usual.

I disagree with Minsky on two fundamental points. First, I believe that a capitalist economy with sufficient low-level instability is resilient: allow small failures of banks and financial players, tolerate small recessions, and we can dramatically reduce the impact and probability of large-scale catastrophic recessions such as the 2008 financial crisis. A little bit of chaos is an essential ingredient in a resilient capitalist economy. Second, I believe that we must avoid stamping out disturbances at their source and instead focus our efforts on mitigating their wider impact on the masses. In other words, bail out the masses with helicopter drops rather than bailing out the banks.

But although I disagree with Minsky, his ideas are coherent. The same cannot be said for the currently popular interpretation of Minsky, which holds that capitalism can carry on as usual so long as we respond with sufficient force when the Minsky moment arrives. As Minsky argued in his book ‘John Maynard Keynes’, and as I have argued based on the experience of stabilising other complex adaptive systems such as rivers, forest fires and our brain, stabilised capitalism is an oxymoron.
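The forest-fire analogy can be made concrete with a toy simulation. Below is a minimal sketch in Python – the parameters are entirely arbitrary and the model is an illustration of the mechanism, not a calibrated model of the economy. Trees grow at random on a grid and lightning strikes at random; in the laissez-faire regime every fire burns out the stand it hits, while in the suppression regime every small fire is stamped out, so fuel accumulates into large connected stands and the fires that eventually escape are far larger.

```python
import numpy as np
from scipy.ndimage import label  # identifies connected clusters on the grid

rng = np.random.default_rng(0)

def run(suppress_small, steps=2000, n=64, p_grow=0.05, p_spark=0.001, small=20):
    """Toy forest-fire model. If suppress_small is True, any fire that would
    burn fewer than `small` cells is stamped out and the fuel is left
    standing -- the analogue of stabilising every minor disturbance."""
    grid = np.zeros((n, n), dtype=bool)
    fire_sizes = []
    for _ in range(steps):
        grid |= rng.random((n, n)) < p_grow          # new trees grow
        sparks = (rng.random((n, n)) < p_spark) & grid
        if not sparks.any():
            continue
        clusters, _ = label(grid)                    # connected stands of fuel
        for c in np.unique(clusters[sparks]):
            size = int((clusters == c).sum())
            if suppress_small and size < small:
                continue                             # fire put out, fuel remains
            grid[clusters == c] = False              # the whole stand burns
            fire_sizes.append(size)
    return fire_sizes

for suppress in (False, True):
    sizes = run(suppress)
    print("suppression  " if suppress else "laissez-faire",
          "| mean fire:", round(float(np.mean(sizes)), 1),
          "| largest fire:", max(sizes))
```

In runs like this, the suppression regime produces far fewer but far larger fires – the analogue of trading frequent small recessions for rare catastrophic ones.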

What about Hayek’s views on credit elasticity? As I argued in an earlier post, “we live in a world where maturity transformation is no longer required to meet our investment needs. The evolution and malformation of the financial system means that Hayek’s analysis is more relevant now than it probably was during his own lifetime”. An elastic credit system is no longer beneficial to economic growth in the modern economy. This does not mean that we should ban the process of endogenous credit creation – it simply means that we must allow the maturity-transforming entities to collapse when they get in trouble1.


  1. Because we do not need an elastic, maturity-transforming financial system, we can firewall basic deposit banking from risky finance. This will enable us to allow the banks to fail when the next crisis hits us. The solution is not to ban casino banking but to suck the lifeblood out of it by constructing an alternative 100% reserve-like system. I have advocated that each resident should be given a deposit account with the central bank which can be backed by Treasuries, a ‘public option’ for basic deposit banking. John Cochrane has also argued for a similar system. In his words, “the Federal Reserve should continue to provide abundant reserves to banks, paying market interest. The Treasury could offer reserves to the rest of us—floating-rate, fixed-value, electronically-transferable debt. There is no reason that the Fed and Treasury should artificially starve the economy of completely safe, interest-paying cash”. ↩

Written by Ashwin Parameswaran

August 23rd, 2013 at 4:56 pm

Radical Centrism: Uniting the Radical Left and the Radical Right

with 22 comments

Pragmatic Centrism Is Crony Capitalism

Neoliberal crony capitalism is driven by a grand coalition between the pragmatic centre-left and the pragmatic centre-right. Crony capitalist policies are always justified as the pragmatic solution. The range of policy options is narrowed down to a pragmatic compromise that maximises the rent that can be extracted by special interests. Instead of the government providing essential services such as healthcare and law and order, we get oligopolistic private healthcare and privatised prisons. Instead of a vibrant and competitive private sector with free entry and exit of firms we get heavily regulated and licensed industries, too-big-to-fail banks and corporate bailouts.

There’s no better example of this dynamic than the replacement of the public option in Obamacare by a ‘private option’. As Glenn Greenwald argues, “whatever one’s views on Obamacare were and are: the bill’s mandate that everyone purchase the products of the private health insurance industry, unaccompanied by any public alternative, was a huge gift to that industry.” Public support is garnered by presenting the private option as the pragmatic choice, the compromise option, the only option. To middle class families who fear losing their healthcare protection due to unemployment, the choice is framed as either the private option or nothing.

In a recent paper (h/t Chris Dillow), Pablo Torija asks the question ‘Do Politicians Serve the One Percent?’ and concludes that they do. This is not a surprising result but what is more interesting is his research on the difference between leftwing and rightwing governments which he summarises as follows: “In 2009 center-right parties maximized the happiness of the 100th-98th richest percentile and center-left parties the 100th-95th richest percentile. The situation has evolved from the seventies when politicians represented, approximately, the median voter”.

Nothing illustrates the irrelevance of democratic politics in the neo-liberal era more than the sight of a supposedly free-market right-wing government attempting to reinvent Fannie Mae/Freddie Mac in Britain. On the other side of the pond, we have a supposedly left-wing government which funnels increasing amounts of taxpayer money to crony capitalists in the name of public-private partnerships. Politics today is just internecine warfare between the various segments of the rentier class. As Pete Townshend once said, “Meet the new boss, same as the old boss”.

The Core Strategy of Pragmatic Crony Capitalism: Increase The Scope and Reduce the Scale of Government

Most critics of neoliberalism on the left point to the dramatic reduction in the scale of government activities since the 80s – the privatisation of state-run enterprises, the increased dependence upon private contractors for delivering public services, etc. Most right-wing critics lament the increasing regulatory burden faced by businesses and individuals and the preferential treatment and bailouts doled out to the politically well-connected. Neither the left nor the right is wrong. But each sees only one side of the core strategy of neoliberal crony capitalism – increase the scope and reduce the scale of government intervention. Where the government was the sole operator, as in prisons and healthcare, “pragmatic” privatisation leaves us with a mix of heavily regulated oligopolies and risk-free private contracting relationships. On the other hand, where the private sector was allowed to operate without much oversight, the “pragmatic” reform involves the subordination of free enterprise to a “sensible” regulatory regime and public-private partnerships to direct capital to social causes. In other words, expand the scope of government to permeate as many economic activities as possible and contract the scale of government within its core activities.

Some of the worst manifestations of crony capitalism can be traced to this perverse pragmatism. The increased scope and reduced scale are the main reasons for the cosy revolving door between incumbent crony capitalists and the government. The left predictably blames it all on the market, the right blames government corruption, while the revolving door of “pragmatic” politicians and crony capitalists robs us blind.

Radical Centrism: Increase The Scale and Reduce The Scope of Government

The essence of a radical centrist approach is government provision of essential goods and services and a minimal-intervention, free-enterprise environment for everything else. In most countries, this requires both a dramatic increase in the scale of government activities within its core domain and a dramatic reduction in the scope of government activities outside it. In criticising the shambolic privatisation of British Rail in the United Kingdom, Christian Wolmar argued that “once you have government involvement, you might as well have government ownership”. This is an understatement. The essence of radical centrism is: ‘once you have government involvement, you must have government ownership’. Moving from publicly run systems “towards” free-enterprise systems or vice versa is never a good idea. The road between the public sector and the private sector is the zone of crony capitalist public-private partnerships. We need a narrowly defined ‘pure public option’ rather than the pragmatic crony capitalist ‘private option’.

The idea of radical centrism is not driven merely by vague notions of social justice or increased competition. It is driven by ideas and concepts that lie at the heart of complex system resilience. All complex adaptive systems that successfully balance the need to maintain robustness with the need to generate novelty and innovation utilise a similar approach.

Barbell Approach: Conservative Core, Aggressive Periphery

Radical centrism follows what Nassim Taleb has called the ‘barbell approach’. Taleb also provides us with an excellent example of such a policy in his book ‘Antifragile’: “hedge funds need to be unregulated and banks nationalized.” The idea here is that you bring the essential utility-like component of banking into the public domain and leave the rest alone. It is critical that the common man not be compelled to use oligopolistic rent-fuelled services for his essential needs. In the modern world, the ability to hold money and transact is an essential service. It is also critical that there is only a public option, not a public imperative. The private sector must be allowed to compete against the public option.
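In its original financial setting, the logic of the barbell is easy to demonstrate with a crude Monte Carlo sketch in Python. The return distributions below are made up purely for illustration – this is not a claim about any real asset. Compare a portfolio held entirely in a moderately risky exposure with one holding 90% in a safe asset and 10% in a volatile punt that can at most lose its stake: the barbell’s worst case is bounded by construction while it retains exposure to the upside.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# 'Middling' strategy: the whole portfolio in one moderately risky exposure.
middling = rng.normal(loc=0.05, scale=0.20, size=N)

# Barbell: 90% in a safe asset, 10% in a volatile punt whose return is
# bounded below by -100% (you can lose at most the stake).
safe_return = 0.02
punt = rng.lognormal(mean=-0.5, sigma=1.5, size=N) - 1.0
barbell = 0.9 * safe_return + 0.1 * punt

for name, r in [("middling", middling), ("barbell", barbell)]:
    print(f"{name:8s} mean {r.mean():+.3f}  "
          f"1st pctile {np.percentile(r, 1):+.3f}  worst {r.min():+.3f}")
```

The policy barbell transposes the same shape: a safe, publicly provided core for essential services, and an unprotected free-enterprise periphery that is allowed to fail.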

A bimodal strategy of combining a conservative core with an aggressive periphery is common across complex adaptive systems in many different domains. It is true of the gene regulatory networks in our bodies, which contain a conservative “kernel”. The same phenomenon has even been identified in technological systems such as the architecture of the Internet, where a conservative kernel “represent(s) a stable basis on which diversity and complexity of higher-level processes can evolve”.

Stress, fragility and disorder in the periphery generate the novelty and variation that enable the system to innovate and adapt to new environments. The stable core not only promotes robustness but, paradoxically, also promotes long-run innovation by avoiding systemic collapse. Innovation is not opposed to robustness. In fact, the long-term ability of a system to innovate depends upon system robustness. But robustness does not imply stability; it simply requires a stable core. The progressive agenda is consistent with creative destruction so long as we focus on a safety net, not a hammock.

Restore the ‘Invisible Foot’ of Competition

The neo-liberal era is often seen as the era of deregulation and market supremacy. But as many commentators have noticed, “deregulation typically means reregulation under new rules that favor business interests.” As William Davies notes, “the guiding assumption of neoliberalism is not that markets work perfectly, but that private actors make better decisions than public ones”. And this is exactly what happened. Public sector employees were moved onto incentive-based contracts that relied on their “greed” and the invisible hand to elicit better outcomes. Public services were increasingly outsourced to private contractors who were theoretically incentivised to keep costs down and improve service delivery. Nationalised industries like telecom were replaced with heavily licensed private oligopolies. But there was a fatal flaw in these “reforms”, which Allen Schick identifies as follows (emphasis mine):

one should not lose sight of the fact that these are not real markets and that they do not operate with real contracts. Rather, the contracts are between public entities—the owner and the owned. The government has weak redress when its own organizations fail to perform, and it may be subject to as much capture in negotiating and enforcing its contracts as it was under pre-reform management. My own sense is that while some gain may come from mimicking markets, anything less than the real thing denies government the full benefits of vigorous competition and economic redress.

One difference between the “real thing” and the neoliberal version of the real thing is what the economist Joseph Berliner has called the ‘invisible foot’ of capitalism. Incumbent firms rarely undertake disruptive innovation unless compelled to do so by the force of dynamic competition from new entrants. The critical factor in this competitive dynamic is not the temptation of higher profits but the fear of failure and obsolescence. To sustain long-run innovation in the economy, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition.”

The other critical difference is just how vulnerable the half-way house solutions of neo-liberalism were to being gamed and abused by opportunistic private actors. The neo-liberal era saw a rise in incentive-based contracts across the private and public sector but without the invisible foot of the threat of failure. The predictable result was not only a stagnant economy but an increase in rent extraction as private actors gamed the positive incentives on offer. As an NHS surgeon quipped with respect to the current NHS reform project: “I think there’s a model there, but it’s whether it can be delivered and won’t be corrupted. I can see a very idealistic model, but by God, it’s vulnerable to people ripping it off”.

Most people view the failure of the Soviet model as being due to the inefficiency of the planned economy. But the problem that consumed the attention of Soviet leaders from the 1950s onwards was the inability of the Soviet economy to innovate. Brezhnev once quipped that Soviet enterprises shied away from innovation “as the devil shies away from incense”. In his work on the on-the-ground reality of the Soviet economy, Joseph Berliner analysed the efforts of Soviet planners to counter this problem of insufficient innovation. The Soviets tried a number of positive incentive schemes (e.g. innovation “bonuses”) of the kind we commonly associate with capitalist economies. But what they could not replicate was the threat of firm failure. Managers, safe in the knowledge that competitive innovation would not cause their firm or their jobs to vanish, were content to focus on low-risk process innovation and cost-reduction rather than higher-risk, disruptive innovation. In fact, the presence of bonuses that rewarded efficiency further reduced exploratory innovation, as exploratory innovation required managers to undertake actions that often reduced short-term efficiency.

Unwittingly, the neoliberal era has replicated the Soviet system. Incumbent firms have no fear of failure and can game the positive incentives on offer to extract rents while shying away from any real disruptive innovation. We are living in a world where rentier capitalists game the half-baked schemes of privatisation and fleece the taxpayer, and where the perverse dynamic of safety for the classes and instability for the masses leaves us in the Great Stagnation.

Bailouts For People, Not Firms

Radical centrism involves a strengthening of the safety net for individuals combined with a dramatic increase in the competitive pressures exerted on incumbent firms. Today, we bail out banks because a banking collapse threatens the integrity of the financial system. We bail out incumbent firms because firm failure leaves the unemployed without even catastrophic health insurance. The principle of radical centrism aims to build a firewall that protects the common man from the worst impact of economic disturbances while simultaneously increasing the threat of failure at firm level. The presence of the ‘public option’ and a robust safety net is precisely what empowers us to allow incumbent firms to fail.

The safety net that protects individuals ensures robustness while the presence of a credible ‘invisible foot’ at the level of the firm boosts innovation. Moreover, as Taleb notes, programs that bail out people are much less susceptible to being gamed and abused than programs that bail out limited liability firms. As I noted in an earlier post, “even uncertain tail-risk protection provided to corporates will eventually be gamed. The critical difference between individuals and corporates in this regard is the ability of stockholders and creditors to spread their bets across corporate entities and ensure that failure of any one bet has only a limited impact on the individual investors’ finances. In an individual’s case, the risk of failure is by definition concentrated and the uncertain nature of the transfer will ensure that moral hazard implications are minimal.”

The irony of the current policy debate is that policy interventions that prop up banks, asset prices and incumbent firms are viewed as the pragmatic option and policy interventions focused on households are viewed as radical and therefore beyond the pale of discussion. Preventing rent-seeking is a problem that both the left and the right should be concerned with. But both the radical left and the radical right need to realise the misguided nature of many of their disagreements. A robust safety net is as important to maintaining an innovative free enterprise economy as the dismantling of entry barriers and free enterprise are to reducing inequality.

Note: For a more rigorous treatment of the tradeoff between innovation and robustness in complex adaptive systems, see my essay ‘All Systems Need A Little Disorder’.

Written by Ashwin Parameswaran

April 8th, 2013 at 2:54 pm

The Ever-Increasing Cost of Propping Up A Fragile And Dysfunctional System

with 2 comments

Monetary Medication And The Economy

Mervyn King:

We’re now in a position where you can see it’s harder and harder for monetary policy to push spending back up to the old path . . . It’s as if you’re running up an ever steeper hill.

Psychotropic Medication And The Brain

Robert Whitaker quoted in an earlier essay:

Over time….the dopaminergic pathways tended to become permanently dysfunctional. They became irreversibly stuck in a hyperactive state….Doctors would then need to prescribe higher doses of antipsychotics.

Fire Suppression And The Forest

From an earlier essay:

The initial low cost of suppression is short-lived and the cumulative effect of the fragilisation of the system has led to rapidly increasing costs of wildfire suppression and levels of area burned in the last three decades.
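The dynamic shared by these three quotes – rising doses chasing deteriorating results – can be caricatured in a few lines of Python. This is a minimal sketch with arbitrary parameters, not a model of monetary policy, neurochemistry or forestry: a controller doses the system to hold output at a target, the system adapts against the dose, and each cycle therefore demands a larger dose for a weaker effect.

```python
# Toy homeostatic-adaptation loop: the stabiliser works in each round, yet
# the system adapts against it, so the hill gets ever steeper.
target, baseline, dose = 1.0, 0.9, 0.0
gain, adaptation = 0.5, 0.15   # arbitrary, purely illustrative

for cycle in range(1, 9):
    output = baseline + dose               # the intervention props output up
    dose += gain * (target - output)       # controller tries to close the gap
    baseline -= adaptation * dose          # the system adapts against the dose
    print(f"cycle {cycle}: dose {dose:.2f}, output {output:.2f}")
# the dose rises every cycle while output stays stuck below the target
```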

Written by Ashwin Parameswaran

February 13th, 2013 at 1:08 pm

The Resilience Approach vs Minsky/Bagehot: When and Where to Intervene

with 5 comments

There are many similarities between a resilience approach to macroeconomics and the Minsky/Bagehot approach – the most significant being a common focus on macroeconomies as systems in permanent disequilibrium. Although both approaches largely agree on the descriptive characteristics of macroeconomic systems, there are some significant differences when it comes to the preferred policy prescriptions. In a nutshell, the difference boils down to the question of when and where to intervene.

A resilience approach focuses its interventions on severe disturbances, whilst allowing small and moderate disturbances to play themselves out. Even when the disturbance is severe, a resilience approach avoids stamping out the disturbance at source and focuses its efforts on mitigating the wider impact of the disturbance on the macroeconomy. The primary aim is the minimisation of the long-run fragilising consequences of the intervention, which I have explored in detail in many previous posts(1, 2, 3). Just as small fires and floods are integral to ecological resilience, small disturbances are integral to macroeconomic resilience. Although it is difficult to identify ex-ante whether disturbances are moderate or not, the Greenspan-Bernanke era nevertheless contains some excellent examples of when not to intervene. The most obvious amongst all the follies of Greenspan-era monetary policy were the rate cuts during the LTCM collapse, which were implemented with the sole purpose of “saving” financial markets at a time when the real economy showed no signs of stress1.

The Minsky/Bagehot approach focuses on tackling all disturbances with debt-deflationary consequences at their source. Bagehot asserted in ‘Lombard Street’ that “in wild periods of alarm, one failure makes many, and the best way to prevent the derivative failures is to arrest the primary failure which causes them”. Minsky emphasised the role of both the lender-of-last-resort (LOLR) mechanism as well as fiscal stabilisers in tackling such “failures”. However, Minsky was not ignorant of the long-term damage inflicted by a regime where all disturbances were snuffed out at source – the build-up of financial “innovation” designed to take advantage of this implicit protection, the descent into crony capitalism and the growing fragility of a private-investment driven economy2, an understanding that was also reflected in his fundamental reform proposals3. Minsky also appreciated that the short-run cycle from hedge finance to Ponzi finance does not repeat itself in the same manner. The long arc of stabilised cycles is itself a disequilibrium process (a sort of disequilibrium super-cycle) in which performance deteriorates from one cycle to the next – an increasing amount of stabilisation needs to be applied in each short-run cycle to achieve poorer results than the previous cycle.

Resilience Approach: Policy Implications

As I have outlined in an earlier post, an approach that focuses on minimising the adaptive consequences of macroeconomic interventions implies that macroeconomic policy must allow the “river” of the macroeconomy to flow in a natural manner and restrict its interventions to insuring individual economic agents rather than corporate entities against the occasional severe flood. In practice, this involves:

  • De-emphasising the role of conventional and unconventional monetary policy (interest-rate cuts, LOLR, quantitative easing, LTRO) in tackling debt-deflationary disturbances.
  • De-emphasising the role of industrial policy and explicit bailouts of banks and other firms4.
  • Establishing neutral monetary-fiscal hybrid policies such as money-financed helicopter drops as the primary tool of macroeconomic stabilisation. Minsky’s insistence on the importance of LOLR operations was partly driven by his concerns that alternative policy options could not be implemented quickly enough5. This concern is less relevant with regard to helicopter drops in today’s environment, where they can be implemented almost instantaneously6.

Needless to say, the policies we have followed throughout the ‘Great Moderation’ and continue to follow are anything but resilient. Nowhere is the farce of orthodox policy more apparent than in Europe, where countries such as Spain are compelled to enforce austerity on the masses whilst at the same time being forced to spend tens of billions of euros bailing out incumbent banks. Even within the structurally flawed construct of the Eurozone, a resilient strategy would take exactly the opposite approach – one that would not only drag us out of the ‘Great Stagnation’ but would do so in a manner that delivers social justice and reduced inequality.



  1. Of course this “success” also put Greenspan, Rubin and Summers onto the cover of TIME magazine, which goes to show just how biased political incentives are in favour of stabilisation and against resilience.  ↩
  2. From pages 163-165 of Minsky’s book ‘John Maynard Keynes’:
    “The success of a high-private-investment strategy depends upon the continued growth of relative needs to validate private investment. It also requires that policy be directed to maintain and increase the quasi-rents earned by capital – i.e., rentier and entrepreneurial income. But such high and increasing quasi-rents are particularly conducive to speculation, especially as these profits are presumably guaranteed by policy. The result is experimentation with liability structures that not only hypothecate increasing proportions of cash receipts but that also depend upon continuous refinancing of asset positions. A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch…….
    In a sense, the measures undertaken to prevent unemployment and sustain output “fix” the game that is economic life; if such a system is to survive, there must be a consensus that the game has not been unfairly fixed…….
    As high investment and high profits depend upon and induce speculation with respect to liability structures, the expansions become increasingly difficult to control; the choice seems to become whether to accommodate to an increasing inflation or to induce a debt-deflation process that can lead to a serious depression……
    The high-investment, high-profits policy synthesis is associated with giant firms and giant financial institutions, for such an organization of finance and industry seemingly makes large-scale external finance easier to achieve. However, enterprises on the scale of the American giant firms tend to become stagnant and inefficient. A policy strategy that emphasizes high consumption, constraints upon income inequality, and limitations upon permissible liability structures, if wedded to an industrial-organization strategy that limits the power of institutionalized giant firms, should be more conducive to individual initiative and individual enterprise than is the current synthesis.
    As it is now, without controls on how investment is to be financed and without a high-consumption, low private-investment strategy, sustained full employment apparently leads to treadmill affluence, accelerating inflation, and recurring threats of financial crisis.”
     ↩
  3. Just like Keynes, Minsky understood completely the dynamic of stabilisation and its long-term strategic implications. Given the malformation of private investment by the interventions needed to preserve the financial system, Keynes preferred the socialisation of investment and Minsky a shift to a high-consumption, low-investment system. But the conventional wisdom, which takes Minsky’s tactical advice on stabilisation and ignores his strategic advice on the need to abandon the private-investment led model of growth, is incoherent. ↩
  4. In his final work ‘Power and Prosperity’, Mancur Olson expressed a similar sentiment: “subsidizing industries, firms and localities that lose money…at the expense of those that make money…is typically disastrous for the efficiency and dynamism of the economy, in a way that transfers unnecessarily to poor individuals…A society that does not shift resources from the losing activities to those that generate a social surplus is irrational, since it is throwing away useful resources in a way that ruins economic performance without the least assurance that it is helping individuals with low incomes. A rational and humane society, then, will confine its distributional transfers to poor and unfortunate individuals.” ↩
  5. From pg 44 of ‘Stabilising an Unstable Economy’: “The need for lender-of-last-resort operations will often occur before income falls steeply and before the well nigh automatic income and financial stabilizing effects of Big Government come into play. If the institutions responsible for the lender-of-last-resort function stand aside and allow market forces to operate, then the decline in asset values relative to current output prices will be larger than with intervention; investment and debt-financed consumption will fall by larger amounts; and the decline in income, employment, and profits will be greater. If allowed to gain momentum, the financial crisis and the subsequent debt deflation may, for a time, overwhelm the income and financial stabilizing capacity of Big Government. Even in the absence of effective lender-of-last-resort action, Big Government will eventually produce a recovery, but, in the interval, a high price will be paid in the form of lost income and collapsing asset values.” ↩
  6. As Charlie Bean of the BoE suggests, helicopter drops could be implemented in the UK via the PAYE system. ↩

Written by Ashwin Parameswaran

May 8th, 2012 at 1:31 pm

The Control Revolution And Its Discontents

with 19 comments

One of the key narratives on this blog is how the Great Moderation and the neo-liberal era have signified the death of truly disruptive innovation in much of the economy. When macroeconomic policy stabilises the macroeconomic system, every economic actor is incentivised to take on more macroeconomic systemic risk and shed idiosyncratic, microeconomic risk. Those that figured out this reality early on and/or had privileged access to the programs used to implement this macroeconomic stability, such as banks and financialised corporates, were the big winners – a process that is largely responsible for the rise in inequality during this period. In such an environment the pace of disruptive product innovation slows while low-risk process innovation aimed at cost-reduction and improved efficiency flourishes. Therefore we get the worst of all worlds – the Great Stagnation combined with widespread technological unemployment.

This narrative naturally raises the question: when was the last time we had a truly disruptive Schumpeterian era of creative destruction? In a previous post looking at the evolution of the post-WW2 developed economic world, I argued that the so-called Golden Age was anything but Schumpeterian – as Alexander Field has argued, much of the economic growth till the 70s was built on the basis of disruptive innovation that occurred in the 1930s. So we may not have been truly Schumpeterian for at least 70 years. But what about the period from at least the mid 19th century till the Great Depression? Even a cursory reading of economic history gives us pause for thought – after all, wasn’t a significant part of this period supposed to be the Gilded Age of cartels and monopolies, which sounds anything but disruptive?

I am now of the opinion that we have never really had any long period of constant disruptive innovation – this is not a sign of failure but simply a reality of how complex adaptive systems across domains manage the tension between efficiency, robustness, evolvability and diversity. What we have had is a subverted control revolution in which repeated attempts to achieve and hold onto an efficient equilibrium fail. Creative destruction occurs despite our best efforts to stamp it out. In a sense, disruption is alien to the essence of the industrial and post-industrial period of the last two centuries, the overriding philosophy of which is automation and algorithmisation aimed at efficiency and control. And much of our current troubles are a function of the fact that we have almost perfected the control project.

The operative word, and the source of our problems, is “almost”. Too many people look at the transition from the Industrial Revolution to the Algorithmic Revolution as a sea-change in perspective. But in reality, the current wave of reducing everything to a combination of “data & algorithm” and tackling every problem with more data and better algorithms is the logical end-point of the control revolution that started in the 19th century. The difference between Ford and Zara is overrated – Ford was simply the first step in a long process that focused on systematising each element of the industrial process (production, distribution, consumption) while also, crucially, putting in place a feedback loop between each element. In some sense, Zara simply follows a much more complex and malleable algorithm than Ford did, but this algorithm is still fundamentally equilibrating (not disruptive) and focused on introducing order and legibility into a fundamentally opaque environment via a process that reduces human involvement and discretion by replacing intuitive judgements with rules and algorithms. Exploratory/disruptive innovation, on the other hand, is a disequilibrating force created by entrepreneurs who function outside this feedback/control loop. Both processes are important – the longer period of the gradual shedding of diversity and homogenisation in the name of efficiency as well as the periodic “collapse” that shakes up the system and puts it eventually on the path to a new equilibrium.

Of course, control has been an aim of western civilisation for a lot longer, but it was only in the 19th century that the tools of control were good enough for this desire to be implemented in any meaningful sense. And even more crucially, as James Beniger has argued, it was only in the last 150 years that the need for large-scale control arose. The tools and technologies now in our hands to control and stabilise the economy are more powerful than they have ever been – likely too powerful.

If we had perfect information and everything could be algorithmised right now, i.e. if the control revolution had been perfected, then the problem would disappear. Indeed, it is arguable that the need for disruption in the innovation process would no longer exist. If we get to a world where radical uncertainty has been eliminated, then the problem of systemic fragility is moot and irrelevant. It is easy to rebut the stabilisation and control project by claiming that we cannot achieve this perfect world.

But even if the techno-utopian project can achieve all that it claims it can, the path matters. We need to make it there in one piece. The current “algorithmic revolution” is best viewed as a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is simply one where many of these algorithmic and automated systems can essentially run themselves, with human beings performing the role of supervisors who only need to intervene in extraordinary circumstances. Therefore, it would seem logical that the same increase in productivity that has occurred during the modern era of automation will continue during the creation of the “vast, automatic and invisible” ‘second economy’. However there are many signs that this may not be the case. What has made things better till now, and has been genuine “progress”, may make things worse in higher doses, and the process of deterioration can be quite dramatic.

The Uncanny Valley on the Path towards “Perfection”

In 1970, Masahiro Mori coined the term ‘uncanny valley’ to denote the phenomenon that “as robots appear more humanlike, our sense of their familiarity increases until we come to a valley”. When robots are almost but not quite human-like, they invoke a feeling of revulsion rather than empathy. As Karl MacDorman notes, “Mori cautioned robot designers not to make the second peak their goal — that is, total human likeness — but rather the first peak of humanoid appearance to avoid the risk of their robots falling into the uncanny valley.”

A similar valley exists in the path of increased automation and algorithmisation. Much of the discussion in this section of the post builds upon concepts I explored via a detailed case study in a previous post titled ‘People Make Poor Monitors for Computers’.

The 21st-century version of the control project, i.e. the algorithmic project, consists of two components:
1. More data – ‘Big Data’.
2. Better and more comprehensive algorithms.

The process therefore goes hand in hand with increased complexity and, crucially, poorer and less intuitive feedback for the human operator. This results in increased fragility and a system prone to catastrophic breakdowns. The typical solution chosen is further algorithmisation – an improved algorithm and more data – and, if necessary, increased slack and redundancy. This solution exacerbates the feedback problem and temporarily pushes the catastrophic scenario further out into the tail, but it does not eliminate it. Behavioural adaptation by human agents to the slack and the “better” algorithm can make a catastrophic event as likely as it was before, but with a higher magnitude. What is even more disturbing is that this cycle of increasing fragility can occur even without any such adaptation. This is the essence of the fallacy of the ‘defence in depth’ philosophy that lies at the core of most fault-tolerant algorithmic designs, which I discussed in my earlier post: the increased “safety” of the automated system allows human errors to build up without any feedback from deteriorating system performance.
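That fallacy can be illustrated with a minimal simulation (a sketch under stated assumptions, not a model of any real system). Each period, every working redundant layer can silently fail. In the ‘transparent’ system a fault is visible and repaired immediately; in the ‘masked’ system nothing is observed until every layer is down, so latent faults accumulate without feedback and total failure arrives on a surprisingly short timescale.

```python
import numpy as np

rng = np.random.default_rng(2)

def time_to_collapse(layers, masked, p_fail=0.01, horizon=10_000):
    """Periods until every layer has failed. If masked, individual layer
    failures produce no feedback and are therefore never repaired."""
    working = layers
    for t in range(1, horizon + 1):
        working -= rng.binomial(working, p_fail)   # silent layer failures
        if working == 0:
            return t                               # total outage: catastrophe
        if not masked:
            working = layers                       # visible faults repaired
    return horizon                                 # censored: no collapse seen

for masked in (False, True):
    times = [time_to_collapse(3, masked) for _ in range(200)]
    print("masked " if masked else "visible",
          "| mean periods to collapse:", round(float(np.mean(times)), 1))
```

With visible faults, collapse requires all three layers to fail in a single period and the system essentially never collapses within the horizon; with masked faults, the same hardware reliably collapses within a few hundred periods – and gives no warning along the way.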

A rule of thumb for getting around this problem is to use slack only in those domains where failure is catastrophic, and to prioritise feedback where failure is not critical and cannot kill you. But in an uncertain environment, this rule is very difficult to apply. How do you really know that a particular disturbance will not kill you? Moreover, the loop of automation -> complexity -> redundancy endogenously turns a non-catastrophic event into one with catastrophic consequences.

Once this trajectory has gone beyond a certain threshold, it is almost impossible to reverse without undergoing an interim collapse. The easy short-term fix is always to patch the algorithm, get more data and build in some slack if needed. An orderly rollback is almost impossible due to the deskilling of the human workforce and the risk of collapse due to other components in the system having adapted to the new reality. Even simply reverting to the older, more tool-like system makes things a lot worse because the human operators are no longer experts at using those tools – the process of algorithmisation has deskilled them. Moreover, the endogenous nature of this buildup of complexity eventually makes the system fundamentally illegible to the human operator – an irony, given that the fundamental aim of the control revolution is to increase legibility.

The Sweet Spot Before the Uncanny Valley: Near-Optimal Yet Resilient

Although it is easy to imagine an inefficient and dramatically sub-optimal system that is robust, successful complex adaptive systems operate at a near-optimal efficiency that is also resilient. Efficiency is important not only due to the obvious reality that resources are scarce but also because slack at the individual and corporate level is a significant cause of unemployment. Such near-optimal robustness in both natural and economic systems is not achieved with simplistically diverse agent compositions or with significant redundancies or slack at agent level.

Diversity and redundancy carry a cost in terms of reduced efficiency. Precisely for this reason, real-world economic systems appear to exhibit nowhere near the diversity that would seem to ensure system resilience. Rick Bookstaber noted recently that capitalist competition, if anything, seems to lead to a reduction in diversity. As Youngme Moon’s excellent book ‘Different’ lays out, competition in most markets seems to result in less diversity, not more. We may have a choice of 100 brands of toothpaste but most of us would struggle to meaningfully differentiate between them.

Similarly, almost all biological and ecological complex adaptive systems are a lot less diverse and contain less pure redundancy than conventional wisdom would expect. Resilient biological systems tend to preserve degeneracy rather than simple redundancy, and resilient ecological systems tend to contain weak links rather than naive ‘law of large numbers’ diversity. The key to achieving resilience with near-optimal configurations is to tackle disturbances and generate novelty/innovation with an emergent systemic response that reconfigures the system, rather than a merely localised response. Degeneracy and weak links are key to such a configuration. The equivalent in economic systems is a constant threat of new firm entry.
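The distinction is easy to caricature in code (a toy sketch with made-up ‘shock types’, not drawn from the biology literature). Pure redundancy backs a function with identical clones that share a single vulnerability; degeneracy backs it with structurally different units that fail under different conditions. Against everyday noise the two look equally safe, but only the degenerate mix survives the shock the clones have in common.

```python
import numpy as np

rng = np.random.default_rng(3)

N_SHOCK_TYPES = 5        # kinds of environment the system may face

# Each unit is described by the one shock type that disables it.
redundant  = [0, 0, 0]   # three identical clones: one shared weakness
degenerate = [0, 1, 2]   # three different designs with overlapping function

def survival_rate(units, trials=20_000):
    """Fraction of random shocks after which at least one unit still works."""
    shocks = rng.integers(0, N_SHOCK_TYPES, size=trials)
    return float(np.mean([any(u != s for u in units) for s in shocks]))

print("redundant clones :", survival_rate(redundant))   # ~0.8
print("degenerate mix   :", survival_rate(degenerate))  # 1.0
```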

The viewpoint which emphasises weak links and degeneracy also implies that it is not the keystone species and the large firms that determine resilience but the presence of smaller players ready to reorganise and pick up the slack when an unexpected event occurs. Such a focus is further complicated by the fact that in a stable environment, the system may become less and less resilient with no visible consequences – weak links may be eliminated, barriers to entry may progressively increase, etc., with no damage done to system performance in the stable equilibrium phase. Yet this loss of resilience can prove fatal when the environment changes, leaving the system unable to generate novelty/disruptive innovation. This highlights the folly of statements such as ‘what’s good for GM is good for America’. We need to focus not just on the keystone species, but on the fringes of the ecosystem.

[Figure: The Uncanny Valley and the Sweet Spot]

The Business Cycle in the Uncanny Valley – Deterioration of the Median as well as the Tail

Many commentators have pointed out that the process of automation has coincided with a deskilling of the human workforce. For example, below is a simplified version of the relation between mechanisation and the skill required of the human operator that James Bright documented in 1958 (via Harry Braverman’s ‘Labor and Monopoly Capital’). But till now it has largely been true that although human performance has suffered, the performance of the system has gotten vastly better. If the problem were just a drop in human performance while the system got better, our problem would be less acute.

[Figure: Automation and Deskilling of the Human Operator]

But what is at stake is a deterioration in system performance – it is not only a matter of being exposed to more catastrophic setbacks. Eventually mean/median system performance deteriorates as more and more pure slack and redundancy needs to be built in at all levels to make up for the irreversibly fragile nature of the system. The business cycle is an oscillation between efficient fragility and robust inefficiency. Over the course of successive cycles, both poles of this oscillation get worse, which leads to median/mean system performance falling rapidly at the same time that the tails deteriorate due to the increased illegibility of the automated system to the human operator.

[Figure: The Uncanny Valley Business Cycle]

The Visible Hand and the Invisible Foot, Not the Invisible Hand

The conventional economic view of the economy is one of a primarily market-based equilibrium punctuated by occasional shocks. Even the long arc of innovation is viewed as a sort of benign discovery of novelty without any disruptive consequences. The radical disequilibrium view (which I have been guilty of espousing in the past) is one of constant micro-fragility and creative destruction. However, the history of economic evolution in the modern era has been quite different – neither market-based equilibrium nor constant disequilibrium, but a series of off-market attempts to stabilise relations outside the sphere of the market combined with occasional phase transitions that bring about dramatic change. The presence of rents is a constant, and the control revolution has for the most part succeeded in preserving the rents of incumbents, barring the occasional spectacular failure. It is these occasional “failures” that have given us results that in some respects resemble those that a market would have produced over the long run.

As Bruce Wilder puts it (sourced from 1, 2, 3 and mashed up together by me):

The main problem with the standard analysis of the “market economy”, as well as many variants, is that we do not live in a “market economy”. Except for financial markets and a few related commodity markets, markets are rare beasts in the modern economy. The actual economy is dominated by formal, hierarchical, administrative organization and transactions are governed by incomplete contracts, explicit and implied. “Markets” are, at best, metaphors…..
Over half of the American labor force works for organizations employing over 100 people. Actual markets in the American economy are extremely rare and unusual beasts. An economics of markets ought to be regarded as generally useful as a biology of cephalopods, amid the living world of bones and shells. But, somehow the idealized, metaphoric market is substituted as an analytic mask, laid across a vast variety of economic relations and relationships, obscuring every important feature of what actually is…..
The elaborate theory of market price gives us an abstract ideal of allocative efficiency, in the absence of any firm or household behaving strategically (aka perfect competition). In real life, allocative efficiency is far less important than achieving technical efficiency, and, of course, everyone behaves strategically.
In a world of genuine uncertainty and limitations to knowledge, incentives in the distribution of income are tied directly to the distribution of risk. Economic rents are pervasive, but potentially beneficial, in that they provide a means of stable structure, around which investments can be made and production processes managed to achieve technical efficiency.
In the imaginary world of complete information of Econ 101, where markets are the dominant form of economic organizations, and allocative efficiency is the focus of attention, firms are able to maximize their profits, because they know what “maximum” means. They are unconstrained by anything.
In the actual, uncertain world, with limited information and knowledge, only constrained maximization is possible. All firms, instead of being profit-maximizers (not possible in a world of uncertainty), are rent-seekers, responding to instituted constraints: the institutional rules of the game, so to speak. Economic rents are what they have to lose in this game, and protecting those rents, orients their behavior within the institutional constraints…..
In most of our economic interactions, price is not a variable optimally digesting information and resolving conflict, it is a strategic instrument, held fixed as part of a scheme of administrative control and information discovery……The actual, decentralized “market” economy is not coordinated primarily by market prices—it is coordinated by rules. The dominant relationships among actors is not one of market exchange at price, but of contract: implicit or explicit, incomplete and contingent.

James Beniger’s work is the definitive document on how the essence of the ‘control revolution’ has been an attempt to take economic activity out of the sphere of direct influence of the market. But that is not all – the long process of algorithmisation over the last 150 years has also, wherever possible, replaced implicit rules/contracts and principal-agent relationships with explicit processes and rules. Beniger also notes that after a certain point, the increasing complexity of the system is an endogenous phenomenon, i.e. further iterations are aimed at controlling the control process itself. As I have illustrated above, after a certain threshold, the increasing complexity, fragility and deterioration in performance become a self-reinforcing positive feedback process.

Although our current system bears very little resemblance to the market economy of the textbook, there was a brief period during the transition from the traditional economy to the control economy, in the early part of the 19th century, when this was the case. 26% of all imports into the United States in 1827 were sold at auction. But the displacement of traditional controls (familial ties) by the invisible hand of market controls was merely a transitional phase, soon to be displaced by the visible hand of the control revolution.

The Soviet Project, Western Capitalism and High Modernity

Communism and Capitalism are both pillars of the high-modernist control project. The signature of modernity is not markets but technocratic control projects. Capitalism has simply pursued the project in a manner that is more easily and more regularly subverted. It is the occasional failure of the control revolution that is the source of the capitalist economy’s long-run success. Conversely, the failure of the Soviet Project was due to its all-too-successful adherence to, and implementation of, the high-modernist ideal. The threat from crony capitalism is so significant because, by forming a coalition and partnership of the corporate and state control projects, it enables the implementation of the control revolution to be that much more effective.

The Hayekian argument of dispersed knowledge and its importance in seeking equilibrium is not as important as it seems in explaining why the Soviet project failed. As Joseph Berliner has illustrated, the Soviet economy did not fail to reach local equilibria. Where it failed so spectacularly was in extracting itself out of these equilibria. The dispersed-knowledge argument is open to the riposte that better implementation of the control revolution will eventually overcome these problems – indeed much of the current techno-utopian version of the control revolution is based on this assumption. It is a weak argument for free enterprise; a much stronger argument is the need to maintain a system that retains the ability to reinvent itself and find a new, hitherto unknown trajectory via the destruction of incumbents combined with the emergence of the new. Where the Soviet experiment failed is that it eliminated the possibility of failure – the threat that Berliner called the ‘invisible foot’. The success of the free enterprise system has been built not upon the positive incentive of the invisible hand but upon the negative incentive of the invisible foot countering the visible hand of the control revolution. It is this threat, and occasional realisation, of failure and disorder that is the key to maintaining system resilience and evolvability.


Notes:

  • Borrowing from Beniger, control here simply means “purposive influence towards a predetermined goal”. Similarly, equilibrium in this context is best defined as a state in which economic agents are not forced to change their routines, theories and policies.
  • On the uncanny valley, I wrote a similar post on why perfect memory does not lead to perfect human intelligence. Even if a computer benefits from more data and better memory, we may not. And the evidence suggests that the deterioration in human performance is steepest in the zone close to “perfection”.
  • An argument similar to my assertion on the misconception of a free enterprise economy as a market economy can be made about the nature of democracy. Rather than as a vehicle that enables the regular expression of the political will of the electorate, democracy may be more accurately thought of as the ability to effect a dramatic change when the incumbent system of plutocratic/technocratic rule diverges too much from popular opinion. As always, stability and prevention of disturbances can cause the eventual collapse to be more catastrophic than it needs to be.
  • Although James Beniger’s ‘Control Revolution’ is the definitive reference, Antoine Bousquet’s book ‘The Scientific Way of Warfare’ on the similar revolution in military warfare is equally good. Bousquet’s book highlights the fact that the military is often the pioneer of the key projects of the control revolution and it also highlights just how similar the latest phase of this evolution is to early phases – the common desire for control combined with its constant subversion by reality. Most commentators assume that the threat to the project is external – by constantly evolving guerrilla warfare for example. But the analysis of the uncanny valley suggests that an equally great threat is endogenous – of increasing complexity and illegibility of the control project itself. Bousquet also explains how the control revolution is a child of the modern era and the culmination of the philosophy of the Enlightenment.
  • Much of the “innovation” of the control revolution was not technological but institutional – limited liability, macroeconomic stabilisation via central banks etc.
  • For more on the role of degeneracy in biological systems and how it enables near-optimal resilience, this paper by James Whitacre and Axel Bender is excellent.

Written by Ashwin Parameswaran

February 21st, 2012 at 5:38 pm

The Pathology of Stabilisation in Complex Adaptive Systems

with 72 comments

The core insight of the resilience-stability tradeoff is that stability leads to loss of resilience. Therefore stabilisation too leads to increased systemic fragility. But there is a lot more to it. In comparing economic crises to forest fires and river floods, I have highlighted the common patterns to the process of system fragilisation which eventually leaves the system “manager” in a situation where there are no good options left.

Drawing upon the work of Mancur Olson, I have explored how the buildup of special interests means that stability is self-reinforcing. Once rent-seeking has achieved sufficient scale, “distributional coalitions have the incentive and..the power to prevent changes that would deprive them of their enlarged share of the social output”. But what if we “solve” the Olsonian problem? Would that mitigate the problem of increased stabilisation and fragility? In this post, I will argue that the cycle of fragility and collapse has much deeper roots than any particular form of democracy.

In this analysis, I am going to move away from ecological analogies and instead turn to an example from modern medicine. In particular, I am going to compare the experience and history of psychiatric medication in the second half of the twentieth century to some of the issues we have already looked at in macroeconomic and ecological stabilisation. I hope to convince you that the uncanny similarities in the patterns observed in stabilised systems across such diverse domains are not a coincidence. In fact, the human body provides us with a much closer parallel to economic systems than even ecological systems with respect to the final stages of stabilisation. Most ecological systems collapse sooner simply because there are tighter limits on the resources that will be spent, in escalating fashion, to preserve their stability. For example, there are limits to the resources that will be deployed to prevent a forest fire, no matter how catastrophic. On the other hand, the resources that will be deployed to prevent the collapse of any system that is integral to human beings are much larger.

Even by the standards of this blog, this will be a controversial article. In my discussion of psychiatric medicine I am relying primarily on Robert Whitaker’s excellent but controversial and much-disputed book ‘Anatomy of an Epidemic’. Nevertheless, I want to emphasise that my ultimate conclusions are much less incendiary than those of Whitaker. In the same way that I want to move beyond an explanation of the economic crisis that relies on evil bankers, crony capitalists and self-interested technocrats, I am trying to move beyond an explanation that blames evil pharma and misguided doctors for the crisis in mental health. I am not trying to imply that fraud and rent-seeking do not have a role to play. I am arguing that even if we eliminated them, the aim of a resilient economic and social system would not be realised.

THE PUZZLE

The puzzle of the post-WW2 history of macroeconomic stabilisation can be summarised as follows. Each individual act of macroeconomic stabilisation clearly works: most monetary and fiscal interventions produce a short-run rise in financial markets, NGDP expectations and economic performance. Yet,

  • we are in the middle of a ‘great stagnation’ and have been for a few decades.
  • the frequency of crises seems to have risen dramatically in the last fifty years culminating in the environment since 2008 which is best described as a perpetual crisis.
  • each recovery seems to be weaker than the previous one and requires an increased injection of stimulus to achieve results that were easily achieved by a simple rate cut not that long ago.

Similarly, the post-WW2 history of mental health presents a puzzle of its own, summarised by Whitaker as follows:

The puzzle can now be precisely summed up. On the one hand, we know that many people are helped by psychiatric medications. We know that many people stabilize well on them and will personally attest to how the drugs have helped them lead normal lives. Furthermore, as Satcher noted in his 1999 report, the scientific literature does document that psychiatric medications, at least over the short term, are “effective.” Psychiatrists and other physicians who prescribe the drugs will attest to that fact, and many parents of children taking psychiatric drugs will swear by the drugs as well. All of that makes for a powerful consensus: Psychiatric drugs work and help people lead relatively normal lives. And yet, at the same time, we are stuck with these disturbing facts: The number of disabled mentally ill has risen dramatically since 1955, and during the past two decades, a period when the prescribing of psychiatric medications has exploded, the number of adults and children disabled by mental illness has risen at a mind-boggling rate.

Whitaker then asks the obvious but heretical question – “Could our drug-based paradigm of care, in some unforeseen way, be fueling this modern-day plague?” and answers the question in the affirmative. But what are the precise mechanisms and patterns that underlie this deterioration?

Adaptive Response to Intervention and Drug Dependence

The fundamental reason why interventions fail in complex adaptive systems is the adaptive response they trigger, which subverts the aim of the intervention. Moreover, once the system has been artificially stabilised and system agents have adapted to this new stability, the system cannot cope with any abrupt withdrawal of the stabilising force. For example, Whitaker notes that

Neuroleptics put a brake on dopamine transmission, and in response the brain puts down the dopamine accelerator (the extra D2 receptors). If the drug is abruptly withdrawn, the brake on dopamine is suddenly released while the accelerator is still pressed to the floor. The system is now wildly out of balance, and just as a car might careen out of control, so too the dopaminergic pathways in the brain……In short, initial exposure to neuroleptics put patients onto a path where they would likely need the drugs for life.

Whitaker makes the same observation for benzodiazepines and antidepressants:

benzodiazepines….work by perturbing a neurotransmitter system, and in response, the brain undergoes compensatory adaptations, and as a result of this change, the person becomes vulnerable to relapse upon drug withdrawal. That difficulty in turn may lead some to take the drugs indefinitely.

(antidepressants) perturb neurotransmitter systems in the brain. This leads to compensatory processes that oppose the initial acute effects of a drug…. When drug treatment ends, these processes may operate unopposed, resulting in appearance of withdrawal symptoms and increased vulnerability to relapse.

Similarly, when a central bank protects incumbent banks against liquidity risk, the banks choose to hold progressively more illiquid portfolios. When central banks provide incumbent banks with cheap funding in times of crisis to prevent failure and creditor losses, the banks choose to take on more leverage. This is similar to what John Adams has termed the ‘risk thermostat’ – the system readjusts to get back to its preferred risk profile. The protection once provided is almost impossible to withdraw without causing systemic havoc as agents adapt to the new stabilised reality and lose the ability to survive in an unstabilised environment.
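
To make the risk-thermostat dynamic concrete, here is a minimal sketch in Python. Every number in it – the bank’s risk target, the underlying asset risk, the fraction of risk absorbed by the safety net – is an invented assumption, not an estimate; the point is only the direction of the effect: an agent that targets a fixed level of risk converts any protection into extra leverage.

    # A minimal sketch of John Adams' 'risk thermostat' applied to a bank.
    # All parameters are illustrative assumptions.

    def chosen_leverage(perceived_asset_risk, risk_target):
        """The bank levers up until leverage x perceived risk hits its target."""
        return risk_target / perceived_asset_risk

    risk_target = 0.10       # the bank's preferred overall level of risk
    true_asset_risk = 0.02   # underlying riskiness of the asset portfolio

    # 'support' is the fraction of asset risk absorbed by the safety net
    for support in [0.0, 0.25, 0.5, 0.75]:
        perceived = true_asset_risk * (1 - support)
        leverage = chosen_leverage(perceived, risk_target)
        # if support were suddenly withdrawn, the bank bears full asset risk
        print(f"support={support:.2f}  leverage={leverage:5.1f}  "
              f"risk if support withdrawn={leverage * true_asset_risk:.2f}")

The total risk borne by the system never falls: the thermostat converts every unit of protection into additional leverage, so withdrawing the protection leaves the bank bearing far more risk than it did before the protection was ever offered.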

Of course, when economic agents actively intend to arbitrage such commitments by central banks, the adaptation is simply a form of moral hazard. But the same adaptation can easily occur without intent, via the natural selective forces at work in an economy – those who fail to take advantage of the Greenspan/Bernanke put simply go bust or get fired. In our brain, the adaptation simply reflects homeostatic mechanisms selected for by the process of evolution.

Transformation into a Pathological State, Loss of Core Functionality and Deterioration of the Baseline State

I have argued in many posts that the successive cycles of Minskyian stabilisation have a role to play in the deterioration in the structural performance of the real economy which has manifested itself as ‘The Great Stagnation’. The same conclusion holds for many other complex adaptive systems and our brain is no different. Stabilisation kills much of what makes human beings creative. Innovation and creativity are fundamentally disequilibrium processes so it is no surprise that an environment of stability does not foster them. Whitaker interviews a patient on antidepressants who said: “I didn’t have mood swings after that, but instead of having a baseline of functioning normally, I was depressed. I was in a state of depression the entire time I was on the medication.”

He also notes disturbing research on the damage done to children who were treated for ADHD with Ritalin:

when researchers looked at whether Ritalin at least helped hyperactive children fare well academically, to get good grades and thus succeed as students, they found that it wasn’t so. Being able to focus intently on a math test, it turned out, didn’t translate into long-term academic achievement. This drug, Sroufe explained in 1973, enhances performance on “repetitive, routinized tasks that require sustained attention,” but “reasoning, problem solving and learning do not seem to be [positively] affected.”……Carol Whalen, a psychologist from the University of California at Irvine, noted in 1997 that “especially worrisome has been the suggestion that the unsalutary effects [of Ritalin] occur in the realm of complex, high-order cognitive functions such as flexible problem-solving or divergent thinking.”

Progressive Increase in Required Dosage

In economic systems, this steady structural deterioration means that increasing amounts of stimulus need to be applied in successive cycles of stabilisation to achieve the same levels of growth. Whitaker too identifies a similar tendency:

Over time, Chouinard and Jones noted, the dopaminergic pathways tended to become permanently dysfunctional. They became irreversibly stuck in a hyperactive state, and soon the patient’s tongue was slipping rhythmically in and out of his mouth (tardive dyskinesia) and psychotic symptoms were worsening (tardive psychosis). Doctors would then need to prescribe higher doses of antipsychotics to tamp down those tardive symptoms.

At this point, some of you may raise the following objection: so what if the new state is pathological? Maybe capitalism with its inherent instability is itself pathological. And once the safety nets of the Greenspan/Bernanke put, lender-of-last-resort programs and too-big-to-fail bailouts are put in place why would we need or want to remove them? If we simply medicate the economy ad infinitum, can we not avoid collapse ad infinitum?

This argument, however, is flawed for two reasons:

  • The ability of economic players to reorganise to maximise the rents extracted from central banking and state commitments far exceeds the resources available to the state and the central bank. The key reason for this is the purely financial nature of this commitment. For example, if the state decided to print money and support the price of corn at twice its natural market price, then it could conceivably do so forever. Sooner or later, rent extractors will run up against natural resource limits – for example, limits on arable land. But when the state commits to supporting a credit-money-dominated financial system and asset prices, the economic system can and will generate financial “assets” without limit to take advantage of this commitment. The only defense that the central bank and the state possess is regulation aimed at maintaining financial markets in an incomplete, underdeveloped state where economic agents do not possess the tools to game the system. Unfortunately, as Minsky and many others have documented, the pace of financial innovation over the last half-century has meant that banks and financialised corporates have all the tools they need to circumvent regulations and maximise rent extraction.
  • Even in a modern state that can print its own fiat currency, the ability to maintain financial commitments is subordinate to the need to control inflation. But doesn’t the complete absence of inflationary pressures in the current environment prove that we are nowhere close to any such limits? Not quite – as I have argued before, current macroeconomic policy is defined by an abandonment of the full employment target in order to mitigate any risk of inflation whatsoever. The inflationary risk caused by rent extraction from the stabilisation commitment is being counterbalanced by a “reserve army of labour”. The reason for giving up full employment is simple – as Minsky identified, once the economy has gone through successive cycles of stabilisation, it is prone to ‘rapid cycling’.

Rapid Cycling and Transformation of an Episodic Illness into a Chronic Illness

Minsky noted that

A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch.

In other words, an economy that attempts to achieve full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation – somewhat similar to a broken shower that only runs either too hot or too cold. The abandonment of the full employment target enables the system to postpone this point of rapid cycling.

The structural malformation of the economic system by the application of increasing levels of stimulus to the task of stabilisation means that the economy has lost the ability to generate the endogenous growth and innovation that it could before it was so actively stabilised. The system has now been homogenised and is entirely dependent upon constant stimulus. The phenomenon of ‘rapid cycling’ explains something I noted in an earlier post: the apparently schizophrenic nature of the markets, which turn from risk-on to risk-off at the drop of a hat. It is this lack of diversity that causes the flipping, as the vast majority of agents change their behaviour based on the absence or presence of stabilising interventions.
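
The role of diversity can be illustrated with a toy simulation. Suppose each agent holds the risky position only if a common signal – say, the perceived probability of a stabilising intervention – exceeds the agent’s private threshold. The threshold distributions below are purely hypothetical:

    # A toy sketch: homogenised thresholds produce all-or-nothing market flips.
    import numpy as np

    rng = np.random.default_rng(0)
    n_agents = 10_000

    diverse = rng.uniform(0.0, 1.0, n_agents)        # heterogeneous private views
    homogenised = rng.normal(0.5, 0.005, n_agents)   # everyone adapted to one regime

    for signal in [0.45, 0.49, 0.51, 0.55]:
        print(f"signal={signal:.2f}  "
              f"risk-on (diverse)={np.mean(signal > diverse):.2f}  "
              f"risk-on (homogenised)={np.mean(signal > homogenised):.2f}")

With diverse thresholds, the aggregate position changes smoothly with the signal. With homogenised thresholds, a move in the signal from 0.49 to 0.51 flips the market from almost entirely risk-off to almost entirely risk-on – the schizophrenia is a property of the homogenised crowd, not of any individual agent.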

Whitaker again notes the connection between medication and rapid cycling in many instances:

As early as 1965, before lithium had made its triumphant entry into American psychiatry, German psychiatrists were puzzling over the change they were seeing in their manic-depressive patients. Patients treated with antidepressants were relapsing frequently, the drugs “transforming the illness from an episodic course with free intervals to a chronic course with continuous illness,” they wrote. The German physicians also noted that in some patients, “the drugs produced a destabilization in which, for the first time, hypomania was followed by continual cycling between hypomania and depression.”

(stimulants) cause children to cycle through arousal and dysphoric states on a daily basis. When a child takes the drug, dopamine levels in the synapse increase, and this produces an aroused state. The child may show increased energy, an intensified focus, and hyperalertness. The child may become anxious, irritable, aggressive, hostile, and unable to sleep. More extreme arousal symptoms include obsessive-compulsive and hypomanic behaviors. But when the drug exits the brain, dopamine levels in the synapse sharply drop, and this may lead to such dysphoric symptoms as fatigue, lethargy, apathy, social withdrawal, and depression. Parents regularly talk of this daily “crash.”

THE PATIENT WANTS STABILITY TOO

At this point, I seem to be arguing that stabilisation is all just a con-game designed to enrich evil bankers, evil pharma etc. But such an explanation underestimates just how deep-seated the temptation and need to stabilise really is. The most critical component that it misses out on is the fact that the “patient” in complex adaptive systems is as eager to choose stability over resilience as the doctor is.

The Short-Term vs The Long-Term

As Daniel Carlat notes, the reality is that on the whole, psychiatric drugs “work” at least in the short term. Similarly, each individual act of macroeconomic stabilisation such as a lender-of-last-resort intervention, quantitative easing or a rate cut clearly has a positive impact on the short-term performance of both asset markets and the economy.

Whitaker too acknowledges this:

Those are the dueling visions of the psychopharmacology era. If you think of the drugs as “anti-disease” agents and focus on short-term outcomes, the young lady springs into sight. If you think of the drugs as “chemical imbalancers” and focus on long-term outcomes, the old hag appears. You can see either image, depending on where you direct your gaze.

The critical point here is that just like in forest fires and macroeconomies, the initial attempts to stabilise can be achieved easily and with very little medication. The results may seem even miraculous. But this initial period does not last. From one of many cases Whitaker quotes:

at first, “it was like a miracle,” she says. Andrew’s fears abated, he learned to tie his shoes, and his teachers praised his improved behavior. But after a few months, the drug no longer seemed to work so well, and whenever its effects wore off, there would be this “rebound effect.” Andrew would “behave like a wild man, out of control.” A doctor increased his dosage, only then it seemed that Andrew was like a “zombie,” his sense of humor reemerging only when the drug’s effects wore off. Next, Andrew needed to take clonidine in order to fall asleep at night. The drug treatment didn’t really seem to be helping, and so Ritalin gave way to other stimulants, including Adderall, Concerta, and dextroamphetamine. “It was always more drugs,” his mother says.

Medication Seen as Revealing Structural Flaws

One would think that the functional and structural deterioration that follows constant medication would cause both the patient and the doctor to reconsider the benefits of stabilisation. But this deterioration too can be interpreted in many different ways. Whitaker gives an example where the stabilised state is seen to be beneficial by revealing hitherto undiagnosed structural problems:

in 1982, Michael Strober and Gabrielle Carlson at the UCLA Neuropsychiatric Institute put a new twist into the juvenile bipolar story. Twelve of the sixty adolescents they had treated with antidepressants had turned “bipolar” over the course of three years, which—one might think—suggested that the drugs had caused the mania. Instead, Strober and Carlson reasoned that their study had shown that antidepressants could be used as a diagnostic tool. It wasn’t that antidepressants were causing some children to go manic, but rather the drugs were unmasking bipolar illness, as only children with the disease would suffer this reaction to an anti-depressant. “Our data imply that biologic differences between latent depressive subtypes are already present and detectable during the period of early adolescence, and that pharmacologic challenge can serve as one reliable aid in delimiting specific affective syndromes in juveniles,” they said.

Drug Withdrawal as Proof That It Works

The symptoms of drug withdrawal can also be interpreted to mean that the drug was necessary and that the patient is fundamentally ill. The reduction in withdrawal symptoms when the patient goes back on the drug provides further “proof” that the drug works. Withdrawal symptoms can also be interpreted as proof that the patient needs to be treated for a longer period. Again, quoting from Whitaker:

Chouinard and Jones’s work also revealed that both psychiatrists and their patients would regularly suffer from a clinical delusion: They would see the return of psychotic symptoms upon drug withdrawal as proof that the antipsychotic was necessary and that it “worked.” The relapsed patient would then go back on the drug and often the psychosis would abate, which would be further proof that it worked. Both doctor and patient would experience this to be “true,” and yet, in fact, the reason that the psychosis abated with the return of the drug was that the brake on dopamine transmission was being reapplied, which countered the stuck dopamine accelerator. As Chouinard and Jones explained: “The need for continued neuroleptic treatment may itself be drug-induced.”

while they acknowledged that some alprazolam patients fared poorly when the drug was withdrawn, they reasoned that it had been used for too short a period and the withdrawal done too abruptly. “We recommend that patients with panic disorder be treated for a longer period, at least six months,” they said.

Similarly, macroeconomic crises can be, and frequently are, interpreted as evidence of the need for better and more stabilisation. The initial positive impact of each intervention and the negative impact of reducing stimulus only reinforce this belief.

SCIENCE AND STABILISATION

A typical complaint against Whitaker’s argument is that his thesis is unproven. I would argue that within the confines of conventional “scientific” data analysis, his thesis and others directly opposed to it are essentially unprovable. To take an example from economics, is the current rush towards “safe” assets a sign that we need to produce more “safe” assets? Or is it a sign that our fragile economic system is addicted to the need for an ever-increasing supply of “safe” assets and what we need is a world in which no assets are safe and all market participants are fully aware of this fact?

In complex adaptive systems it can also be argued that the modern scientific method that relies on empirical testing of theoretical hypotheses against the data is itself fundamentally biased towards stabilisation and against resilience. The same story that I trace out below for the history of mental health can be traced out for economics and many other fields.

Desire to Become a ‘Real’ Science

Whitaker traces out how the theory attributing mental disorders to chemical imbalances was embraced as it enabled psychiatrists to become “real” doctors and captures the mood of the profession in the 80s:

Since the days of Sigmund Freud the practice of psychiatry has been more art than science. Surrounded by an aura of witchcraft, proceeding on impression and hunch, often ineffective, it was the bumbling and sometimes humorous stepchild of modern science. But for a decade and more, research psychiatrists have been working quietly in laboratories, dissecting the brains of mice and men and teasing out the chemical formulas that unlock the secrets of the mind. Now, in the 1980s, their work is paying off. They are rapidly identifying the interlocking molecules that produce human thought and emotion…. As a result, psychiatry today stands on the threshold of becoming an exact science, as precise and quantifiable as molecular genetics.

Search for the Magic Bullet despite Complexity of Problem

In the language of medicine, a ‘magic bullet’ is a drug that counters the root cause of the disease without adversely affecting any other part of the patient. The chemical-imbalance theory took a ‘magic bullet’ approach which reduced the complexity of our mental system to “a simple disease mechanism, one easy to grasp. In depression, the problem was that the serotonergic neurons released too little serotonin into the synaptic gap, and thus the serotonergic pathways in the brain were “underactive”. Antidepressants brought serotonin levels in the synaptic gap up to normal, and that allowed these pathways to transmit messages at a proper pace.”

Search for Scientific Method and Objective Criteria

Whitaker traces out the push towards making psychiatry an objective science with a defined method and its implications:

Congress had created the NIMH with the thought that it would transform psychiatry into a more modern, scientific discipline…..Psychiatrists and nurses would use “rating scales” to measure numerically the characteristic symptoms of the disease that was to be studied. Did a drug for schizophrenia reduce the patient’s “anxiety”? His or her “grandiosity”? “Hostility”? “Suspiciousness”? “Unusual thought content”? “Uncooperativeness”? The severity of all of those symptoms would be measured on a numerical scale and a total “symptom” score tabulated, and a drug would be deemed effective if it reduced the total score significantly more than a placebo did within a six-week period. At least in theory, psychiatry now had a way to conduct trials of psychiatric drugs that would produce an “objective” result. Yet the adoption of this assessment put psychiatry on a very particular path: The field would now see short-term reduction of symptoms as evidence of a drug’s efficacy. Much as a physician in internal medicine would prescribe an antibiotic for a bacterial infection, a psychiatrist would prescribe a pill that knocked down a “target symptom” of a “discrete disease.” The six-week “clinical trial” would prove that this was the right thing to do. However, this tool wouldn’t provide any insight into how patients were faring over the long term.

It cannot be emphasised enough that even lengthening the trial period is not enough to give us definitive answers. The argument that structural flaws are being uncovered, or that withdrawal symptoms prove that the drug works, cannot be definitively refuted. Moreover, at every point in time after medication is started, the short-term impact of staying on or increasing the level of medication is better than the alternative of going off the medication. The deeper issue is that in such a system, statistical analysis that tries to determine the efficacy of the intervention cannot deal with the fact that the nature of the intervention itself is to shift the distribution of outcomes into the tail – and to continue to do so as long as the level of medication keeps increasing.
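
A stylised simulation makes the problem concrete. Assume, purely for illustration, an intervention that damps routine week-to-week symptoms but adds a small probability of a severe relapse that grows with cumulative exposure:

    # Stylised sketch: a short trial cannot see an intervention that trades
    # routine fluctuations for slowly growing tail risk. Numbers are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    weeks = 520  # a ten-year horizon

    def symptom_path(stabilised):
        path = np.zeros(weeks)
        for t in range(weeks):
            score = abs(rng.normal(0, 2.0))   # routine symptom fluctuations
            if stabilised:
                score *= 0.3                  # the drug damps routine symptoms...
                if rng.random() < 1e-4 * t:   # ...but tail risk grows with exposure
                    score += 30.0             # rare severe relapse
            path[t] = score
        return path

    for label in ["untreated", "stabilised"]:
        s = symptom_path(label == "stabilised")
        print(f"{label:10s}  6-week mean={s[:6].mean():4.2f}  "
              f"10-year mean={s.mean():4.2f}  worst week={s.max():5.1f}")

Over the six-week window of the standard trial, the stabilised arm does far better; only the full horizon reveals that the worst outcomes have become dramatically worse. Any trial short enough to be practical will certify the intervention as effective.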

The Control Agenda and High Modernism

The desire for stability and the control agenda is not simply a consequence of the growth of Olsonian special interests in the economy. The title of this post is inspired by Holling and Meffe’s classic paper on this topic in ecology. Their paper highlights that stabilisation is embedded within the command-and-control approach, which is itself inherent to the high-modernist worldview that James Scott has criticised.

Holling and Meffe also recognise that it is a simplistic application of “scientific” methods that underpins this command-and-control philosophy:

much of present ecological theory uses the equilibrium definition of resilience, even though that definition reinforces the pathology of equilibrium-centered command and control. That is because much of that theory draws predominantly from traditions of deductive mathematical theory (Pimm 1984) in which simplified, untouched ecological systems are imagined, or from traditions of engineering in which the motive is to design systems with a single operating objective (Waide & Webster 1976; De Angelis et. al. 1980; O’Neill et al. 1986), or from small-scale quadrat experiments in nature (Tilman & Downing 1994) in which long-term, large-scale successional or episodic transformations are not of concern. That makes the mathematics more tractable, it accommodates the engineer’s goal to develop optimal designs, and it provides the ecologist with a rationale for utilizing manageable, small sized, and short-term experiments, all reasonable goals. But these traditional concepts and techniques make the world appear more simple, tractable, and manageable than it really is. They carry an implicit assumption that there is global stability – that there is only one equilibrium steady-state, or, if other operating states exist, they should be avoided with safeguards and regulatory controls. They transfer the command-and-control myopia of exploitive development to similarly myopic demands for environmental regulations and prohibitions.

Those who emphasize ecosystem resilience, on the other hand, come from traditions of applied mathematics and applied resource ecology at the scale of ecosystems, such as the dynamics and management of freshwater systems (Fiering 1982), forests (Clark et al. 1979), fisheries (Walters 1986), semiarid grasslands (Walker et al. 1969), and interacting populations in nature (Dublin et al. 1990; Sinclair et al. 1990). Because these studies are rooted in inductive rather than deductive theory formation and in experience with the effects of large-scale management disturbances, the reality of flips from one stable state to another cannot be avoided (Holling 1986).

My aim in this last section is not to argue against the scientific method but simply to state that we have adopted too narrow a definition of what constitutes a scientific endeavour. Even this is not a coincidence. High modernism has its roots firmly planted in Enlightenment rationality and philosophical viewpoints that lie at the core of our idea of progress. In many uncertain domains, genuine progress and stabilisation that leads to fragility cannot be distinguished from each other. These are topics that I hope to explore in future posts.

Written by Ashwin Parameswaran

December 14th, 2011 at 10:51 am

Forest Fire Suppression and Macroeconomic Stabilisation

with 24 comments

In an earlier post, I compared Minsky’s Financial Instability Hypothesis with Buzz Holling’s work on ecological resilience and briefly touched upon the consequences of wildfire suppression as an example of the resilience-stability tradeoff. This post expands upon the lessons we can learn from the history of fire suppression and its impact on the forest ecosystem in the United States and draws some parallels between the theory and history of forest fire management and macroeconomic management.

Origins of Stabilisation as the Primary Policy Objective and Initial Ease of Implementation

The impetus for both fire suppression and macroeconomic stabilisation came from a crisis. In economics, this crisis was the Great Depression which highlighted the need for stabilising fiscal and monetary policy during a crisis. Out of all the initiatives, the most crucial from a systems viewpoint was the expansion of lender-of-last-resort operations and bank bailouts which tried to eliminate all disturbances at their source. In Minsky’s words: “The need for lender-of-last-resort operations will often occur before income falls steeply and before the well nigh automatic income and financial stabilizing effects of Big Government come into play.” (Stabilizing an Unstable Economy pg 46)

Similarly, the battle for complete fire suppression was won after the Great Idaho Fires of 1910. “The Great Idaho Fires of August 1910 were a defining event for fire policy and management, indeed for the policy and management of all natural resources in the United States. Often called the Big Blowup, the complex of fires consumed 3 million acres of valuable timber in northern Idaho and western Montana…..The battle cry of foresters and philosophers that year was simple and compelling: fires are evil, and they must be banished from the earth. The federal Weeks Act, which had been stalled in Congress for years, passed in February 1911. This law drastically expanded the Forest Service and established cooperative federal-state programs in fire control. It marked the beginning of federal fire-suppression efforts and effectively brought an end to light burning practices across most of the country. The prompt suppression of wildland fires by government agencies became a national paradigm and a national policy” (Sara Jensen and Guy McPherson). In 1935, the Forest Service implemented the ‘10 AM policy’, a goal to extinguish every new fire by 10 AM the day after it was reported.

In both cases, the trauma of a catastrophic disaster triggered a new policy that would try to stamp out all disturbances at the source, no matter how small. This policy also had the benefit of initially being easy to implement and cheap. In the case of wildfires, “the 10 am policy, which guided Forest Service wildfire suppression until the mid 1970s, made sense in the short term, as wildfires are much easier and cheaper to suppress when they are small. Consider that, on average, 98.9% of wildfires on public land in the US are suppressed before they exceed 120 ha, but fires larger than that account for 97.5% of all suppression costs” (Donovan and Brown). As Minsky notes, macroeconomic stability was helped significantly by the deleveraged nature of the American economy from the end of WW2 till the 1960s. Even in interventions by the Federal Reserve in the late 60s and 70s, the amount of resources needed to shore up the system was limited.

Consequences of Stabilisation

Wildfire suppression in forests that are otherwise adapted to regular, low-intensity fires (e.g. understory fire regimes) causes the forest to become more fragile and susceptible to a catastrophic fire. As Holling and Meffe note, “fire suppression in systems that would frequently experience low-intensity fires results in the systems becoming severely affected by the huge fires that finally erupt; that is, the systems are not resilient to the major fires that occur with large fuel loads and may fundamentally change state after the fire”. This increased fragility arises from a few distinct patterns and mechanisms:

Increased Fuel Load: Just like channelisation of a river results in increased silt load within the river banks, the absence of fires leads to a fuel buildup thus making the eventual fire that much more severe. In Minskyian terms, this is analogous to the buildup of leverage and ‘Ponzi finance’ within the economic system.

Change in Species Composition: Species composition inevitably shifts towards less fire-resistant trees when fires are suppressed (Allen et al 2002). In an economic system, it is not simply that ‘Ponzi finance’ players thrive but that more prudently financed actors get outcompeted in the cycle. This has critical implications for the ability of the system to recover after the fire. This is an important problem in the financial sector where, as Richard Fisher observed, “more prudent and better-managed banks have been denied the market share that would have been theirs if mismanaged big banks had been allowed to go out of business”.

Reduction in Diversity: As I mentioned here, “In an environment free of disturbances, diversity of competing strategies must reduce dramatically as the optimal strategy will outcompete all others. In fact, disturbances are a key reason why competitive exclusion is rarely observed in ecosystems”. Contrary to popular opinion, the post-disturbance environment is incredibly productive and diverse. Even after a fire as severe as the Yellowstone fires of 1988, the regeneration of the system was swift and effective as the ecosystem was historically adapted to such severe fires.

Increased Connectivity: This is the least appreciated impact of eliminating all disturbances in a complex adaptive system. Disturbances perform a critical role by breaking connections within a network. Frequent forest fires result in a “patchy” modularised forest where no one fire can cause catastrophic damage. As Thomas Bonnicksen notes: “Fire seldom spread over vast areas in historic forests because meadows, and patches of young trees and open patches of old trees were difficult to burn and forced fires to drop to the ground…..Unlike the popular idealized image of historic forests, which depicts old trees spread like a blanket over the landscape, a real historic forest was patchy. It looked more like a quilt than a blanket. It was a mosaic of patches. Each patch consisted of a group of trees of about the same age, some young patches, some old patches, or meadows depending on how many years passed since fire created a new opening where they could grow. The variety of patches in historic forests helped to contain hot fires. Most patches of young trees, and old trees with little underneath did not burn well and served as firebreaks. Still, chance led to fires skipping some patches. So, fuel built up and the next fire burned a few of them while doing little harm to the rest of the forest”. Suppressing forest fires converts the forest into one connected whole, at risk of complete destruction from the eventual fire that cannot be suppressed.

In the absence of disturbances, connectivity builds up within the network, both within and between scales. Increased within-scale connectivity increases the severity of disturbances, while increased between-scale connectivity increases the probability of a disturbance at a lower level propagating up to higher levels and causing systemic collapse. Fire suppression in forests adapted to frequent undergrowth fires can cause an accumulation of ladder fuels which connect the undergrowth to the crown of the forest. The eventual undergrowth ignition then risks a crown fire by a process known as “torching”. Unlike understory fires, crown fires can spread across firebreaks such as rivers by a process known as “spotting” in which the wind carries burning embers through the air – the fire can spread in this manner even without direct connectivity. Such fires can easily cause systemic collapse and a state from which natural forces cannot regenerate the forest. In this manner, stabilisation can fundamentally transform the nature of the system rather than simply increase the severity of disturbances. For example, “extensive stand-replacing fires are in many cases resulting in “type conversions” from ponderosa pine forest to other physiognomic types (for example, grassland or shrubland) that may be persistent for centuries or perhaps even millennia” (Allen 2007). A minimal simulation after this list illustrates how sharply fire spread depends on this kind of connectivity.

Long-Run Increase in Cost of Stabilisation and Area Burned: The initial low cost of suppression is short-lived – the cumulative fragilisation of the system has led to rapidly rising suppression costs and area burned over the last three decades (Donovan and Brown 2007).
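
The threshold character of the connectivity argument can be illustrated with a minimal percolation sketch. Trees occupy a grid with density p – fuel density rising as fires are suppressed – the left edge is ignited, and fire spreads between adjacent trees. The grid size and densities are arbitrary choices:

    # A minimal percolation sketch of the connectivity argument.
    import numpy as np
    from collections import deque

    def burned_fraction(p, n=200, seed=2):
        rng = np.random.default_rng(seed)
        forest = rng.random((n, n)) < p   # True = tree; p rises as fuel builds up
        burnt = np.zeros_like(forest)
        queue = deque()
        for i in range(n):                # ignite the left edge of the forest
            if forest[i, 0]:
                burnt[i, 0] = True
                queue.append((i, 0))
        while queue:                      # fire spreads between adjacent trees
            i, j = queue.popleft()
            for a, b in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
                if 0 <= a < n and 0 <= b < n and forest[a, b] and not burnt[a, b]:
                    burnt[a, b] = True
                    queue.append((a, b))
        return burnt.sum() / max(forest.sum(), 1)

    for p in [0.45, 0.55, 0.60, 0.65, 0.75]:
        print(f"tree density {p:.2f}: fraction of forest burnt = {burned_fraction(p):.2f}")

Below the percolation threshold (roughly 0.59 on a square lattice) fires stay local however often they are started; just above it, a single ignition consumes most of the forest. Patchiness keeps a forest below this threshold; suppression pushes it across.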

Dilemmas in the Management of a Stabilised System

In my post on river flood management, I claimed that managing a stabilised and fragile system is “akin to choosing between the frying pan and the fire”. This has been the case in many forests around the United States for the last few decades and is the condition towards which the economies of the developed world are heading. Once the forest ecosystem has become fragile, the resultant large fire exacerbates the problem, thus triggering a vicious cycle. As Thomas Bonnicksen observed, “monster fires create even bigger monsters. Huge blocks of seedlings that grow on burned areas become older and thicker at the same time. When it burns again, fire spreads farther and creates an even bigger block of fuel for the next fire. This cycle of monster fires has begun”. The system enters an “unending cycle of monster fires and blackened landscapes”.

Minsky of course understood this end-state very well: “The success of a high-private-investment strategy depends upon the continued growth of relative needs to validate private investment. It also requires that policy be directed to maintain and increase the quasi-rents earned by capital – i.e., rentier and entrepreneurial income. But such high and increasing quasi-rents are particularly conducive to speculation, especially as these profits are presumably guaranteed by policy. The result is experimentation with liability structures that not only hypothecate increasing proportions of cash receipts but that also depend upon continuous refinancing of asset positions. A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch….As high investment and high profits depend upon and induce speculation with respect to liability structures, the expansions become increasingly difficult to control; the choice seems to become whether to accommodate to an increasing inflation or to induce a debt-deflation process that can lead to a serious depression”. (John Maynard Keynes pg 163–164)

The evolution of the system means that turning back the clock to a previous era of stability is not an option. As Minsky observed in the context of our financial system, “the apparent stability and robustness of the financial system of the 1950s and early 1960s can now be viewed as an accident of history, which was due to the financial residue of World War 2 following fast upon a great depression”. Re-regulation is not enough because it cannot undo the damage done by decades of financial “innovation” in a manner that does not risk systemic collapse.

At the same time, simply allowing an excessively stabilised system to burn itself out is a recipe for disaster. For example, on the role that controlled burns could play in restoring America’s forests to a resilient state, Thomas Bonnicksen observed: “Prescribed fire would come closer than any tool toward mimicking the effects of the historic Indian and lightning fires that shaped most of America’s native forests. However, there are good reasons why it is declining in use rather than expanding. Most importantly, the fuel problem is so severe that we can no longer depend on prescribed fire to repair the damage caused by over a century of fire exclusion. Prescribed fire is ineffective and unsafe in such forests. It is ineffective because any fire that is hot enough to kill trees over three inches in diameter, which is too small to eliminate most fire hazards, has a high probability of becoming uncontrollable”. The same logic applies to a fragile economic system.

Update: corrected date of Idaho fires from 2010 to 1910 in para 3 thanks to Dean.

Written by Ashwin Parameswaran

June 8th, 2011 at 11:35 am

The Great Recession through a Crony Capitalist Lens

with 9 comments

In this post, I apply the framework outlined previously to some empirical patterns in the financial markets and the broader economy. The objective is not to posit crony capitalism as the sole explanation of these patterns, but merely to argue that they are consistent with an increasingly crony capitalist economy.

The Paradox of Low Volatility and High Correlation

As many commentators have pointed out [1,2,3], the spike in volatility experienced during the depths of the financial crisis has largely reversed itself but correlation within equities and between various risky asset classes has kept on moving higher. The combination of high volatility and high correlation is associated with the process of collapse and is typical of the Minsky moment when the system undergoes a rapid deleveraging. However, the combination of high correlation and low volatility after the Minsky moment is unusual. In the absence of bailouts or protectionism, the economy should undergo a process of creative destruction and intense exploratory activity which, by its diffuse nature, results in low correlation. The combination of high correlation and low volatility instead signifies stasis and the absence of sufficient exploration in the economy, along with the presence of significant slack at firm level (micro-resilience).
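
A toy factor model shows why this combination is diagnostic. If most asset risk is idiosyncratic – the signature of diffuse exploration – volatility is high and pairwise correlation low; if a single common factor such as the presence or absence of stabilising interventions dominates, volatility can be low while correlation stays high. The factor and noise scales below are arbitrary:

    # Toy factor model: low volatility + high correlation signals homogenisation.
    import numpy as np

    rng = np.random.default_rng(3)
    days, n_assets = 252, 50

    def daily_returns(factor_vol, idio_vol):
        factor = rng.normal(0, factor_vol, (days, 1))  # common risk-on/off factor
        noise = rng.normal(0, idio_vol, (days, n_assets))
        return factor + noise                          # returns in % per day

    def vol_and_corr(r):
        ann_vol = r.std(axis=0).mean() * np.sqrt(252)
        corr = np.corrcoef(r.T)
        return ann_vol, corr[~np.eye(n_assets, dtype=bool)].mean()

    # 'diverse': mostly idiosyncratic risk; 'stabilised': the factor dominates
    for label, r in [("diverse", daily_returns(0.5, 1.5)),
                     ("stabilised", daily_returns(0.6, 0.2))]:
        vol, corr = vol_and_corr(r)
        print(f"{label:10s}  avg vol={vol:5.1f}%  avg pairwise corr={corr:.2f}")

The ‘stabilised’ panel comes out with far lower volatility and far higher correlation – exactly the post-Minsky-moment pattern, and a signature of stasis rather than health.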

As I mentioned in a previous post, financing constraints faced by small businesses hinder new firm entry across industries. Expanding lending to new firms is an act of exploration and incumbent banks are almost certainly content with exploiting their known and low-risk sources of income instead.

The Paradox of High Corporate Profitability, Rising Productivity and High Unemployment and The Paradox of High Cash Balances and High Debt Issuance

Although corporate profitability is not at an all-time high, it has recovered at an unusually rapid pace compared to the nonexistent recovery in employment and wages. The recovery in corporate profits has been driven by a rise in worker productivity and increased efficiency but the lag between an output recovery and an employment recovery seems to have increased dramatically. So far, this increased profitability has led not to increased business investment but to increased cash holdings by corporates. Big corporates with easy access to debt markets have even chosen to tap the debt markets simply for the purpose of increasing cash holdings.

Again, incumbent corporates are eager to squeeze efficiencies out of their current operations, including downsizing the labour force, but instead of channeling the savings from this increased efficiency into exploratory investment, they choose to increase holdings of liquid assets. In an environment where incumbents are under limited threat of being superseded by exploratory new entrants, holding cash is an extremely effective way to retain optionality (a strategy that is much less effective if the pace of exploratory innovation is high, as an extended period of standing on the sidelines of exploratory activity can degrade the ability of the incumbent to rejoin the fray). Old jobs are being destroyed by the optimising activities of incumbents but the exploration required to create new jobs does not take place.

This discussion of profitability and unemployment echoes many of the common concerns of the far left. This is not a coincidence – one of the most damaging effects of Olsonian cronyism is its malformation of the economy from a positive-sum game into an increasingly zero-sum game. The dynamics of a predominantly crony capitalist economy are closer to a Marxian class struggle than they are to a competitive free-market economy. However, where I differ significantly from the left is in the proposed cure for the disease. For example, incumbent investment can be triggered by an increase in leverage by another sector – given the indebted state of the consumer, the government is the most likely candidate. But such a policy does nothing to tackle the reduced evolvability of the economy or the dominance of the incumbent special interest groups. Moreover, increased taxation and transfers of wealth to other organised groups such as labour only aggravate the ossification of the economic system into an increasingly zero-sum game. A sustainable solution must restore the positive-sum dynamics that are the essence of Schumpeterian capitalism. Such a solution involves reducing the power of the incumbent corporates and transferring wealth from incumbent corporates towards households not by taxation or protectionism but by restoring the invisible foot of new firm entry.

Written by Ashwin Parameswaran

November 30th, 2010 at 7:27 am

The Cause and Impact of Crony Capitalism: the Great Stagnation and the Great Recession

with 23 comments

STABILITY AS THE PRIMARY CAUSE OF CRONY CAPITALISM

The core insight of the Minsky-Holling resilience framework is that stability and stabilisation breed fragility and loss of system resilience. TBTF protection and the moral hazard problem are best seen as a subset of the broader policy of stabilisation, within which policies such as the Greenspan Put are much more pervasive and dangerous.

By itself, stabilisation is not sufficient to cause cronyism and rent-seeking. Once a system has undergone a period of stabilisation, the system manager is always tempted to prolong the stabilisation for fear of the short-term disruption, or even collapse, that its withdrawal would trigger. However, not all crisis-mitigation strategies involve bailouts and transfers of wealth to the incumbent corporates. As Mancur Olson pointed out, society can confine its “distributional transfers to poor and unfortunate individuals” rather than bailing out incumbent firms and still hope to achieve the same results.

To fully explain the rise of crony capitalism, we need to combine the Minsky-Holling framework with Mancur Olson’s insight that extended periods of stability trigger a progressive increase in the power of special interests and rent-seeking activity. Olson also noted the self-preserving nature of this phenomenon. Once rent-seeking has achieved sufficient scale, “distributional coalitions have the incentive and..the power to prevent changes that would deprive them of their enlarged share of the social output”.

SYSTEMIC IMPACT OF CRONY CAPITALISM

Crony capitalism results in a homogenous, tightly coupled and fragile macroeconomy. The key question is: via which channels does this systemic malformation occur? As I have touched upon in some earlier posts [1,2], the systemic implications of crony capitalism arise from its negative impact on new firm entry. In the context of the exploration vs exploitation framework, the absence of new firm entry tilts the system towards over-exploitation1.

Exploration vs Exploitation: The Importance of New Firm Entry in Sustaining Exploration

In a seminal article, James March distinguished between “the exploration of new possibilities and the exploitation of old certainties. Exploration includes things captured by terms such as search, variation, risk taking, experimentation, play, flexibility, discovery, innovation. Exploitation includes such things as refinement, choice, production, efficiency, selection, implementation, execution.” True innovation is an act of exploration under conditions of irreducible uncertainty whereas exploitation is an act of optimisation under a known distribution.

The assertion that dominant incumbent firms find it hard to sustain exploratory innovation is not a controversial one. I do not intend to reiterate the popular arguments in the management literature, many of which I explored in a previous post. Moreover, the argument presented here is more subtle: I do not claim that incumbents cannot explore effectively but simply that they can explore effectively only when pushed to do so by a constant stream of new entrants. This is of course the “invisible foot” argument of Joseph Berliner and Burton Klein for which the exploration-exploitation framework provides an intuitive and rigorous rationale.

Let us assume a scenario where the entry of new firms has slowed to a trickle, the sector is dominated by a few dominant incumbents and the S-curve of growth is about to enter its maturity/decline phase. To trigger off a new S-curve of growth, the incumbents need to explore. However, almost by definition, the odds that any given act of exploration will be successful are small. Moreover, the positive payoff from any exploratory search almost certainly lies far in the future. For an improbable shot at moving from a position of comfort to one of dominance in the distant future, an incumbent firm needs to divert resources from optimising and efficiency-increasing initiatives that will deliver predictable profits in the near future. Of course, if a significant proportion of its competitors adopt an exploratory strategy, even an incumbent firm will be forced to follow suit for fear of loss of market share. But this critical mass of exploratory incumbents never comes about. In essence, the state where almost all incumbents are content to focus their energies on exploitation is a Nash equilibrium.

On the other hand, the incentives of any new entrant are almost entirely skewed in favour of exploratory strategies. Even an improbable shot at glory is enough to outweigh the minor consequences of failure2. It cannot be emphasised enough that this argument does not depend upon the irrationality of the entrant. The same incremental payoff that represents a minor improvement for the incumbent is a life-changing event for the entrepreneur. When there exists a critical mass of exploratory new entrants, the dominant incumbents are compelled to follow suit and the Nash equilibrium of the industry shifts towards the appropriate mix of exploitation and exploration.
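
The asymmetry can be made concrete with a stylised payoff calculation, in which every number is invented for illustration:

    # Stylised payoffs behind the 'invisible foot' argument.
    # All numbers are illustrative assumptions, not estimates.
    p_success = 0.05       # odds that an exploratory bet pays off
    prize = 100.0          # payoff of a successful exploration
    sure_profit = 10.0     # predictable profit from exploiting the franchise
    explore_cost = 8.0     # resources an incumbent must divert to explore
    failure_cost = 1.0     # the entrant's modest personal cost of failure

    # Incumbent: exploration means sacrificing part of a sure profit stream
    incumbent_exploit = sure_profit
    incumbent_explore = sure_profit - explore_cost + p_success * prize

    # Entrant: there is no profit stream to sacrifice
    entrant_sideline = 0.0
    entrant_explore = p_success * prize - failure_cost

    print(f"incumbent: exploit={incumbent_exploit:.1f}  explore={incumbent_explore:.1f}")
    print(f"entrant:   sideline={entrant_sideline:.1f}  explore={entrant_explore:.1f}")

With identical odds and an identical prize, exploitation dominates for the incumbent (10.0 versus 7.0) while exploration dominates for the entrant (4.0 versus 0.0). And since the same absolute prize is life-changing for the entrepreneur but marginal for the incumbent, any concave utility function widens the gap further.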

The Crony Capitalist Boom-Bust Cycle: A Tradeoff between System Resilience and Full Employment

Due to insufficient exploratory innovation, a crony capitalist economy is not diverse enough. But this does not imply that the system is fragile either at firm/micro level or at the level of the macroeconomy. In the absence of any risk of being displaced by new entrants, incumbent firms can simply maintain significant financial slack3. If incumbents do maintain significant financial slack, sustainable full employment is impossible almost by definition. However, full employment can be achieved temporarily in two ways: either incumbent corporates can gradually give up their financial slack and lever up as the period of stability extends, as Minsky’s Financial Instability Hypothesis (FIH) would predict, or the household or government sector can lever up to compensate for the slack held by the corporate sector.

Most developed economies went down the route of increased household and corporate leverage with the process aided and abetted by monetary and regulatory policy. But it is instructive that developing economies such as India faced exactly the same problem in their “crony socialist” days. In keeping with its ideological leanings pre-1990, India tackled the unemployment problem via increased government spending. Whatever the chosen solution, full employment is unsustainable in the long run unless the core problem of cronyism is tackled. The current over-leveraged state of the consumer in the developed world can be papered over by increased government spending but in the face of increased cronyism, it only kicks the can further down the road. Restoring corporate animal spirits depends upon corporate slack being utilised in exploratory investment, which as discussed above is inconsistent with a cronyist economy.

Micro-Fragility as the Key to a Resilient Macroeconomy and Sustainable Full Employment

At the appropriate mix of exploration and exploitation, individual incumbent and new entrant firms are both incredibly vulnerable. Most exploratory investments are destined to fail as are most firms, sooner or later. Yet due to the diversity of firm-level strategies, the macroeconomy of vulnerable firms is incredibly resilient. At the same time, the transfer of wealth from incumbent corporates to the household sector via reduced corporate slack and increased investment means that sustainable full employment can be achieved without undue leverage. The only question is whether we can break out of the Olsonian special interest trap without having to suffer a systemic collapse in the process.

  1. It cannot be emphasized enough that the absence of new firm entry is simply the channel through which crony capitalism malforms the macroeconomy. Therefore, attempts to artificially boost new firm entry are likely to fail unless they tackle the ultimate cause of the problem, which is stabilisation.
  2. It is critical that the personal consequences of firm failure are minor for the entrepreneur – this is not the case for cultural and legal reasons in many countries around the world but is largely still true in the United States.
  3. It could be argued that incumbents could follow this strategy even when new entrants threaten them. This strategy however has its limits – an extended period of standing on the sidelines of exploratory activity can degrade the ability of the incumbent to rejoin the fray. As Brian Loasby remarked: “For many years, Arnold Weinstock chose to build up GEC’s reserves against an uncertain technological future in the form of cash rather than by investing in the creation of technological capabilities of unknown value. This policy, one might suggest, appears much more attractive in a financial environment where technology can often be bought by buying companies than in one where the market for corporate control is more tightly constrained; but it must be remembered that some, perhaps substantial, technological capability is likely to be needed in order to judge what companies are worth acquiring, and to make effective use of the acquisitions. As so often, substitutes are also in part complements.”

Written by Ashwin Parameswaran

November 24th, 2010 at 6:01 pm

The Resilience Stability Tradeoff: Drawing Analogies between River Flood Management and Macroeconomic Management

with 9 comments

In an earlier post, I drew an analogy between Minsky’s Financial Instability Hypothesis (FIH) and the ecologist Buzz Holling’s work on the resilience-stability tradeoff in ecosystems. Extended periods of stability reduce system resilience in complex adaptive systems such as ecologies and economies. By extension, policies that focus on stabilisation cause a loss of system resilience. Holling and Meffe called this the Pathology of Natural Resource Management which they described as follows: “when the range of natural variation in a system is reduced, the system loses resilience. That is, a system in which natural levels of variation have been reduced through command-and-control activities will be less resilient than an unaltered system when subsequently faced with external perturbations.” This pathology is as relevant to macroeconomic systems as it is to ecosystems and I briefly drew an analogy between forest fire management and economic management in the earlier post. In this post, I analyse the dilemmas faced in river flood management and their relevance to macroeconomic management.

A Case Study of River Flood Management: River Kosi

The Kosi is one of the most flood-prone rivers in India. The brunt of its fury is borne by the northern Indian state of Bihar and the Kosi is aptly also known as the “Sorrow of Bihar”. Like many other flood-prone rivers, the root cause lies in the extraordinary amount of silt that the Kosi carries from the Himalayas to the plains of Bihar. The silt deposition raises the river bed and gravity causes the river to seek out a new course – in this manner, it has been estimated that the river Kosi may have moved westwards by an incredible 210 km in the last 250 years. During the 1950s, in an effort to provide “permanent salvation from floods”, the Indian government embarked on a program of building embankments on the river to curb the periodic shifting of the Kosi’s course – the embankments were aimed at converting the unpredictable behaviour of the river into something more predictable and, by extension, more manageable. It was assumed that the people of Bihar would benefit from a stabilised and predictable river.

Unfortunately, the reality of the flood management program on the river Kosi has turned out to be anything but beneficial. The culmination of the failure of the program was the 2008 Bihar flood which was one of the most disastrous floods in the history of the state. So what went wrong? Was this just a result of an extraordinary natural event? Most certainly not – as Dinesh Mishra notes, in 2008 the Kosi carried only 1/7th of the capacity of the embankments and at various points of time since the 50s, the river had carried far greater quantities of water without causing anywhere near the damage it caused in 2008. This was a disaster caused by the loss of system resilience, highlighted by the inability of the system to “withstand even modest adverse shocks” after prolonged periods of stability.

So what caused this loss of system resilience? As Dinesh Mishra explains: “By building embankments on either side of a river and trying to confine it to its channel, its heavy silt and sand load is made to settle within the embanked area itself, raising the river bed and the flood water level. The embankments too are therefore raised progressively until a limit is reached when it is no longer possible to do so. The population of the surrounding areas is then at the mercy of an unstable river with a dangerous flood water level, which could any day flow over or make a disastrous breach.” As expected, the eventual breach was catastrophic – the course of the Kosi moved more than 120 kilometres eastwards in a matter of weeks. In the absence of the embankments, such a dramatic shift would have taken decades. With the passage of time, progressively greater resources were required to maintain system stability, and the eventual failure was catastrophic rather than moderate.
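The dynamic Mishra describes can be captured in a toy simulation: silt raises the bed each year, the embankment is raised to maintain a fixed freeboard until an engineering limit is reached, and the damage from the eventual breach scales with the height the confined river has gained above the surrounding plain. The Python sketch below is purely illustrative – the silt rate, freeboard, embankment limit and flood distribution are hypothetical numbers of my own choosing, not calibrated to the Kosi.

import random

def simulate_embanked_river(years=300, silt_rate=0.1, freeboard=2.5,
                            max_embankment=12.0, seed=7):
    """Toy model of an embanked, silt-laden river. Confinement forces
    silt to settle in the channel, raising the bed; managers raise the
    embankment to keep a fixed freeboard until a hard limit is reached;
    breach severity reflects how far the river now sits 'perched'
    above the surrounding plain. All parameters are hypothetical."""
    random.seed(seed)
    bed = 0.0                                       # bed height above the plain
    for year in range(1, years + 1):
        bed += silt_rate                            # steady deposition
        embankment = min(bed + freeboard, max_embankment)
        flood = bed + random.uniform(0.0, 2.0)      # an ordinary annual flood
        if flood > embankment:
            # A routine flood now overtops the maxed-out embankment:
            # stability has converted a moderate event into a catastrophe.
            return year, round(bed, 1)
    return None, round(bed, 1)

breach_year, perched_height = simulate_embanked_river()
print(breach_year, perched_height)

In this toy world the breach is never caused by an extreme flood – every annual flood is drawn from the same unremarkable distribution. An ordinary discharge overtops the system once the bed has been perched high enough, and the severity of the failure is proportional to the accumulated “stability”.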

As the above analysis highlights, the stabilisation did not merely substitute an occasional catastrophic outcome for a series of regular, moderately damaging outcomes (although this alone would be a cause for concern if a catastrophic outcome were capable of triggering systemic collapse). In fact, the stabilisation transformed the system into a state where eventually even minor and frequently observed disturbances would trigger a catastrophic outcome. As Jon Stewart put it, even “regular storms” would topple a fragile boat. When faced with the possibility of a catastrophic outcome, the managing agency has two choices, neither of which is attractive.

Either it can continue to stabilise the system, using ever-increasing resources in an effort to avoid the catastrophic outcome – an option that is tenable only if the managing agency has infinite resources, or if there is some absolute limit to this vicious cycle of cost escalation that lies within the agency’s resource capabilities. Or it can allow the catastrophic outcome to occur, in an effort to restore the system to its unstabilised state. But this option risks systemic collapse – it is not just the unprecedented nature of the outcome that we have to fear, but the very fact that the adaptive agents of the complex system may have lost the ability to deal with even the occasional moderate failures that the unstabilised system would throw up. In other words, once the system has lost resilience, managing it is akin to choosing between the frying pan and the fire.

For example, in the pre-embankment era, when the Kosi was allowed to meander and change course in a natural manner, the villagers on its banks had a deep understanding of the river’s patterns and vagaries. The floods sustained the fertility of the soil and ensured that groundwater resources were plentiful. This is not to deny that the Kosi caused damage, but because the people had adapted to its regular flooding patterns, systemic damage only occurred during the proverbial 100-year flood. This highlights an important lesson of complex adaptive systems: the impact of disturbances cannot be analysed in isolation from the adaptive capacities of the agents in the system. If disturbances are regular and predictable, agents will likely be adapted to them; conversely, prolonged periods of stability will render agents vulnerable to even the smallest disturbance.

The problems of managing floods on the Kosi are not unique – many rivers around the world pose similar challenges, for example the Yellow River, aptly named the “Sorrow of China”, and the Mississippi river basin, whose story was captured so well by John McPhee. So is there any way to avoid this evolutionary arms race against nature? Are we to conclude that the only sustainable strategy is to avoid any intervention in the complex adaptive system? Not necessarily – interventions must avoid tampering with the fundamental patterns and evolutionary dynamics of the system. Indeed, the best example of river management that works with the natural flow of the river rather than against it is the Dutch government’s aptly named “Room for the River” project in the Rhine river valley. Instead of building higher dikes, the Dutch have chosen to build lower dikes that allow the Rhine to flood over a larger area, thus easing the pressure on the dike system as a whole. This program has been adopted despite the fact that many farmers need to be relocated out of the newly expanded flood zones of the river.

Macroeconomic Parallels

Axel Leijonhufvud’s “Corridor Hypothesis” postulates that a macroeconomy will adapt well to small shocks, but “outside of a certain zone or ‘corridor’ around its long-run growth path, it will only very sluggishly react to sufficiently large, infrequent shocks.” The adaptive nature of the macroeconomy implies that stability, and by extension stabilisation, reduces the width of the corridor to the point where even a small shock is enough to push the system outside it. Just as embankments induced fragility in the river Kosi, bailouts and other economic transfers to specific firms and industries induce fragility in the macroeconomic system. Economic policy must allow the “river” of the macroeconomy to flow in a natural manner and restrict its interventions to insuring individual economic agents against the occasional severe flood.
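One way to make the corridor intuition concrete is a toy model in which agents’ shock-absorbing capacity adapts to the volatility they have recently experienced. The sketch below is only illustrative – the exponential-moving-average adaptation rule and all the numbers are my assumptions, not anything found in Leijonhufvud.

def corridor(shocks, adaptation=0.95):
    """Toy corridor model: agents' tolerance tracks an exponential
    moving average of the shock sizes they have recently experienced.
    Regular sizeable shocks keep the corridor wide; long calm periods
    narrow it. Purely illustrative -- the rule and parameters are
    assumptions, not Leijonhufvud's."""
    tolerance = 1.0
    for shock in shocks:
        if abs(shock) > tolerance:
            return "shock %.2f exceeds corridor %.2f" % (shock, tolerance)
        # adapt the corridor to experienced volatility
        tolerance = adaptation * tolerance + (1 - adaptation) * 3 * abs(shock)
    return "all shocks absorbed; corridor width %.2f" % tolerance

volatile = [0.6, -0.7, 0.5, -0.8, 0.7, -0.6] * 20     # a turbulent history
stabilised = [0.03, -0.02, 0.01, -0.03] * 50 + [0.5]  # calm, then a modest shock

print(corridor(volatile))     # shocks as large as 0.8 are absorbed
print(corridor(stabilised))   # a 0.5 shock breaches the narrowed corridor

A shock smaller than those routinely absorbed in the turbulent history breaches the corridor after the stabilised stretch – the shock did not get bigger; the corridor got narrower.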

This sentiment was also expressed by that great evolutionary macroeconomist of our time, Mancur Olson. In his final work, “Power and Prosperity”, Olson notes: “subsidizing industries, firms and localities that lose money…at the expense of those that make money…is typically disastrous for the efficiency and dynamism of the economy, in a way that transfers unnecessarily to poor individuals…A society that does not shift resources from the losing activities to those that generate a social surplus is irrational, since it is throwing away useful resources in a way that ruins economic performance without the least assurance that it is helping individuals with low incomes. A rational and humane society, then, will confine its distributional transfers to poor and unfortunate individuals.” Olson understood the damage inflicted by rent-seeking not only from a systemic perspective but also from a perspective of social justice. The logical consequence of micro-stabilisation is a crony-capitalist economy – rents invariably flow to the strong, and the result is a sluggish and inegalitarian economic system, not unlike many developing economies. Contrary to popular opinion, it is not limiting handouts to the poor that defines a free and dynamic economy but limiting the rents that flow to the privileged.

On the Damage Done by the Greenspan Put Variant of Monetary Policy

Clearly, some fiscal policies aimed at firm and industry stabilisation harm the economic system. But what about monetary policy? Isn’t monetary policy close to neutral and therefore exempt from the above criticism? On the contrary – the Greenspan Put variant of monetary policy damages macroeconomic resilience, as well as being inegalitarian and unjust. Monetary policy during the Greenspan-Bernanke era has focused on stabilising incumbent banks and helping them shore up their capital in response to every economic shock, as well as on asset prices as a transmission channel of monetary policy, i.e. the Greenspan Put. Unlike a river system, where the buildup of silt is a clear indicator of growing fragility, there are no clear signs of a loss of system resilience in a macroeconomy. However, we can infer a loss of macroeconomic resilience from the ever-increasing resources required to maintain system stability. Just as the embankments of the Kosi were raised higher and higher to combat even a minor flood, the resources needed to stabilise the financial system have grown over the last 25 years. In the early 90s, bank capital could be rebuilt with a few years of low rates; now we need a panoply of “liquidity” facilities, near-zero rates and quantitative easing aimed at compressing the entire yield curve to achieve the same result.

As I mentioned earlier, such a stabilisation policy may be credible if there is a limit to the costs of stabilisation. For example, the rents that can be extracted by any small, isolated sector of the economy are limited. Unfortunately – and this is a point that cannot be emphasised enough – there is no limit to the rents that can be extracted by the financial sector. Every commitment by the central bank to insure the financial sector against bad outcomes will be arbitraged for all it is worth, until the cost of maintaining the commitment becomes so prohibitive that it is no longer tenable. Of course, as long as the stabilising policy is in operation it appears to be a “free lunch” – the costs of programs such as the TARP appear to be limited and well worth their macroeconomic benefits, just as flood protection appears to be a successful choice in the long period of calm before the eventual disaster. The loss of resilience and the rent extraction are exacerbated as other financial market players are encouraged to mimic banks and take on similarly negatively skewed bets, such as investing the proceeds from securities lending in “safe” assets.
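The “free lunch” appearance of such negatively skewed bets is easy to illustrate. In the hypothetical payoff below, the strategy earns a small premium in almost every period and gives it all back, and more, in a rare bad state; over most short windows it looks riskless and profitable even though it destroys value on average. All the numbers are invented for illustration.

import random

def skewed_pnl(periods, premium=1.0, crash_prob=0.02, crash_loss=60.0):
    """Cumulative P&L of a bet that collects a small premium almost
    every period and takes a rare outsized loss -- the payoff profile
    of writing insurance against the bad state. Hypothetical numbers."""
    pnl = 0.0
    for _ in range(periods):
        pnl += -crash_loss if random.random() < crash_prob else premium
    return pnl

random.seed(0)
windows = [skewed_pnl(24) for _ in range(10000)]   # many short windows
profitable = sum(1 for w in windows if w > 0) / len(windows)

print("fraction of short windows that look profitable: %.2f" % profitable)
print("true expected value per period: %.2f" % (0.98 * 1.0 - 0.02 * 60.0))

Roughly three out of five such windows show a smooth, positive P&L even though the expected value per period is negative – which is why, as long as the put is in place, both the trades and the policy that backstops them appear costless.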

In my last post, I noted the connection between inequality and the rents emanating from the moral hazard subsidy, but the larger culprit is the toxic combination of Greenspan Put monetary policy and a dynamically uncompetitive, cronyist financial sector. Even if the sector were more competitive, it is inevitable that monetary policy focused on shoring up asset prices will benefit the primary asset-holders in the economy – in itself a regressive transfer of wealth to the rich. The idea that supporting asset prices is the best way to support the wider economy is not far from the notion of trickle-down economics (or, as Will Rogers put it: “money was all appropriated for the top in hopes that it would trickle down to the needy.”).

Finally, although it goes without saying that even a fiat currency-issuing central bank does not have infinite resources, the move over the last century from a gold standard to a fiat money regime does have important implications for system resilience. In evolving from a decentralised gold-standard monetary system to a fiat currency-issuing central bank regime, the flexibility and resources at the monetary authority’s disposal have increased significantly. In the hands of a responsible central bank, the ability to issue a fiat currency is beneficial; but in an excessively stabilised economy, it allows the process of stabilisation to be maintained far longer than it otherwise would be. And just as in the case of the river Kosi, the longer the period of stabilisation, the more catastrophic the results of the inevitable normal disturbance.

Written by Ashwin Parameswaran

October 18th, 2010 at 11:35 am