macroresilience

resilience, not stability

Archive for the ‘Resilience’ Category

Radical Centrism: Uniting the Radical Left and the Radical Right


Pragmatic Centrism Is Crony Capitalism

Neoliberal crony capitalism is driven by a grand coalition between the pragmatic centre-left and the pragmatic centre-right. Crony capitalist policies are always justified as the pragmatic solution. The range of policy options is narrowed down to a pragmatic compromise that maximises the rent that can be extracted by special interests. Instead of the government providing essential services such as healthcare and law and order, we get oligopolistic private healthcare and privatised prisons. Instead of a vibrant and competitive private sector with free entry and exit of firms we get heavily regulated and licensed industries, too-big-to-fail banks and corporate bailouts.

There’s no better example of this dynamic than the replacement of the public option in Obamacare by a ‘private option’. As Glenn Greenwald argues, “whatever one’s views on Obamacare were and are: the bill’s mandate that everyone purchase the products of the private health insurance industry, unaccompanied by any public alternative, was a huge gift to that industry.” Public support is garnered by presenting the private option as the pragmatic choice, the compromise option, the only option. To middle class families who fear losing their healthcare protection due to unemployment, the choice is framed as either the private option or nothing.

In a recent paper (h/t Chris Dillow), Pablo Torija asks the question ‘Do Politicians Serve the One Percent?’ and concludes that they do. This is not a surprising result but what is more interesting is his research on the difference between leftwing and rightwing governments which he summarises as follows: “In 2009 center-right parties maximized the happiness of the 100th-98th richest percentile and center-left parties the 100th-95th richest percentile. The situation has evolved from the seventies when politicians represented, approximately, the median voter”.

Nothing illustrates the irrelevance of democratic politics in the neo-liberal era more than the sight of a supposedly free-market right-wing government attempting to reinvent Fannie Mae/Freddie Mac in Britain. On the other side of the pond, we have a supposedly left-wing government which funnels increasing amounts of taxpayer money to crony capitalists in the name of public-private partnerships. Politics today is just internecine warfare between the various segments of the rentier class. As Pete Townshend once said, “Meet the new boss, same as the old boss”.

The Core Strategy of Pragmatic Crony Capitalism: Increase The Scope and Reduce the Scale of Government

Most critics of neoliberalism on the left point to the dramatic reduction in the scale of government activities since the 80s – the privatisation of state-run enterprises, the increased dependence upon private contractors for delivering public services etc. Most right-wing critics lament the increasing regulatory burden faced by businesses and individuals and the preferential treatment and bailouts doled out to the politically well-connected. Neither the left nor the right is wrong. But both of them only see one side of what is the core strategy of neoliberal crony capitalism – increase the scope and reduce the scale of government intervention. Where the government was the sole operator, as in prisons and healthcare, “pragmatic” privatisation leaves us with a mix of heavily regulated oligopolies and risk-free private contracting relationships. On the other hand, where the private sector was allowed to operate without much oversight, the “pragmatic” reform involves the subordination of free enterprise to a “sensible” regulatory regime and public-private partnerships to direct capital to social causes. In other words, expand the scope of government to permeate as many economic activities as possible and contract the scale of government within its core activities.

Some of the worst manifestations of crony capitalism can be traced to this perverse pragmatism. The increased scope and reduced scale are the main reasons for the cosy revolving door between incumbent crony capitalists and the government. The left predictably blames it all on the market, the right blames government corruption, while the revolving door of “pragmatic” politicians and crony capitalists robs us blind.

Radical Centrism: Increase The Scale and Reduce The Scope of Government

The essence of a radical centrist approach is government provision of essential goods and services and a minimal-intervention, free enterprise environment for everything else. In most countries, this requires both a dramatic increase in the scale of government activities within its core domain and a dramatic reduction in the scope of government activities outside it. In criticising the shambolic privatisation of British Rail in the United Kingdom, Christian Wolmar argued that “once you have government involvement, you might as well have government ownership”. This is an understatement. The essence of radical centrism is: ‘once you have government involvement, you must have government ownership’. Moving from publicly run systems “towards” free-enterprise systems or vice versa is never a good idea. The road between the public sector and the private sector is the zone of crony capitalist public-private partnerships. We need a narrowly defined ‘pure public option’ rather than the pragmatic crony capitalist ‘private option’.

The idea of radical centrism is not just driven by vague ideas of social justice or increased competition. It is driven by ideas and concepts that lie at the heart of complex system resilience. All complex adaptive systems that successfully balance the need to maintain robustness with the need to generate novelty and innovation utilise a similar approach.

Barbell Approach: Conservative Core, Aggressive Periphery

Radical centrism follows what Nassim Taleb has called the ‘barbell approach’. Taleb also provides us with an excellent example of such a policy in his book ‘Antifragile’: “hedge funds need to be unregulated and banks nationalized.” The idea here is that you bring the essential utility-like component of banking into the public domain and leave the rest alone. It is critical that the common man not be compelled to use oligopolistic, rent-fuelled services for his essential needs. In the modern world, the ability to hold money and transact is an essential service. It is also critical that there is only a public option, not a public imperative. The private sector must be allowed to compete against the public option.

A bimodal strategy of combining a conservative core with an aggressive periphery is common across complex adaptive systems in many different domains. It is true of the gene regulatory networks in our body, which contain a conservative “kernel”. The same phenomenon has even been identified in technological systems such as the architecture of the Internet, where a conservative kernel “represent(s) a stable basis on which diversity and complexity of higher-level processes can evolve”.

Stress, fragility and disorder in the periphery generate novelty and variation that enable the system to innovate and adapt to new environments. The stable core not only promotes robustness but paradoxically also promotes long-run innovation by avoiding systemic collapse. Innovation is not opposed to robustness. In fact, the long-term ability of a system to innovate is dependent upon system robustness. But robustness does not imply stability; it simply means a stable core. The progressive agenda is consistent with creative destruction so long as we focus on a safety net, not a hammock.

Restore the ‘Invisible Foot’ of Competition

The neo-liberal era is often seen as the era of deregulation and market supremacy. But as many commentators have noticed, “deregulation typically means reregulation under new rules that favor business interests.” As William Davies notes, “the guiding assumption of neoliberalism is not that markets work perfectly, but that private actors make better decisions than public ones”. And this is exactly what happened. Public sector employees were moved onto incentive-based contracts that relied on their “greed” and the invisible hand to elicit better outcomes. Public services were increasingly outsourced to private contractors who were theoretically incentivised to keep costs down and improve service delivery. Nationalised industries like telecom were replaced with heavily licensed private oligopolies. But there was a fatal flaw in these “reforms” which Allen Schick identifies as follows (emphasis mine):

one should not lose sight of the fact that these are not real markets and that they do not operate with real contracts. Rather, the contracts are between public entities—the owner and the owned. The government has weak redress when its own organizations fail to perform, and it may be subject to as much capture in negotiating and enforcing its contracts as it was under pre-reform management. My own sense is that while some gain may come from mimicking markets, anything less than the real thing denies government the full benefits of vigorous competition and economic redress.

One difference between the “real thing” and the neoliberal version of the real thing is what the economist Joseph Berliner has called the ‘invisible foot’ of capitalism. Incumbent firms rarely undertake disruptive innovation unless compelled to do so by the force of dynamic competition from new entrants. The critical factor in this competitive dynamic is not the temptation of higher profits but the fear of failure and obsolescence. To sustain long-run innovation in the economy, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition.”

The other critical difference is just how vulnerable the half-way house solutions of neo-liberalism were to being gamed and abused by opportunistic private actors. The neo-liberal era saw a rise in incentive-based contracts across the private and public sector but without the invisible foot of the threat of failure. The predictable result was not only a stagnant economy but an increase in rent extraction as private actors gamed the positive incentives on offer. As an NHS surgeon quipped with respect to the current NHS reform project: “I think there’s a model there, but it’s whether it can be delivered and won’t be corrupted. I can see a very idealistic model, but by God, it’s vulnerable to people ripping it off”.

Most people view the failure of the Soviet model as being due to the inefficiency of the planned economy. But the problem that consumed the attention of Soviet leaders from the 1950s onwards was the inability of the Soviet economy to innovate. Brezhnev once quipped that Soviet enterprises shied away from innovation “as the devil shies away from incense”. In his work on the on-the-ground reality of the Soviet economy, Joseph Berliner analysed the efforts of Soviet planners to counter this problem of insufficient innovation. The Soviets tried a number of positive incentive schemes (e.g. innovation “bonuses”) that we commonly associate with capitalist economies. But what they could not replicate was the threat of firm failure. Managers, safe in the knowledge that competitive innovation would not cause their firm or their jobs to vanish, were content to focus on low-risk process innovation and cost-reduction rather than higher-risk, disruptive innovation. In fact, the presence of bonuses that rewarded efficiency further reduced exploratory innovation, as exploratory innovation required managers to undertake actions that often reduced short-term efficiency.

Unwittingly, the neoliberal era has replicated the Soviet system. Incumbent firms have no fear of failure and can game the positive incentives on offer to extract rents while at the same time shying away from any real disruptive innovation. We are living in a world where rentier capitalists game the half-baked schemes of privatisation and fleece the taxpayer, while the perverse dynamic of safety for the classes and instability for the masses leaves us in the Great Stagnation.

Bailouts For People, Not Firms

Radical centrism involves a strengthening of the safety net for individuals combined with a dramatic increase in the competitive pressures exerted on incumbent firms. Today, we bail out banks because a banking collapse threatens the integrity of the financial system. We bail out incumbent firms because firm failure leaves the unemployed without even catastrophic health insurance. The principle of radical centrism aims to build a firewall that protects the common man from the worst impact of economic disturbances while simultaneously increasing the threat of failure at firm level. The presence of the ‘public option’ and a robust safety net is precisely what empowers us to allow incumbent firms to fail.

The safety net that protects individuals ensures robustness while the presence of a credible ‘invisible foot’ at the level of the firm boosts innovation. Moreover, as Taleb notes, programs that bail out people are much less susceptible to being gamed and abused than programs that bail out limited liability firms. As I noted in an earlier post, “even uncertain tail-risk protection provided to corporates will eventually be gamed. The critical difference between individuals and corporates in this regard is the ability of stockholders and creditors to spread their bets across corporate entities and ensure that failure of any one bet has only a limited impact on the individual investors’ finances. In an individual’s case, the risk of failure is by definition concentrated and the uncertain nature of the transfer will ensure that moral hazard implications are minimal.”

The irony of the current policy debate is that policy interventions that prop up banks, asset prices and incumbent firms are viewed as the pragmatic option, while policy interventions focused on households are viewed as radical and therefore beyond the pale of discussion. Preventing rent-seeking is a problem that both the left and the right should be concerned with. But both the radical left and the radical right need to realise the misguided nature of many of their disagreements. A robust safety net is as important to maintaining an innovative free enterprise economy as free enterprise and the dismantling of entry barriers are to reducing inequality.

Note: For a more rigorous treatment of the tradeoff between innovation and robustness in complex adaptive systems, see my essay ‘All Systems Need A Little Disorder’.


Written by Ashwin Parameswaran

April 8th, 2013 at 2:54 pm

The Greenspan Fed’s Biggest Mistake: The LTCM Rate Cuts


The debate as to whether the Greenspan Fed’s easy money policies are to blame for the 2008 financial crisis tends to focus on the Fed’s actions after the bursting of the dot-com bubble in 2001. Some (like Stanley Druckenmiller) argue that the Fed should have allowed a recession and a “cleanup” while others such as Paul Krugman argue that it is ludicrous to tighten monetary policy in the face of high unemployment and low inflation.

The simplistic criticism of Greenspan-era monetary policy is that we should have simply allowed recessions such as the 2001 recession to play themselves out. In other words “let it burn”. But the more nuanced criticism of the ‘Greenspan Put’ school of monetary policy is not that it supports the economy, but that it does so via a monomaniacal obsession with supporting asset prices and hoping that the resulting wealth effect trickles down to the masses. I have made this point many times in the past as have many others (e.g. Cullen Roche).

There are many ways to support the economy, yet our current system focuses entirely on bailing out banks and shoring up asset prices to the exclusion of all other policy options. Why can’t we allow the banks and the market to fail and send helicopter money to individuals instead? Why can’t we start money-financed deficits and increase interest rates at the same time? By narrowing our options to a choice between ‘save everybody!’ and ‘let it burn!’, we choose an economic system that favours the 1% at all times. The Fed Kremlinologist extracts rents from the Greenspan Put during times of stability and the sharks come out during the collapse1.

Krugman is correct in arguing that a recession is no time to stop the firefighting so that an asset market bubble may be prevented. But the original sin of the Greenspan era was not in triggering bubbles during a recession. It was in using monetary policy to support asset markets and the financial sector even when the real economy was in no need of monetary stimulus. The most egregious example of such an intervention was the rate cuts during the LTCM collapse, which were implemented with the sole purpose of “saving” financial markets at a time when the real economy showed no signs of stress. Between September and November 1998, the Fed cut rates by 75 basis points solely for the purpose of supporting asset prices and avoiding even a small failure within the financial sector. Even a cursory glance at market events would have told you that the wider economy was in no need of monetary stimulus. Predictably, the rate cuts also provided the fuel for the final exponential ascent of the dot-com bubble2. This “success” also put Greenspan, Robert Rubin and Larry Summers on the cover of TIME magazine, which goes to show just how biased political incentives are in favour of stabilisation and against resilience.

The problem with the ‘Greenspan Put’ doctrine is not that it fails to prevent bubbles when a recession is on. The problem is that it creates conditions such that eventually there are only two states possible for the economic system – a bubble or a collapse. Market participants could assume that any fall in asset prices would be countered with monetary stimulus, and so they took on more macroeconomic asset-price risk. They then substituted increased leverage for the market risk that the Fed had relieved them of. Rates of return across asset classes settled down to permanently low “bubble-like” levels except during times of collapse, which became increasingly catastrophic due to the increased system-wide leverage. The stark choice faced by Greenspan in 2001, either an asset bubble or a recession, was a result of his many misguided interventions from 1987 till 2001. Of all these interventions, the LTCM affair was his biggest mistake.


  1. As Constantino Bresciani-Turroni notes in his book on the Weimar hyperinflation ‘The Economics of Inflation: A Study of Currency Depreciation in Post-War Germany’, “even in the past, times of economic regressions, of social dissolution, and of profound political disturbances have often been characterised by a concentration of property. In those periods the strong recovered their primitive habits as beasts of prey.” ↩
  2. For example, the last rate cut by the Fed came three days after TheGlobe.com set a record for the largest first-day gain of any IPO on Nov 13, 1998. ↩

Written by Ashwin Parameswaran

March 4th, 2013 at 1:44 pm

The Ever-Increasing Cost of Propping Up A Fragile And Dysfunctional System


Monetary Medication And The Economy

Mervyn King:

We’re now in a position where you can see it’s harder and harder for monetary policy to push spending back up to the old path . . . It’s as if you’re running up an ever steeper hill.

Psychotropic Medication And The Brain

Robert Whitaker quoted in an earlier essay:

Over time….the dopaminergic pathways tended to become permanently dysfunctional. They became irreversibly stuck in a hyperactive state….Doctors would then need to prescribe higher doses of antipsychotics.

Fire Suppression And The Forest

From an earlier essay:

The initial low cost of suppression is short-lived and the cumulative effect of the fragilisation of the system has led to rapidly increasing costs of wildfire suppression and levels of area burned in the last three decades.


Written by Ashwin Parameswaran

February 13th, 2013 at 1:08 pm

Monetary and Fiscal Economics for a Near-Credit Economy


In an earlier post, I argued that our current monetary system is close to being a Wicksellian ‘pure credit economy’. In Hans Trautwein’s words, this is “a state of affairs in which all money is held in interest-bearing bank deposits and in which all payments are effected by means of book-keeping transfers in the banking system”. One significant way in which our current system is not quite a pure credit economy is that economic agents still retain the option to hold currency notes. This option is not very important in positive-rate environments but it denies the central bank the ability to enforce negative interest rates (which can be avoided by simply hoarding zero-interest physical notes). The dominance of interest-bearing money combined with the inability to enforce negative interest rates implies that the quantity of base money in the system is irrelevant, not just now in a ‘liquidity trap’, but at all points in the future.

The Irrelevance of The Quantity of Base Money and The Absence of The Monetarist Hot Potato

It is trivially obvious that interest-bearing money cannot be a hot potato in the monetarist sense. There is no reason to get rid of interest-bearing money balances; such holdings only need to be minimised if the interest rate on them is insufficient relative to the ‘natural’ real rate and the safety premium implied in holding money. To put it simply, if interest rates are 5% and inflation is 15% then interest-bearing money will act as a hot potato and fuel inflation. But in the current environment of possibly negative natural real rates and a high demand for safety, prolonged negative real rate regimes are perfectly sustainable without triggering any ‘hot potato’ inflation. The above holds not only at the zero-bound but at all positive interest rates. If the central bank wants to sustain positive bank rates, it must either pay interest on reserves or mop up all excess reserves. In either scenario, we have no hot potato.
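
One compact way to state this condition (a standard Fisher-style approximation; the symbols are illustrative shorthand rather than anything used in the original argument): let i_m be the nominal rate paid on interest-bearing money, \pi the inflation rate, r^{*} the ‘natural’ real rate and s the safety premium attached to holding money. Then

\text{hot-potato pressure arises only when} \quad i_m - \pi < r^{*} - s

In the 5%/15% example, i_m - \pi = -10\%, far below any plausible r^{*} - s, so money balances are shed and inflation is fuelled. With r^{*} possibly negative and s elevated, as at present, a mildly negative real return on money breaches no such threshold and there is no hot potato.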

Interest-Bearing Money: Debt as Money

The history of interest-bearing money is essentially the history of debt as money. The modern history of transferable debt as money is exemplified by the use of bills of exchange in post-Renaissance Europe. As Philip Coggan explains:

trading systems were an early form of our modern economy, with its layers of debt and reliance on paper money. A merchant might extend credit to his customers; in turn, he would need such credit from his own suppliers, who might only have bought the goods with money borrowed from someone else. The default of one party would ripple through the system. This system was formalized in the form of bills of exchange, promissory notes offered as payment from one trader to another. The recipient might then use the bill as collateral to raise cash from a bank or other lender. The bill would be accepted at a discount, depending on a number of factors, most crucially the creditworthiness of the merchant concerned. This was, in effect, a paper money system outside the government’s control.

It is instructive to examine the evolution of this private credit economy in order to fully understand where we stand now. The rest of this section is primarily drawn from Carl Wennerlind’s excellent book ‘Casualties of Credit’. The private credit economy was an essential component of the English economy due to the perennial shortage of metallic currency. As Wennerlind notes, it wasn’t just merchants but the bulk of the English population who were participants in the credit economy. In the early days in the seventeenth century, the supply of private credit alone was nowhere near enough to make up for the scarcity of metallic currency. One reason was the limited transferability of private debt, a problem that was solved by the passage of the Promissory Notes Act of 1704, which made all debt instruments negotiable. Nevertheless, the limited elasticity of the private credit system in responding to demand from commerce remained a problem. A related and equally severe problem was fragility induced by the possibility of default. This was the Achilles heel of the private credit economy in the 17th century and it remains so in the 21st. Then as now, the real systemic risk was a wave of cascading defaults brought on by the tightly interconnected nature of private credit agreements.

The solution to this problem that we are all aware of and that has been well-documented is the growth of modern banking ultimately backstopped by central banks (usually via lender-of-last-resort actions). This architecture was initially limited by the restrictions placed upon the central bank by the metallic/gold standard, Bretton Woods etc which were finally thrown away in 1971 to construct the “perfectly elastic” monetary system that we have today. This is the logical conclusion of the process of abstracting away from the prior personal nature of “money as debt” to a decentralised impersonal system.

Less documented but equally important are the attempts to improve the supply and safety of the credit economy via collateral. The idea is simple – assets can be used as security to back the credit, thus improving the supply as well as the safety of credit. There were many recommendations in the 17th and 18th centuries as to what should constitute eligible collateral, but by far the most popular suggestion was land. It is worth quoting Wennerlind on this subject (who in turn quotes William Potter):

Potter also offered a proposal for a land bank, which was remarkably similar to that of Culpeper. Since “Credit grounded upon the best security is the same thing with Money,” the key was to establish a bank that used a different asset than precious metals as security backing the credit money. Since land was considered the most concrete and stable commodity at the time, there could be no better security than land to induce people to part with their commodities in exchange. By mortgaging land, which “would serve as well and better for such a pawn,” the land bank created a credit currency that would have “as true intrinsick value, as Gold and Silver”

Others, such as Hugh Chamberlen, advocated a general storehouse of goods that would serve as collateral. Nevertheless, none of these ideas were adopted in 17th/18th century England for good reason – none of these choices of collateral was liquid enough or permanent enough for the purpose. To a modern investor, government bonds are the obvious answer to this dilemma. But in 17th century Europe, government debt was neither liquid nor safe. However, a series of institutional changes after the ‘Glorious Revolution’ in 1688 changed all this (see North and Weingast 1989 for details). With the setting up of the Bank of England, British government bonds began to resemble the “risk-free” counterparts of the modern world by the mid-18th century. It is obvious how the ability of the new, more representative English Parliament to credibly commit to repay its debts enabled England to fund itself at a much lower cost. What is less appreciated is the fillip that the institution of a liquid, comparatively safe government bond market gave to the private credit economy. As Baskin and Miranti note, these government obligations could be used to collateralise private borrowing in a manner that is uncannily similar to the modern-day term repo contract.

The Hot Potato Constraint in a Credit Economy

What does all this have to do with the modern monetary system? In the modern pure fiat-currency economy (i.e. not the Eurozone), interest-bearing deposits, interest-bearing central bank reserves and interest-bearing government debt are all equivalent in that they are all nominally safe state obligations unencumbered by restraints such as a gold standard. Any shift in liabilities between central bank reserves, deposits and debt engineered by the central bank is only relevant for its interest-rate impact. There is nothing in this process that can be even remotely termed “money printing”. The inflation tax and any “hot potato” effect depend not on the absolute level of inflation but on the real interest rate offered on each tenor of these government obligations.

To the extent that any activity of the state approaches money printing, it is the act of deficit spending. Even this does not necessarily entail inflation – the central bank can force a contraction in the private credit economy by a sufficient rate-hike to counter any fiscal stance. Again there is no inflation tax and no possibility of hyperinflation as long as interest rates across the government obligation curve compensate sufficiently for inflation. Each fiscal stance has a separate sustainable level of inflation and interest rates that constitutes a short-term equilibrium. When a loose fiscal stance breaks out into excessive inflation and the risk of hyperinflation, it is usually the result of this rate hike being inadequate for fear of a collapse in the private economy.

Rather than talk in the abstract, it is easier to elaborate on the above framework with a few relevant and timely examples.

Permanence of QE is irrelevant

Gavyn Davies gives us the conventional argument as to why the perceived temporary nature of QE matters in preventing out-of-control inflation:

Fiscal policy, in theory at least, is set separately by the government, and the budget deficit is covered by selling bonds. The central bank then comes along and buys some of these bonds, in order to reduce long-term interest rates. It views this, purely and simply, as an unconventional arm of monetary policy. The bonds are explicitly intended to be parked only temporarily at the central bank, and they will be sold back into the private sector when monetary policy needs to be tightened. Therefore, in the long term, the amount of government debt held by the public is not reduced by QE, and all of the restraining effects of the bond sales in the long run will still occur. The government’s long-run fiscal arithmetic is not impacted.

As I have illustrated above, QE in a world of interest-bearing money is simply an adjustment in the maturity profile of government debt. But that is not all. In a credit economy where government bonds are a repoable safe asset, the bond-holder can simply repo his bonds for cash if he so chooses. Just as the East India Company could access cash on the back of their government bond holdings in the 18th century, any pension fund, insurer or bank can do the same today. This illustrates why the reversal of QE, if and when it happens, will have no impact on economy-wide access to cash/purchasing power.
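
To put rough numbers on the repo argument (the haircut and amounts below are purely illustrative assumptions, not figures from any of the posts cited here): the cash raised in a repo is approximately

\text{cash raised} \approx \text{market value of collateral} \times (1 - \text{haircut})

so a pension fund holding £100m of government bonds against a 2% haircut can raise roughly £98m of cash overnight. What matters for economy-wide access to purchasing power is that the bonds remain repoable safe collateral, not whether the central bank currently holds them or sells them back into the market as QE is unwound.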

Bond-financed or Money-financed deficits

Gavyn Davies again gives us the conventional argument:

When it runs a budget deficit, the government injects demand into the economy. By selling bonds to cover the deficit, it absorbs private savings, leaving less to be used to finance private investment. Another way of looking at this is that it raises interest rates by selling the bonds. Furthermore the private sector recognises that the bonds will one day need to be redeemed, so the expected burden of taxation in the future rises. This reduces private expenditure today. Let us call this combination of factors the “restraining effect” of bond sales.
All of this is changed if the government does not sell bonds to finance the budget deficit, but asks the central bank to print money instead. In that case, there is no absorption of private savings, no tendency for interest rates to rise, and no expected burden of future taxation. The restraining effect does not apply. Obviously, for any given budget deficit, this is likely to be much more expansionary (and potentially inflationary) than bond finance.

The ability of the private sector to repo its government bonds to access purchasing power today gives us a profound result. Whether the central bank monetises government debt or not is almost irrelevant (except from a signalling perspective) because the private sector can monetise government debt just as effectively. And when the government debt does not represent a ‘hot potato’, the private sector often does exactly that. This is not a theoretical argument. For example, Akçay et al. illustrate how fiscal deficits led to inflation in Turkey despite the absence of monetisation because “innovations in the form of new financial instruments are encouraged through high interest rates, and repos are typical examples of such innovations in chronic and high inflation countries. People are thus able to hold interest-bearing assets that are almost as liquid as money, and monetization is effectively done by the private financial sector instead of the government”. As Çavuşoğlu summarises, “The money creation process under high budget deficits can as well be characterised as an endogenous credit-money expansion rather than a monetary expansion to maximize seignorage revenue”.

Lest you assume that this only applies to developing market economies, the same argument was made almost three decades ago by Preston Miller:

In the financial sector….higher interest rates make profitable the development of new financial instruments that make government bonds more like money. These instruments allow people to hold interest-bearing assets that are as risk-free and as useful in transactions as money is. In this way, the private sector effectively monetizes government debt that the Federal Reserve doesn’t, so the inflationary effects of higher deficit policies increase.

Even in the early 80s, Miller saw the gradual demise of non-interest bearing money:

In recent years in the United States there have developed, at money market mutual funds, demand deposit accounts that are backed by Treasury securities and, at banks, deep-discount insured certificates of deposit that are backed by Treasury securities, issued in denominations of as little as $250, and assured of purchase by a broker. In Brazil, which has run high deficits for years, Treasury bills have become very liquid: their average turnover is now less than two days.

As in the case of Turkey and as argued by Preston Miller, the private sector can monetize the deficit as effectively as the central bank can. And so long as government obligations are deemed safe, it almost certainly will. In an interest-bearing economy, the safety of these obligations has nothing to do with the absolute level of inflation and everything to do with the real rate of return on the bonds. When central banks and governments attempt to enforce an excessively negative rate of return, they play with fire and risk hyperinflation.

The Near-Permanence of (Non Hot-Potato) Government Debt

P.G.M. Dickson characterised the rise of the government bond market in London during the 18th century as the era of “debts that were permanent for the state, liquid for the individual”. In a credit economy, government debt issued in the past is simply money that has already been printed. Erasing this debt would not imply a monetary collapse but it would unleash strong deflationary forces.

Most of the developed world (ex the Eurozone) could easily maintain their current levels of government debt ad infinitum so long as the real interest rates paid on them are sufficient. And in fact it makes sense for them to do exactly that. Even without the monetisability of long-term government debt, there is a significant demand for such debt from many private sector holders – the pension fund and insurance industry, which needs long-tenor bonds to match its liabilities to retirees, and investors who need long-tenor bonds to hedge their risky assets and provide tail-risk protection. Even without taking into account the “natural” real rate of interest, there is a strong argument to be made that the average real rate of return on long-tenor government bonds should be negative. Therefore, it does not even make economic sense for governments to pay back their debt, as long as it can be serviced at a sustainable real rate.

The appropriate question to ask is not ‘What is the maximum level of government debt that can be plausibly paid back?’. It is ‘What is the maximum level of government debt that can be plausibly serviced on a permanent basis?’. If there are any Ponzi schemes in government debt, they exist only if and when there are real limits to economic growth – working-age population growth, energy limits etc.
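
One way to make ‘serviced on a permanent basis’ precise is the textbook debt-dynamics identity d_{t+1} = d_t (1+r)/(1+g) - pb, where d is the debt/GDP ratio, r the average real rate paid on the debt, g the real growth rate and pb the primary balance as a share of GDP. The short sketch below simply iterates this identity (the function name and every number in it are illustrative assumptions, not figures from this post): with r at or below g the ratio settles at a finite level even under a permanent primary deficit, whereas with r persistently above g it grows without bound – which is precisely the case where real limits to growth start to bind.

# Illustrative sketch: iterate d_{t+1} = d_t * (1 + r) / (1 + g) - pb
# to see when a given debt/GDP ratio can be serviced indefinitely.
def debt_path(d0, r, g, pb, years):
    """Debt/GDP ratio over time. d0: starting ratio, r: real rate on debt,
    g: real growth rate, pb: primary balance/GDP (negative = deficit)."""
    path = [d0]
    for _ in range(years):
        path.append(path[-1] * (1 + r) / (1 + g) - pb)
    return path

# Hypothetical numbers: 100% debt/GDP and a permanent 1%-of-GDP primary deficit.
stable = debt_path(1.0, r=-0.01, g=0.02, pb=-0.01, years=50)    # r < g: ratio converges
explosive = debt_path(1.0, r=0.05, g=0.02, pb=-0.01, years=50)  # r > g: ratio diverges
print(round(stable[-1], 2), round(explosive[-1], 2))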

What Matters: Future Deficits and Real Rates

A policy option such as a cancellation of past debt or an announcement of helicopter drops would be relevant to the extent that it affects future deficits. Higher deficits would typically warrant a more hawkish monetary stance and it is the combination of this fiscal stance and the monetary response that determines whether the deficit regime constitutes an inflation tax on the private sector. For example, the state could institute a helicopter drop and raise interest rates at the same time to maintain real rates at acceptable levels – again the level of real rates is much more important than the absolute level of inflation. Even this hike in rates may not be required if the private sector is undergoing an endogenous deleveraging at the time.

Modern Repo and the Asset Price Approach to Monetary Policy

If the collateral underpinning the private credit economy were limited to government bonds, the lender-of-last-resort role of the central bank in the repo market would be trivial. However, the current scope of the repo market and similar financing arrangements (notably ABCP) extends to far riskier assets. Although the risk management in today’s repo market is far superior from an individual counterparty’s perspective (the predominance of the overnight repo, more sophisticated margining etc.), the systemic risk of cascading defaults triggering a credit collapse has in fact spread to all asset markets.

In order to meet their stabilisation mandate, central banks have implicitly taken on a mandate to backstop and stabilise the entire spectrum of liquid asset markets. If the central banks influence anything that could be termed as money supply in the modern credit economy, they do so via their influence on asset price levels (influenced in turn through the central bank’s actions on present interest rates, future interest rate path and liquidity). In a collateral-dependent credit economy, the Greenspan Put is the logical end-point of the stabilisation processes, the modern motto of which could be summarised as: ‘Focus on collateral values and the money supply will take care of itself’. Successive stabilisation leaves the economy in a condition where all economic actors have moved away from the idiosyncratic, illiquid economic risks that are the essence of an innovative, entrepreneurial economy towards the homogeneous liquid risks of a stagnant economy (detailed argument here).

In an earlier post, I noted that “The long-arc of stabilised cycles is itself a disequilibrium process (a sort of disequilibrium super-cycle) where performance in each cycle deteriorates compared to the last one – an increasing amount of stabilisation needs to be applied in each short-run cycle to achieve poorer results compared to the previous cycle.” This sentiment applies even when we look at the long-arc of stabilisation in England since the 17th century. In the 17th century, it only took a change in the laws (making debts negotiable) to prevent a collapse in the credit economy whereas now we need to prop up the entire spectrum of asset markets.

Notes:
1. The section on monetary hot potatoes and high-powered money is almost completely taken from commenter ‘K’ – example here


Written by Ashwin Parameswaran

October 17th, 2012 at 1:54 pm

Creative Destruction and The Class Struggle


In a perceptive post, Reihan Salam makes the point that private equity firms are simply an industrialised version of corporate America’s efficiency-seeking impulse. I’ve made a similar point in a previous post that the excesses of private equity mirror the excesses of the economy during the neoliberal era. To right-wing commentators, neoliberalism signifies a much-needed transition towards a free-market economy. Left-wing commentators on the other hand lament the resultant supremacy of capital over labour and rising inequality. But as I have argued several times, the reality of the neoliberal transition is one where a combination of protected asset markets via the Greenspan Put, an ever-growing ‘License Raj’, regulations that exist primarily to protect incumbent corporates and persistent bailouts of banks and large corporates has given us a system best described as “stability for the classes and instability for the masses”.

The solution preferred by the left is to somehow recreate the golden age of the 50s and the 60s i.e. stability for all. Although this would be an environment of permanent innovative stagnation bereft of Schumpeterian creative destruction, you could argue that restoring social justice, reducing inequality and shoring up the bargaining position of the working class is more important than technological progress. In this post I will argue that this stability-seeking impetus is counterproductive and futile. A stable system where labour and capital are both protected from the dangers of failure inevitably breeds a fragile and disadvantaged working class.

The technology industry provides a great example of how disruptive competitive dynamics can give workers a relatively strong bargaining position. As Reihan notes, the workers fired by Steve Jobs in 1997 probably found employment elsewhere without much difficulty. Some of them probably started their own technology ventures. The relative bargaining power of the technology worker is boosted not just by the presence of a large number of new firms looking to hire but also by the option to simply start their own small venture instead of being employed. This vibrant ecosystem of competing opportunities and alternatives is a direct consequence of the disruptive churn that has characterised the sector over the last few decades. This “disorder” means that most individual firms and jobs are vulnerable at all times to elimination. Yet jobseekers as a whole are in a relatively strong position. Micro-fragility leads to macro-resilience.

In many sectors, there are legitimate economies of scale that prevent laid-off workers from self-organising into smaller firms. But in much of the economy, both digital and physical, these economies of scale are rapidly diminishing. Yet these options are denied to large sections of the economy due to entry barriers from licensing requirements and regulatory hurdles that systematically disadvantage small, new firms. In some states, it is easier to form a technology start-up than it is to start a hair-braiding business. In fact, the increasingly stifling patent regime is driving Silicon Valley down the same dysfunctional path that the rest of the economy is on.

The idea that we can protect incumbent firms such as banks from failure and still preserve a vibrant environment for new entrants and competitors is folly. Just as a fire that burns down tall trees provides the opportunity for smaller trees to capture precious sunlight and thrive, new firms expand by taking advantage of the failure of large incumbents. But when the incumbent fails, there must be a sufficient diversity of small and new entrants who are in a position to take advantage. A long period of stabilisation does its greatest damage by stamping out this diversity and breeding a micro-stable, macro-fragile environment. Just as in ecosystems, “minor species provide a ‘reservoir of resilience’ through their functional similarity to dominant species and their ability to increase in abundance and thus maintain function under ecosystem perturbation or stress”. This deterioration is not evident during the good times when the dominant species, however homogeneous, appear to be performing well. Stabilisation is therefore an almost irreversible path – once the system is sufficiently homogeneous, avoiding systemic collapse requires us to put the incumbent fragile players on permanent life support.

As even Marxists such as David Harvey admit, Olsonian special-interest dynamics subvert and work against the interests of the class struggle:

the social forces engaged in shaping how the state–finance nexus works…differ somewhat from the class struggle between capital and labour typically privileged in Marxian theory….there are many issues, varying from tax, tariff, subsidy and both internal and external regulatory policies, where industrial capital and organised labour in specific geographical settings will be in alliance rather than opposition. This happened with the request for a bail-out for the US auto industry in 2008–9. Auto companies and unions sat side by side in the attempt to preserve jobs and save the companies from bankruptcy.

This fleeting and illusory stability that benefits the short-term interests of the currently employed workers in a firm leads to the ultimate loss of bargaining power and reduced real wage growth in the long run for workers as a class. In the pursuit of stability, the labour class supports those very policies that are most harmful to it in the long run. A regime of Smithian efficiency-seeking (i.e. the invisible hand) without Schumpeterian disruption (i.e. the invisible foot) inevitably leads to a system where capital dominates labour. Employed workers may achieve temporary stability via special-interest politics but the labour class as a whole will not. Creative destruction prevents the long-term buildup of capital interests by presenting a constant threat to the survival of the incumbent rent-earner. In the instability of the individual worker (driven by the instability of their firm’s prospects) lies the resilience of the worker class. Micro-fragility is the key to macro-resilience but this fragility must be felt by all economic agents, labour and capital alike.


Written by Ashwin Parameswaran

July 5th, 2012 at 1:11 am

Monetary Policy, Fiscal Policy and Inflation


In a previous post I argued that in the current environment, the Federal Reserve could buy up the entire stock of government bonds without triggering any incremental inflation. The argument for the ineffectiveness of conventional QE is fairly simple. Government bonds are already safe collateral both in the shadow banking system and with the central bank itself. The liquidity preference argument is redundant in differentiating between deposits and an asset that qualifies as safe collateral. Broad money supply is therefore unaffected when such an asset is purchased.

The monetarist objection to this argument is that QE increases the stock of high-powered money and increases the price level to the extent that this increase is perceived as permanent. But in an environment where interest is paid on reserves or deposits with the central bank, the very concept of high-powered money is meaningless and there is no hot potato effect to speak of. Some monetarists argue that we need to enforce a penalty rate on reserves to get rid of excess reserves but small negative rates make little difference to safe-haven flows and large negative rates will lead to people hoarding bank notes.

The other objection is as follows: if the central bank can buy up all the debt then why don’t we do just that and retire all that debt and make the state debt-free? Surely that can’t be right – isn’t such debt monetisation the road to Zimbabwe-like hyperinflation? Intuitively, many commentators interpret QE as a step on the slippery slope of fiscal deficit monetisation but this line of thought is fatally flawed. Inflation comes about from the expected and current monetisation of fiscal deficits, not from the central bank’s purchase of the stock of government debt that has arisen from past fiscal deficits. The persistent high inflation that many emerging market economies are so used to arises from money-financed deficits that are expected to continue well into the future.

So why do the present and future expected fiscal deficits in the US economy not trigger inflation today? One, the present deficits come at a time when the shadow money supply is still contracting. And two, the impact of expected future deficits is muddied thanks to the status of the US Dollar as the reserve currency of the world, a status that has been enhanced since the 90s as reserves have been used as capital-flight and IMF-avoidance insurance by many EM countries (This post by Brett Fiebiger is an excellent explanation of the privileged status enjoyed by the US Dollar). The expectations channel has to deal with too much uncertainty and there are too many scenarios in which the USD may hold its value despite large deficits, especially if the global economy continues to be depressed and demand for safe assets remains elevated. There are no such uncertainties in the case of peripheral economy fiat currencies (e.g. Hungary). To the extent that there is any safe asset demand, it is mostly local and the fact that other global safe assets exist means that the fiscal leeway that peripheral economies possess is limited. In other words, the absence of inflation is not just a matter of the market trusting the US government to take care of its long-term structural deficit problems – uncertainty and the “safe asset” status of the USD greatly diminish the efficacy of the expectations channel.

Amidst the fog of uncertainty and imperfect commitments, concrete steps matter and they matter especially in the midst of a financial crisis. Monetary policy can almost always prevent deflation in the face of a contraction in shadow money supply via the central banks’ lender-of-last-resort facilities. In an economy like 2008-2009, no amount of open-market operations, asset purchases and monetary target commitments can prevent a sharp deflationary contraction in the private shadow money supply unless the lender-of-last-resort facility is utilised. Once the system is stabilised and the possibility of a deflationary contraction has been avoided, monetary policy has very little leeway to create incremental inflation in the absence of fiscal profligacy and shadow banking/private credit expansion except via essentially fiscal actions such as buying private assets, credit guarantees etc. In the present situation where the private household economy is excessively indebted and the private business economy suffers from a savings glut and a persistent investment deficit due to structural malformation, fiscal profligacy is the only short-term option. Correspondingly, no amount of monetary stimulus can prevent a sharp fiscal contraction from causing deflation in the current economic state.

Monetary policy is also not all-powerful in its contractionary role – it has significant but not unlimited leeway to tighten policy in the face of fiscal profligacy or shadow banking expansion. The Indian economy in 1995-1996 illustrates how the Reserve Bank of India (RBI) could control inflation in the face of fiscal profligacy only by crippling the private sector economy. The real rates faced by the private sector shot up and spending ground to a halt. The dilemma faced by the RBI today mirrors the problems it faced then – if fiscal indiscipline by the Indian government persists, the RBI cannot possibly bring down inflation to acceptable levels without causing the private sector economy to keel over.

The current privileged status of the US Dollar and today’s low interest rates and inflation do not imply that long-term fiscal discipline is unimportant. Currently, the demand for safety reduces inflation and the low inflation renders the asset safer – this virtuous positive-feedback cycle can turn vicious if the expectation of monetisation becomes sufficiently large, and the mutual-feedback nature of the process means that any such transition will almost certainly be rapid. It is not even clear that the United States is better off than, say, Hungary in the long run. The United States has much more leeway and flexibility than Hungary, but if it abuses this privilege, any eventual break will be that much more violent. To borrow from an old adage, give an economy too much rope and it will hang itself.


Written by Ashwin Parameswaran

June 20th, 2012 at 4:55 pm

The Resilience Approach vs Minsky/Bagehot: When and Where to Intervene


There are many similarities between a resilience approach to macroeconomics and the Minsky/Bagehot approach – the most significant being a common focus on macroeconomies as systems in permanent disequilibrium. Although both approaches largely agree on the descriptive characteristics of macroeconomic systems, there are some significant differences when it comes to the preferred policy prescriptions. In a nutshell, the difference boils down to the question of when and where to intervene.

A resilience approach focuses its interventions on severe disturbances, whilst allowing small and moderate disturbances to play themselves out. Even when the disturbance is severe, a resilience approach avoids stamping out the disturbance at source and focuses its efforts on mitigating the wider impact of the disturbance on the macroeconomy. The primary aim is the minimisation of the long-run fragilising consequences of the intervention, which I have explored in detail in many previous posts (1, 2, 3). Just as small fires and floods are integral to ecological resilience, small disturbances are integral to macroeconomic resilience. Although it is difficult to identify ex-ante whether disturbances are moderate or not, the Greenspan-Bernanke era nevertheless contains some excellent examples of when not to intervene. The most obvious amongst all the follies of Greenspan-era monetary policy was the rate cuts during the LTCM collapse, which were implemented with the sole purpose of “saving” financial markets at a time when the real economy showed no signs of stress1.

The Minsky/Bagehot approach focuses on tackling all disturbances with debt-deflationary consequences at their source. Bagehot asserted in ‘Lombard Street’ that “in wild periods of alarm, one failure makes many, and the best way to prevent the derivative failures is to arrest the primary failure which causes them”. Minsky emphasised the role of both the lender-of-last-resort (LOLR) mechanism as well as fiscal stabilisers in tackling such “failures”. However Minsky was not ignorant of the long-term damage inflicted by a regime where all disturbances were snuffed out at source – the build-up of financial “innovation” designed to take advantage of this implicit protection, the descent into crony capitalism and the growing fragility of a private-investment driven economy2, an understanding that was also reflected in his fundamental reform proposals3. Minsky also appreciated that the short-run cycle from hedge finance to Ponzi finance does not repeat itself in the same manner. The long-arc of stabilised cycles is itself a disequilibrium process (a sort of disequilibrium super-cycle) where performance in each cycle deteriorates compared to the last one – an increasing amount of stabilisation needs to be applied in each short-run cycle to achieve poorer results compared to the previous cycle.

Resilience Approach: Policy Implications

As I have outlined in an earlier post, an approach that focuses on minimising the adaptive consequences of macroeconomic interventions implies that macroeconomic policy must allow the “river” of the macroeconomy to flow in a natural manner and restrict its interventions to insuring individual economic agents rather than corporate entities against the occasional severe flood. In practice, this involves:

  • De-emphasising the role of conventional and unconventional monetary policy (interest-rate cuts, LOLR, quantitative easing, LTRO) in tackling debt-deflationary disturbances.
  • De-emphasising the role of industrial policy and explicit bailouts of banks and other firms4.
  • Establishing neutral monetary-fiscal hybrid policies such as money-financed helicopter drops as the primary tool of macroeconomic stabilisation. Minsky’s insistence on the importance of LOLR operations was partly driven by his concerns that alternative policy options could not be implemented quickly enough5. This concern is less relevant with regards to helicopter drops in today’s environment where they can be implemented almost instantaneously6.

Needless to say, the policies we have followed throughout the ‘Great Moderation’ and continue to follow are anything but resilient. Nowhere is the farce of orthodox policy more apparent than in Europe, where countries such as Spain are compelled to enforce austerity on the masses whilst at the same time being forced to spend tens of billions of dollars bailing out incumbent banks. Even within the structurally flawed construct of the Eurozone, a resilient strategy would take exactly the opposite approach – one that would not only drag us out of the ‘Great Stagnation’ but would do so in a manner that delivers social justice and reduced inequality.


  1. Of course this “success” also put Greenspan, Rubin and Summers onto the cover of TIME magazine, which goes to show just how biased political incentives are in favour of stabilisation and against resilience.  ↩
  2. From pages 163-165 of Minsky’s book ‘John Maynard Keynes’:
    “The success of a high-private-investment strategy depends upon the continued growth of relative needs to validate private investment. It also requires that policy be directed to maintain and increase the quasi-rents earned by capital – i.e., rentier and entrepreneurial income. But such high and increasing quasi-rents are particularly conducive to speculation, especially as these profits are presumably guaranteed by policy. The result is experimentation with liability structures that not only hypothecate increasing proportions of cash receipts but that also depend upon continuous refinancing of asset positions. A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch…….
    In a sense, the measures undertaken to prevent unemployment and sustain output “fix” the game that is economic life; if such a system is to survive, there must be a consensus that the game has not been unfairly fixed…….
    As high investment and high profits depend upon and induce speculation with respect to liability structures, the expansions become increasingly difficult to control; the choice seems to become whether to accommodate to an increasing inflation or to induce a debt-deflation process that can lead to a serious depression……
    The high-investment, high-profits policy synthesis is associated with giant firms and giant financial institutions, for such an organization of finance and industry seemingly makes large-scale external finance easier to achieve. However, enterprises on the scale of the American giant firms tend to become stagnant and inefficient. A policy strategy that emphasizes high consumption, constraints upon income inequality, and limitations upon permissible liability structures, if wedded to an industrial-organization strategy that limits the power of institutionalized giant firms, should be more conducive to individual initiative and individual enterprise than is the current synthesis.
    As it is now, without controls on how investment is to be financed and without a high-consumption, low private-investment strategy, sustained full employment apparently leads to treadmill affluence, accelerating inflation, and recurring threats of financial crisis.”
     ↩
  3. Just like Keynes, Minsky understood completely the dynamic of stabilisation and its long-term strategic implications. Given the malformation of private investment by the interventions needed to preserve the financial system, Keynes preferred the socialisation of investment and Minsky a shift to a high-consumption, low-investment system. But the conventional wisdom, which takes Minsky’s tactical advice on stabilisation and ignores his strategic advice on the need to abandon the private-investment led model of growth, is incoherent. ↩
  4. In his final work ‘Power and Prosperity’, Mancur Olson expressed a similar sentiment: “subsidizing industries, firms and localities that lose money…at the expense of those that make money…is typically disastrous for the efficiency and dynamism of the economy, in a way that transfers unnecessarily to poor individuals…A society that does not shift resources from the losing activities to those that generate a social surplus is irrational, since it is throwing away useful resources in a way that ruins economic performance without the least assurance that it is helping individuals with low incomes. A rational and humane society, then, will confine its distributional transfers to poor and unfortunate individuals.” ↩
  5. From pg 44 of ‘Stabilising an Unstable Economy’: “The need for lender-of-last-resort operations will often occur before income falls steeply and before the well nigh automatic income and financial stabilizing effects of Big Government come into play. If the institutions responsible for the lender-of-last-resort function stand aside and allow market forces to operate, then the decline in asset values relative to current output prices will be larger than with intervention; investment and debt-financed consumption will fall by larger amounts; and the decline in income, employment, and profits will be greater. If allowed to gain momentum, the financial crisis and the subsequent debt deflation may, for a time, overwhelm the income and financial stabilizing capacity of Big Government. Even in the absence of effective lender-of-last-resort action, Big Government will eventually produce a recovery, but, in the interval, a high price will be paid in the form of lost income and collapsing asset values.” ↩
  6. As Charlie Bean of the BoE suggests, helicopter drops could be implemented in the UK via the PAYE system. ↩

Written by Ashwin Parameswaran

May 8th, 2012 at 1:31 pm

The Control Revolution And Its Discontents

with 20 comments

One of the key narratives on this blog is how the Great Moderation and the neo-liberal era have signified the death of truly disruptive innovation in much of the economy. When macroeconomic policy stabilises the macroeconomic system, every economic actor is incentivised to take on more macroeconomic systemic risk and shed idiosyncratic, microeconomic risks. Those that figured out this reality early on and/or had privileged access to the programs used to implement this macroeconomic stability, such as banks and financialised corporates, were the big winners – a process that is largely responsible for the rise in inequality during this period. In such an environment the pace of disruptive product innovation slows, but low-risk process innovation aimed at cost-reduction and improved efficiency flourishes. Therefore we get the worst of all worlds – the Great Stagnation combined with widespread technological unemployment.

This narrative naturally raises the question: when was the last time we had a truly disruptive Schumpeterian era of creative destruction? In a previous post looking at the evolution of the post-WW2 developed world, I argued that the so-called Golden Age was anything but Schumpeterian – as Alexander Field has argued, much of the economic growth till the 70s was built on the basis of disruptive innovation that occurred in the 1930s. So we may not have been truly Schumpeterian for at least 70 years. But what about the period from at least the mid 19th century till the Great Depression? Even a cursory reading of economic history gives us pause for thought – after all, wasn’t a significant part of this period supposed to be the Gilded Age of cartels and monopolies, which sounds anything but disruptive?

I am now of the opinion that we have never really had any long periods of constant disruptive innovation – this is not a sign of failure but simply a reality of how complex adaptive systems across domains manage the tension between efficiency, robustness, evolvability and diversity. What we have had is a subverted control revolution in which repeated attempts to achieve and hold onto an efficient equilibrium fail. Creative destruction occurs despite our best efforts to stamp it out. In a sense, disruption is alien to the essence of the industrial and post-industrial period of the last two centuries, the overriding philosophy of which is automation and algorithmisation aimed at efficiency and control. And much of our current trouble is a function of the fact that we have almost perfected the control project.

The operative word and the source of our problems is “almost”. Too many people look at the transition from the Industrial Revolution to the Algorithmic Revolution as a sea-change in perspective. But in reality, the current wave of reducing everything to a combination of “data & algorithm” and tackling every problem with more data and better algorithms is the logical end-point of the control revolution that started in the 19th century. The difference between Ford and Zara is overrated – Ford was simply the first step in a long process that systematised each element of the industrial process (production, distribution, consumption) while also, crucially, putting in place a feedback loop between the elements. In some sense, Zara simply follows a much more complex and malleable algorithm than Ford did, but this algorithm is still one that is fundamentally equilibrating (not disruptive) and focused on introducing order and legibility into a fundamentally opaque environment via a process that reduces human involvement and discretion by replacing intuitive judgements with rules and algorithms. Exploratory/disruptive innovation, on the other hand, is a disequilibrating force created by entrepreneurs that functions outside this feedback/control loop. Both processes are important – the longer period of the gradual shedding of diversity and homogenisation in the name of efficiency as well as the periodic “collapse” that shakes up the system and puts it eventually on the path to a new equilibrium.

Of course, control has been an aim of western civilisation for a lot longer, but it was only in the 19th century that the tools of control were good enough for this desire to be implemented in any meaningful sense. And even more crucially, as James Beniger has argued, it was only in the last 150 years that the need for large-scale control arose. And now the tools and technologies in our hands to control and stabilise the economy are more powerful than they’ve ever been – likely too powerful.

If we had perfect information and everything could be algorithmised right now – i.e. if the control revolution had been perfected – then the problem would disappear. Indeed, it is arguable that the need for disruption in the innovation process would no longer exist. If we get to a world where radical uncertainty has been eliminated, then the problem of systemic fragility is moot and irrelevant. It is easy to rebut the stabilisation and control project by claiming that we cannot achieve this perfect world.

But even if the techno-utopian project can achieve all that it claims it can, the path matters. We need to make it there in one piece. The current “algorithmic revolution” is best viewed as a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is simply one where many of these algorithmic and automated systems can essentially run themselves, with human beings performing the role of supervisors who only need to intervene in extraordinary circumstances. Therefore, it would seem logical that the same process of increased productivity that has occurred during the modern era of automation will continue during the creation of the “vast, automatic and invisible” ‘second economy’. However, there are many signs that this may not be the case. What has made things better till now and has been genuine “progress” may make things worse in higher doses, and the process of deterioration can be quite dramatic.

The Uncanny Valley on the Path towards “Perfection”

In 1970, Masahiro Mori coined the term ‘uncanny valley’ to denote the phenomenon that “as robots appear more humanlike, our sense of their familiarity increases until we come to a valley”. When robots are almost but not quite human-like, they invoke a feeling of revulsion rather than empathy. As Karl MacDorman notes, “Mori cautioned robot designers not to make the second peak their goal — that is, total human likeness — but rather the first peak of humanoid appearance to avoid the risk of their robots falling into the uncanny valley.”

A similar valley exists in the path of increased automation and algorithmisation. Much of the discussion in this section of the post builds upon concepts I explored via a detailed case study in a previous post titled ‘People Make Poor Monitors for Computers’.

The 21st century version of the control project i.e. the algorithmic project consists of two components:
1. More Data – ‘Big Data’.
2. Better and more comprehensive Algorithms.

The process therefore goes hand in hand with increased complexity and, crucially, poorer and less intuitive feedback for the human operator. This results in increased fragility and a system prone to catastrophic breakdowns. The typical solution chosen is further algorithmisation – an improved algorithm and more data – and, if necessary, increased slack and redundancy. This solution exacerbates the problem of feedback and temporarily pushes the catastrophic scenario further out into the tail, but it does not eliminate it. Behavioural adaptation by human agents to the slack and the “better” algorithm can make a catastrophic event as likely as it was before but with a higher magnitude. But what is even more disturbing is that this cycle of increasing fragility can occur even without any such adaptation. This is the essence of the fallacy of the ‘defence in depth’ philosophy that lies at the core of most fault-tolerant algorithmic designs, which I discussed in my earlier post: the increased “safety” of the automated system allows the build-up of human errors without any feedback available from deteriorating system performance.
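To make this dynamic concrete, below is a toy Monte Carlo sketch – my own illustration, not drawn from any specific system, with all parameters hypothetical – of how layering safeguards suppresses the small, visible failures that provide feedback while allowing latent faults to accumulate into rarer but larger breakdowns.

```python
# Toy sketch of the 'defence in depth' fallacy (hypothetical model, illustrative only):
# each protective layer hides single faults, so latent faults accumulate silently
# until enough coincide to defeat every layer at once.
import random

def simulate(layers, periods=100_000, p_fault=0.02, seed=1):
    """Return (number of visible failures, average failure magnitude)."""
    random.seed(seed)
    latent, failures = 0, []
    for _ in range(periods):
        if random.random() < p_fault:
            latent += 1                  # a new latent fault, invisible to the operator
        if latent > layers:              # every layer of defence has been breached
            failures.append(latent)      # magnitude = the backlog of unnoticed faults
            latent = 0                   # repair and reset after the visible event
    if not failures:
        return 0, 0.0
    return len(failures), sum(failures) / len(failures)

for layers in (0, 3, 10):
    count, magnitude = simulate(layers)
    print(f"layers={layers:2d}  visible failures={count:5d}  avg magnitude={magnitude:.1f}")
```

The numbers are irrelevant; the shape is the point – as layers are added, the feedback from small visible failures disappears while the magnitude of the eventual failure grows.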

A rule of thumb to get around this problem is to use slack only in those domains where failure is catastrophic and to prioritise feedback where failure is not critical and cannot kill you. But in an uncertain environment, this rule is very difficult to apply. How do you really know that a particular disturbance will not kill you? Moreover, the loop of automation -> complexity -> redundancy endogenously turns a non-catastrophic event into one with catastrophic consequences.

This is a trajectory which is almost impossible to reverse once it has gone beyond a certain threshold without undergoing an interim collapse. The easy short-term fix is always to patch the algorithm, get more data and build in some slack if needed. An orderly rollback is almost impossible due to the deskilling of the human workforce and the risk of collapse arising from other components in the system having adapted to the new reality. Even simply reverting to the old, more tool-like system makes things a lot worse because the human operators are no longer experts at using those tools – the process of algorithmisation has deskilled the human operator. Moreover, the endogenous nature of this buildup of complexity eventually makes the system fundamentally illegible to the human operator – an irony, given that the fundamental aim of the control revolution is to increase legibility.

The Sweet Spot Before the Uncanny Valley: Near-Optimal Yet Resilient

Although it is easy to imagine an inefficient and dramatically sub-optimal system that is robust, complex adaptive systems manage to operate at near-optimal efficiency whilst remaining resilient. Efficiency is important not only due to the obvious reality that resources are scarce but also because slack at the individual and corporate level is a significant cause of unemployment. Such near-optimal robustness in both natural and economic systems is not achieved with simplistically diverse agent compositions or with significant redundancies or slack at the agent level.

Diversity and redundancy carry a cost in terms of reduced efficiency. Precisely for this reason, real-world economic systems appear to exhibit nowhere near the diversity that would seem to ensure system resilience. Rick Bookstaber noted recently that capitalist competition, if anything, seems to lead to a reduction in diversity. As Youngme Moon’s excellent book ‘Different’ lays out, competition in most markets seems to result in less diversity, not more. We may have a choice of 100 brands of toothpaste but most of us would struggle to meaningfully differentiate between them.

Similarly, almost all biological and ecological complex adaptive systems are a lot less diverse and contain less pure redundancy than conventional wisdom would expect. Resilient biological systems tend to preserve degeneracy rather than simple redundancy, and resilient ecological systems tend to contain weak links rather than naive ‘law of large numbers’ diversity. The key to achieving resilience with near-optimal configurations is to tackle disturbances and generate novelty/innovation with an emergent systemic response that reconfigures the system, rather than a merely localised response. Degeneracy and weak links are key to such a configuration. The equivalent in economic systems is a constant threat of new firm entry.
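The distinction between simple redundancy and degeneracy can be made concrete with a deliberately crude sketch – my own toy example, loosely inspired by the Whitacre and Bender paper cited in the notes below, with all components and shocks hypothetical. Identical backup components share a failure mode; structurally different components with overlapping function do not.

```python
# Toy sketch (hypothetical, illustrative only): redundancy = identical copies that
# share a failure mode; degeneracy = structurally different components that can each
# perform the function but fail under different conditions.
import random

SHOCKS = ["heat", "cold", "drought"]

# Three identical copies: all vulnerable to the same shock.
redundant = [{"vulnerable_to": {"heat"}} for _ in range(3)]

# Three different components with overlapping function, each with its own weakness.
degenerate = [{"vulnerable_to": {shock}} for shock in SHOCKS]

def still_functions(system, shock):
    """The function survives if at least one component is unaffected by the shock."""
    return any(shock not in component["vulnerable_to"] for component in system)

random.seed(0)
for _ in range(5):
    shock = random.choice(SHOCKS)
    print(f"shock={shock:8s}  redundant survives={still_functions(redundant, shock)}  "
          f"degenerate survives={still_functions(degenerate, shock)}")
```

Pure redundancy only protects against the failure mode it was designed for; components that differ in structure but overlap in function leave the system with something that still works whichever shock arrives.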

The viewpoint which emphasises weak links and degeneracy also implies that it is not the keystone species and the large firms that determine resilience but the presence of smaller players ready to reorganise and pick up the slack when an unexpected event occurs. Such a focus is further complicated by the fact that in a stable environment, the system may become less and less resilient with no visible consequences – weak links may be eliminated, barriers to entry may progressively increase etc with no damage done to system performance in the stable equilibrium phase. Yet this loss of resilience can prove fatal when the environment changes and can leave the system unable to generate novelty/disruptive innovation. This highlights the folly of statements such as ‘what’s good for GM is good for America’. We need to focus not just on the keystone species, but on the fringes of the ecosystem.

[Figure: The Uncanny Valley and the Sweet Spot]

The Business Cycle in the Uncanny Valley – Deterioration of the Median as well as the Tail

Many commentators have pointed out that the process of automation has coincided with a deskilling of the human workforce. For example, below is a simplified version of the relation between mechanisation and the skill required of the human operator that James Bright documented in 1958 (via Harry Braverman’s ‘Labor and Monopoly Capital’). But till now, it has been largely true that although human performance has suffered, the performance of the system has gotten vastly better. If the problem were just a drop in human performance while the system got better, our problem would be less acute.

[Figure: Automation and the Deskilling of the Human Operator]

But what is at stake is a deterioration in system performance – it is not only a matter of being exposed to more catastrophic setbacks. Eventually mean/median system performance deteriorates as more and more pure slack and redundancy needs to be built in at all levels to make up for the irreversibly fragile nature of the system. The business cycle is an oscillation between efficient fragility and robust inefficiency. Over the course of successive cycles, both poles of this oscillation get worse, which leads to median/mean system performance falling rapidly at the same time that the tails deteriorate due to the increased illegibility of the automated system to the human operator.

[Figure: The Uncanny Valley Business Cycle]

The Visible Hand and the Invisible Foot, Not the Invisible Hand

The conventional economic view of the economy is one of a primarily market-based equilibrium punctuated by occasional shocks. Even the long arc of innovation is viewed as a sort of benign discovery of novelty without any disruptive consequences. The radical disequilibrium view (which I have been guilty of espousing in the past) is one of constant micro-fragility and creative destruction. However, the history of economic evolution in the modern era has been quite different – neither market-based equilibrium nor constant disequilibrium, but a series of off-market attempts to stabilise relations outside the sphere of the market combined with occasional phase transitions that bring about dramatic change. The presence of rents is a constant and the control revolution has for the most part succeeded in preserving the rents of incumbents, barring the occasional spectacular failure. It is these occasional “failures” that have given us results that in some respect resemble those that would have been created by a market over the long run.

As Bruce Wilder puts it (sourced from 1, 2, 3 and mashed up together by me):

The main problem with the standard analysis of the “market economy”, as well as many variants, is that we do not live in a “market economy”. Except for financial markets and a few related commodity markets, markets are rare beasts in the modern economy. The actual economy is dominated by formal, hierarchical, administrative organization and transactions are governed by incomplete contracts, explicit and implied. “Markets” are, at best, metaphors…..
Over half of the American labor force works for organizations employing over 100 people. Actual markets in the American economy are extremely rare and unusual beasts. An economics of markets ought to be regarded as generally useful as a biology of cephalopods, amid the living world of bones and shells. But, somehow the idealized, metaphoric market is substituted as an analytic mask, laid across a vast variety of economic relations and relationships, obscuring every important feature of what actually is…..
The elaborate theory of market price gives us an abstract ideal of allocative efficiency, in the absence of any firm or household behaving strategically (aka perfect competition). In real life, allocative efficiency is far less important than achieving technical efficiency, and, of course, everyone behaves strategically.
In a world of genuine uncertainty and limitations to knowledge, incentives in the distribution of income are tied directly to the distribution of risk. Economic rents are pervasive, but potentially beneficial, in that they provide a means of stable structure, around which investments can be made and production processes managed to achieve technical efficiency.
In the imaginary world of complete information of Econ 101, where markets are the dominant form of economic organizations, and allocative efficiency is the focus of attention, firms are able to maximize their profits, because they know what “maximum” means. They are unconstrained by anything.
In the actual, uncertain world, with limited information and knowledge, only constrained maximization is possible. All firms, instead of being profit-maximizers (not possible in a world of uncertainty), are rent-seekers, responding to instituted constraints: the institutional rules of the game, so to speak. Economic rents are what they have to lose in this game, and protecting those rents, orients their behavior within the institutional constraints…..
In most of our economic interactions, price is not a variable optimally digesting information and resolving conflict, it is a strategic instrument, held fixed as part of a scheme of administrative control and information discovery……The actual, decentralized “market” economy is not coordinated primarily by market prices—it is coordinated by rules. The dominant relationships among actors is not one of market exchange at price, but of contract: implicit or explicit, incomplete and contingent.

James Beniger’s work is the definitive document on how the essence of the ‘control revolution’ has been an attempt to take economic activity out of the sphere of direct influence of the market. But that is not all – the long process of algorithmisation over the last 150 years has also, wherever possible, replaced implicit rules/contracts and principal-agent relationships with explicit processes and rules. Beniger also notes that after a certain point, the increasing complexity of the system is an endogenous phenomenon i.e. further iterations are aimed at controlling the control process itself. As I have illustrated above, after a certain threshold, the increase in complexity, fragility and deterioration in performance becomes a self-reinforcing positive feedback process.

Although our current system bears very little resemblance to the market economy of the textbooks, there was a brief period in the early part of the 19th century, during the transition from the traditional economy to the control economy, when this was the case – 26% of all imports into the United States in 1827 were sold at auction. But the displacement of traditional controls (familial ties) by the invisible hand of market controls was merely a transitional phase, soon to be displaced by the visible hand of the control revolution.

The Soviet Project, Western Capitalism and High Modernity

Communism and Capitalism are both pillars of the high-modernist control project. The signature of modernity is not markets, but technocratic control projects. Capitalism has simply done it in a manner that is more easily and more regularly subverted. It is the occasional failure of the control revolution that is the source of the capitalist economy’s long-run success. Conversely, the failure of the Soviet Project was due to its too successful adherence and implementation of the high-modernist ideal. The significance of the threat from crony capitalism is a function of the fact that by forming a coalition and partnership of the corporate and state control projects, it enables the implementation of the control revolution to be that much more effective.

The Hayekian argument of dispersed knowledge and its importance in seeking equilibrium is not as important as it seems in explaining why the Soviet project failed. As Joseph Berliner has illustrated, the Soviet economy did not fail to reach local equilibria. Where it failed so spectacularly was in extracting itself out of these equilibria. The dispersed knowledge argument is open to the riposte that better implementation of the control revolution will eventually overcome these problems – indeed much of the current techno-utopian version of the control revolution is based on this assumption. It is a weak argument for free enterprise; a much stronger argument is the need to maintain a system that retains the ability to reinvent itself and find a new, hitherto unknown trajectory via the destruction of incumbents combined with the emergence of the new. Where the Soviet experiment went wrong is that it eliminated the possibility of failure – what Berliner called the ‘invisible foot’. The success of the free enterprise system has been built not upon the positive incentive of the invisible hand but upon the negative incentive of the invisible foot countering the visible hand of the control revolution. It is this threat, and occasional realisation, of failure and disorder that is the key to maintaining system resilience and evolvability.


Notes:

  • Borrowing from Beniger, control here simply means “purposive influence towards a predetermined goal”. Similarly, equilibrium in this context is best defined as a state in which economic agents are not forced to change their routines, theories and policies.
  • On the uncanny valley, I wrote a similar post on why perfect memory does not lead to perfect human intelligence. Even if a computer benefits from more data and better memory, we may not. And the evidence suggests that the deterioration in human performance is steepest in the zone close to “perfection”.
  • An argument similar to my assertion on the misconception of a free enterprise economy as a market economy can be made about the nature of democracy. Rather than as a vehicle that enables the regular expression of the political will of the electorate, democracy may be more accurately thought of as the ability to effect a dramatic change when the incumbent system of plutocratic/technocratic rule diverges too much from popular opinion. As always, stability and prevention of disturbances can cause the eventual collapse to be more catastrophic than it needs to be.
  • Although James Beniger’s ‘Control Revolution’ is the definitive reference, Antoine Bousquet’s book ‘The Scientific Way of Warfare’ on the similar revolution in military warfare is equally good. Bousquet’s book highlights the fact that the military is often the pioneer of the key projects of the control revolution and it also highlights just how similar the latest phase of this evolution is to early phases – the common desire for control combined with its constant subversion by reality. Most commentators assume that the threat to the project is external – by constantly evolving guerrilla warfare for example. But the analysis of the uncanny valley suggests that an equally great threat is endogenous – of increasing complexity and illegibility of the control project itself. Bousquet also explains how the control revolution is a child of the modern era and the culmination of the philosophy of the Enlightenment.
  • Much of the “innovation” of the control revolution was not technological but institutional – limited liability, macroeconomic stabilisation via central banks etc.
  • For more on the role of degeneracy in biological systems and how it enables near-optimal resilience, this paper by James Whitacre and Axel Bender is excellent.

Written by Ashwin Parameswaran

February 21st, 2012 at 5:38 pm

People Make Poor Monitors for Computers

with 55 comments

In the early hours of June 1st 2009, Air France Flight 447 crashed into the Atlantic Ocean. Till the black boxes of AF447 were recovered in April 2011, the exact circumstances of the crash remained a mystery. The most widely accepted explanation for the disaster attributes a large part of the blame to human error when faced with a partial but not fatal systems failure. Yet a small but vocal faction blames the disaster and others like it on the increasingly automated nature of modern passenger airplanes.

This debate bears an uncanny resemblance to the debate over the causes of the financial crisis – many commentators blame the persistently irrational nature of human judgement for the recurrence of financial crises. Others, such as Amar Bhide, blame the unwise deference to imperfect financial models over human judgement. In my opinion, both perspectives miss the true dynamic. These disasters are not driven by human error or systems error alone but by fatal flaws in the interaction between human intelligence and complex, near fully-automated systems.

In a recent article drawing upon the black box transcripts, Jeff Wise attributes the crash primarily to a “simple but persistent mistake on the part of one of the pilots”. According to Wise, the co-pilot reacted to the persistent stall warning by “pulling back on the stick, the exact opposite of what he must do to recover from the stall”.

But there are many hints that the story is nowhere near as simple. As Peter Garrison notes:

every pilot knows that to recover from a stall you must get the nose down. But because a fully developed stall in a large transport is considered highly unlikely, and because in IFR air traffic vertical separation, and therefore control of altitude, is important, transport pilots have not been trained to put the nose down when they hear the stall warning — which heralds, after all, not a fully developed stall, but merely an approaching one. Instead, they have been trained to increase power and to “fly out of the stall” without losing altitude. Perhaps that is what the pilot flying AF447 intended. But the airplane was already too deeply stalled, and at too high an altitude, to recover with power alone.

The patterns of the AF447 disaster are not unique. As Chris Sorensen observes, over 50 commercial aircraft have crashed in “loss-of-control” accidents in the last five years, a trend for which there is no shortage of explanations:

Some argue that the sheer complexity of modern flight systems, though designed to improve safety and reliability, can overwhelm even the most experienced pilots when something actually goes wrong. Others say an increasing reliance on automated flight may be dulling pilots’ sense of flying a plane, leaving them ill-equipped to take over in an emergency. Still others question whether pilot-training programs have lagged behind the industry’s rapid technological advances.

But simply invoking terms such as “automation addiction” or blaming disasters on irrational behaviour during times of intense stress does not get at the crux of the issue.

People Make Poor Monitors for Computers

Airplane automation systems are not the first to discover the truth in the comment made by David Jenkins that “computers make great monitors for people, but people make poor monitors for computers.” As James Reason observes in his seminal book ‘Human Error’:

We have thus traced a progression from where the human is the prime mover and the computer the slave to one in which the roles are very largely reversed. For most of the time, the operator’s task is reduced to that of monitoring the system to ensure that it continues to function within normal limits. The advantages of such a system are obvious; the operator’s workload is substantially reduced, and the [system] performs tasks that the human can specify but cannot actually do. However, the main reason for the human operator’s continued presence is to use his still unique powers of knowledge-based reasoning to cope with system emergencies. And this is a task peculiarly ill-suited to the particular strengths and weaknesses of human cognition…..

most operator errors arise from a mismatch between the properties of the system as a whole and the characteristics of human information processing. System designers have unwittingly created a work situation in which many of the normally adaptive characteristics of human cognition (its natural heuristics and biases) are transformed into dangerous liabilities.

As Jeff Wise notes, it is impossible to stall an Airbus in most conditions. AF447, however, went into a state known as ‘alternate law’, which most pilots have never experienced and in which the airplane could be stalled:

“You can’t stall the airplane in normal law,” says Godfrey Camilleri, a flight instructor who teaches Airbus 330 systems to US Airways pilots….But once the computer lost its airspeed data, it disconnected the autopilot and switched from normal law to “alternate law,” a regime with far fewer restrictions on what a pilot can do. “Once you’re in alternate law, you can stall the airplane,” Camilleri says….It’s quite possible that Bonin had never flown an airplane in alternate law, or understood its lack of restrictions. According to Camilleri, not one of US Airway’s 17 Airbus 330s has ever been in alternate law. Therefore, Bonin may have assumed that the stall warning was spurious because he didn’t realize that the plane could remove its own restrictions against stalling and, indeed, had done so.

This inability of the human operator to fill in the gaps in a near-fully automated system was identified by Lisanne Bainbridge as one of the ironies of automation which James Reason summarised:

the same designer who seeks to eliminate human beings still leaves the operator “to do the tasks which the designer cannot think how to automate” (Bainbridge,1987, p.272). In an automated plant, operators are required to monitor that the automatic system is functioning properly. But it is well known that even highly motivated operators cannot maintain effective vigilance for anything more than quite short periods; thus, they are demonstrably ill-suited to carry out this residual task of monitoring for rare, abnormal events. In order to aid them, designers need to provide automatic alarm signals. But who decides when these automatic alarms have failed or been switched off?

As Robert Charette notes, the same is true for airplane automation:

operators are increasingly left out of the loop, at least until something unexpected happens. Then the operators need to get involved quickly and flawlessly, says Raja Parasuraman, professor of psychology at George Mason University in Fairfax, Va., who has been studying the issue of increasingly reliable automation and how that affects human performance, and therefore overall system performance. ”There will always be a set of circumstances that was not expected, that the automation either was not designed to handle or other things that just cannot be predicted,” explains Parasuraman. So as system reliability approaches—but doesn’t quite reach—100 percent, ”the more difficult it is to detect the error and recover from it,” he says…..In many ways, operators are being asked to be omniscient systems administrators who are able to jump into the middle of a situation that a complex automated system can’t or wasn’t designed to handle, quickly diagnose the problem, and then find a satisfactory and safe solution.

Stored Routines Are Not Effective in Rare Situations

As James Reason puts it:

the main reason why humans are retained in systems that are primarily controlled by intelligent computers is to handle ‘non-design’ emergencies. In short, operators are there because system designers cannot foresee all possible scenarios of failure and hence are not able to provide automatic safety devices for every contingency. In addition to their cosmetic value, human beings owe their inclusion in hazardous systems to their unique, knowledge-based ability to carry out ‘on-line’ problem solving in novel situations. Ironically, and notwithstanding the Apollo 13 astronauts and others demonstrating inspired improvisation, they are not especially good at it; at least not in the conditions that usually prevail during systems emergencies. One reason for this is that stressed human beings are strongly disposed to employ the effortless, parallel, preprogrammed operations of highly specialised, low-level processors and their associated heuristics. These stored routines are shaped by personal history and reflect the recurring patterns of past experience……

Why do we have operators in complex systems? To cope with emergencies. What will they actually use to deal with these problems? Stored routines based on previous interactions with a specific environment. What, for the most part, is their experience within the control room? Monitoring and occasionally tweaking the plant while it performs within safe operating limits. So how can they perform adequately when they are called upon to reenter the control loop? The evidence is that this task has become so alien and the system so complex that, on a significant number of occasions, they perform badly.

Wise again identifies this problem in the case of AF447:

While Bonin’s behavior is irrational, it is not inexplicable. Intense psychological stress tends to shut down the part of the brain responsible for innovative, creative thought. Instead, we tend to revert to the familiar and the well-rehearsed. Though pilots are required to practice hand-flying their aircraft during all phases of flight as part of recurrent training, in their daily routine they do most of their hand-flying at low altitude—while taking off, landing, and maneuvering. It’s not surprising, then, that amid the frightening disorientation of the thunderstorm, Bonin reverted to flying the plane as if it had been close to the ground, even though this response was totally ill-suited to the situation.

Deskilling From Automation

As James Reason observes:

Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills. One of the consequences of automation, therefore, is that operators become de-skilled in precisely those activities that justify their marginalised existence. But when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions. Duncan (1987, p. 266) makes the same point: “The more reliable the plant, the less opportunity there will be for the operator to practise direct intervention, and the more difficult will be the demands of the remaining tasks requiring operator intervention.”

Opacity and Too Much Information of Uncertain Reliability

Wise captures this problem and its interaction with a human who has very little experience in managing the crisis scenario:

Over the decades, airliners have been built with increasingly automated flight-control functions. These have the potential to remove a great deal of uncertainty and danger from aviation. But they also remove important information from the attention of the flight crew. While the airplane’s avionics track crucial parameters such as location, speed, and heading, the human beings can pay attention to something else. But when trouble suddenly springs up and the computer decides that it can no longer cope—on a dark night, perhaps, in turbulence, far from land—the humans might find themselves with a very incomplete notion of what’s going on. They’ll wonder: What instruments are reliable, and which can’t be trusted? What’s the most pressing threat? What’s going on? Unfortunately, the vast majority of pilots will have little experience in finding the answers.

A similar scenario occurred in the case of the Qantas-owned A380 which took off from Singapore in November 2010:

Shortly after takeoff from Singapore, one of the hulking A380’s four engines exploded and sent pieces of the engine cowling raining down on an Indonesian island. The blast also damaged several of the A380’s key systems, causing the unsuspecting flight crew to be bombarded with no less than 54 different warnings and error messages—so many that co-pilot Matt Hicks later said that, at one point, he held his thumb over a button that muted the cascade of audible alarms, which threatened to distract Capt. Richard De Crespigny and the rest of the feverishly working flight crew. Luckily for passengers, Qantas Flight 32 had an extra two pilots in the cockpit as part of a training exercise, all of whom pitched in to complete the nearly 60 checklists required to troubleshoot the various systems. The wounded plane limped back to Singapore Changi Airport, where it made an emergency landing.

Again James Reason captures the essence of the problem:

One of the consequences of the developments outlined above is that complex, tightly-coupled and highly defended systems have become increasingly opaque to the people who manage, maintain and operate them. This opacity has two aspects: not knowing what is happening and not understanding what the system can do. As we have seen, automation has wrought a fundamental change in the roles people play within certain high-risk technologies. Instead of having ‘hands on’ contact with the process, people have been promoted “to higher-level supervisory tasks and to long-term maintenance and planning tasks” (Rasmussen, 1988). In all cases, these are far removed from the immediate processing. What direct information they have is filtered through the computer-based interface. And, as many accidents have demonstrated, they often cannot find what they need to know while, at the same time, being deluged with information they do not want nor know how to interpret.

Absence of Intuitive Feedback

Among others, Hubert and Stuart Dreyfus have shown that human expertise relies on an intuitive and tacit understanding of the situation rather than a rule-bound and algorithmic understanding. The development of intuitive expertise depends upon the availability of clear and intuitive feedback which complex, automated systems are often unable to provide.

In AF447, when the co-pilot did push forward on the stick (the “correct” response), the behaviour of the stall warning was exactly the opposite of what he would have intuitively expected:

At one point the pilot briefly pushed the stick forward. Then, in a grotesque miscue unforeseen by the designers of the fly-by-wire software, the stall warning, which had been silenced, as designed, by very low indicated airspeed, came to life. The pilot, probably inferring that whatever he had just done must have been wrong, returned the stick to its climb position and kept it there for the remainder of the flight.

Absence of feedback prevents effective learning but the wrong feedback can have catastrophic consequences.

The Fallacy of Defence in Depth

In complex automated systems, the redundancies and safeguards built into the system also contribute to its opacity. By protecting system performance against single faults, redundancies allow the latent buildup of multiple faults. Jens Rasmussen called this ‘the fallacy of defence in depth’, which James Reason elaborates upon:

the system very often does not respond actively to single faults. Consequently, many errors and faults made by the staff and maintenance personnel do not directly reveal themselves by functional response from the system. Humans can operate with an extremely high level of reliability in a dynamic environment when slips and mistakes have immediately visible effects and can be corrected……Violation of safety preconditions during work on the system will probably not result in an immediate functional response, and latent effects of erroneous acts can therefore be left in the system. When such errors are allowed to be present in a system over a longer period of time, the probability of coincidence of the multiple faults necessary for release of an accident is drastically increased. Analyses of major accidents typically show that the basic safety of the system has eroded due to latent errors.

This is exactly what occurred on Malaysia Airlines Flight 124 in August 2005:

The fault-tolerant ADIRU was designed to operate with a failed accelerometer (it has six). The redundant design of the ADIRU also meant that it wasn’t mandatory to replace the unit when an accelerometer failed. However, when the second accelerometer failed, a latent software anomaly allowed inputs from the first faulty accelerometer to be used, resulting in the erroneous feed of acceleration information into the flight control systems. The anomaly, which lay hidden for a decade, wasn’t found in testing because the ADIRU’s designers had never considered that such an event might occur.

Again, defence-in-depth systems are uniquely unsuited to human expertise as Gary Klein notes:

In a massively defended system, if an accident sneaks through all the defenses, the operators will find it far more difficult to diagnose and correct it. That is because they must deal with all of the defenses, along with the accident itself…..A unit designed to reduce small errors helped to create a large one.

Two Approaches to Airplane Automation: Airbus and Boeing

Although both Airbus and Boeing have adopted fly-by-wire technology, there are fundamental differences in their respective approaches. Whereas Boeing’s system enforces soft limits that can be overridden at the discretion of the pilot, Airbus’ fly-by-wire system has built-in hard limits that cannot be completely overridden at the pilot’s discretion.
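The difference in philosophy can be caricatured in a few lines of code. This is a deliberately crude sketch of the two design stances, not of either manufacturer’s actual control laws – the pitch limit, the function names and the override flag are all invented for illustration.

```python
# Hypothetical caricature of the two fly-by-wire philosophies described above.
# Neither function reflects Airbus' or Boeing's real control laws; the limit value
# and the override mechanism are invented for illustration.

PITCH_LIMIT_DEG = 30.0  # hypothetical envelope limit

def hard_limit(commanded_pitch):
    """'Hard limit' stance: the envelope cannot be exceeded, whatever the pilot commands."""
    return max(-PITCH_LIMIT_DEG, min(PITCH_LIMIT_DEG, commanded_pitch))

def soft_limit(commanded_pitch, pilot_overrides=False):
    """'Soft limit' stance: the system warns and resists, but ultimately defers to the pilot."""
    if abs(commanded_pitch) <= PITCH_LIMIT_DEG or pilot_overrides:
        return commanded_pitch
    print("warning: commanded pitch outside the normal envelope")
    return max(-PITCH_LIMIT_DEG, min(PITCH_LIMIT_DEG, commanded_pitch))

print(hard_limit(45.0))                        # always clamped to 30.0
print(soft_limit(45.0))                        # warns, then clamps
print(soft_limit(45.0, pilot_overrides=True))  # pilot discretion prevails: 45.0
```

The hard-limit stance buys consistency and protects the average pilot; the soft-limit stance preserves the expert pilot’s discretion – which is precisely the trade-off discussed below.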

As Simon Calder notes, pilots have raised concerns in the past about Airbus‘ systems being “overly sophisticated” as opposed to Boeing’s “rudimentary but robust” system. But this does not imply that the Airbus approach is inferior. It is instructive to analyse Airbus’ response to pilot demands for a manual override switch that allows the pilot to take complete control:

If we have a button, then the pilot has to be trained on how to use the button, and there are no supporting data on which to base procedures or training…..The hard control limits in the Airbus design provide a consistent “feel” for the aircraft, from the 120-passenger A319 to the 350-passenger A340. That consistency itself builds proficiency and confidence……You don’t need engineering test pilot skills to fly this airplane.

David Evans captures the essence of this philosophy as aimed at minimising the “potential for human error, to keep average pilots within the limits of their average training and skills”.

It is easy to criticise Airbus’ approach but the hard constraints clearly demand less from the pilot. In the hands of an expert pilot, Boeing’s system may outperform. But if the pilot is a novice, Airbus’ system almost certainly delivers superior results. Moreover, as I discussed earlier in the post, the transition to an almost fully automated system by itself reduces the probability that the human operator can achieve intuitive expertise. In other words, the transition to near-autonomous systems creates a pool of human operators who appear to frequently commit “irrational” errors, which makes the transition almost impossible to reverse.

 *          *         *

People Make Poor Monitors for Some Financial Models

In an earlier post, I analysed Amar Bhide’s argument that a significant causal agent in the financial crisis was the replacement of discretion with models in many areas of finance – for example, banks’ mortgage lending decisions. In his excellent book ‘A Call for Judgement’, he expands on this argument and, amongst other technologies, lays some of the blame for this over-mechanisation of finance on the ubiquitous Black-Scholes-Merton (BSM) formula. Although I agree with much of his book, this thesis is too simplistic.

There is no doubt that BSM has many limitations – amongst the most severe being the assumption of continuous asset price movements, a known and flat volatility surface, and an asset price distribution free of fat tails. But the systemic impact of all these limitations is grossly overstated:

  • BSM and similar models have never been used as “valuation” methods on a large scale in derivatives markets but as tools that back out an implied volatility and generate useful hedge ratios, taking market prices for options as given (see the sketch after this list). In other words, volatility plays the role of the “wrong number in the wrong formula to get the right price”.
  • When “simple” BSM-like models are used to price more exotic derivatives, they have a modest role to play. As Emanuel Derman puts it, practitioners use models as “interpolating formulas that take you from known prices of liquid securities to the unknown values of illiquid securities”.
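As a concrete illustration of the first point, the sketch below – a minimal example with hypothetical numbers, using SciPy, and not any trading desk’s actual system – inverts the formula rather than using it to “value” anything: the market price is taken as given, a volatility is backed out from it, and a hedge ratio is computed from that backed-out number.

```python
# Minimal sketch of how BSM is used in practice: not as a valuation oracle but as a
# device that backs out an implied volatility from a quoted market price and produces
# hedge ratios. All numbers below are hypothetical.
from math import exp, log, sqrt

from scipy.optimize import brentq
from scipy.stats import norm

def bsm_call_price(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(market_price, S, K, T, r):
    """Back out the volatility that reproduces the observed market price."""
    return brentq(lambda sigma: bsm_call_price(S, K, T, r, sigma) - market_price,
                  1e-6, 5.0)

def bsm_delta(S, K, T, r, sigma):
    """Hedge ratio (delta) implied by the backed-out volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm.cdf(d1)

# Hypothetical market quote: a one-year at-the-money call trading at 10.45.
S, K, T, r, quote = 100.0, 100.0, 1.0, 0.02, 10.45
sigma_implied = implied_vol(quote, S, K, T, r)
print(f"implied volatility: {sigma_implied:.4f}")
print(f"hedge ratio (delta): {bsm_delta(S, K, T, r, sigma_implied):.4f}")
```

Here volatility is exactly the “wrong number in the wrong formula”: it is whatever value makes the formula agree with the market, and its usefulness lies in the hedge ratios and interpolations it enables, not in any claim to describe the true price process.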

Nevertheless, this does not imply that financial modelling choices have no role to play in determining system resilience. But the role was more subtle and had less to do with the imperfections of the models themselves than with the imperfections in how complex models used to price complex products could be used by human traders.

Since the discovery of the volatility smile, traders have known that the interpolation process to price exotic options requires something more than a simple BSM model. One would assume that traders would want to use a model that is as accurate and comprehensive as possible. But this has rarely been the case. Supposedly inferior local volatility models still flourish, and even in some of the most complex domains of exotic derivatives, models are still chosen based on their intuitive similarity to a BSM-like approach where the free parameters can be thought of as volatility or correlation, e.g. the Libor Market Model.

The choice of intuitive understanding over model accuracy is not unwarranted. As all market practitioners know, there is no such thing as a perfect derivatives pricing model. Paul Wilmott hit the nail on the head when he observed that “the many improvements on Black-Scholes are rarely improvements, the best that can be said for many of them is that they are just better at hiding their faults. Black-Scholes also has its faults, but at least you can see them”.

However, as markets have evolved, maintaining this balance between intuitive understanding and accuracy has become increasingly difficult:

  • Intuitive yet imperfect models require experienced and expert traders. Scaling up trading volumes of exotic derivatives however requires that pricing and trading systems be pushed out to novice traders as well as non-specialists such as salespeople.
  • With the increased complexity of derivative products, preserving an intuitive yet sufficiently accurate model becomes an almost impossible task.
  • Product complexity combined with the inevitable discretion available to traders when they use simpler models presents significant control challenges and an increased potential for fraud.

In this manner, the same paradoxical evolution that has been observed in nuclear plants and airplane automation is now being experienced in finance. The need to scale up and accommodate complex products necessitates the introduction of complex, unintuitive models, in combination with which human intuitive expertise is unable to add any value. In such a system, a novice is often as good as a more experienced operator. The ability of these models to tackle most scenarios on ‘auto-pilot’ results in a deskilled and novice-heavy human component in the system, which is ill-equipped to tackle the inevitable occasion when the model fails. The failure is inevitably taken as evidence of human failure, upon which the system is made even more automated and further safeguards and redundancies are built in. This exacerbates the absence of feedback when small errors occur. The buildup of latent errors again increases and failures become even more catastrophic.

 *          *         *

My focus on airplane automation and financial models is simply illustrative. There are ample signs of this incompatibility between human monitors and near-fully automated systems in other domains as well. For example, Andrew Hill observes:

In developed economies, Lynda Gratton writes in her new book The Shift, “when the tasks are more complex and require innovation or problem solving, substitution [by machines or computers] has not taken place”. This creates a paradox: far from making manufacturers easier to manage, automation can make managers’ jobs more complicated. As companies assign more tasks to machines, they need people who are better at overseeing the more sophisticated workforce and doing the jobs that machines cannot….

The insight that greater process efficiency adds to the pressure on managers is not new. Even Frederick Winslow Taylor – these days more often caricatured as a dinosaur for his time-and-motion studies – pointed out in his century-old The Principles of Scientific Management that imposing a more mechanistic regime on workers would oblige managers to take on “other types of duties which involve new and heavy burdens”…..

There is no doubt Foxconn and its peers will be able to automate their labour-intensive processes. They are already doing so. The big question is how easily they will find and develop managers able to oversee the highly skilled workforce that will march with their robot armies.

This process of integrating human intelligence with artificial intelligence is simply a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is important in that, for the first time, many of these algorithmic and automated systems can essentially run themselves, with human beings performing the role of supervisors who only need to intervene in extraordinary circumstances. Although it seems logical that the same process of increased productivity that has occurred during the modern ‘Control Revolution’ will continue during the creation of the “vast, automatic and invisible” ‘second economy’, the incompatibility of human cognition with near-fully automated systems suggests that it may only do so by taking on an increased risk of rare but catastrophic failure.


Written by Ashwin Parameswaran

December 29th, 2011 at 11:58 pm

The Pathology of Stabilisation in Complex Adaptive Systems

with 72 comments

The core insight of the resilience-stability tradeoff is that stability leads to loss of resilience. Therefore stabilisation too leads to increased systemic fragility. But there is a lot more to it. In comparing economic crises to forest fires and river floods, I have highlighted the common patterns in the process of system fragilisation which eventually leaves the system “manager” in a situation where there are no good options left.

Drawing upon the work of Mancur Olson, I have explored how the buildup of special interests means that stability is self-reinforcing. Once rent-seeking has achieved sufficient scale, “distributional coalitions have the incentive and..the power to prevent changes that would deprive them of their enlarged share of the social output”. But what if we “solve” the Olsonian problem? Would that mitigate the problem of increased stabilisation and fragility? In this post, I will argue that the cycle of fragility and collapse has much deeper roots than any particular form of democracy.

In this analysis, I am going to move away from ecological analogies and instead turn to an example from modern medicine. In particular, I am going to compare the experience and history of psychiatric medication in the second half of the twentieth century to some of the issues we have already looked at in macroeconomic and ecological stabilisation. I hope to convince you that the uncanny similarities in the patterns observed in stabilised systems across such diverse domains are not a coincidence. In fact, with respect to the final stages of stabilisation, the human body provides us with a much closer parallel to economic systems than ecological systems do. Most ecological systems collapse sooner simply because the resources that will be spent, in an escalating fashion, to preserve their stability are far more limited. For example, there are limits to the resources that will be deployed to prevent a forest fire, no matter how catastrophic. On the other hand, the resources that will be deployed to prevent the collapse of any system that is integral to human beings are much larger.

Even by the standards of this blog, this will be a controversial article. In my discussion of psychiatric medicine I am relying primarily on Robert Whitaker’s excellent but controversial and much-disputed book ‘Anatomy of an Epidemic’. Nevertheless, I want to emphasise that my ultimate conclusions are much less incendiary than those of Whitaker. In the same way that I want to move beyond an explanation of the economic crisis that relies on evil bankers, crony capitalists and self-interested technocrats, I am trying to move beyond an explanation that blames evil pharma and misguided doctors for the crisis in mental health. I am not trying to imply that fraud and rent-seeking do not have a role to play. I am arguing that even if we eliminated them, the aim of a resilient economic and social system would still not be realised.

THE PUZZLE

The puzzle of the history of macroeconomic stabilisation post-WW2 can be summarised as follows. Clearly, each individual act of macroeconomic stabilisation works. Most monetary and fiscal interventions result in a rise in financial markets, NGDP expectations and economic performance in the short run. Yet,

  • we are in the middle of a ‘great stagnation’ and have been for a few decades.
  • the frequency of crises seems to have risen dramatically in the last fifty years culminating in the environment since 2008 which is best described as a perpetual crisis.
  • each recovery seems to be weaker than the previous one and requires an increased injection of stimulus to achieve results that were easily achieved by a simple rate cut not that long ago.

Similarly, the post-WW2 history of mental health presents a puzzle of its own, which Whitaker summarises as follows:

The puzzle can now be precisely summed up. On the one hand, we know that many people are helped by psychiatric medications. We know that many people stabilize well on them and will personally attest to how the drugs have helped them lead normal lives. Furthermore, as Satcher noted in his 1999 report, the scientific literature does document that psychiatric medications, at least over the short term, are “effective.” Psychiatrists and other physicians who prescribe the drugs will attest to that fact, and many parents of children taking psychiatric drugs will swear by the drugs as well. All of that makes for a powerful consensus: Psychiatric drugs work and help people lead relatively normal lives. And yet, at the same time, we are stuck with these disturbing facts: The number of disabled mentally ill has risen dramatically since 1955, and during the past two decades, a period when the prescribing of psychiatric medications has exploded, the number of adults and children disabled by mental illness has risen at a mind-boggling rate.

Whitaker then asks the obvious but heretical question – “Could our drug-based paradigm of care, in some unforeseen way, be fueling this modern-day plague?” and answers the question in the affirmative. But what are the precise mechanisms and patterns that underlie this deterioration?

Adaptive Response to Intervention and Drug Dependence

The fundamental reason why interventions fail in complex adaptive systems is the adaptive response that the intervention itself triggers, which subverts its aim. Moreover, once the system has been artificially stabilised and system agents have adapted to this new stability, the system cannot cope with any abrupt withdrawal of the stabilising force. For example, Whitaker notes that

Neuroleptics put a brake on dopamine transmission, and in response the brain puts down the dopamine accelerator (the extra D2 receptors). If the drug is abruptly withdrawn, the brake on dopamine is suddenly released while the accelerator is still pressed to the floor. The system is now wildly out of balance, and just as a car might careen out of control, so too the dopaminergic pathways in the brain……In short, initial exposure to neuroleptics put patients onto a path where they would likely need the drugs for life.

Whitaker makes the same observation for benzodiazepines and antidepressants:

benzodiazepines….work by perturbing a neurotransmitter system, and in response, the brain undergoes compensatory adaptations, and as a result of this change, the person becomes vulnerable to relapse upon drug withdrawal. That difficulty in turn may lead some to take the drugs indefinitely.

(antidepressants) perturb neurotransmitter systems in the brain. This leads to compensatory processes that oppose the initial acute effects of a drug…. When drug treatment ends, these processes may operate unopposed, resulting in appearance of withdrawal symptoms and increased vulnerability to relapse.

Similarly, when a central bank protects incumbent banks against liquidity risk, the banks choose to hold progressively more illiquid portfolios. When central banks provide incumbent banks with cheap funding in times of crisis to prevent failure and creditor losses, the banks choose to take on more leverage. This is similar to what John Adams has termed the ‘risk thermostat’ – the system readjusts to get back to its preferred risk profile. The protection once provided is almost impossible to withdraw without causing systemic havoc as agents adapt to the new stabilised reality and lose the ability to survive in an unstabilised environment.

Of course, when economic agents actively set out to arbitrage such central bank commitments, this is simply a form of moral hazard. But the same adaptation can easily occur via the natural selective forces at work in an economy – those who fail to take advantage of the Greenspan/Bernanke put simply go bust or get fired. In our brain, the adaptation simply reflects homeostatic mechanisms selected for by the process of evolution.
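A minimal sketch of the ‘risk thermostat’ logic, with invented numbers: agents are assumed to choose leverage so that perceived risk (leverage times perceived volatility) stays at a fixed comfort level. A backstop that dampens perceived volatility is therefore fully absorbed into higher leverage, and the risk that would be realised if the backstop were withdrawn rises well above the level agents ever intended to run. This is a caricature of the mechanism, not a calibrated model of bank behaviour.

```python
# All numbers and the definition of 'perceived risk' below are assumptions made
# purely for illustration, not a calibrated model of bank behaviour.

TARGET_RISK = 0.10       # the level of risk agents are comfortable running
TRUE_VOLATILITY = 0.05   # underlying asset volatility, unchanged by the backstop

def chosen_leverage(perceived_volatility, target_risk=TARGET_RISK):
    # the 'thermostat': leverage is raised until perceived risk returns to target
    return target_risk / perceived_volatility

for dampening in (0.0, 0.5, 0.8):  # how much the backstop dampens *perceived* volatility
    perceived = TRUE_VOLATILITY * (1 - dampening)
    lev = chosen_leverage(perceived)
    print(f"backstop dampens perceived vol by {dampening:.0%}: "
          f"leverage={lev:4.1f}, "
          f"perceived risk={lev * perceived:.2f}, "
          f"risk if backstop is withdrawn={lev * TRUE_VOLATILITY:.2f}")
```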

Transformation into a Pathological State, Loss of Core Functionality and Deterioration of the Baseline State

I have argued in many posts that the successive cycles of Minskyian stabilisation have a role to play in the deterioration in the structural performance of the real economy which has manifested itself as ‘The Great Stagnation’. The same conclusion holds for many other complex adaptive systems and our brain is no different. Stabilisation kills much of what makes human beings creative. Innovation and creativity are fundamentally disequilibrium processes so it is no surprise that an environment of stability does not foster them. Whitaker interviews a patient on antidepressants who said: “I didn’t have mood swings after that, but instead of having a baseline of functioning normally, I was depressed. I was in a state of depression the entire time I was on the medication.”

He also notes disturbing research on the damage done to children who were treated for ADHD with Ritalin:

when researchers looked at whether Ritalin at least helped hyperactive children fare well academically, to get good grades and thus succeed as students, they found that it wasn’t so. Being able to focus intently on a math test, it turned out, didn’t translate into long-term academic achievement. This drug, Sroufe explained in 1973, enhances performance on “repetitive, routinized tasks that require sustained attention,” but “reasoning, problem solving and learning do not seem to be [positively] affected.”……Carol Whalen, a psychologist from the University of California at Irvine, noted in 1997 that “especially worrisome has been the suggestion that the unsalutary effects [of Ritalin] occur in the realm of complex, high-order cognitive functions such as flexible problem-solving or divergent thinking.”

Progressive Increase in Required Dosage

In economic systems, this steady structural deterioration means that increasing amounts of stimulus need to be applied in successive cycles of stabilisation to achieve the same levels of growth. Whitaker identifies the same tendency with medication:

Over time, Chouinard and Jones noted, the dopaminergic pathways tended to become permanently dysfunctional. They became irreversibly stuck in a hyperactive state, and soon the patient’s tongue was slipping rhythmically in and out of his mouth (tardive dyskinesia) and psychotic symptoms were worsening (tardive psychosis). Doctors would then need to prescribe higher doses of antipsychotics to tamp down those tardive symptoms.
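The escalation dynamic in both domains can be caricatured in a few lines of code. In the sketch below (all parameters invented; the linear adaptation rule is an assumption, not pharmacology or macroeconomics), the system’s baseline adapts towards whatever level the stimulus holds it at, so an ever-larger dose is needed to produce the same lift – and abrupt withdrawal leaves the system well below where it started.

```python
# Purely illustrative: a homeostatic variable whose baseline adapts towards the
# level the stimulus holds it at. Adaptation rate and target are arbitrary.

baseline = 0.0          # the system's resting state, which drifts with repeated stimulation
adaptation_rate = 0.3   # how quickly the baseline adapts to the stimulated level
target_effect = 1.0     # the lift above baseline we want each round

dose = target_effect
for round_no in range(1, 9):
    effect = dose - baseline               # response = dose net of the adapted baseline
    baseline += adaptation_rate * effect   # the system adapts to the stimulation it received
    print(f"round {round_no}: dose={dose:5.2f}, effect={effect:4.2f}, adapted baseline={baseline:5.2f}")
    dose = target_effect + baseline        # next round needs a bigger dose for the same lift

print(f"abrupt withdrawal now leaves the system {baseline:.2f} below its original resting state")
```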

At this point, some of you may raise the following objection: so what if the new state is pathological? Maybe capitalism with its inherent instability is itself pathological. And once the safety nets of the Greenspan/Bernanke put, lender-of-last-resort programs and too-big-to-fail bailouts are put in place why would we need or want to remove them? If we simply medicate the economy ad infinitum, can we not avoid collapse ad infinitum?

This argument however is flawed.

  • The ability of economic players to reorganise to maximise the rents extracted from central banking and state commitments far exceeds the resources available to the state and the central bank. The key reason for this is the purely financial nature of the commitment. For example, if the state decided to print money and support the price of corn at twice its natural market price, then it could conceivably do so forever. Sooner or later, rent extractors will run up against natural resource limits – for example, limits on arable land. But when the state commits to support a credit-money-dominated financial system and asset prices, the economic system can and will generate financial “assets” without limit to take advantage of this commitment. The only defence that the CB and the state possess is regulation aimed at maintaining financial markets in an incomplete, underdeveloped state where economic agents do not possess the tools to game the system. Unfortunately, as Minsky and many others have documented, the pace of financial innovation over the last half-century has meant that banks and financialised corporates have all the tools they need to circumvent regulations and maximise rent extraction.
  • Even in a modern state that can print its own fiat currency, the ability to maintain financial commitments is subordinate to the need to control inflation. But doesn’t the complete absence of inflationary pressures in the current environment prove that we are nowhere close to any such limits? Not quite. As I have argued before, current macroeconomic policy is defined by an abandonment of the full employment target in order to mitigate any risk of inflation whatsoever. The inflationary risk caused by rent extraction from the stabilisation commitment is being counterbalanced by a “reserve army of labour”. The reason for giving up the full employment target is simple – as Minsky identified, once the economy has gone through successive cycles of stabilisation, it is prone to ‘rapid cycling’.

Rapid Cycling and Transformation of an Episodic Illness into a Chronic Illness

Minsky noted that

A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch.

In other words, an economy that attempts to achieve full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation – somewhat similar to a broken shower that only runs either too hot or too cold. The abandonment of the full employment target enables the system to postpone this point of rapid cycling.

The structural malformation of the economic system due to the application of increasing levels of stimulus to the task of stabilisation means that the economy has lost the ability to generate the endogenous growth and innovation that it could before it was so actively stabilised. The system has now been homogenised and is entirely dependent upon constant stimulus. ‘Rapid cycling’ also explains something I noted in an earlier post: the apparently schizophrenic nature of the markets, turning from risk-on to risk-off at the drop of a hat. It is the lack of diversity that causes this, as the vast majority of agents change their behaviour based on the absence or presence of stabilising interventions.
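The ‘broken shower’ analogy can be rendered as a toy control loop. The sketch below is not a model of the economy; it only assumes an aggressive bang-bang controller and a delayed response, which together are enough to make the system lurch between too hot and too cold instead of settling at the target.

```python
# Toy 'broken shower': aggressive corrections plus a lagged response produce
# violent oscillation rather than stability. Every number here is arbitrary.

from collections import deque

temperature = 20.0
TARGET = 37.0
pipe = deque([0.0] * 5)            # adjustments take 5 steps to reach the shower head

for step in range(30):
    # the controller reacts only to the current reading, and reacts hard
    adjustment = 8.0 if temperature < TARGET else -8.0
    pipe.append(adjustment)
    temperature += pipe.popleft()  # the adjustment arriving now was decided 5 steps ago
    if step % 3 == 0:
        print(f"step {step:2d}: temperature {temperature:5.1f}")
```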

Whitaker again notes the connection between medication and rapid cycling in many instances:

As early as 1965, before lithium had made its triumphant entry into American psychiatry, German psychiatrists were puzzling over the change they were seeing in their manic-depressive patients. Patients treated with antidepressants were relapsing frequently, the drugs “transforming the illness from an episodic course with free intervals to a chronic course with continuous illness,” they wrote. The German physicians also noted that in some patients, “the drugs produced a destabilization in which, for the first time, hypomania was followed by continual cycling between hypomania and depression.”

(stimulants) cause children to cycle through arousal and dysphoric states on a daily basis. When a child takes the drug, dopamine levels in the synapse increase, and this produces an aroused state. The child may show increased energy, an intensified focus, and hyperalertness. The child may become anxious, irritable, aggressive, hostile, and unable to sleep. More extreme arousal symptoms include obsessive-compulsive and hypomanic behaviors. But when the drug exits the brain, dopamine levels in the synapse sharply drop, and this may lead to such dysphoric symptoms as fatigue, lethargy, apathy, social withdrawal, and depression. Parents regularly talk of this daily “crash.”

THE PATIENT WANTS STABILITY TOO

At this point, I seem to be arguing that stabilisation is all just a con-game designed to enrich evil bankers, evil pharma and the like. But such an explanation underestimates just how deep-seated the temptation and the need to stabilise really are. What it misses most is that the “patient” in complex adaptive systems is as eager to choose stability over resilience as the doctor is.

The Short-Term vs The Long-Term

As Daniel Carlat notes, the reality is that on the whole, psychiatric drugs “work” at least in the short term. Similarly, each individual act of macroeconomic stabilisation such as a lender-of-last-resort intervention, quantitative easing or a rate cut clearly has a positive impact on the short-term performance of both asset markets and the economy.

Whitaker too acknowledges this:

Those are the dueling visions of the psychopharmacology era. If you think of the drugs as “anti-disease” agents and focus on short-term outcomes, the young lady springs into sight. If you think of the drugs as “chemical imbalancers” and focus on long-term outcomes, the old hag appears. You can see either image, depending on where you direct your gaze.

The critical point here is that, just as with forest fires and macroeconomies, the initial stabilisation can be achieved easily and with very little medication. The results may even seem miraculous. But this initial period does not last. From one of many cases Whitaker quotes:

at first, “it was like a miracle,” she says. Andrew’s fears abated, he learned to tie his shoes, and his teachers praised his improved behavior. But after a few months, the drug no longer seemed to work so well, and whenever its effects wore off, there would be this “rebound effect.” Andrew would “behave like a wild man, out of control.” A doctor increased his dosage, only then it seemed that Andrew was like a “zombie,” his sense of humor reemerging only when the drug’s effects wore off. Next, Andrew needed to take clonidine in order to fall asleep at night. The drug treatment didn’t really seem to be helping, and so Ritalin gave way to other stimulants, including Adderall, Concerta, and dextroamphetamine. “It was always more drugs,” his mother says.

Medication Seen as Revealing Structural Flaws

One would think that the functional and structural deterioration that follows constant medication would cause both the patient and the doctor to reconsider the benefits of stabilisation. But this deterioration too can be interpreted in many different ways. Whitaker gives an example where the stabilised state is seen to be beneficial by revealing hitherto undiagnosed structural problems:

in 1982, Michael Strober and Gabrielle Carlson at the UCLA Neuropsychiatric Institute put a new twist into the juvenile bipolar story. Twelve of the sixty adolescents they had treated with antidepressants had turned “bipolar” over the course of three years, which—one might think—suggested that the drugs had caused the mania. Instead, Strober and Carlson reasoned that their study had shown that antidepressants could be used as a diagnostic tool. It wasn’t that antidepressants were causing some children to go manic, but rather the drugs were unmasking bipolar illness, as only children with the disease would suffer this reaction to an anti-depressant. “Our data imply that biologic differences between latent depressive subtypes are already present and detectable during the period of early adolescence, and that pharmacologic challenge can serve as one reliable aid in delimiting specific affective syndromes in juveniles,” they said.

Drug Withdrawal as Proof That It Works

The symptoms of drug withdrawal can also be interpreted to mean that the drug was necessary and that the patient is fundamentally ill. The reduction in withdrawal symptoms when the patient goes back on provides further “proof” that the drug works. Withdrawal symptoms can also be interpreted as proof that the patient needs to be treated for a longer period. Again, quoting from Whitaker:

Chouinard and Jones’s work also revealed that both psychiatrists and their patients would regularly suffer from a clinical delusion: They would see the return of psychotic symptoms upon drug withdrawal as proof that the antipsychotic was necessary and that it “worked.” The relapsed patient would then go back on the drug and often the psychosis would abate, which would be further proof that it worked. Both doctor and patient would experience this to be “true,” and yet, in fact, the reason that the psychosis abated with the return of the drug was that the brake on dopamine transmission was being reapplied, which countered the stuck dopamine accelerator. As Chouinard and Jones explained: “The need for continued neuroleptic treatment may itself be drug-induced.”

while they acknowledged that some alprazolam patients fared poorly when the drug was withdrawn, they reasoned that it had been used for too short a period and the withdrawal done too abruptly. “We recommend that patients with panic disorder be treated for a longer period, at least six months,” they said.

Similarly, macroeconomic crises can be, and frequently are, interpreted as evidence of the need for better and more stabilisation. The initial positive impact of each intervention and the negative impact of reducing stimulus only reinforce this belief.

SCIENCE AND STABILISATION

A typical complaint against Whitaker’s argument is that his thesis is unproven. I would argue that within the confines of conventional “scientific” data analysis, his thesis and others directly opposed to it are essentially unprovable. To take an example from economics, is the current rush towards “safe” assets a sign that we need to produce more “safe” assets? Or is it a sign that our fragile economic system is addicted to the need for an ever-increasing supply of “safe” assets and what we need is a world in which no assets are safe and all market participants are fully aware of this fact?

It can also be argued that, in complex adaptive systems, the modern scientific method of empirically testing theoretical hypotheses against the data is itself fundamentally biased towards stabilisation and against resilience. The same story that I trace out below for the history of mental health can be traced out for economics and many other fields.

Desire to Become a ‘Real’ Science

Whitaker traces out how the theory attributing mental disorders to chemical imbalances was embraced because it enabled psychiatrists to become “real” doctors, and he captures the mood of the profession in the 80s:

Since the days of Sigmund Freud the practice of psychiatry has been more art than science. Surrounded by an aura of witchcraft, proceeding on impression and hunch, often ineffective, it was the bumbling and sometimes humorous stepchild of modern science. But for a decade and more, research psychiatrists have been working quietly in laboratories, dissecting the brains of mice and men and teasing out the chemical formulas that unlock the secrets of the mind. Now, in the 1980s, their work is paying off. They are rapidly identifying the interlocking molecules that produce human thought and emotion…. As a result, psychiatry today stands on the threshold of becoming an exact science, as precise and quantifiable as molecular genetics.

Search for the Magic Bullet despite Complexity of Problem

In the language of medicine, a ‘magic bullet’ is a drug that counters the root cause of the disease without adversely affecting any other part of the patient. The chemical-imbalance theory took a ‘magic bullet’ approach which reduced the complexity of our mental system to “a simple disease mechanism, one easy to grasp. In depression, the problem was that the serotonergic neurons released too little serotonin into the synaptic gap, and thus the serotonergic pathways in the brain were “underactive”. Antidepressants brought serotonin levels in the synaptic gap up to normal, and that allowed these pathways to transmit messages at a proper pace.”

Search for Scientific Method and Objective Criteria

Whitaker traces out the push towards making psychiatry an objective science with a defined method and its implications:

Congress had created the NIMH with the thought that it would transform psychiatry into a more modern, scientific discipline…..Psychiatrists and nurses would use “rating scales” to measure numerically the characteristic symptoms of the disease that was to be studied. Did a drug for schizophrenia reduce the patient’s “anxiety”? His or her “grandiosity”? “Hostility”? “Suspiciousness”? “Unusual thought content”? “Uncooperativeness”? The severity of all of those symptoms would be measured on a numerical scale and a total “symptom” score tabulated, and a drug would be deemed effective if it reduced the total score significantly more than a placebo did within a six-week period. At least in theory, psychiatry now had a way to conduct trials of psychiatric drugs that would produce an “objective” result. Yet the adoption of this assessment put psychiatry on a very particular path: The field would now see short-term reduction of symptoms as evidence of a drug’s efficacy. Much as a physician in internal medicine would prescribe an antibiotic for a bacterial infection, a psychiatrist would prescribe a pill that knocked down a “target symptom” of a “discrete disease.” The six-week “clinical trial” would prove that this was the right thing to do. However, this tool wouldn’t provide any insight into how patients were faring over the long term.

It cannot be emphasised enough that even lengthening the period of the scientific trial will not give us definitive answers. The argument that structural flaws are being uncovered, or that withdrawal proves that the drug works, cannot be definitively refuted. Moreover, at every point in time after medication is started, the short-term impact of staying on or increasing the level of medication is better than the alternative of going off it. The deeper issue is that in such a system, statistical analysis aimed at determining the efficacy of the intervention cannot deal with the fact that the intervention itself shifts the distribution of outcomes into the tail, and continues to do so as long as the level of medication keeps increasing.
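To make the measurement problem concrete, here is a deliberately simple Monte Carlo sketch. Every number in it (effect sizes, the 5% tail probability, the sample size) is invented for illustration; the structural point is that an endpoint measured at six weeks registers the genuine short-term improvement while remaining blind to an assumed shift of long-run outcomes into the tail.

```python
# Illustrative only: all effect sizes, probabilities and horizons are invented.
import random
import statistics

rng = random.Random(42)
N = 200  # patients per arm; lower score = fewer symptoms

def six_week_score(treated):
    # the treatment genuinely lowers the short-term symptom score
    return rng.gauss(50, 10) - (8 if treated else 0)

def long_run_score(treated):
    # ...but, by assumption, it also adds a small chance of a far worse
    # long-run outcome (e.g. chronicity) that a six-week endpoint never sees
    score = rng.gauss(50, 10) - (3 if treated else 0)
    if treated and rng.random() < 0.05:
        score += 80
    return score

short_t = [six_week_score(True) for _ in range(N)]
short_c = [six_week_score(False) for _ in range(N)]
long_t = [long_run_score(True) for _ in range(N)]
long_c = [long_run_score(False) for _ in range(N)]

print(f"six-week mean score: treated {statistics.mean(short_t):5.1f} vs control {statistics.mean(short_c):5.1f}")
print(f"long-run mean score: treated {statistics.mean(long_t):5.1f} vs control {statistics.mean(long_c):5.1f}")
print(f"long-run worst case: treated {max(long_t):5.1f} vs control {max(long_c):5.1f}")
```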

The Control Agenda and High Modernism

The desire for stability and the control agenda is not simply a consequence of the growth of Olsonian special interests in the economy. The title of this post is inspired by Holling and Meffe’s classic paper on this topic in ecology. Their paper highlights that stabilisation is embedded within the command-and-control approach, which is itself inherent to the high-modernist worldview that James Scott has criticised.

Holling and Meffe also recognise that it is a simplistic application of “scientific” methods that underpins this command-and-control philosophy:

much of present ecological theory uses the equilibrium definition of resilience, even though that definition reinforces the pathology of equilibrium-centered command and control. That is because much of that theory draws predominantly from traditions of deductive mathematical theory (Pimm 1984) in which simplified, untouched ecological systems are imagined, or from traditions of engineering in which the motive is to design systems with a single operating objective (Waide & Webster 1976; De Angelis et. al. 1980; O’Neill et al. 1986), or from small-scale quadrant experiments in nature (Tilman & Downing 1994) in which long-term, large-scale successional or episodic transformations are not of concern. That makes the mathematics more tractable, it accommodates the engineer’s goal to develop optimal designs, and it provides the ecologist with a rationale for utilizing manageable, small sized, and short-term experiments, all reasonable goals. But these traditional concepts and techniques make the world appear more simple, tractable, and manageable than it really is. They carry an implicit assumption that there is global stability – that there is only one equilibrium steady-state, or, if other operating states exist, they should be avoided with safeguards and regulatory controls. They transfer the command-and-control myopia of exploitive development to similarly myopic demands for environmental regulations and prohibitions.

Those who emphasize ecosystem resilience, on the other hand, come from traditions of applied mathematics and applied resource ecology at the scale of ecosystems, such as the dynamics and management of freshwater systems (Fiering 1982), forests (Clark et al. 1979), fisheries (Walters 1986), semiarid grasslands (Walker et al. 1969), and interacting populations in nature (Dublin et al. 1990; Sinclair et al. 1990). Because these studies are rooted in inductive rather than deductive theory formation and in experience with the effects of large-scale management disturbances, the reality of flips from one stable state to another cannot be avoided (Holling 1986).

 

My aim in this last section is not to argue against the scientific method but simply to state that we have adopted too narrow a definition of what constitutes a scientific endeavour. Even this is not a coincidence. High modernism has its roots firmly planted in Enlightenment rationality and the philosophical viewpoints that lie at the core of our idea of progress. In many uncertain domains, genuine progress cannot be distinguished from stabilisation that leads to fragility. These are topics that I hope to explore in future posts.


Written by Ashwin Parameswaran

December 14th, 2011 at 10:51 am