macroresilience

resilience, not stability

Creative Destruction and The Class Struggle

with 20 comments

In a perceptive post, Reihan Salam makes the point that private equity firms are simply an industrialised version of corporate America’s efficiency-seeking impulse. I’ve made a similar point in a previous post that the the excesses of private equity mirror the excesses of the economy during the neoliberal era. To right-wing commentators, neoliberalism signifies a much-needed transition towards a free-market economy. Left-wing commentators on the other hand lament the resultant supremacy of capital over labour and rising inequality. But as I have argued several times, the reality of the neoliberal transition is one where a combination of protected asset markets via the Greenspan Put, an ever-growing ‘License Raj’, regulations that exist primarily to protect incumbent corporates and persistent bailouts of banks and large corporates have given us a system best described as “stability for the classes and instability for the masses”.

The solution preferred by the left is to somehow recreate the golden age of the 50s and the 60s i.e. stability for all. Although this would be an environment of permanent innovative stagnation bereft of Schumpeterian creative destruction, you could argue that restoring social justice, reducing inequality and shoring up the bargaining position of the working class is more important than technological progress. In this post I will argue that this stability-seeking impetus is counterproductive and futile. A stable system where labour and capital are both protected from the dangers of failure inevitably breeds a fragile and disadvantaged working class.

The technology industry provides a great example of how disruptive competitive dynamics can give workers a relatively strong bargaining position. As Reihan notes, the workers fired by Steve Jobs in 1997 probably found employment elsewhere without much difficulty. Some of them probably started their own technology ventures. The relative bargaining power of the technology worker is boosted not just by the presence of a large number of new firms looking to hire but also by the option to simply start their own small venture instead of being employed. This vibrant ecosystem of competing opportunities and alternatives is a direct consequence of the disruptive churn that has characterised the sector over the last few decades. This “disorder” means that most individual firms and jobs are vulnerable at all times to elimination. Yet jobseekers as a whole are in a relatively strong position. Micro-fragility leads to macro-resilience.

In many sectors, there are legitimate economies of scale that prevent laid-off workers from self-organising into smaller firms. But in much of the economy, the digital and the physical, these economies of scale are rapidly diminishing. Yet these options are denied to large sections of the economy due to entry barriers from licensing requirements and regulatory hurdles that systematically disadvantage small, new firms. In some states, it is easier to form a technology start-up than it is to start a hair-braiding business. In fact, the increasingly stifling patent regime is driving Silicon Valley down the same dysfunctional path that the rest of the economy is on.

The idea that we can protect incumbent firms such as banks from failure and still preserve a vibrant environment for new entrants and competitors is folly. Just like a fire that burns down tall trees provides the opportunity for smaller trees to capture precious sunlight and thrive, new firms expand by taking advantage of the failure of large incumbents. But when the incumbent fails, there must be a sufficient diversity of small and new entrants who are in a position to take advantage. A long period of stabilisation does its greatest damage by stamping out this diversity and breeding a micro-stable, macro-fragile environment. Just as in ecosystems, “minor species provide a ‘‘reservoir of resilience’’ through their functional similarity to dominant species and their ability to increase in abundance and thus maintain function under ecosystem perturbation or stress”. This deterioration is not evident during the good times when the dominant species, however homogeneous, appear to be performing well. Stabilisation is therefore an almost irreversible path – once the system is sufficiently homogenous, avoiding systemic collapse requires us to put the incumbent fragile players on permanent life support.

As even Marxists such as David Harvey admit, Olsonian special-interest dynamics subvert and work against the interests of the class struggle:

the social forces engaged in shaping how the state–finance nexus works…differ somewhat from the class struggle between capital and labour typically privileged in Marxian theory….there are many issues, varying from tax, tariff, subsidy and both internal and external regulatory policies, where industrial capital and organised labour in specific geographical settings will be in alliance rather than opposition. This happened with the request for a bail-out for the US auto industry in 2008–9. Auto companies and unions sat side by side in the attempt to preserve jobs and save the companies from bankruptcy.

This fleeting and illusory stability that benefits the short-term interests of the currently employed workers in a firm leads to the ultimate loss of bargaining-power and reduced real wage growth in the long run for workers as a class. In the pursuit of stability, the labour class supports those very policies that are most harmful to it in the long run. A regime of Smithian efficiency-seeking i.e. the invisible hand, without Schumpeterian disruption i.e. the invisible foot inevitably leads to a system where capital dominates labour. Employed workers may achieve temporary stability via special-interest politics but the labour class as a whole will not. Creative destruction prevents the long-term buildup of capital interests by presenting a constant threat to the survival of the incumbent rent-earner. In the instability of the individual worker (driven by the instability of their firm’s prospects) lies the resilience of the worker class. Micro-fragility is the key to macro-resilience but this fragility must be felt by all economic agents, labour and capital alike.

Bookmark and Share

Written by Ashwin Parameswaran

July 5th, 2012 at 1:11 am

Monetary Policy, Fiscal Policy and Inflation

with 8 comments

In a previous post I argued that in the current environment, the Federal Reserve could buy up the entire stock of government bonds without triggering any incremental inflation. The argument for the ineffectiveness of conventional QE is fairly simple. Government bonds are already safe collateral both in the shadow banking system as well as with the central bank itself. The liquidity preference argument is redundant in differentiating between deposits and an asset that qualifies as safe collateral. Broad money supply is therefore unaffected when such an asset is purchased.

The monetarist objection to this argument is that QE increases the stock of high-powered money and increases the price level to the extent that this increase is perceived as permanent. But in an environment where interest is paid on reserves or deposits with the central bank, the very concept of high-powered money is meaningless and there is no hot potato effect to speak of. Some monetarists argue that we need to enforce a penalty rate on reserves to get rid of excess reserves but small negative rates make little difference to safe-haven flows and large negative rates will lead to people hoarding bank notes.

The other objection is as follows: if the central bank can buy up all the debt then why don’t we do just that and retire all that debt and make the state debt-free? Surely that can’t be right – isn’t such debt monetisation the road to Zimbabwe-like hyperinflation? Intuitively, many commentators interpret QE as a step on the slippery slope of fiscal deficit monetisation but this line of thought is fatally flawed. Inflation comes about from the expected and current monetisation of fiscal deficits, not from the central bank’s purchase of the stock of government debt that has arisen from past fiscal deficits. The persistent high inflation that many emerging market economies are so used to arises from money-printed deficits that are expected to continue well into the future.

So why do the present and future expected fiscal deficits in the US economy not trigger inflation today? One, the present deficits come at a time when the shadow money supply is still contracting. And two, the impact of expected future deficits in the future is muddied thanks to the status of the US Dollar as the reserve currency of the world, a status that has been embellished since the 90s thanks to reserves being used as capital flight and IMF-avoidance insurance by many EM countries (This post by Brett Fiebiger is an excellent explanation of the privileged status enjoyed by the US Dollar). The expectations channel has to deal with too much uncertainty and there are too many scenarios in which the USD may hold its value despite large deficits, especially if the global economy continues to be depressed and demand for safe assets remains elevated. There are no such uncertainties in the case of peripheral economy fiat currencies (e.g. Hungary). To the extent that there is any safe asset demand, it is mostly local and the fact that other global safe assets exist means that the fiscal leeway that peripheral economies possess is limited. In other words, the absence of inflation is not just a matter of the market trusting the US government to take care of its long-term structural deficit problems – uncertainty and the “safe asset” status of the USD greatly diminish the efficacy of the expectations channel.

Amidst the fog of uncertainty and imperfect commitments, concrete steps matter and they matter especially in the midst of a financial crisis. Monetary policy can almost always prevent deflation in the face of a contraction in shadow money supply via the central banks’ lender-of-last-resort facilities. In an economy like 2008-2009, no amount of open-market operations, asset purchases and monetary target commitments can prevent a sharp deflationary contraction in the private shadow money supply unless the lender-of-last-resort facility is utilised. Once the system is stabilised and the possibility of a deflationary contraction has been avoided, monetary policy has very little leeway to create incremental inflation in the absence of fiscal profligacy and shadow banking/private credit expansion except via essentially fiscal actions such as buying private assets, credit guarantees etc. In the present situation where the private household economy is excessively indebted and the private business economy suffers from a savings glut and a persistent investment deficit due to structural malformation, fiscal profligacy is the only short-term option. Correspondingly, no amount of monetary stimulus can prevent a sharp fiscal contraction from causing deflation in the current economic state.

Monetary policy is also not all-powerful in its contractionary role – it has significant but not unlimited leeway to tighten policy in the face of fiscal profligacy or shadow banking expansion. The Indian economy in 1995-1996 illustrates how the Reserve Bank of India (RBI) could control inflation in the face of fiscal profligacy only by crippling the private sector economy. The real rates faced by the private sector shot up and spending ground to a halt. The dilemma faced by the RBI today mirror the problems it faced then – if fiscal indiscipline by the Indian government persists, the RBI cannot possibly bring down inflation to acceptable levels without causing the private sector economy to keel over.

The current privileged status of the US Dollar and the low interest rates and inflation does not imply that long-term fiscal discipline is unimportant. Currently, the demand for safety reduces inflation and the low inflation renders the asset safer – this virtuous positive-feedback cycle can turn vicious if expectation of monetisation is sufficiently large and the mutual-feedback nature of the process means that any such transition will almost certainly be rapid. It is not even clear that the United States is better off than say Hungary in the long run. The United States has much leeway and flexibility than Hungary but if it abuses this privilege, any eventual break will be that much more violent. Borrowing from an old adage, give an economy too much rope and it will hang itself.

Bookmark and Share

Written by Ashwin Parameswaran

June 20th, 2012 at 4:55 pm

SNB’s Swiss Franc Dilemma: A Solution

with 12 comments

As I highlighted in my previous post, the honeymoon period for the SNB in its enforcement of the 1.20 floor on the EURCHF exchange rate is well and truly over. In May, the SNB needed to intervene to the tune of CHF 66 bn to defend the floor. There’s even speculation that the SNB may be forced to implement capital controls or negative interest rates on offshore deposits in the event of a disorderly Greek exit from the Eurozone.

Increasingly, the SNB is caught between a rock and a hard place. Either it can continue to defend the peg and accumulate increasing amounts of foreign exchange reserves on which it faces the prospect of correspondingly increasing losses. Or it can abandon the peg, allow the CHF to appreciate 20-25% and risk deflation and a collapse in exports and GDP. It is not difficult to see why the SNB is being forced to defend the peg – the EUR in the current environment is a risky asset and the CHF is a safe asset. By committing to sell a safe asset at a below-market price, the SNB is subsidising the price of safety. It is no wonder then that this offer finds so many takers when there is a flight to safety.

Some argue that the continued deflation in the Swiss economy allows the SNB to maintain its peg but this argument ignores the fact that it is the continued deflation that also maintains the safe status of the Swiss Franc. Deflation provides the impetus for the safe-haven flows due to which the required intervention by the SNB and the SNB’s risk exposure are that much greater in magnitude. Therefore, if the SNB is eventually forced to abandon the floor, the earlier the better. A prolonged period of deflation punctuated by occasional flights to safety will compel the SNB to accumulate an unsustainable level of foreign exchange reserves to defend the floor. By the same logic, the SNB would obviously prefer that the Eurozone not implode but if it does implode, then it would rather that the Euro implodes sooner rather than later.

So what does the SNB need to do? It needs to engineer an outcome where the market price of the EURCHF moves up and the CHF devalues by itself. The only sustainable way to achieve this is to provide a significant dose of inflation to the Swiss economy and it needs to do so in a manner that does not provide an even larger subsidy to those running away from risk. For example, raising the EURCHF floor by itself only increases the temptation to buy the Franc and at best provides a one-time dose of inflation. The SNB could decide to buy CHF private sector assets but the safe-haven inflows and relatively strong performance of the Swiss economy mean that asset markets, especially housing, are already frothy.

The more sustainable and equitable solution is to simply make the safe asset unsafe by generating the requisite inflation for which money-financed helicopter drops are the best solution. Money-financed fiscal transfers will create inflation, deter the safe-haven inflow and shore up the balance sheet of the Swiss household sector. The robustness of this solution in creating sustainable inflation will not come as a shock to any emerging market central banker or finance minister. The crucial difference between this plan and that implemented by banana republics around the world is that instead of printing money and funnelling it to corrupt government officials we will distribute the money to the masses.

Bookmark and Share

Written by Ashwin Parameswaran

June 12th, 2012 at 11:21 am

Monetary Policy Targets and The Need for Market Intervention

with 9 comments

If we analyse monetary policy as a threat strategy, then how do we make sure that the threat is credible? According to Nick Rowe, “The Fed needs to communicate its target clearly. And it needs to threaten to do unlimited amounts of QE for an unlimited amount of time until its target is hit. If that threat is communicated clearly, and believed, the actual amount of QE needed will be negative.” In essence, this is a view that a credible threat will cause market expectations to adjust and negate the need for any actual intervention in markets by the central bank.

The current poster-child for this view is the SNB’s maintenance of a floor on the EURCHF exchange rate at 1.20. The market-expectations story argues that because the SNB has credibly committed to maintaining a floor on EURCHF, it will not need to intervene in the markets at all (See Evan Soltas here and here, Scott Sumner, Matthew Yglesias and Timothy Lee). And indeed the SNB did not need to intervene at all…..that is, until May, when they were required to intervene to the tune of CHF 66 billion within the span of just a month in order to defend the floor from euro-crisis induced safe-haven flows.

Even when the central bank wants to hit a target as transparent and as liquidly traded as an exchange rate, it seems that actual intervention is needed sooner or later. Therefore when the transmission channel between central bank purchase of assets and the target variable is as blurred as it would be in regimes such as NGDP targeting, it is unlikely that the central bank will get away with just waving the magic wand of market expectations.

Bookmark and Share

Written by Ashwin Parameswaran

June 7th, 2012 at 11:15 pm

Posted in Monetary Policy

The Case Against Monetary Stimulus Via Asset Purchases

with 43 comments

Many economists and commentators blame the Federal Reserve for the increasingly tepid economic recovery in the United States. For example, Ryan Avent calls the Fed’s unwillingness to further ease monetary policy a “dereliction of duty” and Felix Salmon claims that “we have low bond yields because the Fed has failed to do its job”. Most people assume that the adoption of a higher inflation target (or an NGDP target) and conventional quantitative easing (QE) via government bond purchases will suffice. Milton Friedman, for example, had argued that government bond purchases with “high-powered money” would have dragged Japan out of its recession. But how exactly is more QE supposed to work in an environment when treasury bonds are trading at all-time low yields and banks are awash in excess reserves?

If we analyse monetary policy as a threat strategy, then how do we make sure that the threat is credible? According to Nick Rowe, “The Fed needs to communicate its target clearly. And it needs to threaten to do unlimited amounts of QE for an unlimited amount of time until its target is hit. If that threat is communicated clearly, and believed, the actual amount of QE needed will be negative.” In essence, this is a view that market expectations are sufficient to do the job.

Expectations are a large component of how monetary policy works but expectations only work when there is a clear and credible set of actions that serve as the bazooka(s) to enforce these expectations. In other words, what is it exactly that the central bank threatens to do if the market refuses to react sufficiently to its changed targets? It is easy to identify the nature of the threat when the target variable is simply a market price, e.g. an exchange rate vs another currency (such as the SNB’s enforcement of a minimum EURCHF exchange rate) or an exchange rate vs a commodity (such as the abandoning of the gold standard). But when the target variable is not a market price, the transmission mechanism is nowhere near as simple.

Scott Sumner would implement an NGDP targeting regime in the following manner:

First create an explicit NGDP target. Use level targeting, which means you promise to make up for under- or overshooting. If excess reserves are a problem, get rid of most of them with a penalty rate. Commit to doing QE until various asset prices show (in the view of Fed officials) that NGDP is expected to hit the announced target one or two years out. If necessary buy up all of Planet Earth.

Interest on Reserves

Small negative rates on reserves or deposits held at the central bank are not unusual. But banks can and will pass on this cost to their deposit-holders in the form of negative deposit rates and given the absence of any better liquid and nominally safe investment options, most bank customers will pay this safety premium. For example, when the SNB charged negative rates on offshore deposits denominated in Swiss Franc in the mid-1970s, the move did very little to stem the inflow into the currency.

Significant negative rates are easily evaded as people possess the option to hold cash in the form of bank notes. As SNB Vice-Chairman Jean-Pierre Danthine notes:

With strongly negative interest rates, theory joins practice and seems to lead to a policy of holding onto bank notes (cash) rather than accounts, which destabilises the system.

Quantitative Easing: Government Bonds

Conventional QE can be deconstructed into two components: an exchange of money for treasury-bills and an exchange of treasury-bills for treasury-bonds. The first component has no impact on the market risk position of the T-bill holder for whom deposits and T-bills are synonymous in a zero-rates environment. But it is also irrelevant from the perspective of the banking system unless the rate paid on reserves is significantly negative (which can be evaded by holding bank notes as discussed above).

The second component obviously impacts the market risk position of the economy as a whole. It is widely assumed that by purchasing government bonds, the central bank reduces the duration risk exposure of the market as a whole thus freeing up risk capacity. But for most holders of government bonds (especially pension funds and insurers), duration is not a risk but a hedge. A nominal dollar receivable in 20 years is not always riskier than a nominal dollar receivable today – for those who hold the bond as a hedge for a liability of a nominal dollar payable in 20 years, the dollar receivable today is in fact the riskier holding. More generally the negative beta nature of government bonds means that the central bank increases the risk exposure of the economy when it buys them.

Apart from the market risk impact of QE, we need to examine whether it has any impact on the liquidity position of the private economy. In this respect, neither the first or the second step has any impact for a simple reason – the assets being bought i.e. govt bonds are already safe collateral both in the shadow banking system as well as with the central bank itself. Therefore, any owner of government bonds can freely borrow cash against it. The liquidity preference argument is redundant in differentiating between deposits and an asset that qualifies as safe collateral. Broad money supply is therefore unaffected when such an asset is purchased.

If conventional QE were the only tool in the arsenal, announcing higher targets or NGDP targets achieves very little. The Bank of England and the Federal Reserve could buy up the entire outstanding stock of govt bonds and the impact on inflation or economic growth would be negligible in the current environment.

Credit Easing and More: Private Sector Assets

Many proponents of NGDP targeting would assert that limiting the arsenal of the central bank to simply treasury bonds is inappropriate and that the central bank must be able to purchase private sector assets (bonds, equities) or as Scott Sumner exhorts above “If necessary buy up all of Planet Earth”. There is no denying the fact that by buying up all of Planet Earth, any central bank can create inflation. But when the assets bought are already liquid and market conditions are not distressed, buying of private assets creates inflation only by increasing the price and reducing the yield of those assets i.e. a wealth transfer from the central bank to the chosen asset-holders. As with quantitative easing through government bond purchases, the inability to enforce adequate penalties on reserves nullifies any potential “hot potato” effect.

Bernanke himself has noted that the liquidity facility interventions during the 2008-2009 crisis and QE1 were focused on reducing private market credit spreads and improving the functioning of private credit markets at a time when the market for many private sector assets was under significant stress and liquidity premiums were high. The current situation is not even remotely comparable – yields on private credit instruments are at relatively elevated levels compared to historical median spreads but the difference in absolute terms is only about 50 bps on investment-grade credit (see table below) as compared to much higher levels (at least 300-40 bps on investment grade) during the 2008-2009 crisis.

US Historical Credit Spreads
Source: Robeco

A quantitative easing program focused on purchasing private sector assets is essentially a fiscal program in monetary disguise and is not even remotely neutral in its impact on income distribution and economic activity. Even if the central bank buys a broad index of bonds or equities, such a program is by definition a transfer of wealth towards asset-holders and regressive in nature (financial assets are largely held by the rich). The very act of making private sector assets “safe” is a transfer of wealth from the taxpayer to some of the richest people in our society. The explicit nature of the central banks’ stabilisation commitment means that the rent extracted from the commitment increases over time as more and more economic actors align their portfolios to hold the stabilised and protected assets.

Such a program is also biased towards incumbent firms and against new firms. The assumption that an increase in the price of incumbent firms’ stock/bond price will flow through to lending and investment in new businesses is unjustified due to the significantly more uncertain nature of new business lending/investment. This trend has been exacerbated since the crisis and the bond market is increasingly biased towards the largest, most liquid issuers. Even more damaging, any long-term macroeconomic stabilisation program that commits to purchasing and supporting macro-risky assets will incentivise economic actors to take on macro risk and shed idiosyncratic risk. Idiosyncratic risk-taking is the lifeblood of innovation in any economy.

In other words, QE is not sufficient to hit any desired inflation/NGDP target unless it is expanded to include private sector assets. If it is expanded to include private sector assets, it will exacerbate the descent into an unequal, crony capitalist, financialised and innovatively stagnant economy that started during the Greenspan/Bernanke put era.

Removing the zero-bound

One way of getting around the zero-bound on interest rates is to simply abolish or tax bank note holdings as Willem Buiter has recommended many times:

The existence of bank notes or currency, which is an irredeemable ‘liability’ of the central bank – bearer bonds with a zero nominal interest rate – sets a lower bound (probably at something just below 0%) on central banks’ official policy rates.
The obvious solutions are: (1) abolishing currency completely and moving to E-money on which negative interest rates can be paid as easily as zero or positive rates; (2) taxing holdings of bank notes (a solution first proposed by Gesell (1916) and also advocated by Irving Fisher (1933)) or (3) ending the fixed exchange rate between currency and central bank reserves (which, like all deposits, can carry negative nominal interest rates as easily as positive nominal interest rates, a solution due to Eisler (1932)).

I’ve advocated many times on this blog that monetary-fiscal hybrid policies such as money-financed helicopter drops to individuals should be established as the primary tool of macroeconomic stabilisation. In this manner, inflation/NGDP targets can be achieved in a close-to-neutral manner that minimises rent extraction. My preference for fiscal-monetary helicopter drops over negative interest-rates is primarily driven by financial stability considerations. There is ample evidence that even low interest rates contribute to financial instability.

There’s a deep hypocrisy at the heart of the macro-stabilised era. Every policy of stabilisation is implemented in a manner that only a select few (typically corporate entities) can access with an implicit assumption that the impact will trickle-down to the rest of the economy. Central-banking since the Great Moderation has suffered from an unwarranted focus on asset prices driven by an implicit assumption that changes in asset prices are the best way to influence the macroeconomy. Instead doctrines such as the Greenspan Put have exacerbated inequality and cronyism and promoted asset price inflation over wage inflation. The single biggest misconception about the macro policy debate is the notion that monetary policy is neutral or more consistent with a free market and fiscal policy is somehow socialist and interventionist. A program of simple fiscal transfers to individuals can be more neutral than any monetary policy instrument and realigns macroeconomic stabilisation away from the classes and towards the masses.

Bookmark and Share

Written by Ashwin Parameswaran

June 4th, 2012 at 2:36 pm

The Resilience Approach vs Minsky/Bagehot: When and Where to Intervene

with 5 comments

There are many similarities between a resilience approach to macroeconomics and the Minsky/Bagehot approach – the most significant being a common focus on macroeconomies as systems in permanent disequilibrium. Although both approaches largely agree on the descriptive characteristics of macroeconomic systems, there are some significant differences when it comes to the preferred policy prescriptions. In a nutshell, the difference boils down to the question of when and where to intervene.

A resilience approach focuses its interventions on severe disturbances, whilst allowing small and moderate disturbances to play themselves out. Even when the disturbance is severe, a resilience approach avoids stamping out the disturbance at source and focuses its efforts on mitigating the wider impact of the disturbance on the macroeconomy. The primary aim is the minimisation of the long-run fragilising consequences of the intervention which I have explored in detail in many previous posts(1, 2, 3). Just as small fires and floods are integral to ecological resilience, small disturbances are integral to macroeconomic resilience. Although it is difficult to identify ex-ante whether disturbances are moderate or not, the Greenspan-Bernanke era nevertheless contains some excellent examples of when not to intervene. The most obvious amongst all the follies of Greenspan-era monetary policy were the rate cuts during the LTCM collapse which were implemented with the sole purpose of “saving” financial markets at a time when the real economy showed no signs of stress1.

The Minsky/Bagehot approach focuses on tackling all disturbances with debt-deflationary consequences at their source. Bagehot asserted in ‘Lombard Street’ that “in wild periods of alarm, one failure makes many, and the best way to prevent the derivative failures is to arrest the primary failure which causes them”. Minsky emphasised the role of both the lender-of-last-resort (LOLR) mechanism as well as fiscal stabilisers in tackling such “failures”. However Minsky was not ignorant of the long-term damage inflicted by a regime where all disturbances were snuffed out at source – the build-up of financial “innovation” designed to take advantage of this implicit protection, the descent into crony capitalism and the growing fragility of a private-investment driven economy2, an understanding that was also reflected in his fundamental reform proposals3. Minsky also appreciated that the short-run cycle from hedge finance to Ponzi finance does not repeat itself in the same manner. The long-arc of stabilised cycles is itself a disequilibrium process (a sort of disequilibrium super-cycle) where performance in each cycle deteriorates compared to the last one – an increasing amount of stabilisation needs to be applied in each short-run cycle to achieve poorer results compared to the previous cycle.

Resilience Approach: Policy Implications

As I have outlined in an earlier post, an approach that focuses on minimising the adaptive consequences of macroeconomic interventions implies that macroeconomic policy must allow the “river” of the macroeconomy to flow in a natural manner and restrict its interventions to insuring individual economic agents rather than corporate entities against the occasional severe flood. In practise, this involves:

  • De-emphasising the role of conventional and unconventional monetary policy (interest-rate cuts, LOLR, quantitative easing, LTRO) in tackling debt-deflationary disturbances.
  • De-emphasising the role of industrial policy and explicit bailouts of banks and other firms4.
  • Establishing neutral monetary-fiscal hybrid policies such as money-financed helicopter drops as the primary tool of macroeconomic stabilisation. Minsky’s insistence on the importance of LOLR operations was partly driven by his concerns that alternative policy options could not be implemented quickly enough5. This concern is less relevant with regards to helicopter drops in today’s environment where they can be implemented almost instantaneously6.

Needless to say, the policies we have followed throughout the ‘Great Moderation’ and continue to follow are anything but resilient. Nowhere is the farce of orthodox policy more apparent than in Europe where countries such as Spain are compelled to enforce austerity on the masses whilst at the same time being forced to spend tens of billions of dollars in bailing out incumbent banks. Even within the structurally flawed construct of the Eurozone, a resilient strategy would take exactly the opposite approach which will not only drag us out of the ‘Great Stagnation’ but it will do so in a manner that delivers social justice and reduced inequality.

 

 


  1. Of course this “success” also put Greenspan, Rubin and Summers onto the cover of TIME magazine, which goes to show just how biased political incentives are in favour of stabilisation and against resilience.  ↩
  2. From pages 163-165 of Minsky’s book ‘John Maynard Keynes’:
    “The success of a high-private-investment strategy depends upon the continued growth of relative needs to validate private investment. It also requires that policy be directed to maintain and increase the quasi-rents earned by capital – i.e.,rentier and entrepreneurial income. But such high and increasing quasi-rents are particularly conducive to speculation, especially as these profits are presumably guaranteed by policy. The result is experimentation with liability structures that not only hypothecate increasing proportions of cash receipts but that also depend upon continuous refinancing of asset positions. A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch…….
    In a sense, the measures undertaken to prevent unemployment and sustain output “fix” the game that is economic life; if such a system is to survive, there must be a consensus that the game has not been unfairly fixed…….
    As high investment and high profits depend upon and induce speculation with respect to liability structures, the expansions become increasingly difficult to control; the choice seems to become whether to accomodate to an increasing inflation or to induce a debt-deflation process that can lead to a serious depression……
    The high-investment, high-profits policy synthesis is associated with giant firms and giant financial institutions, for such an organization of finance and industry seemingly makes large-scale external finance easier to achieve. However, enterprises on the scale of the American giant firms tend to become stagnant and inefficient. A policy strategy that emphasizes high consumption, constraints upon income inequality, and limitations upon permissible liability structures, if wedded to an industrial-organization strategy that limits the power of institutionalized giant firms, should be more conducive to individual initiative and individual enterprise than is the current synthesis.
    As it is now, without controls on how investment is to be financed and without a high-consumption, low private-investment strategy, sustained full employment apparently leads to treadmill affluence, accelerating inflation, and recurring threats of financial crisis.”
     ↩
  3. Just like Keynes, Minsky understood completely the dynamic of stabilisation and its long-term strategic implications. Given the malformation of private investment by the interventions needed to preserve the financial system, Keynes preferred the socialisation of investment and Minsky a shift to a high-consumption, low-investment system. But the conventional wisdom, which takes Minsky’s tactical advice on stabilisation and ignores his strategic advice on the need to abandon the private-investment led model of growth, is incoherent. ↩
  4. In his final work ‘Power and Prosperity’, Mancur Olson expressed a similar sentiment: “subsidizing industries, firms and localities that lose money…at the expense of those that make money…is typically disastrous for the efficiency and dynamism of the economy, in a way that transfers unnecessarily to poor individuals…A society that does not shift resources from the losing activities to those that generate a social surplus is irrational, since it is throwing away useful resources in a way that ruins economic performance without the least assurance that it is helping individuals with low incomes. A rational and humane society, then, will confine its distributional transfers to poor and unfortunate individuals.” ↩
  5. From pg 44 of ‘Stabilising an Unstable Economy’: “The need for lender-of-Iast-resort operations will often occur before income falls steeply and before the well nigh automatic income and financial stabilizing effects of Big Government come into play. If the institutions responsible for the lender-of-Iast-resort function stand aside and allow market forces to operate, then the decline in asset values relative to current output prices will be larger than with intervention; investment and debt- financed consumption will fall by larger amounts; and the decline in income, employment, and profits will be greater. If allowed to gain momentum, the financial crisis and the subsequent debt deflation may, for a time, overwhelm the income and financial stabilizing capacity of Big Government. Even in the absence of effective lender-of-Iast-resort action, Big Government will eventually produce a recovery, but, in the interval, a high price will be paid in the form of lost income and collapsing asset values.” ↩
  6. As Charlie Bean of the BoE suggests, helicopter drops could be implemented in the UK via the PAYE system. ↩
Bookmark and Share

Written by Ashwin Parameswaran

May 8th, 2012 at 1:31 pm

The Control Revolution And Its Discontents

with 20 comments

One of the key narratives on this blog is how the Great Moderation and the neo-liberal era has signified the death of truly disruptive innovation in much of the economy. When macroeconomic policy stabilises the macroeconomic system, every economic actor is incentivised to take on more macroeconomic systemic risks and shed idiosyncratic, microeconomic risks. Those that figured out this reality early on and/or had privileged access to the programs used to implement this macroeconomic stability, such as banks and financialised corporates, were the big winners – a process that is largely responsible for the rise in inequality during this period. In such an environment the pace of disruptive product innovation slows but the pace of low-risk process innovation aimed at cost-reduction and improving efficiency flourishes. therefore we get the worst of all worlds – the Great Stagnation combined with widespread technological unemployment.

This narrative naturally begs the question: when was the last time we had a truly disruptive Schumpeterian era of creative destruction. In a previous post looking at the evolution of the post-WW2 developed economic world, I argued that the so-called Golden Age was anything but Schumpeterian – As Alexander Field has argued, much of the economic growth till the 70s was built on the basis of disruptive innovation that occurred in the 1930s. So we may not have been truly Schumpeterian for at least 70 years. But what about the period from at least the mid 19th century till the Great Depression? Even a cursory reading of economic history gives us pause for thought – after all wasn’t a significant part of this period supposed to be the Gilded Age of cartels and monopolies which sounds anything but disruptive.

I am now of the opinion that we have never really had any long periods of constant disruptive innovation – this is not a sign of failure but simply a reality of how complex adaptive systems across domains manage the tension between efficiency,robustness, evolvability and diversity. What we have had is a subverted control revolution where repeated attempts to achieve and hold onto an efficient equilibrium fail. Creative destruction occurs despite our best efforts to stamp it out. In a sense, disruption is an outsider to the essence of the industrial and post-industrial period of the last two centuries, the overriding philosophy of which is automation and algorithmisation aimed at efficiency and control. And much of our current troubles are a function of the fact that we have almost perfected the control project.

The operative word and the source of our problems is “almost”. Too many people look at the transition from the Industrial Revolution to the Algorithmic Revolution as a sea-change in perspective. But in reality, the current wave of reducing everything to a combination of “data & algorithm” and tackling every problem with more data and better algorithms is the logical end-point of the control revolution that started in the 19th century. The difference between Ford and Zara is overrated – Ford was simply the first step in a long process that focused on systematising each element of the industrial process (production,distribution,consumption) but also crucially putting in place a feedback loop between each element. In some sense, Zara simply follows a much more complex and malleable algorithm than Ford did but this algorithm is still one that is fundamentally equilibriating (not disruptive) and focused on introducing order and legibility into a fundamentally opaque environment via a process that reduces human involvement and discretion by replacing intuitive judgements with rules and algorithms. Exploratory/disruptive innovation on the other hand is a disequilibriating force that is created by entrepreneurs and functions outside this feedback/control loop. Both processes are important – the longer period of the gradual shedding of diversity and homogenisation in the name of efficiency as well as the periodic “collapse” that shakes up the system and puts it eventually on the path to a new equilibrium.

Of course, control has been a aim of western civilisation for a lot longer but it was only in the 19th century that the tools of control were good enough for this desire to be implemented in any meaningful sense. And even more crucially, as James Beniger has argued, it was only in the last 150 years that the need for large-scale control arose. And now the tools and technologies in our hands to control and stabilise the economy are more powerful than they’ve ever been, likely too powerful.

If we had perfect information and everything could be algorithmised right now i.e. if the control revolution had been perfected, then the problem disappears. Indeed it is arguable that the need for disruption in the innovation process no longer exists. If we get to a world where radical uncertainty has been eliminated, then the problem of systemic fragility is moot and irrelevant. It is easy to rebut the stabilisation and control project by claiming that we cannot achieve this perfect world.

But even if the techno-utopian project can achieve all that it claims it can, the path matters. We need to make it there in one piece. The current “algorithmic revolution” is best viewed as a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is simply one where the many of these algorithmic and automated systems can essentially run themselves with human beings simply performing the role of supervisors who only need to intervene in extraordinary circumstances. Therefore, it would seem logical that the same process of increased productivity that has occurred during the modern era of automation will continue during the creation of the “vast,automatic and invisible” ‘second economy’. However there are many signs that this may not be the case. What has made things better till now and has been genuine “progress” may make things worse in higher doses and the process of deterioration can be quite dramatic.

The Uncanny Valley on the Path towards “Perfection”

In 1970, Masahiro Mori coined the term ‘uncanny valley’ to denote the phenomenon that “as robots appear more humanlike, our sense of their familiarity increases until we come to a valley”. When robots are almost but not quite human-like, they invoke a feeling of revulsion rather than empathy. As Karl McDorman notes, “Mori cautioned robot designers not to make the second peak their goal — that is, total human likeness — but rather the first peak of humanoid appearance to avoid the risk of their robots falling into the uncanny valley.”

A similar valley exists in the path of increased automation and algorithmisation. Much of the discussion in this section of the post builds upon concepts I explored via a detailed case study in a previous post titled ‘People Make Poor Monitors for Computers’.

The 21st century version of the control project i.e. the algorithmic project consists of two components:
1. More Data – ‘Big Data’.
2. Better and more comprehensive Algorithm.

The process goes hand in hand therefore with increased complexity and crucially, poorer and less intuitive feedback for the human operator. This results in increased fragility and a system prone to catastrophic breakdowns. The typical solution chosen is either further algorithmisation i.e. an improved algorithm and more data and if necessary increased slack and redundancy. This solution exacerbates the problem of feedback and temporarily pushes the catastrophic scenario further out to the tail but it does not eliminate it. Behavioural adaptation by human agents to the slack and the “better” algorithm can make a catastrophic event as likely as it was before but with a higher magnitude. But what is even more disturbing is that this cycle of increasing fragility can occur even without any such adaptation. This is the essence of the fallacy of the ‘defence in depth’ philosophy that lies at the core of most fault-tolerant algorithmic designs that I discussed in my earlier postthe increased “safety” of the automated system allows the build up of human errors without any feedback available from deteriorating system performance.

A thumb rule to get around this problem is to use slack only in those domains where failure is catastrophic and to prioritise feedback when failure is not critical and cannot kill you. But in an uncertain environment, this rule is very difficult to manage. How do you really know that a particular disturbance will not kill you? Moreover the loop of automation -> complexity -> redundancy endogenously turns a non-catastrophic event into one with catastrophic consequences.

This is a trajectory which is almost impossible to reverse once it has gone beyond a certain threshold without undergoing an interim collapse. The easy short-term fix is always to make a patch to the algorithm, get more data and build in some slack if needed. An orderly rollback is almost impossible due to the deskilling of the human workforce and risk of collapse due to other components in the system having adapted to new reality. Even simply reverting to the old more tool-like system makes things a lot worse because the human operators are no longer experts at using those tools – the process of algorithmisation has deskilled the human operator. Moreover, the endogenous nature of this buildup of complexity eventually makes the system fundamentally illegible to the human operator – a phenomenon that is ironic given that the fundamental aim of the control revolution is to increase legibility.

The Sweet Spot Before the Uncanny Valley: Near-Optimal Yet Resilient

Although it is easy to imagine the characteristics of an inefficient and dramatically sub-optimal system that is robust, complex adaptive systems operate at a near-optimal efficiency that is also resilient. Efficiency is not only important due to the obvious reality that resources are scarce but also because slack at the individual and corporate level is a significant cause of unemployment. Such near-optimal robustness in both natural and economic systems is not achieved with simplistically diverse agent compositions or with significant redundancies or slack at agent level.

Diversity and redundancy carry a cost in terms of reduced efficiency. Precisely due to this reason, real-world economic systems appear to exhibit nowhere near the diversity that would seem to ensure system resilience. Rick Bookstaber noted recently, that capitalist competition if anything seems to lead to a reduction in diversity. As Youngme Moon’s excellent book ‘Different’ lays out, competition in most markets seems to result in less diversity, not more. We may have a choice of 100 brands of toothpaste but most of us would struggle to meaningfully differentiate between them.

Similarly, almost all biological and ecological complex adaptive systems are a lot less diverse and contain less pure redundancy than conventional wisdom would expect. Resilient biological systems tend to preserve degeneracy rather than simple redundancy and resilient ecological systems tend to contain weak links rather than naive ‘law of large numbers’ diversity. The key to achieving resilience with near-optimal configurations is to tackle disturbances and generate novelty/innovation with an an emergent systemic response that reconfigures the system rather than simply a localised response. Degeneracy and weak links are key to such a configuration. The equivalent in economic systems is a constant threat of new firm entry.

The viewpoint which emphasises weak links and degeneracy also implies that it is not the keystone species and the large firms that determine resilience but the presence of smaller players ready to reorganise and pick up the slack when an unexpected event occurs. Such a focus is further complicated by the fact that in a stable environment, the system may become less and less resilient with no visible consequences – weak links may be eliminated, barriers to entry may progressively increase etc with no damage done to system performance in the stable equilibrium phase. Yet this loss of resilience can prove fatal when the environment changes and can leave the system unable to generate novelty/disruptive innovation. This highlights the folly of statements such as ‘what’s good for GM is good for America’. We need to focus not just on the keystone species, but on the fringes of the ecosystem.

 THE UNCANNY VALLEY AND THE SWEET SPOT

The Business Cycle in the Uncanny Valley – Deterioration of the Median as well as the Tail

Many commentators have pointed out that the process of automation has coincided with a deskilling of the human workforce. For example, below is a simplified version of the relation between mechanisation and skill required by the human operator that James Bright documented in 1958 (via Harry Braverman’s ‘Labor and Monopoly Capital’). But till now, it has been largely true that although human performance has suffered, the performance of the system has gotten vastly better. If the problem was just a drop in human performance while the system got better, our problem is less acute.

AUTOMATION AND DESKILLING OF THE HUMAN OPERATOR

But what is at stake is a deterioration in system performance – it is not only a matter of being exposed to more catastrophic setbacks. Eventually mean/median system performance deteriorates as more and more pure slack and redundancy needs to be built in at all levels to make up for the irreversibly fragile nature of the system. The business cycle is an oscillation between efficient fragility and robust inefficiency. Over the course of successive cycles, both poles of this oscillation get worse which leads to median/mean system performance falling rapidly at the same time that the tails deteriorate due to the increased illegibility of the automated system to the human operator.

THE UNCANNY VALLEY BUSINESS CYCLE

The Visible Hand and the Invisible Foot, Not the Invisible Hand

The conventional economic view of the economy is one of a primarily market-based equilibrium punctuated by occasional shocks. Even the long arc of innovation is viewed as a sort of benign discovery of novelty without any disruptive consequences. The radical disequilibrium view (which I have been guilty of espousing in the past) is one of constant micro-fragility and creative destruction. However, the history of economic evolution in the modern era has been quite different – neither market-based equilibrium nor constant disequilibrium, but a series of off-market attempts to stabilise relations outside the sphere of the market combined with occasional phase transitions that bring about dramatic change. The presence of rents is a constant and the control revolution has for the most part succeeded in preserving the rents of incumbents, barring the occasional spectacular failure. It is these occasional “failures” that have given us results that in some respect resemble those that would have been created by a market over the long run.

As Bruce Wilder puts it (sourced from 1, 2, 3 and mashed up together by me):

The main problem with the standard analysis of the “market economy”, as well as many variants, is that we do not live in a “market economy”. Except for financial markets and a few related commodity markets, markets are rare beasts in the modern economy. The actual economy is dominated by formal, hierarchical, administrative organization and transactions are governed by incomplete contracts, explicit and implied. “Markets” are, at best, metaphors…..
Over half of the American labor force works for organizations employing over 100 people. Actual markets in the American economy are extremely rare and unusual beasts. An economics of markets ought to be regarded as generally useful as a biology of cephalopods, amid the living world of bones and shells. But, somehow the idealized, metaphoric market is substituted as an analytic mask, laid across a vast variety of economic relations and relationships, obscuring every important feature of what actually is…..
The elaborate theory of market price gives us an abstract ideal of allocative efficiency, in the absence of any firm or household behaving strategically (aka perfect competition). In real life, allocative efficiency is far less important than achieving technical efficiency, and, of course, everyone behaves strategically.
In a world of genuine uncertainty and limitations to knowledge, incentives in the distribution of income are tied directly to the distribution of risk. Economic rents are pervasive, but potentially beneficial, in that they provide a means of stable structure, around which investments can be made and production processes managed to achieve technical efficiency.
In the imaginary world of complete information of Econ 101, where markets are the dominant form of economic organizations, and allocative efficiency is the focus of attention, firms are able to maximize their profits, because they know what “maximum” means. They are unconstrained by anything.
In the actual, uncertain world, with limited information and knowledge, only constrained maximization is possible. All firms, instead of being profit-maximizers (not possible in a world of uncertainty), are rent-seekers, responding to instituted constraints: the institutional rules of the game, so to speak. Economic rents are what they have to lose in this game, and protecting those rents, orients their behavior within the institutional constraints…..
In most of our economic interactions, price is not a variable optimally digesting information and resolving conflict, it is a strategic instrument, held fixed as part of a scheme of administrative control and information discovery……The actual, decentralized “market” economy is not coordinated primarily by market prices—it is coordinated by rules. The dominant relationships among actors is not one of market exchange at price, but of contract: implicit or explicit, incomplete and contingent.

James Beniger’s work is the definitive account of how the essence of the ‘control revolution’ has been an attempt to take economic activity out of the sphere of direct influence of the market. But that is not all – the long process of algorithmisation over the last 150 years has also, wherever possible, replaced implicit rules/contracts and principal-agent relationships with explicit processes and rules. Beniger also notes that after a certain point, the increasing complexity of the system becomes an endogenous phenomenon i.e. further iterations are aimed at controlling the control process itself. As I have illustrated above, beyond a certain threshold, the increasing complexity, fragility and deterioration in performance become a self-reinforcing positive feedback process.

Although our current system bears very little resemblance to the market economy of the textbook, there was a brief period in the early part of the 19th century, during the transition from the traditional economy to the control economy, when it did: 26% of all imports into the United States in 1827 were sold at auction. But the displacement of traditional controls (familial ties) by the invisible hand of market controls was merely a transitional phase, soon to be superseded by the visible hand of the control revolution.

The Soviet Project, Western Capitalism and High Modernity

Communism and Capitalism are both pillars of the high-modernist control project. The signature of modernity is not markets, but technocratic control projects. Capitalism has simply pursued the project in a manner that is more easily and more regularly subverted. It is the occasional failure of the control revolution that is the source of the capitalist economy’s long-run success. Conversely, the failure of the Soviet Project was due to its all-too-successful adherence to, and implementation of, the high-modernist ideal. Crony capitalism is such a significant threat precisely because, by forging a coalition between the corporate and state control projects, it makes the implementation of the control revolution that much more effective.

The Hayekian argument about dispersed knowledge and its importance in reaching equilibrium is less important than it seems in explaining why the Soviet project failed. As Joseph Berliner has illustrated, the Soviet economy did not fail to reach local equilibria. Where it failed so spectacularly was in extracting itself out of these equilibria. The dispersed-knowledge argument is open to the riposte that better implementation of the control revolution will eventually overcome these problems – indeed much of the current techno-utopian version of the control revolution is based on this assumption. It is a weak argument for free enterprise; a much stronger argument is the need to maintain a system that retains the ability to reinvent itself and find a new, hitherto unknown trajectory via the destruction of incumbents combined with the emergence of the new. Where the Soviet experiment failed is that it eliminated the possibility of failure – the threat that Berliner called the ‘invisible foot’. The success of the free enterprise system has been built not upon the positive incentive of the invisible hand but upon the negative incentive of the invisible foot countering the visible hand of the control revolution. It is this threat, and occasional realisation, of failure and disorder that is the key to maintaining system resilience and evolvability.

 

 

Notes:

  • Borrowing from Beniger, control here simply means “purposive influence towards a predetermined goal”. Similarly, equilibrium in this context is best defined as a state in which economic agents are not forced to change their routines, theories and policies.
  • On the uncanny valley, I wrote a similar post on why perfect memory does not lead to perfect human intelligence. Even if a computer benefits from more data and better memory, we may not. And the evidence suggests that the deterioration in human performance is steepest in the zone close to “perfection”.
  • An argument similar to my assertion on the misconception of a free enterprise economy as a market economy can be made about the nature of democracy. Rather than as a vehicle that enables the regular expression of the political will of the electorate, democracy may be more accurately thought of as the ability to effect a dramatic change when the incumbent system of plutocratic/technocratic rule diverges too much from popular opinion. As always, stability and prevention of disturbances can cause the eventual collapse to be more catastrophic than it needs to be.
  • Although James Beniger’s ‘Control Revolution’ is the definitive reference, Antoine Bousquet’s book ‘The Scientific Way of Warfare’ on the similar revolution in military warfare is equally good. Bousquet’s book highlights the fact that the military is often the pioneer of the key projects of the control revolution and it also highlights just how similar the latest phase of this evolution is to early phases – the common desire for control combined with its constant subversion by reality. Most commentators assume that the threat to the project is external – by constantly evolving guerrilla warfare for example. But the analysis of the uncanny valley suggests that an equally great threat is endogenous – of increasing complexity and illegibility of the control project itself. Bousquet also explains how the control revolution is a child of the modern era and the culmination of the philosophy of the Enlightenment.
  • Much of the “innovation” of the control revolution was not technological but institutional – limited liability, macroeconomic stabilisation via central banks etc.
  • For more on the role of degeneracy in biological systems and how it enables near-optimal resilience, this paper by James Whitacre and Axel Bender is excellent.

Written by Ashwin Parameswaran

February 21st, 2012 at 5:38 pm

Private Equity and the Greenspan Put

with 12 comments

Mitt Romney’s campaign for the Republican nomination for the US Presidency has triggered a debate about the role of private equity (PE) in the economy. Those critical of the private equity industry tend to focus on its perceived tendency to lay off employees and increase leverage. Regarding layoffs, there is very little evidence that PE firms are worse than the rest of the corporate sector. This does not imply that their role is entirely positive, but it does imply that the excesses of PE mirror the excesses of the larger economy during the neoliberal era. This is obvious when the role of leverage is examined. As Mike Konczal notes, “something did change during the 1980s, and LBO was part of this overall shift.” The road that started with LBOs in the 1980s ended with the rash of dividend recapitalisations between 2003–2007, a phenomenon that has even resurfaced post-crisis.

It is easy to find proximate causes for this dynamic and commentators on both sides of the political spectrum attribute much of the above to the neo-liberal revolution – the doctrine of shareholder value maximisation, high-powered managerial incentives, a drive towards increased efficiency etc. The acceleration of this process in the last decade usually gets explained away as the inevitable consequence of a financial bubble with irrationally exuberant banks making unwise loans to fuel the leverage binge. But these narratives miss the obvious elephant in the room – the role of monetary policy and in particular the dominant monetary policy doctrine underpinning the ‘Great Moderation’ which focused on shoring up financial asset prices as the primary channel of monetary stimulus, otherwise known as the ‘Greenspan Put’. All the above proximate causes were the direct and inevitable result of economic actors seeking to align themselves to the central banks’ focus on asset price stabilisation.

As I elaborated upon in an earlier post:

creating any source of stability in a capitalist economy incentivises economic agents to realign themselves to exploit that source of security and thereby reduce risk. Just as banks adapt to the intervention strategies preferred by central banks by taking on more “macro” risks, macro-stabilisation incentivises real economy firms to shed idiosyncratic micro-risks and take on financial risks instead. Suppressing nominal volatility encourages economic agents to shed real risks and take on nominal risks. In the presence of the Greenspan/Bernanke put, a strategy focused on “macro” asset price risks and leverage outcompetes strategies focused on “risky” innovation. Just as banks that exploit the guarantees offered by central banks outcompete those that don’t, real economy firms that realign themselves to become more bank-like outcompete those that choose not to…….When central bankers are focused on preventing significant pullbacks in equity prices (the Greenspan/Bernanke put), then real-economy firms are incentivised to take on more systematic risk and reduce their idiosyncratic risk exposure.

The focus on cost reduction and layoffs is also a result of this increased market-sensitivity: combined with the macro-stabilisation commitment, it encourages low-risk process innovation and discourages uncertain, exploratory product innovation. The excesses of some forms of private equity are often instances of applying the maximum possible leverage to extract the rents available via the Greenspan Put. Dividend recaps are one such instance.

James Kwak summarises the case of Simmons Bedding Company:

In 2003, for example, THL bought Simmons (the mattress company) for $327 million in cash and $745 million in debt. In 2004, Simmons (now run by THL) issued more debt and paid a $137 million dividend to THL; in 2007, it issued yet more debt and paid a $238 million dividend to THL. Simmons filed for bankruptcy in 2009.

The obvious question here is why banks and financial institutions would lend so much money and allow firms to lever up so dramatically. Kwak lays the blame on the financial bubble, principal-agent problems, bankers’ bonus structures etc. TED counters that lenders do in fact typically make informed decisions and also correctly points out that the rest of corporate America is not immune to such leveraged mishaps either. Both explanations ignore the fact that this sort of severely tail-risk-heavy loan is exactly the payoff that maximises the banks’ and their employees’ own moral hazard rent extraction. In an earlier post, I identified that many hedge fund strategies are indirect beneficiaries of moral hazard rents – the same argument also applies to some private equity strategies.

But as I have noted on many occasions, the moral hazard problem from tail-risk hungry TBTF financial institutions is simply the tip of the iceberg. It was not only the banks with access to cheap leverage that were heavily invested in “safe” assets, but also asset managers, money market mutual funds and even ordinary investors. The Greenspan/Bernanke Put incentivises a large proportion of real and financial actors in the economy to take on more and more tail risk with the expectation that the Fed will avoid any outcomes in which these risks are realised.

Too many commentators fail to recognise that so much of what has made the neo-liberal era a thinly disguised corporate welfare state can be traced to the impact of a supposedly “neutral” macroeconomic policy instrument that in reality has grossly regressive consequences. To expect corporate America to not take advantage of the free lunch offered to it by the Fed is akin to dangling a piece of meat in front of a tiger and expecting it not to bite your hand off.


Written by Ashwin Parameswaran

February 1st, 2012 at 5:56 pm

The Public Deposit Option: An Alternative To “Regulate and Insure” Banking

with 31 comments

Many economists want to turn back the clock on the American economic system to that of the 50s and 60s. This is understandable – the ‘Golden Age’ of the 50s and 60s was characterised by healthy productivity growth, significant real wage growth and financial stability. Similarly, many commentators see the banking system during that time as the ideal state. In this vein, Amar Bhide offers his solution for the chronic fragility of the financial system:

governments should fully guarantee all bank deposits — and impose much tighter restrictions on risk-taking by banks. Banks should be forced to shed activities like derivatives trading that regulators cannot easily examine…..Banks must therefore be restricted to those activities, like making traditional loans and simple hedging operations, that a regulator of average education and intelligence can monitor.

There are a couple of problems with his idea – for one, it may not be possible to effectively regulate bank risk-taking. On many previous occasions, I have asserted that regulations cannot restrain banks from extracting moral hazard rents from the guarantee provided by the state/central bank to bank creditors and depositors. The primary reason for this is the spread of financial innovation during the last fifty years, which has given banks an almost infinite variety of ways in which they can construct an opaque and precisely tailored payoff that provides a steady stream of profits in good times in exchange for a catastrophic loss in bad times. As I have shown, the moral hazard trade is not a “riskier” trade but a combination of high leverage and a severely negatively skewed payoff with a catastrophic tail risk.

Minsky himself understood the essentially ephemeral nature of the 1950s financial system from his work on the early stages of the financial innovation that allowed banks to unshackle themselves from the effective control of the central bank and the regulator. As he observes:

The banking system came out of the war with a portfolio heavily weighted with government debt, and it was not until the 1960s that banks began to speculate actively with respect to their liabilities. It was a unique period in which finance mattered relatively little; at least, finance did not interpose its destabilizing ways……The apparent stability and robustness of the financial system of the 1950s and early 1960s can now be viewed as an accident of history, which was due to the financial residue of World War 2 following fast upon a great depression.

Amar Bhide’s idea essentially seeks to turn back the clock and forbid much of the innovation that has taken place in the last few decades. In particular, derivatives businesses will be forbidden for deposit-taking banks. This is a radical idea and one that is a significant improvement on the current status quo. But it is not enough to mitigate the moral hazard problem. To illustrate why this is the case, let me take an example of how as a banker, I would construct such a payoff within a “narrow banking”-like mandate. Let us assume that banks can only take deposits and make loans to corporations and households. They cannot hedge their loans or engage in any activities related to financial market positions even as market makers, and they cannot carry any off balance-sheet exposures, commitments etc. Although this would seem to be a sufficiently narrow mandate to prevent rent extraction, it is not. Banks can simply lend to other firms that take on negatively skewed bets. You may counter that banks should only be allowed to lend to real economy firms. But do we expect regulators to audit not only the banks under their watch but also the firms to whom they lend money? In the first post on this blog, I outlined how the synthetic super-senior CDO tranche was the quintessential rent-extraction product of the derivatives revolution. But at its core, the super-senior tranche is simply a severely negatively skewed bond – a product that pays a small positive spread in good times and loses you all your money in bad times. There is no shortage of ways in which such a negatively skewed payoff can be constructed by simple structured bank loans.
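
To make the shape of such a payoff concrete, here is a toy simulation (a minimal sketch with made-up probabilities and spreads, not a model of any actual loan product) comparing a negatively skewed loan book – a small spread in almost every year, a catastrophic loss in the rare bad year – with a plain book whose losses are smaller but frequent and visible:

    import random

    random.seed(0)

    def skewed_payoff():
        # 2% spread in 98% of years, lose 60% of the book in the other 2% (illustrative numbers).
        return 0.02 if random.random() < 0.98 else -0.60

    def plain_payoff():
        # 3% spread in 80% of years, a visible 5% loss in the rest (illustrative numbers).
        return 0.03 if random.random() < 0.80 else -0.05

    def simulate(payoff, years=100_000):
        outcomes = [payoff() for _ in range(years)]
        mean = sum(outcomes) / years
        worst = min(outcomes)
        loss_share = sum(1 for x in outcomes if x < 0) / years
        return mean, worst, loss_share

    for name, payoff in [("skewed", skewed_payoff), ("plain", plain_payoff)]:
        mean, worst, loss_share = simulate(payoff)
        print(f"{name:>6}: mean {mean:+.3%}, worst year {worst:+.1%}, loss years {loss_share:.1%}")

In any evaluation window short enough to exclude the rare blow-up, the skewed book looks both steadier and more profitable than the plain one, even though its true long-run mean is lower – which is precisely why a regulator auditing a few years of loan performance has little chance of spotting the problem.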

What the synthetic OTC derivatives revolution made possible was for the banking system to structure such payoffs in an essentially infinite amount without even going through the trouble of making new loans or mortgages – all that was needed was a derivatives counterparty. Without derivatives, banks would have to lend money to generate such a payoff – this only makes it a little harder to extract rents but it still does not change the essence of the problem. Even more crucially, the potential for such rent extraction is unlimited compared to other avenues for extracting rent. If the state pays a higher price for an agricultural crop compared to the market, at least the losses suffered by the taxpayer are limited by physical constraints such as arable land available. But when the rent extraction opportunity goes hand in hand with the very process that creates credit and broad money, the potential for rent extraction is virtually unlimited.

Even if we assume that rent extraction can be controlled by more stringent regulations, there remains one problem. There is simply no way that incumbent large banks, especially those with a large OTC derivatives franchise, can shed their derivatives business and still remain solvent. The best indication of how hard it is to unwind a complex derivatives portfolio is Warren Buffett’s experience with the book he inherited from the General Re acquisition. As Buffett notes, unwinding the portfolio of a relatively minor player in the derivatives market, under benign market conditions and with no internal financial pressure, took years and cost him $404 million. If we asked any of the large banks, let alone all of them at once, to do the same in the current fragile market conditions, the cost would comfortably bankrupt the entire banking sector. The modern TBTF bank with its huge OTC derivatives business is akin to a suicide bomber with his finger on the button, holding us hostage – this is why regulators handle such banks with kid gloves.

In other words, even if our dream of limited and safe banking is viable, we have a ‘can’t get there from here’ problem. This does not mean that there are no viable solutions, but we need to be more creative. Amar Bhide makes a valid point when he asks: “Why not also make all short-term deposits, which function much like currency, the explicit liability of the government?” But the solution is not to allow private banks to reap the rents from cheap deposit financing; it is to give each citizen and corporation access to a public deposit account. The simplest implementation would be a system similar to the postal savings system, in which all deposits are necessarily backed by short-term treasury bills. If the current stock of T-bills is not sufficient to back the demand for such deposits, the Treasury should shift the maturity profile of its debt until the demand is met. In such a system, there would be no deposit insurance i.e. all investment/deposit alternatives apart from the state system would be explicitly risky and unprotected.
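
A minimal sketch of the mechanics (entirely hypothetical – the account structure and numbers are illustrative assumptions, not a specification of the proposal): every deposit into the public option triggers a matching purchase of short-term T-bills, so bill holdings always equal deposit liabilities and the depository never carries credit risk.

    class PublicDepositOption:
        """Toy full-reserve depository: deposits are always 100% backed by T-bills."""

        def __init__(self):
            self.deposits = {}        # account -> balance
            self.tbill_holdings = 0.0

        def deposit(self, account, amount):
            # Every new deposit is immediately invested in short-term T-bills.
            self.deposits[account] = self.deposits.get(account, 0.0) + amount
            self.tbill_holdings += amount

        def withdraw(self, account, amount):
            if amount > self.deposits.get(account, 0.0):
                raise ValueError("insufficient balance")
            # Withdrawals are funded by selling (or letting mature) an equal amount of bills.
            self.deposits[account] -= amount
            self.tbill_holdings -= amount

        def fully_backed(self):
            return abs(self.tbill_holdings - sum(self.deposits.values())) < 1e-9

    pdo = PublicDepositOption()
    pdo.deposit("alice", 1_000)
    pdo.deposit("acme_corp", 250_000)
    pdo.withdraw("alice", 400)
    print(pdo.fully_backed())   # True: the backing invariant holds after every transaction

The point of the sketch is simply that the invariant “deposits equal bill holdings” removes the need for deposit insurance: the account is risk-free only because it never funds anything riskier than the government’s own short-term debt.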

One criticism of such a system would be that the benefits of maturity transformation would be lost to the economy i.e. unless short-term deposits are deployed to fund long-term investment projects, such projects would not find adequate funding. But as I have argued and as the data shows, household long-term savings (which include pensions and life insurance) are more than sufficient to meet the long-term borrowing needs of the corporate and household sectors in both the United States and Europe.

The “regulate and insure” model ignores the ability of banks to arbitrage any regulatory framework. But the status quo is also unacceptable. The system is, however, sufficiently levered and fragile that allowing market forces to operate, or simply forcing a drastic structural change upon incumbent banks by regulatory fiat, would almost certainly cause their collapse. Creating a public deposit option is the first step in a sustainable transition to a resilient financial system – one in which, instead of shackling incumbent banks, we separate them from the risk-free depository system.

 

Note: My views on this topic and some other related topics which I hope to explore soon have been significantly influenced by uber-commenter K. For a taste of his broader ideas which are similar to mine, try this comment which he made in response to a Nick Rowe post.


Written by Ashwin Parameswaran

January 5th, 2012 at 12:18 pm

People Make Poor Monitors for Computers

with 55 comments

In the early hours of June 1st 2009, Air France Flight 447 crashed into the Atlantic Ocean. Until the black boxes of AF447 were recovered in April 2011, the exact circumstances of the crash remained a mystery. The most widely accepted explanation for the disaster attributes a large part of the blame to human error in the face of a partial but not fatal systems failure. Yet a small but vocal faction blames the disaster, and others like it, on the increasingly automated nature of modern passenger airplanes.

This debate bears an uncanny resemblance to the debate over the causes of the financial crisis – many commentators blame the persistently irrational nature of human judgement for the recurrence of financial crises. Others, such as Amar Bhide, blame the unwise deference to imperfect financial models over human judgement. In my opinion, both perspectives miss the true dynamic. These disasters are not driven by human error or systems error alone but by fatal flaws in the interaction between human intelligence and complex, near fully-automated systems.

In a recent article drawing upon the black box transcripts, Jeff Wise attributes the crash primarily to a “simple but persistent mistake on the part of one of the pilots”. According to Wise, the co-pilot reacted to the persistent stall warning by “pulling back on the stick, the exact opposite of what he must do to recover from the stall”.

But there are many hints that the story is nowhere near as simple. As Peter Garrison notes:

every pilot knows that to recover from a stall you must get the nose down. But because a fully developed stall in a large transport is considered highly unlikely, and because in IFR air traffic vertical separation, and therefore control of altitude, is important, transport pilots have not been trained to put the nose down when they hear the stall warning — which heralds, after all, not a fully developed stall, but merely an approaching one. Instead, they have been trained to increase power and to “fly out of the stall” without losing altitude. Perhaps that is what the pilot flying AF447 intended. But the airplane was already too deeply stalled, and at too high an altitude, to recover with power alone.

The patterns of the AF447 disaster are not unique. As Chris Sorensen observes, over 50 commercial aircraft have crashed in “loss-of-control” accidents in the last five years, a trend for which there is no shortage of explanations:

Some argue that the sheer complexity of modern flight systems, though designed to improve safety and reliability, can overwhelm even the most experienced pilots when something actually goes wrong. Others say an increasing reliance on automated flight may be dulling pilots’ sense of flying a plane, leaving them ill-equipped to take over in an emergency. Still others question whether pilot-training programs have lagged behind the industry’s rapid technological advances.

But simply invoking terms such as “automation addiction” or blaming disasters on irrational behaviour during times of intense stress does not get at the crux of the issue.

People Make Poor Monitors for Computers

Airplane automation systems are not the first to discover the truth in the comment made by David Jenkins that “computers make great monitors for people, but people make poor monitors for computers.” As James Reason observes in his seminal book ‘Human Error’:

We have thus traced a progression from where the human is the prime mover and the computer the slave to one in which the roles are very largely reversed. For most of the time, the operator’s task is reduced to that of monitoring the system to ensure that it continues to function within normal limits. The advantages of such a system are obvious; the operator’s workload is substantially reduced, and the [system] performs tasks that the human can specify but cannot actually do. However, the main reason for the human operator’s continued presence is to use his still unique powers of knowledge-based reasoning to cope with system emergencies. And this is a task peculiarly ill-suited to the particular strengths and weaknesses of human cognition…..

most operator errors arise from a mismatch between the properties of the system as a whole and the characteristics of human information processing. System designers have unwittingly created a work situation in which many of the normally adaptive characteristics of human cognition (its natural heuristics and biases) are transformed into dangerous liabilities.

As Jeff Wise notes, it is impossible to stall an Airbus in most conditions. AF447, however, went into a state known as ‘alternate law’ – one which most pilots have never experienced and in which the airplane could be stalled:

“You can’t stall the airplane in normal law,” says Godfrey Camilleri, a flight instructor who teaches Airbus 330 systems to US Airways pilots….But once the computer lost its airspeed data, it disconnected the autopilot and switched from normal law to “alternate law,” a regime with far fewer restrictions on what a pilot can do. “Once you’re in alternate law, you can stall the airplane,” Camilleri says….It’s quite possible that Bonin had never flown an airplane in alternate law, or understood its lack of restrictions. According to Camilleri, not one of US Airway’s 17 Airbus 330s has ever been in alternate law. Therefore, Bonin may have assumed that the stall warning was spurious because he didn’t realize that the plane could remove its own restrictions against stalling and, indeed, had done so.

This inability of the human operator to fill in the gaps in a near-fully automated system was identified by Lisanne Bainbridge as one of the ironies of automation which James Reason summarised:

the same designer who seeks to eliminate human beings still leaves the operator “to do the tasks which the designer cannot think how to automate” (Bainbridge,1987, p.272). In an automated plant, operators are required to monitor that the automatic system is functioning properly. But it is well known that even highly motivated operators cannot maintain effective vigilance for anything more than quite short periods; thus, they are demonstrably ill-suited to carry out this residual task of monitoring for rare, abnormal events. In order to aid them, designers need to provide automatic alarm signals. But who decides when these automatic alarms have failed or been switched off?

As Robert Charette notes, the same is true for airplane automation:

operators are increasingly left out of the loop, at least until something unexpected happens. Then the operators need to get involved quickly and flawlessly, says Raja Parasuraman, professor of psychology at George Mason University in Fairfax, Va., who has been studying the issue of increasingly reliable automation and how that affects human performance, and therefore overall system performance. ”There will always be a set of circumstances that was not expected, that the automation either was not designed to handle or other things that just cannot be predicted,” explains Parasuraman. So as system reliability approaches—but doesn’t quite reach—100 percent, ”the more difficult it is to detect the error and recover from it,” he says…..In many ways, operators are being asked to be omniscient systems administrators who are able to jump into the middle of a situation that a complex automated system can’t or wasn’t designed to handle, quickly diagnose the problem, and then find a satisfactory and safe solution.

Stored Routines Are Not Effective in Rare Situations

As James Reason puts it:

the main reason why humans are retained in systems that are primarily controlled by intelligent computers is to handle ‘non-design’ emergencies. In short, operators are there because system designers cannot foresee all possible scenarios of failure and hence are not able to provide automatic safety devices for every contingency. In addition to their cosmetic value, human beings owe their inclusion in hazardous systems to their unique, knowledge-based ability to carry out ‘on-line’ problem solving in novel situations. Ironically, and notwithstanding the Apollo 13 astronauts and others demonstrating inspired improvisation, they are not especially good at it; at least not in the conditions that usually prevail during systems emergencies. One reason for this is that stressed human beings are strongly disposed to employ the effortless, parallel, preprogrammed operations of highly specialised, low-level processors and their associated heuristics. These stored routines are shaped by personal history and reflect the recurring patterns of past experience……

Why do we have operators in complex systems? To cope with emergencies. What will they actually use to deal with these problems? Stored routines based on previous interactions with a specific environment. What, for the most part, is their experience within the control room? Monitoring and occasionally tweaking the plant while it performs within safe operating limits. So how can they perform adequately when they are called upon to reenter the control loop? The evidence is that this task has become so alien and the system so complex that, on a significant number of occasions, they perform badly.

Wise again identifies this problem in the case of AF447:

While Bonin’s behavior is irrational, it is not inexplicable. Intense psychological stress tends to shut down the part of the brain responsible for innovative, creative thought. Instead, we tend to revert to the familiar and the well-rehearsed. Though pilots are required to practice hand-flying their aircraft during all phases of flight as part of recurrent training, in their daily routine they do most of their hand-flying at low altitude—while taking off, landing, and maneuvering. It’s not surprising, then, that amid the frightening disorientation of the thunderstorm, Bonin reverted to flying the plane as if it had been close to the ground, even though this response was totally ill-suited to the situation.

Deskilling From Automation

As James Reason observes:

Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills. One of the consequences of automation, therefore, is that operators become de-skilled in precisely those activities that justify their marginalised existence. But when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions. Duncan (1987, p. 266) makes the same point: “The more reliable the plant, the less opportunity there will be for the operator to practise direct intervention, and the more difficult will be the demands of the remaining tasks requiring operator intervention.”

Opacity and Too Much Information of Uncertain Reliability

Wise captures this problem and its interaction with a human who has very little experience in managing the crisis scenario:

Over the decades, airliners have been built with increasingly automated flight-control functions. These have the potential to remove a great deal of uncertainty and danger from aviation. But they also remove important information from the attention of the flight crew. While the airplane’s avionics track crucial parameters such as location, speed, and heading, the human beings can pay attention to something else. But when trouble suddenly springs up and the computer decides that it can no longer cope—on a dark night, perhaps, in turbulence, far from land—the humans might find themselves with a very incomplete notion of what’s going on. They’ll wonder: What instruments are reliable, and which can’t be trusted? What’s the most pressing threat? What’s going on? Unfortunately, the vast majority of pilots will have little experience in finding the answers.

A similar scenario occurred in the case of the Qantas-owned A380 which took off from Singapore in November 2010:

Shortly after takeoff from Singapore, one of the hulking A380’s four engines exploded and sent pieces of the engine cowling raining down on an Indonesian island. The blast also damaged several of the A380’s key systems, causing the unsuspecting flight crew to be bombarded with no less than 54 different warnings and error messages—so many that co-pilot Matt Hicks later said that, at one point, he held his thumb over a button that muted the cascade of audible alarms, which threatened to distract Capt. Richard De Crespigny and the rest of the feverishly working flight crew. Luckily for passengers, Qantas Flight 32 had an extra two pilots in the cockpit as part of a training exercise, all of whom pitched in to complete the nearly 60 checklists required to troubleshoot the various systems. The wounded plane limped back to Singapore Changi Airport, where it made an emergency landing.

Again James Reason captures the essence of the problem:

One of the consequences of the developments outlined above is that complex, tightly-coupled and highly defended systems have become increasingly opaque to the people who manage, maintain and operate them. This opacity has two aspects: not knowing what is happening and not understanding what the system can do. As we have seen, automation has wrought a fundamental change in the roles people play within certain high-risk technologies. Instead of having ‘hands on’ contact with the process, people have been promoted “to higher-level supervisory tasks and to long-term maintenance and planning tasks” (Rasmussen, 1988). In all cases, these are far removed from the immediate processing. What direct information they have is filtered through the computer-based interface. And, as many accidents have demonstrated, they often cannot find what they need to know while, at the same time, being deluged with information they do not want nor know how to interpret.

Absence of Intuitive Feedback

Among others, Hubert and Stuart Dreyfus have shown that human expertise relies on an intuitive and tacit understanding of the situation rather than a rule-bound and algorithmic understanding. The development of intuitive expertise depends upon the availability of clear and intuitive feedback which complex, automated systems are often unable to provide.

In AF447, when the co-pilot did push forward on the stick (the “correct” response), the behaviour of the stall warning was exactly the opposite of what he would have intuitively expected:

At one point the pilot briefly pushed the stick forward. Then, in a grotesque miscue unforeseen by the designers of the fly-by-wire software, the stall warning, which had been silenced, as designed, by very low indicated airspeed, came to life. The pilot, probably inferring that whatever he had just done must have been wrong, returned the stick to its climb position and kept it there for the remainder of the flight.

Absence of feedback prevents effective learning but the wrong feedback can have catastrophic consequences.
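
The inverted feedback is easy to see in a stylised form. The sketch below is a hypothetical, deliberately simplified reconstruction of the behaviour described above – not the actual fly-by-wire logic, and the threshold numbers are invented: the warning is suppressed whenever indicated airspeed falls below a validity threshold, so deep in the stall the (correct) nose-down input that restores airspeed is the very thing that brings the warning back.

    # Hypothetical sketch of an inverted warning loop; thresholds are invented,
    # and this is not the actual fly-by-wire or stall-warning logic.
    VALIDITY_THRESHOLD_KTS = 60   # below this, airspeed data is treated as invalid
    STALL_AOA_DEG = 15            # angle of attack beyond which the wing is stalled

    def stall_warning(indicated_airspeed_kts, angle_of_attack_deg):
        if indicated_airspeed_kts < VALIDITY_THRESHOLD_KTS:
            return False    # data deemed unreliable: the warning is suppressed
        return angle_of_attack_deg > STALL_AOA_DEG

    # Deeply stalled with the nose held up: airspeed reads so low that the warning is silent.
    print(stall_warning(indicated_airspeed_kts=45, angle_of_attack_deg=40))   # False

    # The pilot pushes the nose down: airspeed climbs back past the threshold and the
    # still-present stall now triggers the warning, "punishing" the correct input.
    print(stall_warning(indicated_airspeed_kts=80, angle_of_attack_deg=35))   # True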

The Fallacy of Defence in Depth

In complex automated systems, the redundancies and safeguards built into the system also contribute to its opacity. By protecting system performance against single faults, redundancies allow the latent buildup of multiple faults. Jens Rasmussen called this ‘the fallacy of defence in depth’ which James Reason elaborates upon:

the system very often does not respond actively to single faults. Consequently, many errors and faults made by the staff and maintenance personnel do not directly reveal themselves by functional response from the system. Humans can operate with an extremely high level of reliability in a dynamic environment when slips and mistakes have immediately visible effects and can be corrected……Violation of safety preconditions during work on the system will probably not result in an immediate functional response, and latent effects of erroneous acts can therefore be left in the system. When such errors are allowed to be present in a system over a longer period of time, the probability of coincidence of the multiple faults necessary for release of an accident is drastically increased. Analyses of major accidents typically show that the basic safety of the system has eroded due to latent errors.

This is exactly what occurred on Malaysia Airlines Flight 124 in August 2005:

The fault-tolerant ADIRU was designed to operate with a failed accelerometer (it has six). The redundant design of the ADIRU also meant that it wasn’t mandatory to replace the unit when an accelerometer failed. However, when the second accelerometer failed, a latent software anomaly allowed inputs from the first faulty accelerometer to be used, resulting in the erroneous feed of acceleration information into the flight control systems. The anomaly, which lay hidden for a decade, wasn’t found in testing because the ADIRU’s designers had never considered that such an event might occur.
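
A stylised sketch of how a latent fault can hide behind redundancy (the selection logic below is invented purely for illustration – it is not the real ADIRU algorithm): the unit isolates a failed sensor without any visible symptom, but a subtle bug in the failure-handling path quietly re-admits it when a second sensor later drops out.

    # Invented illustration of a latent fault masked by redundancy;
    # not the actual ADIRU design or failure-handling logic.
    class AccelerometerUnit:
        def __init__(self, readings):
            self.readings = readings    # current output of each of six accelerometers
            self.excluded = set()       # sensors the unit believes it has isolated

        def report_failure(self, idx):
            # Latent bug: each new failure *replaces* the exclusion set instead of
            # adding to it, silently re-admitting any previously isolated sensor.
            self.excluded = {idx}

        def acceleration(self):
            healthy = [r for i, r in enumerate(self.readings) if i not in self.excluded]
            return sum(healthy) / len(healthy)

    unit = AccelerometerUnit(readings=[9.9, 1.0, 1.0, 1.0, 1.0, 1.0])  # sensor 0 is faulty
    unit.report_failure(0)
    print(round(unit.acceleration(), 2))   # 1.0 -- the fault is masked, nothing visible, no urgency to repair
    unit.report_failure(3)                 # years later a second sensor fails...
    print(round(unit.acceleration(), 2))   # 2.78 -- the first faulty sensor is quietly back in the average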

Again, defence-in-depth systems are uniquely unsuited to human expertise as Gary Klein notes:

In a massively defended system, if an accident sneaks through all the defenses, the operators will find it far more difficult to diagnose and correct it. That is because they must deal with all of the defenses, along with the accident itself…..A unit designed to reduce small errors helped to create a large one.

Two Approaches to Airplane Automation: Airbus and Boeing

Although both Airbus and Boeing have adopted fly-by-wire technology, there are fundamental differences in their respective approaches. Whereas Boeing’s system enforces soft limits that can be overridden at the discretion of the pilot, Airbus’ fly-by-wire system has built-in hard limits that cannot be completely overridden at the pilot’s discretion.
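
A stylised way to see the difference (a deliberately simplified sketch with an invented limit, not the actual control laws of either manufacturer): the hard-limit philosophy clamps the commanded value inside the flight envelope, while the soft-limit philosophy warns and resists but ultimately passes the pilot’s command through.

    # Invented, deliberately simplified illustration of hard vs soft envelope limits;
    # not the actual Airbus or Boeing control laws.
    MAX_BANK_DEG = 67   # illustrative envelope limit

    def hard_limit_command(pilot_bank_deg):
        # Hard-limit philosophy: the commanded bank angle is clamped to the envelope.
        return max(-MAX_BANK_DEG, min(MAX_BANK_DEG, pilot_bank_deg))

    def soft_limit_command(pilot_bank_deg):
        # Soft-limit philosophy: warn (and, in a real aircraft, add control-force
        # resistance), but ultimately honour the pilot's command.
        if abs(pilot_bank_deg) > MAX_BANK_DEG:
            print(f"warning: {pilot_bank_deg} deg exceeds the normal envelope")
        return pilot_bank_deg

    print(hard_limit_command(80))   # 67 -- the protection cannot be exceeded
    print(soft_limit_command(80))   # 80 -- the pilot retains final authority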

As Simon Calder notes, pilots have raised concerns in the past about Airbus’ systems being “overly sophisticated” as opposed to Boeing’s “rudimentary but robust” system. But this does not imply that the Airbus approach is inferior. It is instructive to analyse Airbus’ response to pilot demands for a manual override switch that would allow the pilot to take complete control:

If we have a button, then the pilot has to be trained on how to use the button, and there are no supporting data on which to base procedures or training…..The hard control limits in the Airbus design provide a consistent “feel” for the aircraft, from the 120-passenger A319 to the 350-passenger A340. That consistency itself builds proficiency and confidence……You don’t need engineering test pilot skills to fly this airplane.

David Evans captures the essence of this philosophy as aimed at minimising the “potential for human error, to keep average pilots within the limits of their average training and skills”.

It is easy to criticise Airbus’ approach, but the hard constraints clearly demand less from the pilot. In the hands of an expert pilot, Boeing’s system may outperform; in the hands of a novice, Airbus’ system almost certainly delivers superior results. Moreover, as I discussed earlier in the post, the transition to an almost fully automated system by itself reduces the probability that the human operator can develop intuitive expertise. In other words, the transition to near-autonomous systems creates a pool of human operators who appear to frequently commit “irrational” errors, and the transition is therefore almost impossible to reverse.

 *          *         *

People Make Poor Monitors for Some Financial Models

In an earlier post, I analysed Amar Bhide’s argument that a significant causal agent in the financial crisis was the replacement of discretion with models in many areas of finance – for example, banks’ mortgage lending decisions. In his excellent book, ‘A Call for Judgement’, he expands on this argument and, amongst other technologies, lays some of the blame for this over-mechanisation of finance on the ubiquitous Black-Scholes-Merton (BSM) formula. Although I agree with much of his book, this thesis is too simplistic.

There is no doubt that BSM has many limitations – amongst the most severe being the assumption of continuous asset price movements, a known and flat volatility surface, and an asset price distribution free of fat tails. But the systemic impact of all these limitations is grossly overstated:

  • BSM and similar models have never been used on a large scale as “valuation” methods in derivatives markets, but rather as a tool to back out an implied volatility and generate useful hedge ratios, taking market prices for options as a given (a minimal sketch of this backing-out procedure follows this list). In other words, volatility plays the role of the “wrong number in the wrong formula to get the right price”.
  • When “simple” BSM-like models are used to price more exotic derivatives, they have a modest role to play. As Emanuel Derman puts it, practitioners use models as “interpolating formulas that take you from known prices of liquid securities to the unknown values of illiquid securities”.
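
As an illustration of what “backing out” means in practice, here is a minimal sketch (illustrative numbers, a plain bisection search, and a bare-bones Black-Scholes formula – not anyone’s production pricing library):

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        # Black-Scholes price of a European call option.
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
        # Back out the volatility that reproduces an observed market price (bisection).
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if bs_call(S, K, T, r, mid) > price:
                hi = mid
            else:
                lo = mid
            if hi - lo < tol:
                break
        return 0.5 * (lo + hi)

    # Illustrative numbers: a one-year at-the-money call quoted in the market at 10.45
    S, K, T, r, market_price = 100.0, 100.0, 1.0, 0.05, 10.45
    vol = implied_vol(market_price, S, K, T, r)
    d1 = (log(S / K) + (r + 0.5 * vol ** 2) * T) / (vol * sqrt(T))
    print(round(vol, 3))            # ~0.2: the "wrong number" that reproduces the right price
    print(round(norm_cdf(d1), 3))   # ~0.64: the hedge ratio (delta) implied by that volatility

The model here is not forecasting anything; it is an interpolation and hedging device, which is exactly the role described above.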

Nevertheless, this does not imply that financial modelling choices have no role to play in determining system resilience. But the role was subtler, and had less to do with the imperfections of the models themselves than with the imperfections of how complex models used to price complex products could be used by human traders.

Since the discovery of the volatility smile, traders have known that the interpolation process used to price exotic options requires something more than a simple BSM model. One would assume that traders would want to use a model that is as accurate and comprehensive as possible, but this has rarely been the case. Supposedly inferior local volatility models still flourish, and even in some of the most complex domains of exotic derivatives, models are still chosen based on their intuitive similarity to a BSM-like approach in which the free parameters can be thought of as volatilities or correlations e.g. the Libor Market Model.

The choice of intuitive understanding over model accuracy is not unwarranted. As all market practitioners know, there is no such thing as a perfect derivatives pricing model. Paul Wilmott hit the nail on the head when he observed that “the many improvements on Black-Scholes are rarely improvements, the best that can be said for many of them is that they are just better at hiding their faults. Black-Scholes also has its faults, but at least you can see them”.

However, as markets have evolved, maintaining this balance between intuitive understanding and accuracy has become increasingly difficult:

  • Intuitive yet imperfect models require experienced and expert traders. Scaling up trading volumes of exotic derivatives however requires that pricing and trading systems be pushed out to novice traders as well as non-specialists such as salespeople.
  • With the increased complexity of derivative products, preserving an intuitive yet sufficiently accurate model becomes an almost impossible task.
  • Product complexity combined with the inevitable discretion available to traders when they use simpler models presents significant control challenges and an increased potential for fraud.

In this manner, the same paradoxical evolution that has been observed in nuclear plants and airplane automation is now being experienced in finance. The need to scale up and accommodate complex products necessitates the introduction of complex, unintuitive models to which human intuitive expertise can add little value. In such a system, a novice is often as good as a more experienced operator. The ability of these models to handle most scenarios on ‘auto-pilot’ results in a deskilled and novice-heavy human component that is ill-equipped to deal with the inevitable occasions when the model fails. Each failure is then taken as evidence of human error, upon which the system is made even more automated and more safeguards and redundancies are built in. This exacerbates the absence of feedback when small errors occur, the buildup of latent errors increases further, and failures become even more catastrophic.

 *          *         *

My focus on airplane automation and financial models is simply illustrative. There are ample signs of this incompatibility between human monitors and near-fully automated systems in other domains as well. For example, Andrew Hill observes:

In developed economies, Lynda Gratton writes in her new book The Shift, “when the tasks are more complex and require innovation or problem solving, substitution [by machines or computers] has not taken place”. This creates a paradox: far from making manufacturers easier to manage, automation can make managers’ jobs more complicated. As companies assign more tasks to machines, they need people who are better at overseeing the more sophisticated workforce and doing the jobs that machines cannot….

The insight that greater process efficiency adds to the pressure on managers is not new. Even Frederick Winslow Taylor – these days more often caricatured as a dinosaur for his time-and-motion studies – pointed out in his century-old The Principles of Scientific Management that imposing a more mechanistic regime on workers would oblige managers to take on “other types of duties which involve new and heavy burdens”…..

There is no doubt Foxconn and its peers will be able to automate their labour-intensive processes. They are already doing so. The big question is how easily they will find and develop managers able to oversee the highly skilled workforce that will march with their robot armies.

This process of integrating human intelligence with artificial intelligence is simply a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is important in that, for the first time, many of these algorithmic and automated systems can essentially run themselves, with human beings performing the role of supervisors who only need to intervene in extraordinary circumstances. Although it seems logical that the same process of increased productivity that occurred during the modern ‘Control Revolution’ will continue during the creation of the “vast, automatic and invisible” ‘second economy’, the incompatibility of human cognition with near-fully automated systems suggests that it may only do so by taking on an increased risk of rare but catastrophic failure.


Written by Ashwin Parameswaran

December 29th, 2011 at 11:58 pm