macroresilience

resilience, not stability


Minsky and Hayek: Connections

with one comment

As Tyler Cowen argues, there are many similarities between Hayek’s and Minsky’s views on business cycles. Fundamentally, they both describe the “fundamental impossibility in maintaining orderly credit relations over time”.

Minsky saw Keynes’ theory as an ‘investment theory of the business cycle’ and his contribution as being a ‘financial theory of investment’. This financial theory was based on the credit/financing-focused endogenous theory of money of Joseph Schumpeter, whom Minsky studied under. Schumpeter’s views are best described in Chapter 3 (’Credit and Capital’) of his book ‘Theory of Economic Development’. The gist of this view is that “investment, and expenditures more generally, require financing, not saving” (Borio and Disyatat).

Schumpeter viewed the ability of banks to create money ex nihilo as the differentia specifica of capitalism. He saw bankers as ‘capitalists par excellence’ and viewed this ‘elastic’ nature of credit as an unambiguously positive phenomenon. Many people see Schumpeter’s view of money and banking as the antithesis of the Austrian view. But as Agnès Festré has highlighted, Hayek had a very similar view of the empirical reality of the credit process. Hayek, however, saw this elasticity of the money supply as a negative phenomenon. The similarity between Hayek and Minsky comes from the fact that Minsky also focused on the downside of an elastic monetary system, in which the overextension of credit is inevitably brought to a halt by the violent snapback of the Minsky Moment.

Where Hayek and Minsky differed was that Minsky favoured a comprehensive stabilisation of the financial and monetary system through fiscal and monetary intervention after the Minsky moment. Hayek only supported the prevention of secondary deflationary spirals. Minsky supported aggressive and early monetary interventions (e.g. lender-of-last-resort programs) as well as fiscal stimulus. However, although Minsky supported stabilisation he was well aware of the damaging long-run consequences of stabilising the economic system. He understood that such a system would inevitably deteriorate into crony capitalism if fundamental reforms did not follow the stabilisation. Minsky supported a “policy strategy that emphasizes high consumption, constraints upon income inequality, and limitations upon permissible liability structures”. He also advocated “an industrial-organization strategy that limits the power of institutionalized giant firms”. Minsky was under no illusions that a stabilised capitalist economy could carry on with business as usual.

I disagree with Minsky on two fundamental points. First, I believe that a capitalist economy with sufficient low-level instability is resilient: if we allow small failures of banks and financial players and tolerate small recessions, we can dramatically reduce the probability and impact of large-scale catastrophic recessions such as the 2008 financial crisis. A little bit of chaos is an essential ingredient in a resilient capitalist economy. Second, I believe that we must avoid stamping out the disturbance at its source and instead focus our efforts on mitigating its wider impact on the masses. In other words, bail out the masses with helicopter drops rather than bailing out the banks.

But although I disagree with Minsky, his ideas are coherent. The same cannot be said for the currently popular interpretation of Minsky, which holds that so long as we respond with sufficient force when the Minsky moment arrives, capitalism can carry on as usual. As Minsky argued in his book ‘John Maynard Keynes’, and as I have argued based on the experience of stabilising other complex adaptive systems such as rivers, forest fires and our brain, stabilised capitalism is an oxymoron.
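To make the forest-fire analogy concrete, here is a toy simulation in Python (a stylised sketch with arbitrary parameters, not a calibrated model of any economy): a policy that suppresses small fires lets fuel accumulate, so fires become rarer but much larger. The claim above is that suppressing small recessions and small failures does the same thing to macroeconomic fragility.

```python
import random

def run(years, suppress_below, rng):
    """Toy fuel-accumulation model: fuel (read: fragility) builds every year,
    ignitions arrive at random, and any fire smaller than the suppression
    threshold is put out, leaving the accumulated fuel in place."""
    fuel, fires = 0.0, []
    for _ in range(years):
        fuel += 1.0
        if rng.random() < 0.2:            # an ignition arrives
            if fuel < suppress_below:
                continue                  # small fire suppressed; fuel keeps building
            fires.append(fuel)            # the fire burns all accumulated fuel
            fuel = 0.0
    return fires

rng = random.Random(0)
for label, threshold in [("let all fires burn", 0.0), ("suppress small fires", 25.0)]:
    fires = run(100_000, threshold, rng)
    print(f"{label:>22}: {len(fires):>6} fires, "
          f"mean size {sum(fires) / len(fires):5.1f}, largest {max(fires):5.0f}")
```

In both regimes all the accumulated fuel eventually burns; suppression merely concentrates the burning into rare, large conflagrations.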

What about Hayek’s views on credit elasticity? As I argued in an earlier post, “we live in a world where maturity transformation is no longer required to meet our investment needs. The evolution and malformation of the financial system means that Hayek’s analysis is more relevant now than it probably was during his own lifetime”. An elastic credit system is no longer beneficial to economic growth in the modern economy. This does not mean that we should ban the process of endogenous credit creation – it simply means that we must allow the maturity-transforming entities to collapse when they get in trouble1.


  1. Because we do not need an elastic, maturity-transforming financial system, we can firewall basic deposit banking from risky finance. This will enable us to allow the banks to fail when the next crisis hits us. The solution is not to ban casino banking but to suck the lifeblood out of it by constructing an alternative 100% reserve-like system. I have advocated that each resident should be given a deposit account with the central bank which can be backed by Treasuries, a ‘public option’ for basic deposit banking. John Cochrane has also argued for a similar system. In his words, “the Federal Reserve should continue to provide abundant reserves to banks, paying market interest. The Treasury could offer reserves to the rest of us—floating-rate, fixed-value, electronically-transferable debt. There is no reason that the Fed and Treasury should artificially starve the economy of completely safe, interest-paying cash”. ↩

Written by Ashwin Parameswaran

August 23rd, 2013 at 4:56 pm

The Control Revolution And Its Discontents

with 20 comments

One of the key narratives on this blog is how the Great Moderation and the neo-liberal era have signified the death of truly disruptive innovation in much of the economy. When macroeconomic policy stabilises the macroeconomic system, every economic actor is incentivised to take on more macroeconomic systemic risks and shed idiosyncratic, microeconomic risks. Those who figured out this reality early on and/or had privileged access to the programs used to implement this macroeconomic stability, such as banks and financialised corporates, were the big winners – a process that is largely responsible for the rise in inequality during this period. In such an environment the pace of disruptive product innovation slows while low-risk process innovation aimed at cost-reduction and efficiency flourishes. We therefore get the worst of all worlds – the Great Stagnation combined with widespread technological unemployment.

This narrative naturally raises the question: when did we last have a truly disruptive Schumpeterian era of creative destruction? In a previous post looking at the evolution of the post-WW2 developed economic world, I argued that the so-called Golden Age was anything but Schumpeterian – as Alexander Field has argued, much of the economic growth until the 70s was built on disruptive innovation that occurred in the 1930s. So we may not have been truly Schumpeterian for at least 70 years. But what about the period from at least the mid 19th century until the Great Depression? Even a cursory reading of economic history gives us pause for thought – after all, wasn’t a significant part of this period supposed to be the Gilded Age of cartels and monopolies, which sounds anything but disruptive?

I am now of the opinion that we have never really had long periods of constant disruptive innovation – this is not a sign of failure but simply a reality of how complex adaptive systems across domains manage the tension between efficiency, robustness, evolvability and diversity. What we have had is a subverted control revolution in which repeated attempts to achieve and hold onto an efficient equilibrium fail. Creative destruction occurs despite our best efforts to stamp it out. In a sense, disruption is alien to the essence of the industrial and post-industrial period of the last two centuries, whose overriding philosophy is automation and algorithmisation aimed at efficiency and control. And much of our current trouble is a function of the fact that we have almost perfected the control project.

The operative word, and the source of our problems, is “almost”. Too many people look at the transition from the Industrial Revolution to the Algorithmic Revolution as a sea-change in perspective. But in reality, the current wave of reducing everything to a combination of “data & algorithm” and tackling every problem with more data and better algorithms is the logical end-point of the control revolution that started in the 19th century. The difference between Ford and Zara is overrated – Ford was simply the first step in a long process that systematised each element of the industrial process (production, distribution, consumption) while also, crucially, putting in place a feedback loop between them. In some sense, Zara simply follows a much more complex and malleable algorithm than Ford did, but this algorithm is still fundamentally equilibrating (not disruptive) and focused on introducing order and legibility into a fundamentally opaque environment via a process that reduces human involvement and discretion by replacing intuitive judgements with rules and algorithms. Exploratory/disruptive innovation, on the other hand, is a disequilibrating force that is created by entrepreneurs and operates outside this feedback/control loop. Both processes are important – the longer period of gradual shedding of diversity and homogenisation in the name of efficiency as well as the periodic “collapse” that shakes up the system and eventually puts it on the path to a new equilibrium.

Of course, control has been an aim of western civilisation for much longer, but it was only in the 19th century that the tools of control were good enough for this desire to be implemented in any meaningful sense. And even more crucially, as James Beniger has argued, it was only in the last 150 years that the need for large-scale control arose. The tools and technologies now in our hands to control and stabilise the economy are more powerful than they have ever been, likely too powerful.

If we had perfect information and everything could be algorithmised right now, i.e. if the control revolution had been perfected, then the problem would disappear. Indeed, it is arguable that the need for disruption in the innovation process would no longer exist. If we reach a world where radical uncertainty has been eliminated, then the problem of systemic fragility is moot. It is easy to rebut the stabilisation and control project by claiming that we cannot achieve this perfect world.

But even if the techno-utopian project can achieve all that it claims, the path matters. We need to make it there in one piece. The current “algorithmic revolution” is best viewed as a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is simply one where many of these algorithmic and automated systems can essentially run themselves, with human beings acting as supervisors who only need to intervene in extraordinary circumstances. It would therefore seem logical that the same increase in productivity that has occurred during the modern era of automation will continue during the creation of the “vast, automatic and invisible” ‘second economy’. However, there are many signs that this may not be the case. What has made things better till now and has been genuine “progress” may make things worse in higher doses, and the process of deterioration can be quite dramatic.

The Uncanny Valley on the Path towards “Perfection”

In 1970, Masahiro Mori coined the term ‘uncanny valley’ to denote the phenomenon that “as robots appear more humanlike, our sense of their familiarity increases until we come to a valley”. When robots are almost but not quite human-like, they invoke a feeling of revulsion rather than empathy. As Karl MacDorman notes, “Mori cautioned robot designers not to make the second peak their goal — that is, total human likeness — but rather the first peak of humanoid appearance to avoid the risk of their robots falling into the uncanny valley.”

A similar valley exists in the path of increased automation and algorithmisation. Much of the discussion in this section of the post builds upon concepts I explored via a detailed case study in a previous post titled ‘People Make Poor Monitors for Computers’.

The 21st-century version of the control project, i.e. the algorithmic project, consists of two components:
1. More data – ‘Big Data’.
2. Better and more comprehensive algorithms.

The process therefore goes hand in hand with increased complexity and, crucially, poorer and less intuitive feedback for the human operator. This results in increased fragility and a system prone to catastrophic breakdowns. The typical solution chosen is further algorithmisation, i.e. an improved algorithm and more data, supplemented if necessary by increased slack and redundancy. This solution exacerbates the problem of feedback and temporarily pushes the catastrophic scenario further out into the tail, but it does not eliminate it. Behavioural adaptation by human agents to the slack and the “better” algorithm can make a catastrophic event as likely as it was before, but with a higher magnitude. What is even more disturbing is that this cycle of increasing fragility can occur even without any such adaptation. This is the essence of the fallacy of the ‘defence in depth’ philosophy that lies at the core of most fault-tolerant algorithmic designs, which I discussed in my earlier post: the increased “safety” of the automated system allows the build-up of human errors without any feedback from deteriorating system performance.
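A minimal sketch of this feedback problem (the probabilities are purely illustrative): the more redundant layers a system has, the longer human errors persist as undetected latent faults, because the only feedback the operator receives is a visible system failure.

```python
import random

def latent_faults(layers, periods, p_fault, rng):
    """Each redundant layer independently develops a latent (human) fault with
    probability p_fault per period. Faults are only discovered, and all
    repaired, when every layer is faulty at once, i.e. when the system visibly
    fails. Returns the number of visible failures and the average number of
    undetected latent faults carried per period."""
    faulty = [False] * layers
    failures, carried = 0, 0
    for _ in range(periods):
        faulty = [f or rng.random() < p_fault for f in faulty]
        if all(faulty):
            failures += 1
            faulty = [False] * layers
        carried += sum(faulty)
    return failures, carried / periods

rng = random.Random(0)
for layers in (1, 2, 4):
    failures, avg_latent = latent_faults(layers, 100_000, 0.05, rng)
    print(f"{layers} layer(s): {failures:>5} visible failures, "
          f"{avg_latent:.2f} undetected faults carried on average")
```

With a single layer every fault is immediately visible and corrected; with four layers visible failures become rare, but the system silently carries a standing stock of latent faults that are all released at once when the failure finally arrives.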

A rule of thumb for getting around this problem is to use slack only in those domains where failure is catastrophic and to prioritise feedback where failure is not critical and cannot kill you. But in an uncertain environment, this rule is very difficult to apply. How do you really know that a particular disturbance will not kill you? Moreover, the loop of automation -> complexity -> redundancy endogenously turns a non-catastrophic event into one with catastrophic consequences.

Once it has gone beyond a certain threshold, this trajectory is almost impossible to reverse without an interim collapse. The easy short-term fix is always to patch the algorithm, get more data and build in some slack if needed. An orderly rollback is almost impossible due to the deskilling of the human workforce and the risk of collapse from other components in the system having adapted to the new reality. Even simply reverting to the older, more tool-like system makes things a lot worse because the human operators are no longer experts at using those tools – the process of algorithmisation has deskilled them. Moreover, the endogenous nature of this build-up of complexity eventually makes the system fundamentally illegible to the human operator – an irony, given that the fundamental aim of the control revolution is to increase legibility.

The Sweet Spot Before the Uncanny Valley: Near-Optimal Yet Resilient

Although it is easy to imagine an inefficient and dramatically sub-optimal system that is robust, complex adaptive systems manage to operate at near-optimal efficiency while remaining resilient. Efficiency matters not only because of the obvious reality that resources are scarce but also because slack at the individual and corporate level is a significant cause of unemployment. Such near-optimal robustness in both natural and economic systems is not achieved with simplistically diverse agent compositions or with significant redundancies or slack at the agent level.

Diversity and redundancy carry a cost in terms of reduced efficiency. Precisely for this reason, real-world economic systems exhibit nowhere near the diversity that would seem to ensure system resilience. Rick Bookstaber recently noted that capitalist competition, if anything, seems to lead to a reduction in diversity. As Youngme Moon’s excellent book ‘Different’ lays out, competition in most markets seems to result in less diversity, not more. We may have a choice of 100 brands of toothpaste, but most of us would struggle to meaningfully differentiate between them.

Similarly, almost all biological and ecological complex adaptive systems are far less diverse and contain less pure redundancy than conventional wisdom would expect. Resilient biological systems tend to preserve degeneracy rather than simple redundancy, and resilient ecological systems tend to contain weak links rather than naive ‘law of large numbers’ diversity. The key to achieving resilience with near-optimal configurations is to tackle disturbances and generate novelty/innovation with an emergent systemic response that reconfigures the system, rather than with a merely localised response. Degeneracy and weak links are key to such a configuration. The equivalent in economic systems is a constant threat of new firm entry.
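A deliberately simplified sketch of the distinction between redundancy and degeneracy (the components and shock types below are hypothetical, chosen purely for illustration): identical redundant copies share the same vulnerability, while degenerate components overlap in function but differ in what knocks them out.

```python
def survives(system, shock):
    """The system keeps functioning if at least one component is not
    vulnerable to the shock that arrives."""
    return any(shock not in vulnerabilities for vulnerabilities in system)

# Pure redundancy: three identical copies, all sharing vulnerability 'A'.
redundant = [{"A"}, {"A"}, {"A"}]
# Degeneracy: three structurally different components performing the same
# function but with different vulnerabilities.
degenerate = [{"A"}, {"B"}, {"C"}]

for name, system in [("redundant", redundant), ("degenerate", degenerate)]:
    survived = [shock for shock in ("A", "B", "C") if survives(system, shock)]
    print(f"{name:>10} system survives shock types: {survived}")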

The viewpoint that emphasises weak links and degeneracy also implies that it is not the keystone species and the large firms that determine resilience but the presence of smaller players ready to reorganise and pick up the slack when an unexpected event occurs. Such a focus is further complicated by the fact that, in a stable environment, the system may become less and less resilient with no visible consequences – weak links may be eliminated, barriers to entry may progressively increase and so on, with no damage done to system performance during the stable equilibrium phase. Yet this loss of resilience can prove fatal when the environment changes and can leave the system unable to generate novelty/disruptive innovation. This highlights the folly of statements such as ‘what’s good for GM is good for America’. We need to focus not just on the keystone species, but on the fringes of the ecosystem.

[Figure: The Uncanny Valley and the Sweet Spot]

The Business Cycle in the Uncanny Valley – Deterioration of the Median as well as the Tail

Many commentators have pointed out that the process of automation has coincided with a deskilling of the human workforce. For example, below is a simplified version of the relation between mechanisation and the skill required of the human operator that James Bright documented in 1958 (via Harry Braverman’s ‘Labor and Monopoly Capital’). But until now, it has been largely true that although human performance has suffered, the performance of the system has gotten vastly better. If the problem were just a drop in human performance while the system got better, our problem would be less acute.

[Figure: Automation and Deskilling of the Human Operator]

But what is at stake is a deterioration in system performance – it is not only a matter of being exposed to more catastrophic setbacks. Eventually, mean/median system performance deteriorates as more and more pure slack and redundancy needs to be built in at all levels to make up for the irreversibly fragile nature of the system. The business cycle is an oscillation between efficient fragility and robust inefficiency. Over the course of successive cycles, both poles of this oscillation get worse, which leads to median/mean system performance falling rapidly at the same time that the tails deteriorate due to the increased illegibility of the automated system to the human operator.

[Figure: The Uncanny Valley Business Cycle]

The Visible Hand and the Invisible Foot, Not the Invisible Hand

The conventional economic view of the economy is one of a primarily market-based equilibrium punctuated by occasional shocks. Even the long arc of innovation is viewed as a sort of benign discovery of novelty without any disruptive consequences. The radical disequilibrium view (which I have been guilty of espousing in the past) is one of constant micro-fragility and creative destruction. However, the history of economic evolution in the modern era has been quite different – neither market-based equilibrium nor constant disequilibrium, but a series of off-market attempts to stabilise relations outside the sphere of the market combined with occasional phase transitions that bring about dramatic change. The presence of rents is a constant, and the control revolution has for the most part succeeded in preserving the rents of incumbents, barring the occasional spectacular failure. It is these occasional “failures” that have given us results that in some respects resemble those that a market would have produced over the long run.

As Bruce Wilder puts it (sourced from 1, 2, 3 and mashed up together by me):

The main problem with the standard analysis of the “market economy”, as well as many variants, is that we do not live in a “market economy”. Except for financial markets and a few related commodity markets, markets are rare beasts in the modern economy. The actual economy is dominated by formal, hierarchical, administrative organization and transactions are governed by incomplete contracts, explicit and implied. “Markets” are, at best, metaphors…..
Over half of the American labor force works for organizations employing over 100 people. Actual markets in the American economy are extremely rare and unusual beasts. An economics of markets ought to be regarded as generally useful as a biology of cephalopods, amid the living world of bones and shells. But, somehow the idealized, metaphoric market is substituted as an analytic mask, laid across a vast variety of economic relations and relationships, obscuring every important feature of what actually is…..
The elaborate theory of market price gives us an abstract ideal of allocative efficiency, in the absence of any firm or household behaving strategically (aka perfect competition). In real life, allocative efficiency is far less important than achieving technical efficiency, and, of course, everyone behaves strategically.
In a world of genuine uncertainty and limitations to knowledge, incentives in the distribution of income are tied directly to the distribution of risk. Economic rents are pervasive, but potentially beneficial, in that they provide a means of stable structure, around which investments can be made and production processes managed to achieve technical efficiency.
In the imaginary world of complete information of Econ 101, where markets are the dominant form of economic organizations, and allocative efficiency is the focus of attention, firms are able to maximize their profits, because they know what “maximum” means. They are unconstrained by anything.
In the actual, uncertain world, with limited information and knowledge, only constrained maximization is possible. All firms, instead of being profit-maximizers (not possible in a world of uncertainty), are rent-seekers, responding to instituted constraints: the institutional rules of the game, so to speak. Economic rents are what they have to lose in this game, and protecting those rents, orients their behavior within the institutional constraints…..
In most of our economic interactions, price is not a variable optimally digesting information and resolving conflict, it is a strategic instrument, held fixed as part of a scheme of administrative control and information discovery……The actual, decentralized “market” economy is not coordinated primarily by market prices—it is coordinated by rules. The dominant relationships among actors is not one of market exchange at price, but of contract: implicit or explicit, incomplete and contingent.

James Beniger’s work is the definitive document on how the essence of the ‘control revolution’ has been an attempt to take economic activity out of the sphere of direct influence of the market. But that is not all – the long process of algorithmisation over the last 150 years has also, wherever possible, replaced implicit rules/contracts and principal-agent relationships with explicit processes and rules. Beniger also notes that after a certain point, the increasing complexity of the system becomes an endogenous phenomenon, i.e. further iterations are aimed at controlling the control process itself. As I have illustrated above, beyond a certain threshold the increasing complexity, fragility and deterioration in performance become a self-reinforcing positive feedback process.

Although our current system bears very little resemblance to the market economy of the textbook, there was a brief period in the early part of the 19th century, during the transition from the traditional economy to the control economy, when this was the case: 26% of all imports into the United States in 1827 were sold at auction. But the replacement of traditional controls (familial ties) with the invisible hand of the market was merely a transitional phase, soon to be displaced by the visible hand of the control revolution.

The Soviet Project, Western Capitalism and High Modernity

Communism and Capitalism are both pillars of the high-modernist control project. The signature of modernity is not markets but technocratic control projects; capitalism has simply pursued them in a manner that is more easily and more regularly subverted. It is the occasional failure of the control revolution that is the source of the capitalist economy’s long-run success. Conversely, the failure of the Soviet Project was due to its all-too-successful adherence to, and implementation of, the high-modernist ideal. Crony capitalism is so threatening because, by forging a coalition between the corporate and state control projects, it makes the implementation of the control revolution that much more effective.

The Hayekian argument about dispersed knowledge and its importance in seeking equilibrium is not as important as it seems in explaining why the Soviet project failed. As Joseph Berliner has illustrated, the Soviet economy did not fail to reach local equilibria; where it failed so spectacularly was in extracting itself from them. The dispersed knowledge argument is open to the riposte that a better implementation of the control revolution will eventually overcome these problems – indeed, much of the current techno-utopian version of the control revolution is based on this assumption. It is a weak argument for free enterprise. A much stronger argument is the need to maintain a system that retains the ability to reinvent itself and find a new, hitherto unknown trajectory via the destruction of incumbents combined with the emergence of the new. The Soviet experiment’s fatal flaw was that it eliminated the possibility of failure, which Berliner called the ‘invisible foot’. The success of the free enterprise system has been built not upon the positive incentive of the invisible hand but upon the negative incentive of the invisible foot countering the visible hand of the control revolution. It is this threat, and occasional realisation, of failure and disorder that is the key to maintaining system resilience and evolvability.

 

 

Notes:

  • Borrowing from Beniger, control here simply means “purposive influence towards a predetermined goal”. Similarly, equilibrium in this context is best defined as a state in which economic agents are not forced to change their routines, theories and policies.
  • On the uncanny valley, I wrote a similar post on why perfect memory does not lead to perfect human intelligence. Even if a computer benefits from more data and better memory, we may not. And the evidence suggests that the deterioration in human performance is steepest in the zone close to “perfection”.
  • An argument similar to my assertion on the misconception of a free enterprise economy as a market economy can be made about the nature of democracy. Rather than as a vehicle that enables the regular expression of the political will of the electorate, democracy may be more accurately thought of as the ability to effect a dramatic change when the incumbent system of plutocratic/technocratic rule diverges too much from popular opinion. As always, stability and prevention of disturbances can cause the eventual collapse to be more catastrophic than it needs to be.
  • Although James Beniger’s ‘Control Revolution’ is the definitive reference, Antoine Bousquet’s book ‘The Scientific Way of Warfare’ on the similar revolution in military warfare is equally good. Bousquet’s book highlights the fact that the military is often the pioneer of the key projects of the control revolution and it also highlights just how similar the latest phase of this evolution is to early phases – the common desire for control combined with its constant subversion by reality. Most commentators assume that the threat to the project is external – by constantly evolving guerrilla warfare for example. But the analysis of the uncanny valley suggests that an equally great threat is endogenous – of increasing complexity and illegibility of the control project itself. Bousquet also explains how the control revolution is a child of the modern era and the culmination of the philosophy of the Enlightenment.
  • Much of the “innovation” of the control revolution was not technological but institutional – limited liability, macroeconomic stabilisation via central banks etc.
  • For more on the role of degeneracy in biological systems and how it enables near-optimal resilience, this paper by James Whitacre and Axel Bender is excellent.

Written by Ashwin Parameswaran

February 21st, 2012 at 5:38 pm

Debunking the ‘Savings Glut’ Thesis

with 20 comments

Some excellent recent research debunking the savings glut thesis: Borio and Disyatat, Hyun-Song Shin, Thomas Palley.

The Borio-Disyatat paper is especially recommended. It best explains why the savings glut thesis is itself a product of a faulty ‘Loanable Funds’ view of money. Much more appropriate is the credit/financing view of money that Borio and Disyatat take. The best explanation of this credit view is Chapter 3 (’Credit and Capital’) of Joseph Schumpeter’s book ‘Theory of Economic Development’. As Agnès Festré notes, Hayek had a very similar theory of credit but a very different opinion as to its implications:

both Hayek and Schumpeter make use of the mechanism of forced saving in their analyses of the cyclical upswing in order to describe the real effects of credit creation. In Schumpeter’s framework, the relevant redistribution of purchasing power is from traditional producers to innovators with banks playing a crucial complementary role in meeting demand for finance by innovating firms. The dynamic process thus set into motion then leads to a new quasi-equilibrium position characterised by higher productivity and an improved utilisation of resources. For Hayek, however, forced saving is equivalent to a redistribution from consumers to investing producers as credit not backed by voluntary savings is channelled towards investment activities, in the course of which more roundabout methods of production are being implemented. In this setting, expansion does not lead to a new equilibrium position but is equivalent to a deviation from the equilibrium path, that is to an economically harmful distortion of the relative (intertemporal) price system. The eventual return to equilibrium then takes place via an inevitable economic crisis.

Schumpeter viewed this elasticity of credit as the ‘differentia specifica’ of capitalism. Although this view combined with his vision of the banker as a ‘capitalist par excellence’ may have been true in an unstabilised financial system, it is not accurate in the stabilised financial system that his student Hyman Minsky identified as the reality of the modern capitalist economy. Successive rounds of stabilisation mean that the modern banker is more focused on seeking out bets that will be validated by central bank interventions than funding disruptive entrepreneurial activity. Moreover, we live in a world where maturity transformation is no longer required to meet our investment needs. The evolution and malformation of the financial system means that Hayek’s analysis is more relevant now than it probably was during his own lifetime.

Written by Ashwin Parameswaran

November 22nd, 2011 at 5:49 am

Rent Extraction and Competition in Banking as an Ultimatum Game

with 9 comments

In two recent posts [1,2], Scott Sumner disputes the role of financial rent extraction in increasing inequality. His best argument is that due to competition, government subsidies by themselves cannot cause inequality. A few months ago, Russ Roberts asked a similar question: “If banking is a protected sector that the government coddles and rewards, why doesn’t competition for banking jobs reduce the returns to more normal levels?” This post tries to answer this question. To summarise the conclusion, synthetic rent extraction markets are closer to an ‘Ultimatum Game’ than they are to competitive “real economy” markets.

Scott brings up the example of farm subsidies and points out that they only reduce food prices without making farmers any richer – the reason of course being competitive food markets. In my post on inequality and rents, I used a similar rationale to explain how reduced borrowing costs for banks in Germany (due to state protection) simply result in reduced borrowing costs for the Mittelstand. So how is this any different from the rents that banks, hedge funds and others can extract from the central bank’s commitment to insure them and the economy from tail events? The answer lies in the synthetic and rent-contingent nature of markets for products such as CDOs. The absence of moral hazard rents doesn’t simply change the price and quantity of many financial products – it ensures that the market does not exist to start with. In other words, the very raison d’être of many financial products is their role in extracting rents from central bank commitments.

The process of distributing rents amongst financial market participants is closer to an ultimatum game than it is to a perfectly competitive product market. The rewards in this game are the rents on offer, which are limited only by the willingness or ability of the central bank to insure against tail risk. To illustrate how this game may be played out, let us take the ubiquitous negatively-skewed product payoff that banks accumulated during the crisis – the super-senior CDO tranche1. In order to originate a synthetic super-senior tranche, a bank needs to find a willing counterparty (probably a hedge fund) to take the other side of the trade. The bank itself needs to negotiate an arrangement between its owners, creditors and employees as to how the rents will be shared. If the various parties cannot come to an agreement, there is no trade and no rents are extracted. The central bank commitment provides an almost unlimited quantity of insurance/rents at a constant price. Therefore, there is no incentive for any of the above parties to risk failing to come to an agreement by insisting on a larger share of the pie.

In a world with unlimited potential bank stockholders, creditors and employees and unlimited potential hedge funds, the eventual result is unlimited rent extraction and state bankruptcy. The only way to avoid inequality in the presence of such a commitment is for every single person in the economy to extract rents in an equally efficient manner – simply increased competition between hedge funds or banks is not good enough. In reality of course, not all of us are bankers or hedge fund managers.  Nevertheless, it is troubling that the evolution of many financial product markets over the past 30 years can be viewed as a gradual expansion of such rent extraction.

Although I’ve focused on synthetic financial products, the above analysis is valid even for many of the “real” loans made during the housing boom. In the absence of the ability to extract rents, many of the worst loans would likely not have been made. The presence of rents of course meant that every party went out of their way to ensure that the loans were made. It is also worth noting that although I have explained the process of rent extraction as a calculated and intentional activity, it does not need to be. In fact, as I have argued before [1,2], rent extraction can easily arise with each party genuinely believing themselves to be blameless and well-intentioned. The road to inequality and state bankruptcy is paved with good intentions.

  1. In some cases, the super-senior itself was insured with counterparties such as AIG or the monolines, making the payoff even more negatively skewed. ↩

Written by Ashwin Parameswaran

January 4th, 2011 at 10:45 am

Agent Irrationality and Macroeconomics

with 2 comments

In a recent post, Rajiv Sethi questions the tendency to find behavioural explanations for financial crises and argues for an ecological approach instead – a sentiment that I agree with and have touched upon in previous posts on this blog. This post expands upon some of these themes.

A More Realistic View of Rationality and Human Cognition, Not Irrationality

Much of the debate on rationality in economics focuses on whether we as human beings are rational in the “homo economicus” sense. The “heuristics and biases” program pioneered by Daniel Kahneman and Amos Tversky argues that we are not “rational” – however, it does not question whether the definition of rationality implicit in “rational choice theory” is valid or not. Many researchers in the neural and cognitive sciences now believe that the conventional definition of rationality needs to be radically overhauled.

Most heuristics/biases are not a sign of irrationality but an entirely rational form of decision-making in the face of uncertainty. In an earlier post, I showed how Ronald Heiner’s framework explains our neglect of tail events as a logical response to an uncertain environment, but the best exposition of this viewpoint can be found in Gerd Gigerenzer’s work, which is itself inspired by Herbert Simon’s ideas on “bounded rationality”. In his aptly named book “Rationality for Mortals: How People Cope with Uncertainty”, Gigerenzer explains the two key building blocks of “the science of heuristics”:

  • The Adaptive Toolbox: “the building blocks for fast and frugal heuristics that work in real-world environments of natural complexity, where an optimal strategy is often unknown or computationally intractable”
  • Ecological Rationality: “the environmental structures in which a given heuristic is successful” and the “coevolution between heuristics and environments”

The irony of course is that many classical economists had a more accurate definition of rationality than the one implicit in “rational choice theory” (See Brian Loasby’s book which I discussed here). Much of the work done in the neural sciences confirms the more nuanced view of human cognition espoused in Hayek’s “The Sensory Order” or Ken Boulding’s “The Image” (See Joaquin Fuster on Hayek or the similarities between Ken Boulding’s views and V.S. Ramachandran’s work discussed here).

Macro-Rationality is consistent with Micro-Irrationality

Even a more realistic definition of rationality doesn’t preclude individual irrationality. However, as Michael Mauboussin pointed out: “markets can still be rational when investors are individually irrational. Sufficient investor diversity is the essential feature in efficient price formation. Provided the decision rules of investors are diverse—even if they are suboptimal—errors tend to cancel out and markets arrive at appropriate prices. Similarly, if these decision rules lose diversity, markets become fragile and susceptible to inefficiency. So the issue is not whether individuals are irrational (they are) but whether they are irrational in the same way at the same time. So while understanding individual behavioral pitfalls may improve your own decision making, appreciation of the dynamics of the collective is key to outperforming the market.”
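Mauboussin’s point can be illustrated with a small simulation (the parameters below are arbitrary): when errors are idiosyncratic they cancel in the aggregate and the “market price” stays close to fair value; when the same error is shared, adding more investors does nothing to correct it.

```python
import random
import statistics

def avg_mispricing(n_investors, shared_weight, trials, rng, true_value=100.0):
    """Each investor's estimate mixes a shared error with an idiosyncratic one;
    shared_weight = 0 means fully diverse errors, 1 means everyone is wrong in
    the same way. The 'market price' is the mean estimate."""
    errors = []
    for _ in range(trials):
        shared = rng.gauss(0, 10)
        estimates = [true_value
                     + shared_weight * shared
                     + (1 - shared_weight) * rng.gauss(0, 10)
                     for _ in range(n_investors)]
        errors.append(abs(statistics.mean(estimates) - true_value))
    return statistics.mean(errors)

rng = random.Random(0)
for w in (0.0, 0.5, 1.0):
    err = avg_mispricing(n_investors=1000, shared_weight=w, trials=200, rng=rng)
    print(f"shared error weight {w:.1f}: average mispricing {err:5.2f}")
```

With fully diverse errors the mispricing shrinks towards zero as the number of investors grows; with a fully shared error it remains roughly the size of an individual’s error, however many investors participate.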

Economies as Complex Adaptive Systems: Behavioural Heterogeneity, Selection Pressures and Emphasis on System Dynamics

In my view, the ecological approach to macroeconomics is essentially a systems approach with the emphasis on the “adaptive” nature of the system i.e. incentives matter and the actors in a system tend to find ways to work around imposed rules that try to fight the impact of misaligned incentives. David Merkel explained it well when he noted: “People hate having their freedom restrained, and so when arbitrary rules are imposed, even smart rules, they look for means of escape.” And many of the posts on this blog have focused on how rules can be subverted even when economic agents don’t actively intend to do so.

The ecological approach emphasises the diversity of behavioural preferences and the role of incentives/institutions/rules in “selecting” from this pool of possible agent behaviours or causing agent behaviour to adapt in reaction to these incentives. When a behaviourally homogeneous pool of agents is observed, the ecological approach focuses on the selection pressures and incentives that could have caused this loss of diversity rather than attempting to lay the blame on some immutable behavioural trait. Again, as Rajiv Sethi puts it here: “human behavior differs substantially across career paths because of selection both into and within occupations….[Regularities] identified in controlled laboratory experiments with standard subject pools have limited application to environments in which the distribution of behavioral propensities is both endogenous and psychologically rare. This is the case in financial markets, which are subject to selection at a number of levels. Those who enter the profession are unlikely to be psychologically typical, and market conditions determine which behavioral propensities survive and thrive at any point in historical time.”


Written by Ashwin Parameswaran

June 24th, 2010 at 8:50 am

Micro-Foundations of a Resilience Approach to Macro-Economic Analysis

with 4 comments

Before assessing whether a resilience approach is relevant to macro-economic analysis, we need to define resilience. Resilience is best defined as “the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks.”

The assertion that an ecosystem can lose resilience and become fragile is not controversial. To claim that the same can occur in social systems such as macro-economies is nowhere near as obvious, not least because of our ability to learn, forecast the future and adapt to changes in our environment. Any analysis of how social systems can lose resilience is open to the objection that loss of resilience implies systematic error on the part of economic actors in assessing economic conditions accurately and an inability to adapt to the new reality. For example, one of the common objections to Minsky’s Financial Instability Hypothesis (FIH) is that it requires irrational behaviour on the part of economic actors. Rajiv Sethi’s post has a summary of this debate, with a notable objection coming from Bernanke’s paper on the subject, which insists that “Hyman Minsky and Charles Kindleberger have in several places argued for the inherent instability of the financial system, but in doing so have had to depart from the assumption of rational behavior.”

One response to this objection is “So What?” and indeed the stability-resilience trade-off can be explained within the Kahneman-Tversky framework. Another response which I’ve invoked on this blog and Rajiv has also mentioned in a recent post focuses on the pervasive principal-agent relationship in the financial economy. However, I am going to focus on a third and a more broadly applicable rationale which utilises a “rationality” that incorporates Knightian uncertainty as the basis for the FIH. The existence of irreducible uncertainty is sufficient to justify an evolutionary approach for any social system, whether it be an organization or a macro-economy.

Cognitive Rigidity as a Rational Response to Uncertainty

Rajiv touches on the crux of the issue when he notes: “Selection of strategies necessarily implies selection of people, since individuals are not infinitely flexible with respect to the range of behavior that they can exhibit.” But is achieving infinite flexibility a worthwhile aim? The evidence suggests that it is not. In the face of true uncertainty, infinite flexibility is not only unrealistic due to finite cognitive resources but it is also counterproductive and may deliver results that are significantly inferior to a partially “rigid” framework. V.S. Ramachandran explains this brilliantly: “At any given moment in our waking lives, our brains are flooded with a bewildering variety of sensory inputs, all of which have to be incorporated into a coherent perspective based on what stored memories already tell us is true about ourselves and the world. In order to act, the brain must have some way of selecting from this superabundance of detail and ordering it into a consistent ‘belief system’, a story that makes sense of the available evidence. When something doesn’t quite fit the script, however, you very rarely tear up the entire story and start from scratch. What you do, instead, is to deny or confabulate in order to make the information fit the big picture. Far from being maladaptive, such everyday defense mechanisms keep the brain from being hounded into directionless indecision by the ‘combinational explosion’ of possible stories that might be written from the material available to the senses.”

This rigidity is far from being maladaptive and appears to be irrational only when measured against a utopian definition of rational choice. Behavioural economics also frequently commits the same error – as Brian Loasby notes: “It is common to find apparently irrational behaviour attributed to ‘framing effects’, as if ‘framing’ were a remediable distortion. But any action must be taken within a framework.” This notion of true rationality being less than completely flexible is not a new one – Ramachandran’s work provides the neurological basis for the notion of ‘rigidity as a rational response to uncertainty’. I have already discussed Ronald Heiner’s framework in a previous post, which bears a striking resemblance to Ramachandran’s thesis:

“Think of an omniscient agent with literally no uncertainty in identifying the most preferred action under any conceivable condition, regardless of the complexity of the environment which he encounters. Intuitively, such an agent would benefit from maximum flexibility to use all potential information or to adjust to all environmental conditions, no matter how rare or subtle those conditions might be. But what if there is uncertainty because agents are unable to decipher all of the complexity of the environment? Will allowing complete flexibility still benefit the agents?

I believe the general answer to this question is negative: that when genuine uncertainty exists, allowing greater flexibility to react to more information or administer a more complex repertoire of actions will not necessarily enhance an agent’s performance.”
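A back-of-the-envelope sketch in the spirit of Heiner’s reliability condition (the numbers below are made up, not taken from his paper): when the relevant event is rare and the agent’s perception of it is unreliable, the rigid rule of ignoring the signal outperforms the ‘flexible’ agent who acts on it.

```python
# Illustrative numbers only, not taken from Heiner's paper.
p = 0.05            # probability that the rare event actually occurs
gain = 10.0         # payoff from responding when the event occurs
loss = 1.0          # cost of responding to a false alarm
hit = 0.7           # P(signal fires | event occurs)
false_alarm = 0.4   # P(signal fires | no event)

flexible = p * hit * gain - (1 - p) * false_alarm * loss  # act whenever the signal fires
rigid = 0.0                                               # ignore the signal entirely

print(f"flexible agent expected payoff: {flexible:+.3f}")
print(f"rigid agent expected payoff:    {rigid:+.3f}")

# Heiner-style reliability condition: acting on the signal pays only if the
# reliability ratio hit/false_alarm exceeds (loss/gain) * ((1 - p)/p).
required = (loss / gain) * ((1 - p) / p)
print(f"reliability ratio {hit / false_alarm:.2f} vs required {required:.2f}")
```

With these numbers the flexible agent has a negative expected payoff while the rigid agent breaks even, because the reliability ratio falls short of the required threshold.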

Brian Loasby has an excellent account of ‘rationality under uncertainty’ and its evolutionary implications in this book, which traces hints of the idea running through the work of Adam Smith, Alfred Marshall, George Kelly’s ‘Personal Construct Theory’ and Hayek’s ‘Sensory Order’. But perhaps the clearest exposition of the idea was provided by Kenneth Boulding in his description of subjective human knowledge as an ‘Image’. Most external information either conforms so closely to the image that it is ignored or it adds to the image in a well-defined manner. But occasionally, we receive information that is at odds with our image. Boulding recognised that such change is usually abrupt and explained it in the following manner: “The sudden and dramatic nature of these reorganizations is perhaps a result of the fact that our image is in itself resistant to change. When it receives messages which conflict with it, its first impulse is to reject them as in some sense untrue….As we continue to receive messages which contradict our image, however, we begin to have doubts, and then one day we receive a message which overthrows our previous image and we revise it completely.” He also recognises that this resistance is not “irrational” but merely a logical response to uncertainty in an “imperfect” market. “The buyer or seller in an imperfect market drives on a mountain highway where he cannot see more than a few feet around each curve; he drives it, moreover, in a dense fog. There is little wonder, therefore, that he tends not to drive it at all but to stay where he is. The well-known stability or stickiness of prices in imperfect markets may have much more to do with the uncertain nature of the image involved than with any ideal of maximizing behavior.”

Loasby describes the key principles of this framework as follows: “The first principle is that all action is decided in the space of representations. These representations include, for example, neural networks formed in the brain by processes which are outside our conscious control…None are direct copies of reality; all truncate complexity and suppress uncertainty……The second principle of this inquiry is that viable processes must operate within viable boundaries; in human affairs these boundaries limit our attention and our procedures to what is manageable without, we hope, being disastrously misleading – though no guarantees are available……The third principle is that these frameworks are useless unless they persist, even when they do not fit very well. Hahn’s definition of equilibrium as a situation in which the messages received by agents do not cause them to change the theories that they hold or the policies that they pursue offers a useful framework for the analysis both of individual behaviour and of the co-ordination of economic activity across a variety of circumstances precisely because it is not to be expected that theories and policies will be readily changed just because some evidence does not appear readily compatible with them.” (For a more detailed account, read Chapter 3 ‘Cognition and Institutions’ of the aforementioned book or his papers here and here.)

The above principles are similar to Ronald Heiner’s assertion that actions chosen under true uncertainty must satisfy a ‘reliability condition’. This framework also accounts for the existence of the stability-resilience trade-off. In Loasby’s words: “If behaviour is a selected adaptation and not a specific application of a general logic of choice, then the introduction of substantial novelty – a change not of weather but of climate – is liable to be severely disruptive, as Schumpeter also insisted. In biological systems it can lead to the extinction of species, sometimes on a very large scale.” Extended periods of stability narrow the scope of events that fit the script and correspondingly broaden the scope of events that appear anomalous and novel. When the inevitable anomalous event comes along, we either adapt too slowly or, in extreme cases, not at all.

Written by Ashwin Parameswaran

April 11th, 2010 at 7:51 am

Employee Whistle-blowers as an Effective Mechanism to Uncover Fraud

with 5 comments

One of the more predictable discoveries in Anton Valukas’ report on Lehman was the fate of the lone employee whistleblower and the reaction of the audit firm to the whistleblower’s allegations. Much ink has been spilt on improving the regulatory framework to avoid another Lehman (see for example TED). Improving the incentives for employee whistleblowers to come forward is an important regulatory imperative that has not received the attention it deserves. All whistleblowers, not just employees, play a key role in uncovering fraud in corporations. The bulk of this post is derived from the excellent work done in this regard by Dyck, Morse and Zingales (henceforth DMZ) and by Bowen, Call and Rajgopal.

Compared to other whistleblowers, employees have the best access to the information required to uncover fraud. They also possess the knowledge to analyse and parse the information for any signs of fraud. This is especially important in a field such as banking where outsiders rarely possess the knowledge to uncover fraud even when they possess the raw information – a key reason why the media is so ineffective in uncovering banking fraud compared to its role in other industries which DMZ highlight.

One might ask why auditors are so ineffective in uncovering fraud despite possessing the relevant information. One reason is the aforementioned lack of knowledge required to uncover fraud in complex situations. But a more crucial reason is that auditors are incentivised to ignore fraud. In DMZ’s words: “we find a clear cost for auditors who blow the whistle. The auditor of a company involved with fraud is more likely to lose the client if he blows the whistle than if he does not, while there is no significant evidence that bringing the fraud to light pays him off in terms of a greater number of accounts.”

So what prevents more employee whistleblowers from coming forward? As DMZ note, many whistleblowers prefer to remain anonymous because: “In spite of being selected cases (for which the expected benefit of revealing should exceed the expected cost), we find that in 82 percent of cases, the whistleblower was fired, quit under duress, or had significantly altered responsibilities. In addition, many employee whistleblowers report having to move to another industry and often to another town to escape personal harassment. The lawyer of James Bingham, a whistleblower in the Xerox case, sums up Jim’s situation as: “Jim had a great career, but he’ll never get a job in Corporate America again.”….. consequences to being the whistleblower include distancing and retaliation from fellow workers and friends, personal attacks on one’s character during the course of a protracted dispute, and the need to change one’s career. Not only is the honest behavior not rewarded by the market, but it is penalized.” In other words, employers prefer loyal employees to honest ones, just as they prefer loyal auditors to honest auditors.

Sarbanes-Oxley contained many provisions aimed at protecting whistleblowers. Quoting from Bowen, Call and Rajgopal: “In response to Enron, WorldCom and other scandals, Congress passed the Sarbanes-Oxley Act (SOX) in July 2002, which in part made it unlawful for companies to take negative action against employees who disclose “questionable accounting or auditing matters.” (See SOX section 806, codified as title 15 U.S.C., § 78f(m)(4).) Under the whistleblower provisions of SOX, employees who disclose improper financial practices receive greater protection from discrimination. (See title 18 U.S.C., § 1514A(a)(1).) SOX also ruled that every company quoted on a U.S. Stock Exchange must set up a hotline enabling whistle-blowers to report anonymously (Economist 2006).” DMZ offer many possible explanations for why these provisions have not succeeded: “One possible explanation is that rules which strengthen the protection of the whistleblowers’ current jobs offer only a small reward relative to the extensive ostracism whistleblowers face. Additionally, just because jobs are protected does not mean that career advancements in the firm are not impacted by whistle blowing. Another explanation could be that job protection is of no use if the firm goes bankrupt after the revelation of fraud.”

So what else can be done to encourage employees to come forward? Unsurprisingly, DMZ find that monetary incentives have a role to play, and I agree. Employee whistleblowers play a more significant role in industries such as healthcare where “qui tam” suits are available. I would assert that monetary incentives have an even stronger role to play in uncovering fraud in banking. The extremely high lifetime pay expected over the course of a banking career, combined with the almost certainly career-ending implications of becoming a whistleblower, means that any employee will think twice before pulling the trigger. Moreover, the extremely specialised nature of the industry means that many senior bankers have very few alternative industries to move to.

The focus of SOX on making it harder to fire whistleblowers is misguided as well as ineffective. The aim should be not to keep whistleblowers from losing their jobs but to compensate them sufficiently that they never have to work again. As it happens, the scale of fraud in financial institutions means that this may be achieved without even spending taxpayer money. The whistleblower could be allowed to claim a small percentage of the monetary value of the fraud prevented from the institution itself, which should be more than sufficient for the purpose.

The obvious objection to my proposal is that it will lead to a surge in frivolous claims from disgruntled employees. But the monetary reward depends on fraud being proven in a court of law, and the likely career-ending nature of becoming a whistleblower should be enough to deter frivolous allegations. Indeed, DMZ find that the percentage of frivolous lawsuits is lower in the healthcare industry where “qui tam” suits are available.

As DMZ point out, “the idea of extending the qui tam statute to corporate frauds (i.e. providing a financial award to those who bring forward information about a corporate fraud) is very much in the Hayekian spirit of sharpening the incentives of those who are endowed with information.” This is even more crucial for uncovering fraud in a complex industry such as banking, where even qualified outsiders may struggle to put the pieces together and informed insiders face steep deterrents to rocking the boat.

Written by Ashwin Parameswaran

March 17th, 2010 at 5:13 pm

Posted in Financial Crisis

Natural Selection, Self-Deception and the Moral Hazard Explanation of the Financial Crisis

with 15 comments

Moral Hazard and Agent Intentionality

A common objection to the moral hazard explanation of the financial crisis is the following: Bankers did not explicitly factor in the possibility of being bailed out. In fact, they genuinely believed that their firms could not possibly collapse under any circumstances. For example, Megan McArdle says: “I went to business school with these people, and talked to them when they were at the banks, and the operating assumption was not that they could always get the government to bail them out if something went wrong. The operating assumption was that they had gotten a whole lot smarter, and would not require a bailout.” And Jeffrey Friedman has this to say about the actions of Ralph Cioffi and Matthew Tannin, the managers of the Bear Stearns fund whose collapse was the canary in the coal mine for the crisis: “These are not the words, nor were Tannin and Cioffi’s actions the behavior, of people who had deliberately taken what they knew to be excessive risks. If Tannin and Cioffi were guilty of anything, it was the mistake of believing the triple-A ratings.”

This objection errs in assuming that the moral hazard problem requires an explicit intention on the part of economic agents to take on more risk and maximise the free lunch available courtesy of the taxpayer. The essential idea which I outlined at the end of this post is as follows: The current regime of explicit and implicit bank creditor protection and regulatory capital requirements means that a highly levered balance sheet invested in “safe” assets with severely negatively skewed payoffs is the optimal strategy to maximise the moral hazard free lunch. Reaching this optimum does not require explicit intentionality on the part of economic actors. The same may be achieved via a Hayekian spontaneous order of agents reacting to local incentives or even more generally through “natural selection”-like mechanisms.

Let us analyse the “natural selection” argument a little further. If we assume that there is a sufficient diversity of balance-sheet strategies being followed by various bank CEOs, those CEOs who follow the above-mentioned strategy of high leverage and assets with severely negatively skewed payoffs will be “selected” by their shareholders over other competing CEOs. As I have explained in more detail in this post, the cheap leverage afforded by the creditor guarantee means that this strategy can be levered up to achieve extremely high rates of return. Even better, the assets will most likely not suffer any loss in the extended stable period before a financial crisis. The principal, in this case the bank shareholder, will most likely mistake the returns for genuine alpha rather than recognising the severe blowup risk that the strategy truly represents. The same analysis applies to all levels of the principal-agent relationship in banks where an asymmetric information problem exists.
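A minimal Monte Carlo sketch of this point, with entirely hypothetical parameters of my own choosing (carry, crash probability and size, leverage, funding cost): a “safe” asset that earns a small carry almost every year but occasionally suffers a large loss, held on a highly levered balance sheet funded at the cheap, guaranteed rate. Over most ten-year evaluation windows the strategy compounds at roughly twenty per cent a year and looks like genuine alpha; the remaining windows end in a total wipeout borne by creditors and, given the guarantee, the taxpayer.

```python
import random

random.seed(1)

# Illustrative assumptions, not estimates from the post.
CARRY      = 0.02    # annual return of the "safe" negatively skewed asset
CRASH_PROB = 0.02    # annual probability of the rare bad state
CRASH_LOSS = -0.30   # asset return in the bad state
LEVERAGE   = 10      # balance-sheet leverage
FUNDING    = 0.00    # excess funding cost; ~0 because creditors are protected
YEARS      = 10      # evaluation window of the principal (shareholder)
TRIALS     = 10_000

blowups = 0
terminal_equity = []
for _ in range(TRIALS):
    equity = 1.0
    for _ in range(YEARS):
        asset_ret = CRASH_LOSS if random.random() < CRASH_PROB else CARRY
        equity *= 1 + LEVERAGE * asset_ret - (LEVERAGE - 1) * FUNDING
        if equity <= 0:      # wiped out: creditors (and the guarantee) absorb the rest
            equity = 0.0
            break
    if equity == 0.0:
        blowups += 1
    terminal_equity.append(equity)

survivors = [e for e in terminal_equity if e > 0]
print(f"Ten-year windows ending in wipeout: {blowups / TRIALS:.1%}")
print(f"Average terminal equity of surviving paths (per 1 invested): "
      f"{sum(survivors) / len(survivors):.2f}")
```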

Self-Deception and Natural Selection

But this argument still leaves one empirical question unanswered – given that such a free lunch is on offer, why don’t we see more examples of active and intentional exploitation of the moral hazard subsidy? In other words, why do most bankers seem to be true believers like Tannin and Cioffi? To answer this question, we need to take the natural selection analogy a little further. In the evolutionary race between true believers and knowing deceivers, who wins? The work of Robert Trivers on the evolutionary biology of self-deception tells us that the true believer has a significant advantage in this contest.

Trivers’ work is well summarised by Ramachandran: “According to Trivers, there are many occasions when a person needs to deceive someone else. Unfortunately, it is difficult to do this convincingly since one usually gives the lie away through subtle cues, such as facial expressions and tone of voice. Trivers proposed, therefore, that maybe the best way to lie to others is to first lie to yourself. Self-deception, according to Trivers, may have evolved specifically for this purpose, i.e. you lie to yourself in order to enable you to more effectively deceive others.” Or as Conor Oberst put it more succinctly here: “I am the first one I deceive. If I can make myself believe, the rest is easy.” Trivers’ work is not as relevant for the true believers as it is for the knowing deceivers. It shows that active deception is an extremely hard task to pull off especially when attempted in competition with a true believer who is operating with the same strategy as the deceiver.

Between a CEO who is consciously trying to maximise the free lunch and a CEO who genuinely believes that a highly levered balance sheet of “safe” assets is the best strategy, who is likely to be more convincing to his shareholders and regulator? Bob Trivers’ work shows that it is the latter. Bankers who drink their own Kool-Aid are more likely to convince their bosses, shareholders or regulators that there is nothing to worry about. Given a sufficiently strong selective mechanism such as the principal-agent relationship, it is inevitable that such bankers would end up being the norm rather than the exception. The real deviation from the moral hazard explanation would be if it were any other way!

There is another question which, although not necessary for the above analysis to hold, is still intriguing: How and why do people transform into true believers? Of course we can assume a purely selective environment where a small population of true believers merely outcompetes the rest. But we can do better. There is ample evidence from many fields of study that we tend to cling to our beliefs even in the face of contradictory pieces of information. Only after the anomalous information crosses a significant threshold do we revise our beliefs. For a neurological explanation of this phenomenon, the aforementioned paper by V.S. Ramachandran analyses how and why patients with right-hemisphere strokes vehemently deny their paralysis with the aid of numerous self-deceiving defence mechanisms.

Jeffrey Friedman’s analysis of how Cioffi and Tannin clung to their beliefs in the face of mounting evidence to the contrary until the “threshold” was cleared and they finally threw in the towel is a perfect example of this phenomenon. In Ramachandran’s words, “At any given moment in our waking lives, our brains are flooded with a bewildering variety of sensory inputs, all of which have to be incorporated into a coherent perspective based on what stored memories already tell us is true about ourselves and the world. In order to act, the brain must have some way of selecting from this superabundance of detail and ordering it into a consistent ‘belief system’, a story that makes sense of the available evidence. When something doesn’t quite fit the script, however, you very rarely tear up the entire story and start from scratch. What you do, instead, is to deny or confabulate in order to make the information fit the big picture. Far from being maladaptive, such everyday defense mechanisms keep the brain from being hounded into directionless indecision by the ‘combinational explosion’ of possible stories that might be written from the material available to the senses.” However, once a threshold is passed, the brain finds a way to revise the model completely. Ramachandran’s analysis also provides a neurological explanation for Thomas Kuhn’s phases of science where the “normal” period is overturned once anomalies accumulate beyond a threshold. It also provides further backing for the thesis that we follow simple rules and heuristics in the face of significant uncertainty, which I discussed here.

Fix The System, Don’t Blame the Individuals

The “selection” argument provides the rationale for how the extraction of the moral hazard subsidy can be maximised despite the lack of any active deception on the part of economic agents. Therefore, as I have asserted before, we need to fix the system rather than blaming the individuals. This does not mean that we should not pursue those guilty of fraud. But merely pursuing instances of fraud without fixing the incentive system will get us nowhere.

Written by Ashwin Parameswaran

February 17th, 2010 at 10:30 am

Moral Hazard: A Wide Definition

with 19 comments

A common objection to the moral hazard explanation of the financial crisis runs as follows: No banker explicitly factored the possibility of a bailout into his decision-making process.

The obvious answer to this objection is the one Andrew Haldane noted:

“There was a much simpler explanation according to one of those present. There was absolutely no incentive for individuals or teams to run severe stress tests and show these to management. First, because if there were such a severe shock, they would very likely lose their bonus and possibly their jobs. Second, because in that event the authorities would have to step-in anyway to save a bank and others suffering a similar plight.

All of the other assembled bankers began subjecting their shoes to intense scrutiny. The unspoken words had been spoken. The officials in the room were aghast. Did banks not understand that the official sector would not underwrite banks mismanaging their risks?

Yet history now tells us that the unnamed banker was spot-on. His was a brilliant articulation of the internal and external incentive problem within banks. When the big one came, his bonus went and the government duly rode to the rescue. The time-consistency problem, and its associated negative consequences for risk management, was real ahead of crisis. Events since will have done nothing to lessen this problem, as successively larger waves of institutions have been supported by the authorities.”

Bankers did not consciously take on more risk. They took on less protection against risk, particularly extreme event risk.

But this too is an unnecessarily limited definition of moral hazard. Moral hazard can persist without any explicit intention on the part of the agent to behave differently.

Spontaneous Order

It is not at all necessary that each economic agent is consciously aware of and is trying to maximise the value of the moral hazard subsidy. A system that exploits the subsidy efficiently can arise by each agent merely adapting to and reacting to the local incentives and information put in front of him. For example, the CEO is under pressure to improve return on equity and increases leverage at the firm level. Individual departments of the bank may be extended cheap internal funding and told to hit aggressive profitability targets without using capital. And so on and so forth. It is not at all necessary that each individual trader in the bank is aware of or working towards a common goal.

Nevertheless, the system adapts in a manner as if it was consciously directed towards the goal of maximising the subsidy. In other words, a Hayekian spontaneous order could achieve the same result as a constructed order.
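As a toy illustration of the point – all targets and numbers below are my own assumptions, not drawn from any actual bank – each agent responds only to the local target handed down to it (the shareholders’ return-on-equity demand, the desk’s profitability target on zero allocated capital), and the subsidy-maximising balance sheet falls out without anyone aiming at it.

```python
# Toy illustration of "spontaneous order": no agent aims at the moral hazard
# subsidy; each simply reacts to a local target handed down from above.
# All targets and numbers are illustrative assumptions.

ROE_TARGET = 0.20          # shareholders' return-on-equity demand on the CEO
ASSET_RETURN = 0.02        # return on the "safe" assets available to the desks
GUARANTEED_FUNDING = 0.00  # excess cost of debt; ~0 because creditors are protected

# CEO's local decision: choose the leverage needed to hit the ROE target,
# using ROE ~= leverage * asset_return - (leverage - 1) * funding_cost.
leverage = (ROE_TARGET - GUARANTEED_FUNDING) / (ASSET_RETURN - GUARANTEED_FUNDING)

# Desk's local decision: with cheap internal funding and a "no capital usage"
# profit target, pile into whatever carries a positive spread, i.e. more of
# the same negatively skewed assets.
desk_position = "maximum permitted size in highest-carry 'safe' assets"

print(f"Leverage implied by the CEO's local ROE target: {leverage:.0f}x")
print(f"Desk behaviour implied by its local target: {desk_position}")
```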

Natural Selection

The system can also move towards a moral hazard outcome without even partial intent or adaptation by economic agents given a sufficiently diverse agent strategy pool, a stable environment and some selection mechanism. This argument is similar to Armen Alchian’s famous paper arguing for the natural selection of profit-maximising firms.

The obvious selection mechanism in banking is the principal-agent relationship at all levels, i.e. shareholders can fire CEOs, CEOs can fire managers, managers can fire traders etc. If we start out with a diverse pool of economic agents pursuing different strategies, only one of which is a high-leverage, bet-the-house strategy, sooner or later this strategy will outcompete and dominate all other strategies (provided that the environment is stable).

In the context of Andrew Haldane’s comment on banks’ neglect of risk management, banks that invested in risk insurance would have systematically underperformed their peer group during the boom. Any CEO who elected to operate with low leverage would have been fired long before the crisis hit.
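A hedged toy simulation of this selection mechanism (made-up parameters, not anything from Alchian or Haldane): start with a diverse pool of leverage strategies, let shareholders fire the worst relative performer each benign year and copy the best, and the highest-leverage strategy takes over the pool.

```python
import random
from collections import Counter

random.seed(7)

# Toy model: each "bank" follows a fixed leverage strategy. In a benign year
# every asset earns a small positive return, so higher leverage means higher
# return on equity. Shareholders replace the worst performer's CEO with a
# copy of the best performer's strategy (the selection step). Parameters are
# illustrative assumptions.
STRATEGIES = [2, 5, 10, 20]   # leverage multiples in the initial pool
N_BANKS = 40
YEARS = 40                    # a long stable period with no tail event

banks = [random.choice(STRATEGIES) for _ in range(N_BANKS)]

for _ in range(YEARS):
    # In a benign year ROE is simply leverage * carry, so the lowest-leverage
    # bank is always the worst relative performer.
    worst = min(banks)
    best = max(banks)
    banks[banks.index(worst)] = best   # the low-leverage CEO is fired

print(Counter(banks))   # the highest-leverage strategy dominates the pool
```

Append a single tail-event year to the simulation and it is precisely the now-dominant strategy that blows up: the selection pressure of the stable period manufactures the fragility that the crisis reveals.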

To summarise, moral hazard outcomes can and indeed did drive the financial crisis through a variety of channels: explicit agent intentionality, adaptation of agents to local incentives or merely market pressures weeding out those firms/agents that refuse to maximise the moral hazard free lunch.

Written by Ashwin Parameswaran

January 1st, 2010 at 8:30 pm

Efficient Markets and Pattern Predictions

with 4 comments

Markets can be “inefficient” and yet almost impossible to beat because of the existence of “Limits to Arbitrage”. It is essential not only to have the correct view but also to know when that view will be realised.

Why is it so difficult to time the market? Because the market is a complex adaptive system and complex adaptive systems are amenable only to what Hayek called “pattern predictions”. Hayek introduced this concept in his essay “The Theory of Complex Phenomena” where he analysed economic and other social phenomena as “phenomena of organised complexity” (A term introduced by Warren Weaver in this essay).

In such phenomena, according to Hayek, only pattern predictions are possible about the social structure as a whole, as he explained in an interview with Leo Rosten:

“We can build up beautiful theories which would explain everything, if we could fit into the blanks of the formulae the specific information; but we never have all the specific information. Therefore, all we can explain is what I like to call “pattern prediction.” You can predict what sort of pattern will form itself, but the specific manifestation of it depends on the number of specific data, which you can never completely ascertain. Therefore, in that intermediate field — intermediate between the fields where you can ascertain all the data and the fields where you can substitute probabilities for the data–you are very limited in your predictive capacities.”

“Our capacity of prediction in a scientific sense is very seriously limited. We must put up with this. We can only understand the principle on which things operate, but these explanations of the principle, as I sometimes call them, do not enable us to make specific predictions on what will happen tomorrow.”

Hayek was adamant however that theories of pattern prediction were useful and scientific and had “empirical significance”. The example he drew upon was the Darwinian theory of evolution by natural selection, which provided only predictions as to the patterns one could observe over evolutionary time at levels of analysis above the individual entity.

Hayek’s intention with his theory was to debunk the utility of statistics and econometrics in forecasting macroeconomic outcomes (see his Nobel lecture). The current neoclassical defense against the charge of failing to predict the crisis takes the opposite extreme position, i.e. our theories are right because no one could have predicted the crisis. This contention explicitly denies the possibility of “pattern predictions” and is not a valid defense. Any macroeconomic theory should be capable of explaining the patterns of our economic system – no more, no less.

One of the key reasons why timing and exact prediction is so difficult is the futility of conventional cause-effect thinking in complex adaptive systems. As Michael Mauboussin observed, “Cause and effect thinking is futile, if not dangerous”. The underlying causes may be far removed from the effect, both in time and in space, and the proximate cause may only be the “straw that broke the camel’s back”.
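One way to see what a “pattern prediction” is, as opposed to a point forecast, is to play with a toy complex system. The sketch below uses the classic Bak-Tang-Wiesenfeld sandpile – my choice of illustration, not an example from the post: the distribution of avalanche sizes (the pattern) is reliably heavy-tailed and therefore predictable, but the timing of the next large avalanche and the particular grain that triggers it are not.

```python
import random
from collections import Counter

random.seed(0)

# Bak-Tang-Wiesenfeld sandpile on a small grid: drop grains one at a time;
# any cell holding 4 or more grains topples, sending one grain to each
# neighbour (grains falling off the edge are lost). The size of each
# avalanche is the number of topplings it causes.
N = 20
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Drop one grain at a random site, relax the pile, return avalanche size."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    topplings = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        topplings += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < N and 0 <= ny < N:
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return topplings

sizes = [drop_grain() for _ in range(50_000)]

# The "pattern": a heavy-tailed distribution of avalanche sizes, bucketed by
# order of magnitude.
buckets = Counter()
for s in sizes:
    if s == 0:
        continue
    bucket = 1
    while bucket * 10 <= s:
        bucket *= 10
    buckets[bucket] += 1
for b in sorted(buckets):
    print(f"avalanches of size {b}-{b * 10 - 1}: {buckets[b]}")
```

The analogy is loose, but it captures Hayek’s point: the distributional pattern is a legitimate, testable prediction even though the timing and proximate trigger of any particular collapse are not.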

Many excellent examples of “pattern prediction” can be seen in ecology. For example, the proximate cause of the catastrophic degradation of Jamaica’s coral reefs since the 1980s was the mass mortality of the dominant species of urchin (reference). However, the underlying cause was the progressive loss of diversity due to overfishing since the 1950s.

As CS Holling observed in his analysis of a similar collapse in fisheries in the Great Lakes:

“Whatever the specific causes, it is clear that the precondition for the collapse was set by the harvesting of fish, even though during a long period there were no obvious signs of problems. The fishing activity, however, progressively reduced the resilience of the system so that when the inevitable unexpected event occurred, the populations collapsed. If it had not been the lamprey, it would have been something else: a change in climate as part of the normal pattern of fluctuation, a change in the chemical or physical environment, or a change in competitors or predators.”

The financial crisis of 2008-2009 can be analysed as the inevitable result of a progressive loss of system resilience. Whether the underlying cause was a buildup of debt, moral hazard or monetary policy errors is a different debate and can only be analysed by looking at the empirical evidence. However, just as is the case in ecology, the inability to predict the time of collapse or even the proximate cause of collapse does not equate to an inability to explain macroeconomic patterns.

Written by Ashwin Parameswaran

December 31st, 2009 at 10:52 am