Archive for the ‘Evolutionary Economics’ Category

A “Systems” Explanation of How Bailouts can Cause Business Cycles


In a previous post, I quoted Richard Fisher’s views on how bailouts cause business cycles and financial crises: “The system has become slanted not only toward bigness but also high risk…..if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult…..It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.”

Fisher utilises the “incentives” argument, but the same case can also be made in the language of natural selection, and Hannan and Freeman did exactly that in the seminal paper that launched the field of “Organizational Ecology”. They wrote the following in the context of the bailout of Lockheed in 1971, but it is as relevant today as it has ever been: “we must consider what one anonymous reader, caught up in the spirit of our paper, called the anti-eugenic actions of the state in saving firms such as Lockheed from failure. This is a dramatic instance of the way in which large dominant organizations can create linkages with other large and powerful ones so as to reduce selection pressures. If such moves are effective, they alter the pattern of selection. In our view, the selection pressure is bumped up to a higher level. So instead of individual organizations failing, entire networks fail. The general consequence of a large number of linkages of this sort is an increase in the instability of the entire system and therefore we should see boom and bust cycles of organizational outcomes.”

Written by Ashwin Parameswaran

June 8th, 2010 at 3:45 pm

Richard Fisher of the Dallas Fed on Financial Reform


Richard Fisher of the Dallas Fed delivered a speech last week (h/t Zerohedge) on the topic of financial reform, which contained some of the most brutally honest analysis of the problem at hand that I have seen from anyone at the Fed. It also made a few points that I felt deserved further analysis and elaboration.

The Dynamics of the TBTF Problem

In Fisher’s words: “Big banks that took on high risks and generated unsustainable losses received a public benefit: TBTF support. As a result, more conservative banks were denied the market share that would have been theirs if mismanaged big banks had been allowed to go out of business. In essence, conservative banks faced publicly backed competition…..It is my view that, by propping up deeply troubled big banks, authorities have eroded market discipline in the financial system.

The system has become slanted not only toward bigness but also high risk…..if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult…..

It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.”

Fisher correctly notes that TBTF support damages system resilience not only by encouraging higher leverage amongst large banks, but also by disadvantaging conservative banks that would otherwise have gained market share during the crisis. As I have noted many times on this blog, the dynamic, evolutionary view of moral hazard focuses not only on the protection provided to destabilising positive-feedback forces, but also on how the stabilising negative-feedback forces that might have flourished in the absence of intervention are selected against and progressively weeded out of the system.

Regulatory Discretion and the Time Consistency Problem

Fisher: “Language that includes a desire to minimize moral hazard—and directs the FDIC as receiver to consider “the potential for serious adverse effects”—provides wiggle room to perpetuate TBTF.” Fisher notes that it is difficult to credibly commit ex-ante not to bail out TBTF creditors – as long as regulators retain any discretion aimed at maintaining systemic stability, they will be tempted to use it.

On the Ineffectiveness of Regulation Alone

Fisher: “While it is certainly true that ineffective regulation of systemically important institutions—like big commercial banking companies—contributed to the crisis, I find it highly unlikely that such institutions can be effectively regulated, even after reform…Simple regulatory changes in most cases represent a too-late attempt to catch up with the tricks of the regulated—the trickiest of whom tend to be large. In the U.S. financial system, what passed as “innovation” was in large part circumvention, as financial engineers invented ways to get around the rules of the road. There is little evidence that new regulations, involving capital and liquidity rules, could ever contain the circumvention instinct.”

This is a sentiment I don’t often hear expressed by a regulator – as I have argued before on this blog, regulations alone just don’t work. The history of banking is one of repeated circumvention of regulations by banks, a process that has only accelerated with the increased completeness of markets. The question is not whether deregulation accelerated the process of banks maximising the moral hazard subsidy – it almost certainly did, and this was understood even by the Fed as early as 1983, when John Kareken noted that “Deregulation Is the Cart, Not the Horse”. The question is whether re-regulation has any chance of succeeding without fixing the incentives guiding the actors in the system – it does not.

Bailouts Come in Many Shapes and Sizes

Fisher: “Even if an effective resolution regime can be written down, chances are it might not be used. There are myriad ways for regulators to forbear. Accounting forbearance, for example, could artificially boost regulatory capital levels at troubled big banks. Special liquidity facilities could provide funding relief. In this and similar manners, crisis-related events that might trigger the need for resolution could be avoided, making resolution a moot issue.”

A watertight resolution regime may only encourage regulators to aggressively utilise other forbearance mechanisms. Fisher mentions accounting and liquidity relief but fails to mention the most important “alternative bailout mechanism” – the “Greenspan Put” variant of monetary policy.

Preventing Systemic Risk Perpetuates the Too-Big-To-Fail Problem

Fisher: “Consider the idea of limiting any and all financial support strictly to the system as a whole, thus preventing any one firm from receiving individual assistance….If authorities wanted to support a big bank in trouble, they would need only institute a systemwide program. Big banks could then avail themselves of the program, even if nobody else needed it. Systemwide programs are unfortunately a perfect back door through which to channel big bank bailouts.”

“System-wide” programs by definition get activated only when big banks and non-banking financial institutions such as GE Capital are in trouble. Apart from perpetuating TBTF, they encourage smaller banks to mimic big banks and take on similar tail risk, thus reducing system diversity.

Shrink the TBTF Banks?

Fisher clearly prefers that the big banks be shrunk as a “second-best” solution to the incentive problems that both regulators and banks face in our current system. Although I’m not convinced that shrinking the banks is a sufficient response, even a “free market” solution to the crisis will almost certainly imply a more dispersed banking sector, due to the removal of the TBTF subsidy. The crux of the problem is not size but insufficient diversity. Fisher argues “there is considerable diversity in strategy and performance among banks that are not TBTF.” This is the strongest and possibly even the only valid argument for breaking up the big banks. My concern is that even a more dispersed banking sector will evolve towards a tightly coupled and homogeneous outcome due to the protection against systemic risk provided by the “alternative bailout mechanisms”, particularly the Greenspan Put.

The fact that Richard Fisher’s comments echo themes popular with both left-wing and right-wing commentators is not a coincidence. In the fitness landscape of our financial system, our current choice is not so much a local peak as a deep valley – tinkering will get us nowhere and a significant move either to the left or to the right is likely to be an improvement.

Written by Ashwin Parameswaran

June 6th, 2010 at 1:30 pm

Organisational Rigidity, Crony Capitalism, Too-Big-To-Fail and Macro-Resilience


In a previous post, I outlined why cognitive rigidity is not necessarily irrational even though it may lead to a loss of resilience. However, if the universe of agent strategies is sufficiently diverse, a macro-system comprising fragile, inflexible agents can be incredibly resilient, as the sketch below illustrates. So a simple analysis of micro-fragility does not enable us to reach any definitive conclusions about macro-resilience – organisations and economies may retain significant resilience and an ability to cope with novelty despite the fragility of their component agents.
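To make the micro-macro distinction concrete, here is a minimal toy simulation (my own illustration with made-up parameters, not drawn from Hannan and Freeman). Each agent is assumed to be a rigid specialist that copes with only one of many possible environmental states; the macro-system is deemed to function whenever a small fraction of agents cope with whichever state actually occurs.

```python
# Toy illustration (my own sketch): a population of rigid, individually fragile
# specialists can still be macro-resilient if the mix of strategies is diverse.
# Each agent copes with exactly one environmental state; the system "functions"
# if at least 5% of agents cope with the state that occurs.

import random

N_AGENTS = 100
N_STATES = 10        # number of possible environmental states
N_PERIODS = 1000
MIN_COPING = 0.05    # fraction of coping agents needed for the system to function

def fraction_of_periods_functioning(population):
    functioning = 0
    for _ in range(N_PERIODS):
        state = random.randrange(N_STATES)                     # this period's shock
        coping = sum(1 for strategy in population if strategy == state)
        if coping >= MIN_COPING * len(population):
            functioning += 1
    return functioning / N_PERIODS

random.seed(1)
diverse = [i % N_STATES for i in range(N_AGENTS)]        # specialists spread evenly
homogeneous = [0] * N_AGENTS                              # everyone holds one strategy

print("diverse population functions in    ", fraction_of_periods_functioning(diverse))
print("homogeneous population functions in", fraction_of_periods_functioning(homogeneous))
# Every agent is fragile (it copes with 1 state in 10), yet the diverse system
# functions every period while the homogeneous one functions only ~10% of the time.
```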

Yet there is significant evidence that organisations exhibit rigidity, and although some of this rigidity can be perceived as irrational or perverse, much of it arises as a rational response to uncertainty. In Hannan and Freeman’s work on “Organizational Ecology”, the presence of significant organisational rigidity is the basis of a selection-based rather than an adaptation-based explanation of organisational diversity. There are many factors driving organisational inertia, some of which are summarised in this paper by Hannan and Freeman. These include internal considerations such as sunk costs, informational constraints and political constraints, as well as external considerations such as barriers to entry and exit. In a later paper, Hannan and Freeman also justify organisational inertia as a means to an end, the end being “reliability”. Just as in Ronald Heiner’s and V.S. Ramachandran’s frameworks discussed previously, inertia is a perfectly logical response to an uncertain environment.

Hannan and Freeman also hypothesise that older and larger organizations are more structurally inert and less capable of adapting to novel situations. In his book “Dynamic Economics”, Burton Klein analysed the historical record and found that advances that “resulted in new S-shaped curves in relatively static industries” do not come from the established players in an industry. In an excellent post, Sean Park summarises exactly why large organizations find it so difficult to innovate and also points to the pre-eminent reference in the management literature on this topic – Clayton Christensen’s “The Innovator’s Dilemma”. Christensen’s work is particularly relevant as it elaborates how established firms can fail not because of any obvious weaknesses, but as a direct consequence of their focus on core clients’ demands.

The inability of older and larger firms to innovate and adapt to novelty can be understood within the framework of the exploration-exploitation tradeoff as an inability to “explore” in an effective manner. As Levinthal and March put it, “past exploitation in a given domain makes future exploitation in the same domain even more efficient….As they develop greater and greater competence at a particular activity, they engage in that activity more, thus further increasing competence and the opportunity cost of exploration.” Exploration is also anathema to large organisations as it seems to imply a degree of managerial indecision. David Ellerman captures the essence of this thought process: “The organization’s experts will decide on the best experiment or approach—otherwise the organization would appear ‘not to know what it’s doing’.”

A crony capitalist economic system that protects the incumbent firms hampers the ability of the system to innovate and adapt to novelty. It is obvious how the implicit subsidy granted to our largest financial institutions via the Too-Big-To-Fail doctrine represents a transfer of wealth from the taxpayer to the financial sector. It is also obvious how the subsidy encourages a levered, homogeneous and therefore fragile financial sector that is susceptible to collapse. What is less obvious is the paralysis that it induces in the financial sector and, by extension, the macroeconomy long after the bailouts and the Minsky moment have passed.

We shouldn’t conflate this paralysis with an absence of competition between the incumbents – competition between them may even be intense enough to ensure that they retain only a small portion of the rents they fight so desperately to protect. What the paralysis does imply is a fierce and unified defence of the local peak that they compete for. This defence is directed not so much against new entrants who want to play the incumbents at their own game as against those who seek to change the rules of the game.

The best example of this is the OTC derivatives market, where the benefits of TBTF to the big banks are most evident. Bob Litan notes that clients “wanted the comfort of knowing that they were dealing with large, well-capitalized financial institutions” when dealing in CDS, and this observation holds for most other OTC derivative markets. He also correctly identifies that the crucial component of effective reform is removing the advantage that the “Derivative Dealers’ Club” currently possesses: “Systemic risk also would be reduced with true derivatives market reforms that would have the effect of removing the balance sheet advantage of the incumbent dealers now most likely regarded as TBTF. If end-users know that when their trades are completed with a clearinghouse, they are free to trade with any market maker – not just the specific dealer with whom they now customarily do business – that is willing to provide the right price, the resulting trades are more likely to be the end-users’ advantage. In short, in a reformed market, the incumbent dealers would face much greater competition.”

Innovation in the financial sector is also hampered by the outsized contribution the sector already makes to economic activity in the United States, which makes market-broadening innovations extremely unlikely. James Utterback identified how difficult it is for new entrants to displace incumbent players outright: “Innovations that broaden a market create room for new firms to start. Innovation-inspired substitutions may cause established firms to hang on all the more tenaciously, making it extremely difficult for an outsider to gain a foothold along with the cash flow needed to expand and become a player in the industry.” Of course, the incumbents may eventually break away from the local peak, but an extended period of stagnation is more likely.

Sustaining an environment conducive to the entry of new firms is critical to the maintenance of a resilient macroeconomy that is capable of innovating and dealing with novelty. The very least that financial sector reform must achieve is to eliminate the benefits of TBTF that currently make it all but impossible for a new entrant to challenge the status quo.

Written by Ashwin Parameswaran

May 2nd, 2010 at 3:48 pm

Micro-Foundations of a Resilience Approach to Macro-Economic Analysis


Before assessing whether a resilience approach is relevant to macro-economic analysis, we need to define resilience. Resilience is best defined as “the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks.”

The assertion that an ecosystem can lose resilience and become fragile is not controversial. To claim that the same can occur in social systems such as macro-economies is nowhere near as obvious, not least due to our ability to learn, forecast the future and adapt to changes in our environment. Any analysis of how social systems can lose resilience is open to the objection that a loss of resilience implies both systematic error on the part of economic actors in assessing economic conditions and an inability to adapt to the new reality. For example, one of the common objections to Minsky’s Financial Instability Hypothesis (FIH) is that it requires irrational behaviour on the part of economic actors. Rajiv Sethi’s post has a summary of this debate, with a notable objection coming from Bernanke’s paper on the subject, which insists that “Hyman Minsky and Charles Kindleberger have in several places argued for the inherent instability of the financial system, but in doing so have had to depart from the assumption of rational behavior.”

One response to this objection is “So What?” – the stability-resilience trade-off can indeed be explained within the Kahneman-Tversky framework. Another response, which I have invoked on this blog and Rajiv has also mentioned in a recent post, focuses on the pervasive principal-agent relationship in the financial economy. However, I am going to focus on a third, more broadly applicable rationale which utilises a “rationality” that incorporates Knightian uncertainty as the basis for the FIH. The existence of irreducible uncertainty is sufficient to justify an evolutionary approach for any social system, whether it be an organization or a macro-economy.

Cognitive Rigidity as a Rational Response to Uncertainty

Rajiv touches on the crux of the issue when he notes: “Selection of strategies necessarily implies selection of people, since individuals are not infinitely flexible with respect to the range of behavior that they can exhibit.” But is achieving infinite flexibility a worthwhile aim? The evidence suggests that it is not. In the face of true uncertainty, infinite flexibility is not only unrealistic given finite cognitive resources but also counterproductive, and may deliver results that are significantly inferior to a partially “rigid” framework. V.S. Ramachandran explains this brilliantly: “At any given moment in our waking lives, our brains are flooded with a bewildering variety of sensory inputs, all of which have to be incorporated into a coherent perspective based on what stored memories already tell us is true about ourselves and the world. In order to act, the brain must have some way of selecting from this superabundance of detail and ordering it into a consistent ‘belief system’, a story that makes sense of the available evidence. When something doesn’t quite fit the script, however, you very rarely tear up the entire story and start from scratch. What you do, instead, is to deny or confabulate in order to make the information fit the big picture. Far from being maladaptive, such everyday defense mechanisms keep the brain from being hounded into directionless indecision by the ‘combinational explosion’ of possible stories that might be written from the material available to the senses.”

This rigidity is far from maladaptive and appears irrational only when measured against a utopian definition of rational choice. Behavioural economics frequently commits the same error – as Brian Loasby notes: “It is common to find apparently irrational behaviour attributed to ‘framing effects’, as if ‘framing’ were a remediable distortion. But any action must be taken within a framework.” The notion that true rationality is less than completely flexible is not a new one – Ramachandran’s work provides the neurological basis for the idea of ‘rigidity as a rational response to uncertainty’. I have already discussed Ronald Heiner’s framework, which bears a striking resemblance to Ramachandran’s thesis, in a previous post:

“Think of an omniscient agent with literally no uncertainty in identifying the most preferred action under any conceivable condition, regardless of the complexity of the environment which he encounters. Intuitively, such an agent would benefit from maximum flexibility to use all potential information or to adjust to all environmental conditions, no matter how rare or subtle those conditions might be. But what if there is uncertainty because agents are unable to decipher all of the complexity of the environment? Will allowing complete flexibility still benefit the agents?

I believe the general answer to this question is negative: that when genuine uncertainty exists, allowing greater flexibility to react to more information or administer a more complex repertoire of actions will not necessarily enhance an agent’s performance.”

Brian Loasby gives an excellent account of ‘rationality under uncertainty’ and its evolutionary implications in this book, which traces hints of the idea running through the work of Adam Smith, Alfred Marshall, George Kelly’s ‘Personal Construct Theory’ and Hayek’s ‘Sensory Order’. But perhaps the clearest exposition of the idea was provided by Kenneth Boulding in his description of subjective human knowledge as an ‘Image’. Most external information either conforms so closely to the image that it is ignored or it adds to the image in a well-defined manner. But occasionally, we receive information that is at odds with our image. Boulding recognised that such change is usually abrupt and explained it in the following manner: “The sudden and dramatic nature of these reorganizations is perhaps a result of the fact that our image is in itself resistant to change. When it receives messages which conflict with it, its first impulse is to reject them as in some sense untrue….As we continue to receive messages which contradict our image, however, we begin to have doubts, and then one day we receive a message which overthrows our previous image and we revise it completely.” He also recognises that this resistance is not “irrational” but merely a logical response to uncertainty in an “imperfect” market. “The buyer or seller in an imperfect market drives on a mountain highway where he cannot see more than a few feet around each curve; he drives it, moreover, in a dense fog. There is little wonder, therefore, that he tends not to drive it at all but to stay where he is. The well-known stability or stickiness of prices in imperfect markets may have much more to do with the uncertain nature of the image involved than with any ideal of maximizing behavior.”

Loasby describes the key principles of this framework as follows: “The first principle is that all action is decided in the space of representations. These representations include, for example, neural networks formed in the brain by processes which are outside our conscious control…None are direct copies of reality; all truncate complexity and suppress uncertainty……The second principle of this inquiry is that viable processes must operate within viable boundaries; in human affairs these boundaries limit our attention and our procedures to what is manageable without, we hope, being disastrously misleading – though no guarantees are available……The third principle is that these frameworks are useless unless they persist, even when they do not fit very well. Hahn’s definition of equilibrium as a situation in which the messages received by agents do not cause them to change the theories that they hold or the policies that they pursue offers a useful framework for the analysis both of individual behaviour and of the co-ordination of economic activity across a variety of circumstances precisely because it is not to be expected that theories and policies will be readily changed just because some evidence does not appear readily compatible with them.” (For a more detailed account, read Chapter 3 ‘Cognition and Institutions’ of the aforementioned book or his papers here and here.)

The above principles are similar to Ronald Heiner’s assertion that actions chosen under true uncertainty must satisfy a ‘reliability condition’. This framework also accounts for the existence of the stability-resilience trade-off. In Loasby’s words: “If behaviour is a selected adaptation and not a specific application of a general logic of choice, then the introduction of substantial novelty – a change not of weather but of climate – is liable to be severely disruptive, as Schumpeter also insisted. In biological systems it can lead to the extinction of species, sometimes on a very large scale.” Extended periods of stability narrow the scope of events that fit the script and correspondingly broaden the scope of events that appear to be anomalous and novel. When the inevitable anomalous event comes along, we either adapt too slowly or, in extreme cases, not at all.

Written by Ashwin Parameswaran

April 11th, 2010 at 7:51 am

Notes on the Evolutionary Approach to the Moral Hazard Explanation of the Financial Crisis


In arguing the case for the moral hazard explanation of the financial crisis, I have frequently utilised evolutionary metaphors. This approach is not without controversy, and this post is a partial justification as well as an explication of the conditions under which such an approach is valid. In particular, the simple story of selective forces maximising the moral hazard subsidy that I have outlined depends upon the specific circumstances and facts of our current financial system.

The “Natural Selection” Analogy

One point of dispute is whether selective forces are relevant in economic systems. The argument against selection usually invokes the possibility of firms or investors surviving for long periods despite losses, i.e. bankruptcy is not a strong enough selective force. My arguments rely not on firm survival as the selective force but on the principal-agent relationships between investors and asset managers, between shareholders and CEOs etc. Selection kicks in well before the point of bankruptcy in the modern economy. In this respect, it is relevant to note the increased prevalence of shareholder activism over the last 25 years, which has strengthened this argument. Moreover, the natural selection argument serves as a more robust justification for the moral hazard story: it does not depend upon explicit agent intentionality, but is nevertheless strengthened by it.

The “Optimisation” Analogy

The argument that selective forces lead to optimisation is of course an old one, most famously put by Milton Friedman and Armen Alchian. However, evolutionary economic processes lead to optimisation only if some key assumptions are satisfied. A brief summary of the conditions under which an evolutionary process reproduces neoclassical outcomes can be found on pages 26-27 of this paper by Nelson and Winter. Below is a partial analysis of these conditions, with some examples relevant to the current crisis.

Diversity

Genetic diversity is the raw material upon which Darwinian natural selection operates. Similarly, to achieve anything close to an “optimal” outcome, the strategies available to economic agents must be sufficiently diverse. The “natural selection” explanation of the moral hazard problem, which I elaborated upon in my previous post, therefore depends upon the toolset of bank strategies being sufficiently varied. The toolset available to banks to exploit the moral hazard subsidy is primarily determined by two factors: technology/innovation and regulation. The development of new financial products via securitisation, tranching and, most importantly, synthetic issuances with a CDS rather than a bond as the underlying, which I discussed here, has significantly expanded this toolset.

Stability

The story of one optimal strategy outcompeting all others is also dependent on environmental conditions being stable. Quoting from Nelson and Winter: “If the analysis concerns a hypothetical static economy, where the underlying economic problem is standing still, it is reasonable to ask whether the dynamics of an evolutionary selection process can solve it in the long run. But if the economy is undergoing continuing exogenous change, and particularly if it is changing in unanticipated ways, then there really is no “long run” in a substantive sense. Rather, the selection process is always in a transient phase, groping toward its temporary target. In that case, we should expect to find firm behavior always maladapted to its current environment and in characteristic ways—for example, out of date because of learning and adjustment lags, or “unstable” because of ongoing experimentation and trial-and-error learning.”

This follows logically from the ‘Law of Competitive Exclusion’. In an environment free of disturbances, the diversity of competing strategies must fall dramatically as the optimal strategy outcompetes all others. In fact, disturbances are a key reason why competitive exclusion is rarely observed in ecosystems. When Evelyn Hutchinson examined the ‘Paradox of the Plankton’, one of the explanations he offered was the “permanent failure to achieve equilibrium”. Indeed, one of the most widely accepted explanations of the paradox is the ‘Intermediate Disturbance Hypothesis’, which implies that ecosystem diversity will be low when the environment is free of disturbances.
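The logic can be illustrated with a toy replicator-dynamics simulation (my own sketch with made-up numbers, not a model from Nelson and Winter or Hutchinson). Strategy shares grow in proportion to relative fitness; in an undisturbed environment the single best strategy crowds out all others, while recurring disturbances, modelled crudely here as a partial reset of shares that reopens the niche, keep several strategies alive.

```python
# Toy replicator-dynamics sketch (my own illustration): competitive exclusion
# erodes diversity in a stable environment, while recurring disturbances
# preserve it. The fitness values and the disturbance rule are assumptions.

import math

def shannon_diversity(shares):
    """Shannon entropy of strategy shares (0 = monoculture)."""
    return -sum(p * math.log(p) for p in shares if p > 0)

def simulate(disturbance_every=None, n=5, periods=2000):
    shares = [1.0 / n] * n
    fitness = [1.0 + 0.01 * i for i in range(n)]      # the last strategy is "optimal"
    for t in range(1, periods + 1):
        mean_fitness = sum(p * f for p, f in zip(shares, fitness))
        shares = [p * f / mean_fitness for p, f in zip(shares, fitness)]  # selection step
        if disturbance_every and t % disturbance_every == 0:
            # a disturbance knocks back incumbents and reopens space for all strategies
            shares = [0.5 * p + 0.5 / n for p in shares]
    return shannon_diversity(shares)

print("stable environment:     diversity =", round(simulate(), 3))
print("recurring disturbances: diversity =", round(simulate(50), 3))
# Without disturbances the best strategy excludes the rest (diversity ~ 0);
# with periodic disturbances no strategy dominates long enough to do so.
```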

Stability here is defined as “stability with respect to the criteria of selection”. In the principal-agent selective process, the analogue of Darwinian “fitness” is profitability. Nelson and Winter’s objection is absolutely relevant when the strategy that maximises profitability is a moving target and there is significant uncertainty regarding its exact contours. On the other hand, the kinds of strategies that maximise profitability in a bank have not changed for a while, in no small part because of the size of the moral hazard free lunch available. A CEO who wants to maximise Return on Equity for his shareholders would maximise balance sheet leverage, as I explained in my first post. The stability of the parameters of the strategy that maximises the moral hazard subsidy, and accordingly profitability, ensures that this strategy outcompetes all others.
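The leverage argument can be restated as a back-of-the-envelope calculation (my notation, not from the original post). With assets A = E + D funded by equity E and debt D, asset return r_A, cost of debt r_D and leverage λ = A/E:

```latex
% Back-of-the-envelope restatement of the leverage argument (my notation).
\[
\mathrm{ROE} \;=\; \frac{r_A A \;-\; r_D D}{E}
             \;=\; r_A \;+\; (r_A - r_D)\,(\lambda - 1)
\]
```

So long as the TBTF guarantee prevents the cost of debt r_D from rising with leverage, ROE is strictly increasing in λ whenever r_A exceeds r_D, and the ROE-maximising strategy is simply to lever up to whatever bound regulation or funding markets permit.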

Written by Ashwin Parameswaran

March 13th, 2010 at 5:22 am