macroresilience

resilience, not stability

Archive for the ‘Complex Adaptive Systems’ Category

Uncertainty and the Cyclical vs Structural Unemployment Debate


There are two schools of thought on the primary cause of our current unemployment problem: some claim that unemployment is cyclical (driven by low aggregate demand) whereas others think it is structural (a mismatch in the labour market). The “Structuralists” point to the apparent shift in the Beveridge curve and the increased demand in healthcare and technology, whereas the “Cyclicalists” point to the fall in employment across all other sectors. So who’s right? In my opinion, neither explanation is entirely satisfactory. This post expands on some thoughts I touched upon in my last post, which describe the “persistent unemployment” problem as a logical consequence of a dynamically uncompetitive “Post Minsky Moment” economy.

Narayana Kocherlakota explains the mismatch thesis as follows: “Firms have jobs, but can’t find appropriate workers. The workers want to work, but can’t find appropriate jobs. There are many possible sources of mismatch—geography, skills, demography—and they are probably all at work….the Fed does not have a means to transform construction workers into manufacturing workers.” Undoubtedly this argument has some merit – the real question is how much of our current unemployment can be attributed to the mismatch problem. Kocherlakota draws on work done by Robert Shimer and extrapolates from the Beveridge curve relationship since 2000 to arrive at an implied unemployment rate of 6.3% – the rate we would see if mismatch had not become a bigger problem and the Beveridge curve relationship had not broken down. Jan Hatzius of Goldman Sachs, on the other hand, attributes as little as 0.75% of the current unemployment problem to structural reasons. Murat Tasci and Dave Lindner, however, conclude that the recent behaviour of the Beveridge curve is not anomalous when viewed in the context of previous post-war recessions, and Shimer himself was wary of extrapolating too much from the limited data set since 2000 (see pg 12-13 here). This would imply that Kocherlakota’s estimate is an overestimate even if Jan Hatzius’ may be an underestimate.
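As a rough illustration of the kind of extrapolation involved (the numbers below are made up for illustration – they are not Kocherlakota’s or Shimer’s data), one can fit a simple log-linear Beveridge curve to pre-recession observations and ask what unemployment the current vacancy rate would imply if that relationship still held; the gap between the actual and implied rates is the part attributed to mismatch:

```python
# Illustrative only: a simple log-linear Beveridge curve fitted to hypothetical
# pre-recession data, then extrapolated to the current vacancy rate. None of
# these numbers are Kocherlakota's or Shimer's.
import numpy as np

v_pre = np.array([3.2, 3.0, 2.8, 2.6, 2.4, 2.2, 2.1, 2.3])   # vacancy rate (%)
u_pre = np.array([4.6, 4.8, 5.0, 5.4, 6.0, 6.8, 7.2, 6.5])   # unemployment rate (%)

# fit log(u) = a + b * log(v) on the pre-breakdown sample
b, a = np.polyfit(np.log(v_pre), np.log(u_pre), 1)

v_now, u_now = 2.2, 9.6                      # hypothetical current readings
u_implied = np.exp(a + b * np.log(v_now))    # what the old relationship implies

print(f"implied unemployment at current vacancies: {u_implied:.1f}%")
print(f"gap attributed to mismatch: {u_now - u_implied:.1f} points")
```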

Incorporating Uncertainty into the Mismatch Argument

It is likely therefore that there is a significant pool of unemployment that cannot be explained by the simple mismatch argument. But this does not mean that the “recalculation” thesis is not valid. The simple mismatch argument ignores the uncertainty involved in the “Post-Minsky Moment economy” – it assumes that firms have known jobs that remain unfilled whereas in reality, firms need to engage in a process of exploration that will determine the nature of jobs consistent with the new economic reality before they search for suitable workers. The problem we face right now is that firms are unwilling to take on the risk inherent in such an exploration. The central message in my previous posts on evolvability and organisational rigidity is that this process of exploration is dependent upon the maintenance of a dynamically competitive economy rather than a statically competitive economy. Continuous entry of new firms is of critical importance in maintaining a dynamically competitive economy that retains the ability to evolve and reconfigure itself when faced with a dramatic change in circumstances.

The “Post Minsky Moment” Economy

In Minsky’s Financial Instability Hypothesis, the long period of stability before the crash creates a homogeneous and fragile ecosystem – the fragility arises from the fragility of the individual firms as well as the absence of diversity. After the inevitable crash, the system regains some of its robustness via the slack built up by the incumbent firms, usually in the form of financial liquidity. However, so long as this slack at firm level is maintained, the macro-system cannot possibly revert to a state where it attains conventional welfare optima such as full employment. The conventional Keynesian solution suggests that the state pick up the slack in economic activity, whereas others assume that sooner or later market forces will reorganise to utilise this firm-level slack. This post is an attempt to partially refute both explanations – as Burton Klein often noted, there is no hidden hand that can miraculously restore the “animal spirits” of an economy or an industry once it has lost its evolvability. Similarly, Keynesian policies that shore up the position of the incumbent firms can cause fatal damage to the evolvability of the macro-economy.

Corporate Profits and Unemployment

This thesis does not imply that incumbent firms leave money on the table. In fact, incumbents typically redouble their efforts at static optimisation – hence the rise in corporate profits. Some may argue that this rise in profitability is illusory and represents capital consumption i.e. short-term gain at the expense of long-term loss of competence and capabilities at firm level. But in the absence of new firm entry, it is unlikely that there is even a long-term threat to incumbents’ survival i.e. firms are making a calculated bet that loss of evolvability represents a minor risk. It is only the invisible foot of the threat of new firms that prevents incumbents from going down this route.

Small Business Financing Constraints as a Driver of Unemployment

The role of new firms in generating employment is well-established and my argument implies that incumbent firms will effectively contribute to solving the unemployment problem only when prodded to do so by the invisible foot of new firm entry. The credit conditions faced by small businesses remain extremely tight despite funding costs for big incumbent firms having eased considerably since the peak of the crisis. Of course this may be due to insufficient investment opportunities – some of which may be due to dominant large incumbents in specific sectors. But a more plausible explanation lies in the unevolvable and incumbent-dominated state of our banking sector. Expanding lending to new firms is an act of exploration and incumbent banks are almost certainly content with exploiting their known and low-risk sources of income instead. One of Burton Klein’s key insights was how even a few key dynamically uncompetitive sectors can act as a deadweight drag on the entire economy, and banking certainly fits the bill.


Written by Ashwin Parameswaran

September 8th, 2010 at 9:21 am

Evolvability, Robustness and Resilience in Complex Adaptive Systems


In a previous post, I asserted that “the existence of irreducible uncertainty is sufficient to justify an evolutionary approach for any social system, whether it be an organization or a macro-economy.” This is not a controversial statement – Nelson and Winter introduced their seminal work on evolutionary economics as follows: “Our evolutionary theory of economic change…is not an interpretation of economic reality as a reflection of supposedly constant “given data” but a scheme that may help an observer who is sufficiently knowledgeable regarding the facts of the present to see a little further through the mist that obscures the future.”

In microeconomics, irreducible uncertainty implies a world of bounded rationality where many heuristics are not signs of irrationality but rational and effective tools of decision-making. But it is the implications of human action under uncertainty for macro-economic outcomes that are the focus of this blog – in previous posts (1, 2) I have elaborated upon the resilience-stability tradeoff and its parallels in economics and ecology. This post focuses on another issue critical to the functioning of all complex adaptive systems: the relationship between evolvability and robustness.

Evolvability and Robustness Defined

Hiroaki Kitano defines robustness as follows: “Robustness is a property that allows a system to maintain its functions despite external and internal perturbations….A system must be robust to function in unpredictable environments using unreliable components.” Kitano makes it explicit that robustness is concerned with the maintenance of functionality rather than specific components: “Robustness is often misunderstood to mean staying unchanged regardless of stimuli or mutations, so that the structure and components of the system, and therefore the mode of operation, is unaffected. In fact, robustness is the maintenance of specific functionalities of the system against perturbations, and it often requires the system to change its mode of operation in a flexible way. In other words, robustness allows changes in the structure and components of the system owing to perturbations, but specific functions are maintained.”

Evolvability is defined as the ability of the system to generate novelty and innovate thus enabling the system to “adapt in ways that exploit new resources or allow them to persist under unprecedented environmental regime shifts” (Whitacre 2010). At first glance, evolvability and robustness appear to be incompatible: Generation of novelty involves a leap into the dark, an exploration rather than an act of “rational choice” and the search for a beneficial innovation carries with it a significant risk of failure. It’s worth noting that in social systems, this dilemma vanishes in the absence of irreducible uncertainty. If all adaptations are merely a realignment to a known systemic configuration (“known” in either a deterministic or a probabilistic sense), then an inability to adapt needs other explanations such as organisational rigidity.

Evolvability, Robustness and Resilience

Although it is typical to equate resilience with robustness, resilient complex adaptive systems also need to possess the ability to innovate and generate novelty. As Allen and Holling put it: “Novelty and innovation are required to keep existing complex systems resilient and to create new structures and dynamics following system crashes”. Evolvability also enables the system to undergo fundamental transformational change – it could be argued that such innovations are even more important in a modern capitalist economic system than they are in the biological or ecological arena. The rest of this post will focus on elaborating upon how macro-economic systems can be both robust and evolvable at the same time – the apparent conflict between evolvability and robustness arises from a fallacy of composition where macro-resilience is assumed to arise from micro-resilience, when in fact it arises from the very absence of micro-resilience.

EVOLVABILITY, ROBUSTNESS AND RESILIENCE IN MACRO-ECONOMIC SYSTEMS

The pre-eminent reference on how a macro-economic system can be both robust and evolvable at the same time is the work of Burton Klein in his books “Dynamic Economics” and “Prices, Wages and Business Cycles: A Dynamic Theory”. But as with so many other topics in evolutionary economics, no one has summarised it better than Brian Loasby: “Any economic system which is to remain viable over a long period must be able to cope with unexpected change. It must be able to revise or replace policies which have worked well. Yet this ability is problematic. Two kinds of remedy may be tried, at two different system levels. One is to try to sensitize those working within a particular research programme to its limitations and to possible alternatives, thus following Menger’s principle of creating private reserves against unknown but imaginable dangers, and thereby enhancing the capacity for internal adaptation….But reserves have costs; and it may be better , from a system-wide perspective, to accept the vulnerability of a sub-system in order to exploit its efficiency, while relying on the reserves which are the natural product of a variety of sub-systems….
Research programmes, we should recall, are imperfectly specified, and two groups starting with the same research programme are likely to become progressively differentiated by their experience, if there are no strong pressures to keep them closely aligned. The long-run equilibrium of the larger system might therefore be preserved by substitution between sub-systems as circumstances change. External selection may achieve the same overall purpose as internal adaptation – but only if the system has generated adequate variety from which the selection may be made. An obvious corollary which has been emphasised by Klein (1977) is that attempts to preserve sub-system stability may wreck the larger system. That should not be a threatening notion to economists; it also happens to be exemplified by Marshall’s conception of the long-period equilibrium of the industry as a population equilibrium, which is sustained by continued change in the membership of that population. The tendency of variation is not only a chief cause of progress; it is also an aid to stability in a changing environment (Eliasson, 1991). The homogeneity which is conducive to the attainment of conventional welfare optima is a threat to the resilience which an economy needs.”

Uncertainty can be tackled at the micro-level by maintaining reserves and slack (liquidity, retained profits) but this comes at the price of slack at the macro-level in terms of lost output and employment. Note that this is essentially a Keynesian conclusion, similar to how individually rational saving decisions can lead to collectively sub-optimal outcomes. From a systemic perspective, it is preferable to substitute the micro-resilience with a diverse set of micro-fragilities. But how do we induce the loss of slack at firm-level? And how do we ensure that this loss of micro-resilience occurs in a sufficiently diverse manner?

The “Invisible Foot”

The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy: “Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels….But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.” To maintain an evolvable macro-economy, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition.”

Entry of New Firms and the Invisible Foot

Burton Klein’s great contribution, along with that of other dynamic economists of the time (notably Gunnar Eliasson), was to highlight the critical importance of entry of new firms in maintaining the efficacy of the invisible foot. Klein believed that “the degree of risk taking is determined by the robustness of dynamic competition, which mainly depends on the rate of entry of new firms. If entry into an industry is fairly steady, the game is likely to have the flavour of a highly competitive sport. When some firms in an industry concentrate on making significant advances that will bear fruit within several years, others must be concerned with making their long-run profits as large as possible, if they hope to survive. But after entry has been closed for a number of years, a tightly organised oligopoly will probably emerge in which firms will endeavour to make their environments highly predictable in order to make their short-run profits as large as possible….Because of new entries, a relatively concentrated industry can remain highly dynamic. But, when entry is absent for some years, and expectations are premised on the future absence of entry, a relatively concentrated industry is likely to evolve into a tight oligopoly. In particular, when entry is long absent, managers are likely to be more and more narrowly selected; and they will probably engage in such parallel behaviour with respect to products and prices that it might seem that the entire industry is commanded by a single general!”

Again, it can’t be emphasised enough that this argument does not depend on incumbent firms leaving money on the table – on the contrary, they may redouble their attempts at static optimisation. From the perspective of each individual firm, innovation is an incredibly risky process even though the result of such dynamic competition from the perspective of the industry or macro-economy may be reasonably predictable. Of course, firms can and do mitigate this risk by various methods but this argument only claims that any single firm, however dominant, cannot replicate the “risk-free” innovation dynamics of a vibrant industry in-house.

Micro-Fragility as the Hidden Hand of Macro-Resilience

In an environment free of irreducible uncertainty, evolvability suffers, leading to reduced macro-resilience. “If firms could predict each others’ advances they would not have to insure themselves against uncertainty by taking risks. And no smooth progress would occur” (Klein 1977). Conversely, “because firms cannot predict each other’s discoveries, they undertake different approaches towards achieving the same goal. And because not all of the approaches will turn out to be equally successful, the pursuit of parallel paths provides the options required for smooth progress.”
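A minimal simulation sketch of this “parallel paths” point (all parameters are purely illustrative): each firm’s attempt at an advance is individually very risky, but the best outcome across many firms pursuing independent approaches is far more reliable, which is the sense in which diverse micro-level risk-taking produces smooth progress at the level of the industry:

```python
# A minimal sketch of the "parallel paths" argument. Each firm's project is
# individually risky; the best result across many independent attempts is far
# more reliable. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
trials = 10_000

def industry_best(n_firms):
    # each firm succeeds with probability 0.2; payoff size varies when it does
    success = rng.random((trials, n_firms)) < 0.2
    payoff = success * rng.lognormal(mean=0.0, sigma=0.5, size=(trials, n_firms))
    return payoff.max(axis=1)          # the advance the industry as a whole gets

for n in (1, 5, 20):
    best = industry_best(n)
    print(f"{n:>2} independent firms: P(some advance) = {np.mean(best > 0):.2f}, "
          f"mean best advance = {best.mean():.2f}")
```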

The Aftermath of the Minsky Moment: A Problem of Micro-Resilience

Within the context of the current crisis, the pre-Minsky moment system was a homogeneous system with no slack, which enabled the attainment of “conventional welfare optima” but at the cost of an incredibly fragile and unevolvable condition. The logical evolution of such a system post the Minsky moment is of course still a homogeneous system but with significant firm-level slack built in, which is equally unsatisfactory. In such a situation, the kind of macro-economic intervention matters as much as the force of intervention. For example, in an ideal world, monetary policy aimed at reducing borrowing rates of incumbent banks and corporates will flow through into reduced borrowing rates for new firms. In a dynamically uncompetitive world, such a policy will only serve the interests of the incumbents.

The “Invisible Foot” and Employment

Vivek Wadhwa argues that startups are the main source of net job growth in the US economy and Mark Thoma links to research that confirms this thesis. Even if one disagrees, the “invisible foot” argument implies that if the old guard is to contribute to employment, it must be forced to give up its “slack” by the strength of dynamic competition – and dynamic competition is maintained by preserving conditions that encourage the entry of new firms.

MICRO-EVOLVABILITY AND MACRO-RESILIENCE IN BIOLOGY AND ECOLOGY

Note: The aim of this section is not to draw any false precise equivalences between economic resilience and ecological or biological resilience but simply to highlight the commonality of the micro-macro fallacy of composition across complex adaptive systems – a detailed comparison will hopefully be the subject of a future post. I have tried to keep the section on biological resilience as brief and simple as possible but an understanding of the genotype-phenotype distinction and neutral networks is essential to make sense of it.

Biology: Genotypic Variation and Phenotypic Robustness

In the specific context of biology, evolvability can be defined as “the capacity to generate heritable, selectable phenotypic variation. This capacity may have two components: (i) to reduce the potential lethality of mutations and (ii) to reduce the number of mutations needed to produce phenotypically novel traits” (Kirschner and Gerhart 1998). The apparent conflict between evolvability and robustness can be reconciled by distinguishing between genotypic and phenotypic robustness and evolvability. James Whitacre summarises Andreas Wagner’s work on RNA genotypes and their structure phenotypes as follows: “this conflict is unresolvable only when robustness is conferred in both the genotype and the phenotype. On the other hand, if the phenotype is robustly maintained in the presence of genetic mutations, then a number of cryptic genetic changes may be possible and their accumulation over time might expose a broad range of distinct phenotypes, e.g. by movement across a neutral network. In this way, robustness of the phenotype might actually enhance access to heritable phenotypic variation and thereby improve long-term evolvability.”
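A toy sketch of this mechanism (my own construction for illustration, not Wagner’s RNA model): if the phenotype depends only on part of the genotype, mutations elsewhere are neutral and accumulate as cryptic variation under stabilising selection; when conditions change and those sites start to matter, the population already harbours a range of distinct phenotypes:

```python
# Toy illustration (my own construction, not Wagner's RNA model). The phenotype
# depends only on the first half of the genome, so mutations in the second half
# are neutral today and accumulate as cryptic variation under stabilising
# selection. When a regime shift makes the second half matter, the population
# already harbours many distinct phenotypes.
import numpy as np

rng = np.random.default_rng(1)
POP, L = 200, 20
VISIBLE, CRYPTIC = slice(0, 10), slice(10, 20)

def phenotype_today(genomes):
    return genomes[:, VISIBLE].sum(axis=1)

def phenotype_after_shift(genomes):
    return genomes[:, CRYPTIC].sum(axis=1)

pop = np.tile(rng.integers(0, 2, L), (POP, 1))   # start from a single genotype

for _ in range(500):
    mutants = pop.copy()
    site = rng.integers(0, L, POP)               # one random mutation per lineage
    mutants[np.arange(POP), site] ^= 1
    neutral = phenotype_today(mutants) == phenotype_today(pop)
    pop[neutral] = mutants[neutral]              # stabilising selection on today's phenotype

print("distinct phenotypes today:        ", len(set(phenotype_today(pop))))
print("distinct phenotypes after a shift:", len(set(phenotype_after_shift(pop))))
```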

Ecology: Species-Level Variability and Functional Stability

The notion of micro-variability being consistent with and even being responsible for macro-resilience is an old one in ecology as Simon Levin and Jane Lubchenco summarise here: “That the robustness of an ensemble may rest upon the high turnover of the units that make it up is a familiar notion in community ecology. MacArthur and Wilson (1967), in their foundational work on island biogeography, contrasted the constancy and robustness of the number of species on an island with the ephemeral nature of species composition. Similarly, Tilman and colleagues (1996) found that the robustness of total yield in high-diversity assemblages arises not in spite of, but primarily because of, the high variability of individual population densities.”
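The “portfolio effect” behind the Tilman result can be sketched in a few lines (numbers illustrative, fluctuations assumed independent): individual population densities fluctuate strongly, but the coefficient of variation of total yield falls as diversity rises:

```python
# Sketch of the "portfolio effect": each population fluctuates strongly, but
# the coefficient of variation of total yield falls as diversity rises.
# Fluctuations here are independent draws; numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
years = 200

def cv(x):
    return x.std() / x.mean()

for n_species in (1, 4, 16):
    densities = rng.lognormal(mean=0.0, sigma=0.6, size=(years, n_species))
    total = densities.sum(axis=1)
    print(f"{n_species:>2} species: CV of a single population = {cv(densities[:, 0]):.2f}, "
          f"CV of total yield = {cv(total):.2f}")
```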

The concept is also entirely consistent with the “Panarchy” thesis which views an ecosystem as a nested hierarchy of adaptive cycles: “Adaptive cycles are nested in a hierarchy across time and space which helps explain how adaptive systems can, for brief moments, generate novel recombinations that are tested during longer periods of capital accumulation and storage. These windows of experimentation open briefly, but the results do not trigger cascading instabilities of the whole because of the stabilizing nature of nested hierarchies. In essence, larger and slower components of the hierarchy provide the memory of the past and of the distant to allow recovery of smaller and faster adaptive cycles.”

Misc. Notes

1. It must be emphasised that micro-fragility is a necessary, but not sufficient, condition for an evolvable and robust macro-system. The role of not just redundancy but degeneracy is critical, as is the size of the population.

2. Many commentators use resilience and robustness interchangeably. I draw a distinction primarily because my definitions of robustness and evolvability are borrowed from biology and my definition of resilience is borrowed from ecology which in my opinion defines a robust and evolvable system as a resilient one.


Written by Ashwin Parameswaran

August 30th, 2010 at 8:38 am

Raghuram Rajan on Monetary Policy and Macroeconomic Resilience


Amongst economic commentators, Raghuram Rajan has stood out recently for his consistent calls to raise interest rates from “ultra-low to the merely low”. Predictably, this suggestion has been met with outright condemnation by many economists, both of Keynesian and monetarist persuasion. Rajan’s case against ultra-low rates utilises many arguments but this post will focus on just one of them, an argument straight out of the “resilience” playbook. In 2008, Raghu Rajan and Doug Diamond co-authored a paper, the conclusion of which Rajan summarises in his FT article: “the pattern of Fed policy over time builds expectations. The market now thinks that whenever the financial sector’s actions result in unemployment, the Fed will respond with ultra-low rates and easy liquidity. So even as the Fed has maintained credibility as an inflation fighter, it has lost credibility in fighting financial adventurism. This cannot augur well for the future.”

Much like he accused the Austrians, Paul Krugman accuses Rajan of being a “liquidationist”. This is not a coincidence – Rajan and Diamond’s thesis is quite explicit about its connections to Austrian Business Cycle Theory: “a central bank that promises to cut interest rates conditional on stress, or that is biased towards low interest rates favouring entrepreneurs, will induce banks to promise higher payouts or take more illiquid projects. This in turn can make the illiquidity crisis more severe and require a greater degree of intervention, a view reminiscent of the Austrian theory of cycles.” But as the summary hints, Rajan and Diamond’s thesis is fundamentally different from ABCT. The conventional Austrian story identifies excessive credit inflation and interest rates below the “natural” rate of interest as the driver of the boom/bust cycle but Rajan and Diamond’s thesis identifies the anticipation by economic agents of low rates and “liquidity” facilities every time there is an economic downturn as the driver of systemic fragility. The adaptation of banks and other market players to this regime makes the eventual bust all the more likely. As Rajan and Diamond note: “If the authorities are expected to reduce interest rates when liquidity is at a premium, banks will take on more short-term leverage or illiquid loans, thus bringing about the very states where intervention is needed.”

Rajan and Diamond’s thesis is limited to the impact of such policies on banks but as I noted in a previous post, market players also adapt to this implicit commitment from the central bank to follow easy money policies at the first hint of economic trouble. This thesis is essentially a story of the Greenspan-Bernanke era and the damage that the Greenspan Put has caused. It also explains the dramatically diminishing returns inherent in the Greenspan Put strategy as the stabilising policies of the central bank become entrenched in the expectations of market players and crucially banks – in each subsequent cycle, the central bank has to do more and more (lower rates, larger liquidity facilities) to achieve less and less.
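The ratchet can be caricatured with a toy feedback loop (my own construction, not Rajan and Diamond’s model, with arbitrary parameters chosen only to make the loop escalate): banks lever up in line with the support they have come to expect, a given shock then requires a larger intervention to stabilise, and the observed intervention raises expected support for the next cycle:

```python
# A toy feedback loop of my own construction (not Rajan and Diamond's model);
# the parameters are arbitrary and chosen only so that the loop escalates.
# Banks lever up in line with the support they expect; a fixed-size shock then
# needs a larger intervention; the observed intervention raises expected
# support for the next cycle.
expected_support = 1.0     # banks' belief about how much help arrives in a bust
shock = 1.0                # size of the downturn, held fixed across cycles

for cycle in range(1, 6):
    leverage = 10 + 10 * expected_support         # more expected support -> more leverage
    intervention_needed = shock * leverage / 10   # a more levered system needs more help
    expected_support = intervention_needed        # the bailout is observed and priced in
    print(f"cycle {cycle}: leverage {leverage:5.1f}, "
          f"intervention needed {intervention_needed:4.1f}")
```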


Written by Ashwin Parameswaran

August 3rd, 2010 at 6:30 am

Critical Transitions in Markets and Macroeconomic Systems


This post is the first in a series that takes an ecological and dynamic approach to analysing market/macroeconomic regimes and transitions between these regimes.

Normal, Pre-Crisis and Crisis Regimes

In a post on market crises, Rick Bookstaber identified three regimes that any model of the market must represent (normal, pre-crisis and crisis) and analysed the statistical properties (volatility, correlation etc.) of each of these regimes. The framework below, however, characterises each regime by its particular mix of positive and negative feedback processes; the variations and regime shifts are determined by the adaptive and evolutionary processes operating within the system.

1. Normal regimes are resilient regimes. They are characterised by a balanced and diverse mix of positive and negative feedback processes. For every momentum trader who bets on the continuation of a trend, there is a contrarian who bets the other way.

2. Pre-crisis regimes are characterised by an increasing dominance of positive feedback processes. An unusually high degree of stability or a persistent trend progressively weeds out negative feedback processes from the system thus leaving it vulnerable to collapse even as a result of disturbances that it could easily absorb in its previously resilient normal state. Such regimes can arise from bubbles but this is not necessary. Pre-crisis only implies that a regime change into the crisis regime is increasingly likely – in ecological terms, the pre-crisis regime is fragile and has suffered a significant loss of resilience.

3. Crisis regimes are essentially transitional  – the disturbance has occurred and the positive feedback processes that dominated the previous regime have now reversed direction. However, the final destination of this transition is uncertain – if the system is left alone, it will undergo a discontinuous transition to a normal regime. However, if sufficient external stabilisation pressures are exerted upon the system, it may revert to the pre-crisis regime or even stay in the crisis regime for a longer period. It’s worth noting that I define a normal regime only by its resilience and not by its desirability – even a state of civilizational collapse can be incredibly resilient.

“Critical Transitions” from the Pre-Crisis to the Crisis Regime

In fragile systems even a minor disturbance can trigger a discontinuous move to an alternative regime – Marten Scheffer refers to such moves as “critical transitions”. Figures a, b, c and d below represent a continuum of ways in which the system can react to changing external conditions (ref Scheffer et al). Although I will frequently refer to “equilibria” and “states” in the discussion below, these are better described as “attractors” and “regimes” given the dynamic nature of the system – the static terminology is merely a simplification.

In Figure a, the system state reacts smoothly to perturbations – for example, a large external change will trigger a large move in the state of the system. The dotted arrows denote the direction in which the system moves when it is not on the curve, i.e. not in equilibrium. Any move away from equilibrium triggers forces that bring it back to the curve. In Figure b, the transition is non-linear and a small perturbation can trigger a regime shift – however a reversal of conditions of an equally small magnitude can reverse the regime shift. Clearly, such a system does not satisfactorily explain our current economic predicament where monetary and fiscal intervention far in excess of the initial sub-prime shock has failed to bring the system back to its previous state.

Figure c however may be a more accurate description of the current state of the economy and the market – for a certain range of conditions, there exist two alternative stable states separated by an unstable equilibrium (marked by the dotted line). As the dotted arrows indicate, movement away from the unstable equilibrium can carry the system to either of the two alternative stable states. Figure d illustrates how a small perturbation past the point F2 triggers a “catastrophic” transition from the upper branch to the lower branch – moreover, unless conditions are reversed all the way back to the point F1, the system will not revert back to the upper branch stable state. The system therefore exhibits “hysteresis” – i.e. the path matters. The forward and backward switches occur at different points F2 and F1 respectively, which implies that reversing such transitions is not easy. A comprehensive discussion of the conditions that will determine the extent of hysteresis is beyond the scope of this post – however it is worth mentioning that cognitive and organisational rigidity in the absence of sufficient diversity is a sufficient condition for hysteresis in the macro-system.
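The hysteresis in Figure d can be reproduced numerically with the canonical fold-catastrophe toy model dx/dt = c + x – x^3, which is a stand-in for the shape of the curve rather than a model of the economy: sweeping the external condition c upward and then back downward, the jump between branches occurs at different values of c, so the path matters:

```python
# Numerical sketch of the hysteresis loop in Figure d, using the canonical
# fold-catastrophe toy model dx/dt = c + x - x^3 (a stand-in shape, not a model
# of the economy). The jump between branches occurs near c = +0.38 on the way
# up but near c = -0.38 on the way down, so the path matters.
import numpy as np

def settle(x, c, dt=0.01, steps=20_000):
    # crude forward-Euler relaxation to the nearest stable equilibrium
    for _ in range(steps):
        x += dt * (c + x - x**3)
    return x

c_up = np.linspace(-1.0, 1.0, 41)
c_down = c_up[::-1]

for label, path, x in (("sweeping c upward:", c_up, -1.5),
                       ("sweeping c downward:", c_down, 1.5)):
    print(label)
    for i, c in enumerate(path):
        x = settle(x, c)              # keep tracking whichever branch we are on
        if i % 8 == 0:
            print(f"  c = {c:+.2f}  ->  x = {x:+.2f}")
```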

Before I apply the above framework to some events in the market, it is worth clarifying how the states in Figure d correspond to those chosen by Rick Bookstaber. The “normal” regime refers to the parts of the upper and lower branch stable states that are far from the points F1 and F2 i.e. the system is resilient to a change in external conditions. As I mentioned earlier, normal does not equate to desirable – the lower branch could be a state of collapse. If we designate the upper branch as a desirable normal state and the lower branch as an undesirable one, then the zone close to point F2 on the upper branch is the pre-crisis regime. The crisis regime is the short catastrophic transition from F2 to the lower branch if the system is left alone. If forces external to the system are applied to prevent a transition to the lower branch, then the system could either revert back to the upper branch or even stay in the crisis regime on the dotted line unstable equilibrium for a longer period.

The Magnetar Trade revisited

In an earlier post, I analysed how the infamous Magnetar Trade could be explained with a framework that incorporates catastrophic transitions between alternative stable states. As I noted: “The Magnetar trade would pay off in two scenarios – if there were no defaults in any of their CDOs, or if there were so many defaults that the tranches that they were short also defaulted along with the equity tranche. The trade would likely lose money if there were limited defaults in all the CDOs and the senior tranches did not default. Essentially, the trade was attractive if one believed that this intermediate scenario was improbable…Intermediate scenarios are unlikely when the system is characterised by multiple stable states and catastrophic transitions between these states. In adaptive systems such as ecosystems or macroeconomies, such transitions are most likely when the system is fragile and in a state of low resilience. The system tends to be dominated by positive feedback processes that amplify the impact of small perturbations, with no negative feedback processes present that can arrest this snowballing effect.”

In the language of critical transitions, Magnetar calculated that the real estate and MBS markets were in a fragile pre-crisis state and no intervention would prevent the rapid critical transition from F2 to the lower branch.

“Schizophrenic” Markets and the Long Crisis

Recently, many commentators have noted the apparently schizophrenic nature of the markets, turning from risk-on to risk-off at the drop of a hat. For example, John Kemp argues that the markets are “trapped between euphoria and despair” and notes the U-shaped distribution of Bank of England’s inflation forecasts (table 5.13). Although at first glance this sort of behaviour seems irrational, it may not be – as PIMCO’s Richard Clarida notes: “we are in a world in which average outcomes – for growth, inflation, corporate and sovereign defaults, and the investment returns driven by these outcomes – will matter less and less for investors and policymakers. This is because we are in a New Normal world in which the distribution of outcomes is flatter and the tails are fatter. As such, the mean of the distribution becomes an observation that is very rarely realized”.
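A small simulation makes Clarida’s point concrete (the mixture below is purely illustrative): with a flattened, bimodal distribution of outcomes, the mean is an outcome that is almost never realised:

```python
# Illustration of the "New Normal" point: with a flatter, bimodal distribution
# of outcomes, the mean is very rarely realised. The mixture below is purely
# illustrative.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# half the mass around a strong recovery, half around a renewed slump
outcomes = np.where(rng.random(n) < 0.5,
                    rng.normal(+3.0, 0.8, n),      # growth of about +3%
                    rng.normal(-1.0, 0.8, n))      # growth of about -1%

mean = outcomes.mean()
near_mean = np.mean(np.abs(outcomes - mean) < 0.5)
print(f"mean outcome = {mean:+.1f}%")
print(f"probability of an outcome within 0.5 points of the mean = {near_mean:.1%}")
```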

Richard Clarida’s New Normal is analogous to the crisis regime (the dotted line unstable equilibrium in Figures c and d). Any movement in either direction is self-fulfilling and leads to either a much stronger economy or a much weaker economy. So why is the current crisis regime such a long one? As I mentioned earlier, external stabilisation (in this case monetary and fiscal policy) can keep the system from collapsing down to the lower branch normal regime – the “schizophrenia” only indicates that the market may make a decisive break to a stable state sooner rather than later.


Written by Ashwin Parameswaran

July 29th, 2010 at 3:27 am

Agent Irrationality and Macroeconomics


In a recent post, Rajiv Sethi questions the tendency to find behavioural explanations for financial crises and argues for an ecological approach instead – a sentiment that I agree with and have touched upon in previous posts on this blog. This post expands upon some of these themes.

A More Realistic View of Rationality and Human Cognition, Not Irrationality

Much of the debate on rationality in economics focuses on whether we as human beings are rational in the “homo economicus” sense. The “heuristics and biases” program pioneered by Daniel Kahneman and Amos Tversky argues that we are not “rational” – however, it does not question whether the definition of rationality implicit in “rational choice theory” is valid or not. Many researchers in the neural and cognitive sciences now believe that the conventional definition of rationality needs to be radically overhauled.

Most heuristics/biases are not a sign of irrationality but an entirely rational form of decision-making when faced with uncertainty. In an earlier post, I explained how Ronald Heiner’s framework can explain our neglect of tail events as a logical response to an uncertain environment, but the best exposition of this viewpoint can be seen in Gerd Gigerenzer’s work which itself is inspired by Herbert Simon’s ideas on “bounded rationality”. In his aptly named book “Rationality for Mortals: How People Cope with Uncertainty”, Gigerenzer explains the two key building blocks of “the science of heuristics”:

  • The Adaptive Toolbox: “the building blocks for fast and frugal heuristics that work in real-world environments of natural complexity, where an optimal strategy is often unknown or computationally intractable”
  • Ecological Rationality: “the environmental structures in which a given heuristic is successful” and the “coevolution between heuristics and environments”

The irony of course is that many classical economists had a more accurate definition of rationality than the one implicit in “rational choice theory” (See Brian Loasby’s book which I discussed here). Much of the work done in the neural sciences confirms the more nuanced view of human cognition espoused in Hayek’s “The Sensory Order” or Ken Boulding’s “The Image” (See Joaquin Fuster on Hayek or the similarities between Ken Boulding’s views and V.S. Ramachandran’s work discussed here).

Macro-Rationality is consistent with Micro-Irrationality

Even a more realistic definition of rationality doesn’t preclude individual irrationality. However, as Michael Mauboussin pointed out: “markets can still be rational when investors are individually irrational. Sufficient investor diversity is the essential feature in efficient price formation. Provided the decision rules of investors are diverse—even if they are suboptimal—errors tend to cancel out and markets arrive at appropriate prices. Similarly, if these decision rules lose diversity, markets become fragile and susceptible to inefficiency. So the issue is not whether individuals are irrational (they are) but whether they are irrational in the same way at the same time. So while understanding individual behavioral pitfalls may improve your own decision making, appreciation of the dynamics of the collective is key to outperforming the market.”
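Mauboussin’s point can be illustrated with a quick simulation (numbers illustrative): when investors’ errors are diverse and independent they largely cancel in the average, but when a common factor pushes everyone in the same direction, the aggregate error is nearly as large as any individual’s:

```python
# Quick illustration: diverse, independent errors largely cancel in the average
# "market price"; a common error pushing everyone the same way does not.
# Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(3)
true_value = 100.0
n_investors, n_markets = 1_000, 2_000

# diverse errors: each investor is noisy, but independently so
diverse = true_value + rng.normal(0, 20, size=(n_markets, n_investors))

# homogeneous errors: a shared signal pushes everyone in the same direction
common = rng.normal(0, 20, size=(n_markets, 1))
homogeneous = true_value + common + rng.normal(0, 2, size=(n_markets, n_investors))

for name, estimates in (("diverse", diverse), ("homogeneous", homogeneous)):
    price = estimates.mean(axis=1)      # "market price" = average of individual views
    rmse = np.sqrt(((price - true_value) ** 2).mean())
    print(f"{name:>12} errors: RMSE of price vs true value = {rmse:6.2f}")
```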

Economies as Complex Adaptive Systems: Behavioural Heterogeneity, Selection Pressures and Emphasis on System Dynamics

In my view, the ecological approach to macroeconomics is essentially a systems approach with the emphasis on the “adaptive” nature of the system i.e. incentives matter and the actors in a system tend to find ways to work around imposed rules that try to fight the impact of misaligned incentives. David Merkel explained it well when he noted: “People hate having their freedom restrained, and so when arbitrary rules are imposed, even smart rules, they look for means of escape.” And many of the posts on this blog have focused on how rules can be subverted even when economic agents don’t actively intend to do so.

The ecological approach emphasises the diversity of behavioural preferences and the role of incentives/institutions/rules in “selecting” from this pool of possible agent behaviours or causing agent behaviour to adapt in reaction to these incentives. When a behaviourally homogeneous pool of agents is observed, the ecological approach focuses on the selection pressures and incentives that could have caused this loss of diversity rather than attempting to lay the blame on some immutable behavioural trait. Again, as Rajiv Sethi puts it here: “human behavior differs substantially across career paths because of selection both into and within occupations….[Regularities] identified in controlled laboratory experiments with standard subject pools have limited application to environments in which the distribution of behavioral propensities is both endogenous and psychologically rare. This is the case in financial markets, which are subject to selection at a number of levels. Those who enter the profession are unlikely to be psychologically typical, and market conditions determine which behavioral propensities survive and thrive at any point in historical time.”



Written by Ashwin Parameswaran

June 24th, 2010 at 8:50 am

A “Systems” Explanation of How Bailouts can Cause Business Cycles


In a previous post, I quoted Richard Fisher’s views on how bailouts cause business cycles and financial crises: “The system has become slanted not only toward bigness but also high risk…..if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult…..It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.”

Fisher utilises the “incentives” argument but the same argument could also be made via the language of natural selection, and Hannan and Freeman did exactly that in their seminal paper that launched the field of Organizational Ecology. Hannan and Freeman wrote the below in the context of the bailout of Lockheed in 1971 but it is as relevant today as it has ever been: “we must consider what one anonymous reader, caught up in the spirit of our paper, called the anti-eugenic actions of the state in saving firms such as Lockheed from failure. This is a dramatic instance of the way in which large dominant organizations can create linkages with other large and powerful ones so as to reduce selection pressures. If such moves are effective, they alter the pattern of selection. In our view, the selection pressure is bumped up to a higher level. So instead of individual organizations failing, entire networks fail. The general consequence of a large number of linkages of this sort is an increase in the instability of the entire system and therefore we should see boom and bust cycles of organizational outcomes.”


Written by Ashwin Parameswaran

June 8th, 2010 at 3:45 pm

The “Crash of 2:45 p.m.” as a Consequence of System Fragility


When the WSJ provides us with the least plausible explanation of the “Crash of 2:45 p.m.”, it is only fitting that Jon Stewart provides us with the most succinct and accurate diagnosis of the crash.

Most explanations of the crash either focus on the proximate cause of the crash or blame it all on the “perfect storm”. The “perfect storm” explanation absolves us from analysing the crash too closely, the implicit conclusion being that such an event doesn’t occur too often and not much needs to or can be done to prevent its recurrence. There are two problems with this explanation. For one, it violates Occam’s Razor – it is easy to construct an ex-post facto explanation that depends upon a confluence of events that have not occurred together before. And more crucially, perfect storms seem to occur all too often. As Jon Stewart put it: “Why is it that whenever something happens that the people who should’ve seen it coming didn’t see coming, it’s blamed on one of these rare, once in a century, perfect storms that for some reason take place every f–king two weeks. I’m beginning to think these are not perfect storms. I’m beginning to think these are regular storms and we have a shty boat.”

The focus on proximate causes ignores the complexity and nonlinearity of market systems. Michael Mauboussin explained it best when he remarked: “Cause and effect thinking is dangerous. Humans like to link effects with causes, and capital markets activities are no different. For example, politicians created numerous panels after the market crash in 1987 to identify its “cause.” A nonlinear approach, however, suggests that large-scale changes can come from small-scale inputs. As a result, cause-and-effect thinking can be both simplistic and counterproductive.” The true underlying causes may be far removed from the effect, both in time and in space and the proximate cause may only be the “straw that broke the camel’s back”.

So what is the true underlying cause of the crash? In my opinion, the crash was the inevitable consequence of a progressive loss of system resilience. Why and how has the system become fragile? A static view of markets frequently attributes loss of resilience to the presence of positive feedback processes such as margin calls on levered bets, stop-loss orders, dynamic hedging of short-gamma positions and even just plain vanilla momentum trading strategies – Laura Kodres‘ paper here has an excellent discussion on “destabilizing” hedge fund strategies. However, in a dynamic conception of markets, a resilient market is characterised not by the absence of positive feedback processes but by the presence of a balanced and diverse mix of positive and negative feedback processes.
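A stylised sketch of this feedback-mix argument (parameters illustrative, not calibrated to any market): with a meaningful contrarian presence a shock is damped and reversed, whereas with the contrarians weeded out the same shock is amplified several-fold by momentum demand and the fall is never corrected:

```python
# Stylised sketch, not calibrated to any market: log-price dynamics with
# momentum (positive feedback) and contrarian (negative feedback) demand.
# With contrarians present the initial shock is damped and reversed; with
# contrarians weeded out, the same shock is amplified several-fold and the
# fall becomes permanent.
import numpy as np

def simulate(contrarian_weight, momentum_weight=0.8, shock=-0.05, steps=50):
    fundamental, log_price, prev_return = 0.0, 0.0, 0.0
    path = []
    for t in range(steps):
        ret = (momentum_weight * prev_return
               + contrarian_weight * (fundamental - log_price)
               + (shock if t == 0 else 0.0))
        log_price += ret
        prev_return = ret
        path.append(log_price)
    return np.array(path)

for cw in (0.5, 0.2, 0.0):
    path = simulate(cw)
    print(f"contrarian weight {cw:.1f}: final price change = {path[-1]:+.3f}, "
          f"maximum fall = {path.min():+.3f}")
```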

Policy measures that aim to stabilise the system by countering the impact of positive feedback processes select against and weed out negative feedback processes – stabilisation reduces system resilience. The decision to cancel errant trades is an example of such a measure. It is critical that all market participants who implement positive feedback strategies (such as stop-loss market orders) suffer losses, and that those who step in to buy in times of chaos, i.e. the negative-feedback providers, are not denied the profits that would accrue to them if markets recover. This is the real damage done by policy paradigms such as the “Greenspan/Bernanke Put” that implicitly protect asset markets. They leave us with a fragile market prone to collapse even with a “normal storm”, unless there is further intervention as we saw from the EU/ECB. Of course, every subsequent intervention that aims to stabilise the system only further reduces its resilience.

As positive feedback processes become increasingly dominant, even normal storms that were easily absorbed earlier will cause a catastrophic transition in the system. There are many examples of the loss of system resilience being characterised by its vulnerability to a “normal” disturbance, such as in Minsky’s Financial Instability Hypothesis or Buzz Holling’s conception of ecological resilience, both of which I have discussed earlier.

The Role of Waddell & Reed

In the framework I have outlined above, the appropriate question to ask of the Waddell & Reed affair is whether their sell order was a “normal” storm or an “abnormal” storm. More specifically, pinning the blame on a single order requires us to prove that each time in the past an order of this size was executed, the market crashed in a similar manner. It is also probable that the sell order itself was a component of a positive feedback hedging strategy, and Waddell’s statement that it was selling the futures to “protect fund investors from downside risk” confirms this assessment. In this case, the Waddell sell order was an endogenous event in the framework and not an exogenous shock. Mitigating the impact of such positive feedback strategies only makes the system less resilient in the long run.

As Taleb puts it: “When a bridge collapses, you don’t look at the last truck that was on it, you look at the engineer. You’re looking for the straw that broke the camel’s back. Let’s not worry about the straw, focus on the back.” Or as Jon Stewart would say, let’s figure out why we have a shty boat.


Written by Ashwin Parameswaran

May 16th, 2010 at 4:42 am

Organisational Rigidity, Crony Capitalism, Too-Big-To-Fail and Macro-Resilience


In a previous post, I outlined why cognitive rigidity is not necessarily irrational even though it may lead to a loss of resilience. However, if the universe of agent strategies is sufficiently diverse, a macro-system comprising fragile, inflexible agents can be incredibly resilient. So a simple analysis of micro-fragility does not enable us to reach any definitive conclusions about macro-resilience – organisations and economies may retain significant resilience and an ability to cope with novelty despite the fragility of their component agents.

Yet, there is significant evidence that organisations exhibit rigidity and although some of this rigidity can be perceived as irrational or perverse, much of it arises as a rational response to uncertainty. In Hannan and Freeman’s work on Organizational Ecology, the presence of significant organisational rigidity is the basis of a selection-based rather than an adaptation-based explanation of organisational diversity. There are many factors driving organisational inertia, some of which have been summarised in this paper by Hannan and Freeman. These include internal considerations such as sunk costs, informational constraints and political constraints, as well as external considerations such as barriers to entry and exit. In a later paper, Hannan and Freeman also justify organisational inertia as a means to an end, the end being “reliability”. Just as was the case in Ronald Heiner’s and V.S. Ramachandran’s frameworks discussed previously, inertia is a perfectly logical response to an uncertain environment.

Hannan and Freeman also hypothesise that older and larger organizations are more structurally inert and less capable of adapting to novel situations. In his book “Dynamic Economics”, Burton Klein analysed the historical record and found that advances that “resulted in new S-shaped curves in relatively static industries” do not come from the established players in an industry. In an excellent post, Sean Park summarises exactly why large organizations find it so difficult to innovate and also points to the pre-eminent reference in the management literature on this topic – Clayton Christensen’s “The Innovator’s Dilemma”. Christensen’s work is particularly relevant as it elaborates how established firms can fail not because of any obvious weaknesses, but as a direct consequence of their focus on core clients’ demands.

The inability of older and larger firms to innovate and adapt to novelty can be understood within the framework of the exploration-exploitation tradeoff as an inability to “explore” in an effective manner. As Levinthal and March put it, “past exploitation in a given domain makes future exploitation in the same domain even more efficient….As they develop greater and greater competence at a particular activity, they engage in that activity more, thus further increasing competence and the opportunity cost of exploration.” Exploration is also anathema to large organisations as it seems to imply a degree of managerial indecision. David Ellerman captures the essence of this thought process: “The organization’s experts will decide on the best experiment or approach—otherwise the organization would appear “not to know what it’s doing.””

A crony capitalist economic system that protects the incumbent firms hampers the ability of the system to innovate and adapt to novelty. It is obvious how the implicit subsidy granted to our largest financial institutions via the Too-Big-To-Fail doctrine represents a transfer of wealth from the taxpayer to the financial sector. It is also obvious how the subsidy encourages a levered, homogenous and therefore fragile financial sector that is susceptible to collapse. What is less obvious is the paralysis that it induces in the financial sector and by extension the macroeconomy long after the bailouts and the Minsky moment have passed.

We shouldn’t conflate this paralysis with an absence of competition between the incumbents – the competition between the incumbents may even be intense enough to ensure that they retain only a small portion of the rents that they fight so desperately to retain. What the paralysis does imply is a fierce and unified defence of the local peak that they compete for. Their defence is directed not so much against new entrants who want to play the incumbents at their own game, but at those who seek to change the rules of the game.

The best example of this is the OTC derivatives market, where the benefits of TBTF to the big banks are most evident. Bob Litan notes that clients “wanted the comfort of knowing that they were dealing with large, well-capitalized financial institutions” when dealing in CDS and this observation holds for most other OTC derivative markets. He also correctly identifies that the crucial component of effective reform is removing the advantage that the “Derivative Dealers’ Club” currently possess: “Systemic risk also would be reduced with true derivatives market reforms that would have the effect of removing the balance sheet advantage of the incumbent dealers now most likely regarded as TBTF. If end-users know that when their trades are completed with a clearinghouse, they are free to trade with any market maker – not just the specific dealer with whom they now customarily do business – that is willing to provide the right price, the resulting trades are more likely to be the end-users’ advantage. In short, in a reformed market, the incumbent dealers would face much greater competition.”

Innovation in the financial sector is also hampered because of the outsized contribution it already makes to economic activity in the United States, which makes market-broadening innovations extremely unlikely. James Utterback identified how difficult it is for new entrants to immediately substitute incumbent players: “Innovations that broaden a market create room for new firms to start. Innovation-inspired substitutions may cause established firms to hang on all the more tenaciously, making it extremely difficult for an outsider to gain a foothold along with the cash flow needed to expand and become a player in the industry.” Of course, the incumbents may eventually break away from the local peak but an extended period of stagnation is more likely.

Sustaining an environment conducive to the entry of new firms is critical to the maintenance of a resilient macroeconomy that is capable of innovating and dealing with novelty. The very least that financial sector reform must achieve is to eliminate the benefits of TBTF that currently make it all but impossible for a new entrant to challenge the status quo.


Written by Ashwin Parameswaran

May 2nd, 2010 at 3:48 pm

The Magnetar Trade


The Magnetar Trade according to ProPublica’s recent article is a long-short strategy that worked due to the perverse incentives operating in the CDO market during the boom. According to Jesse Eisinger and Jake Bernstein, Magnetar went long the equity tranche and short the senior tranches and used their position as the buyer of the equity tranche to ensure that the asset quality of the CDO was poorer than it would otherwise be. If ProPublica’s account is true, then this is a moral hazard trade i.e. Magnetar buys insurance against the burning down of a house and uses its influence as an equity buyer to significantly improve the odds of the house burning.

However, there are some hints in Magnetar’s response to the story that cast significant doubt on the accuracy of ProPublica’s narrative. To understand why this is the case, we need to understand what exactly the Magnetar trade as described in the story would look like. Magnetar’s portfolio was most likely a “close to carry neutral” portfolio consisting of long equity tranche positions and short senior/mezzanine tranche positions. In order to be carry-neutral, the notional value of senior tranches that are shorted needs to be an order of magnitude higher than the notional value of equity tranches purchased. In option parlance, this is equivalent to a zero-premium strategy consisting of short ATM options and long OTM options.

There are two reasons to execute such a strategy – one, simply to fund a “short options” strategy and the second, to execute a market-neutral “arbitrage” strategy. The significant advantage that such a long-short strategy has over a “naked short” strategy a la John Paulson is the absence of negative carry. As Taleb explains: “A butterfly position allows you to wait a lot longer for the wings to become profitable. In other words, a strategy that involves a butterfly allows you to be far more aggressive [when buying out-of-the-money options]. When you short near-the-money options, they bring in a lot of cash, so you can afford to spend more on out-of-the-money options. You can do a lot better as a spread trader.”

However, Magnetar describe their portfolio as market-neutral and “designed to have a positive return whether housing performed well or did poorly”. This implies that the portfolio was carry-positive, i.e. the coupons on the long-equity positions exceeded the running-premium cost of buying protection on the senior tranches. This ensures that the portfolio will be profitable in the event that there are no defaults in the portfolio.

If the Magnetar Trade was based upon moral hazard, then it would have to short the senior tranches of the same CDO that it bought equity in and the notional of this short position would have to be multiples of the notional value of the equity position. However, Magnetar in their response to ProPublica explicitly deny this and state: “focusing solely on the group of CDOs in which Magnetar was the initial purchaser of the equity, Magnetar had a net long notional position. To put this into perspective, Magnetar would earn materially more money if these CDOs in aggregate performed well than if these CDOs performed poorly.” The operative term here is “net long notional position” as opposed to “net long position”. A net long position measured in delta terms could easily imply a net short notional position in which case the portfolio would outperform if all the tranches in the CDO were wiped out. But Magnetar seem to make it clear in their response that in the deals where they were the initial purchaser of equity, the notional of the equity positions exceeded the notional of the senior positions that they were short. They also assert that “the majority of the notional value of Magnetar’s hedges referenced CDOs in which Magnetar had no long investment” i.e. of course the notional value of their short positions exceeded that of their long positions, but these short positions were in other CDOs in which they did not have a long position.
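
A hypothetical numerical sketch shows why the distinction matters; the notionals and tranche “deltas” below are my own assumptions, purely for illustration:

```python
# Hypothetical illustration of 'net long in delta terms' vs 'net long notional'.
# Tranche delta here means sensitivity per unit notional relative to the underlying
# portfolio; equity tranches are far more leveraged than senior tranches.
# All figures are assumptions for illustration only.

long_equity_notional = 30      # protection sold on the equity tranche
equity_delta = 15              # assume the equity tranche moves ~15x the underlying

short_senior_notional = 300    # protection bought on senior tranches
senior_delta = 1               # assume the senior tranche moves ~1x the underlying

net_notional = long_equity_notional - short_senior_notional                              # -270: net SHORT notional
net_delta = long_equity_notional * equity_delta - short_senior_notional * senior_delta   # +150: net LONG delta

# If every tranche is wiped out, the equity long loses its full notional while the
# senior shorts pay out their full notional:
pnl_if_all_tranches_wiped_out = -long_equity_notional + short_senior_notional            # +270

print(net_notional, net_delta, pnl_if_all_tranches_wiped_out)
# A portfolio can thus be "net long" in delta terms and still profit from a complete
# collapse; a net long NOTIONAL position, which is what Magnetar claim, rules this out.
```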

But what about the fact that Magnetar seemed to be influencing the portfolio composition of these CDOs to include riskier assets in them? Surely this proves conclusively that Magnetar would profit if the CDOs collapsed? To understand why this may not necessarily be true, we need to examine the payoff profile of the Magnetar trade.

As with most market-neutral “arbitrage” trades, it is unlikely that the trade would deliver a positive return in every conceivable scenario. Rather, it would deliver a positive return in every scenario that Magnetar deemed probable. The Magnetar trade would pay off in two scenarios – if there were no defaults in any of their CDOs, or if there were so many defaults that the tranches that they were short also defaulted along with the equity tranche. The trade would likely lose money if there were limited defaults in all the CDOs and the senior tranches did not default. Essentially, the trade was attractive if one believed that this intermediate scenario was improbable.
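
A rough sketch of this payoff profile makes the point; the tranche sizes, attachment points, coupons and premiums below are made-up assumptions chosen only to illustrate the shape of the P&L, not the economics of any actual deal:

```python
# Stylised one-period P&L of a long-equity / short-senior portfolio as a function of
# losses on the underlying portfolio. All parameters are illustrative assumptions.

def portfolio_pnl(loss_fraction):
    """P&L of: long $10 equity tranche (0-3%), short $100 senior tranche (10-100%)."""
    equity_notional, equity_coupon = 10.0, 0.20        # assumed 20% running coupon
    senior_notional, senior_premium = 100.0, 0.01      # assumed 1% running premium

    # Loss on a tranche as a fraction of tranche notional, given attachment/detachment points
    def tranche_loss(loss, attach, detach):
        return min(max(loss - attach, 0.0), detach - attach) / (detach - attach)

    equity_loss = tranche_loss(loss_fraction, 0.00, 0.03)
    senior_loss = tranche_loss(loss_fraction, 0.10, 1.00)

    pnl_equity = equity_notional * (equity_coupon - equity_loss)     # earn coupon, bear first losses
    pnl_senior = senior_notional * (senior_loss - senior_premium)    # pay premium, get paid on senior losses
    return pnl_equity + pnl_senior

for loss in [0.00, 0.02, 0.05, 0.10, 0.20, 0.40]:
    print(f"portfolio loss {loss:4.0%} -> P&L {portfolio_pnl(loss):+7.2f}")
# No defaults: positive carry. Moderate losses (equity wiped out, senior untouched): negative.
# Severe losses (senior tranche impaired): strongly positive.
```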

A distribution where intermediate scenarios are improbable can arise from many underlying processes but there is one narrative that is particularly relevant to complex adaptive systems such as financial markets. Intermediate scenarios are unlikely when the system is characterised by multiple stable states and “catastrophic” transitions between these states. In adaptive systems such as ecosystems or macroeconomies, such transitions are most likely when the system is fragile and in a state of low resilience. The system tends to be dominated by positive feedback processes that amplify the impact of small perturbations, with no negative feedback processes present that can arrest this snowballing effect.

It turns out that such a framework was extremely well-suited to describing the housing market before the crash. Once house prices started falling and refinancing was no longer an option, the initial wave of defaults triggered a vicious cycle of house price declines and further defaults. Similarly, collateral requirements on leveraged investors, mark-to-market pressures and other positive feedback processes created a vicious cycle of price declines in the market for mortgage-backed securities and CDOs.
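
A toy simulation of such a price-default feedback loop (entirely stylised, with arbitrary parameters of my own choosing) illustrates why outcomes cluster at the two extremes rather than in between:

```python
import random

# Toy model of a price-default feedback loop: defaults push prices down, and price
# declines trigger further forced selling and defaults. All parameters are arbitrary
# assumptions, chosen only to show how positive feedback produces bimodal outcomes.

def simulate_price_path(initial_shock, feedback=1.5, rounds=50):
    price = 100.0
    shock = initial_shock
    for _ in range(rounds):
        price -= shock
        # below a threshold, each decline amplifies the next; above it, shocks die out
        shock = feedback * shock if price < 95 else 0.5 * shock
        if price <= 20:                      # collapsed state
            return 20.0
    return price

outcomes = [simulate_price_path(initial_shock=random.uniform(0, 3)) for _ in range(10_000)]
near_par = sum(1 for p in outcomes if p > 90)
collapsed = sum(1 for p in outcomes if p <= 20)
intermediate = len(outcomes) - near_par - collapsed
print(near_par, intermediate, collapsed)
# Most paths end either close to the initial price or in the collapsed state;
# intermediate outcomes are rare.
```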

So what does all this have to do with Magnetar’s desire to include riskier assets in their long equity portfolios? If one believes that only a small perturbation is required to tip the market over into a state of collapse, then the long position should be weighted towards the riskiest possible asset portfolio. Essentially, the above framework implies that there is no benefit to having “safer” long positions in the long-short portfolio. The fragility of the system means that either there is no perturbation and all assets perform no matter how low-quality they are, or there is a perturbation and even “high quality” assets default.

The above framework of catastrophic shifts between multiple stable states is not uncommon, especially in fixed income markets. In fact, the Greek funding situation is a perfect example. If one had to sketch out a distribution of the yield on Greek debt, intermediate levels would likely be the least probable scenarios. In other words, either Greece funds at low, sustainable rates or it moves rapidly to a state of default – it is unlikely that Greece raises, say, 50 billion Euros at an interest rate of 10%. The situation is made even more stark by Greece’s inability to inflate away its debt via the printing press. The same bifurcation exists in fiat-currency-issuing countries as well, but there the transition occurs only at the point when hyperinflation kicks in.

Bank incentives are the real problem

Even if my arguments are valid and Magnetar did not execute the moral hazard trade, it is nevertheless obvious that someone else could quite easily have done so. But the moral hazard trade was only possible because there was sufficient investor demand for the rated tranches of the CDO and, even more crucially, because the originating bank was willing to hold onto the super-senior tranche. As I have discussed in detail many times before, bank demand for super-senior tranches is a logical consequence of the cheap leverage that banks are afforded via the moral hazard subsidy of the TBTF doctrine. If banks were less levered, many of these deals would not have been issued at all.

In fact, two of the hedging strategies that we know were implemented in banks – UBS’ “AMPS” strategy and Howie Hubler’s trade in Morgan Stanley – were mirror images of the Magnetar trade. It is not a coincidence that bank traders chose the negatively skewed payoff distribution and Magnetar chose the positively skewed one.


Disclaimer: The above note is just my analysis of the facts and assertions in ProPublica’s article. I have no additional knowledge of the facts of the case and it is entirely possible that Magnetar are being less than fully forthright in their responses to the story. The above analysis is more useful as an illustration of how the facts as described in the article can be reconciled to a narrative that does not imply moral hazard.


Written by Ashwin Parameswaran

April 11th, 2010 at 4:19 pm

Micro-Foundations of a Resilience Approach to Macro-Economic Analysis

with 4 comments

Before assessing whether a resilience approach is relevant to macro-economic analysis, we need to define resilience. Resilience is best defined as “the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks.”

The assertion that an ecosystem can lose resilience and become fragile is not controversial. To claim that the same can occur in social systems such as macro-economies is nowhere near as obvious, not least due to our ability to learn, forecast the future and adapt to changes in our environment. Any analysis of how social systems can lose resilience is open to the objection that loss of resilience implies systematic error on the part of economic actors in assessing economic conditions accurately and an inability to adapt to the new reality. For example, one of the common objections to Minsky’s Financial Instability Hypothesis (FIH) is that it requires irrational behaviour on the part of economic actors. Rajiv Sethi’s post has a summary of this debate, with a notable objection coming from Bernanke’s paper on the subject, which insists that “Hyman Minsky and Charles Kindleberger have in several places argued for the inherent instability of the financial system, but in doing so have had to depart from the assumption of rational behavior.”

One response to this objection is “So What?”, and indeed the stability-resilience trade-off can be explained within the Kahneman-Tversky framework. Another response, which I’ve invoked on this blog and Rajiv has also mentioned in a recent post, focuses on the pervasive principal-agent relationship in the financial economy. However, I am going to focus on a third and more broadly applicable rationale, one which utilises a “rationality” that incorporates Knightian uncertainty as the basis for the FIH. The existence of irreducible uncertainty is sufficient to justify an evolutionary approach for any social system, whether it be an organization or a macro-economy.

Cognitive Rigidity as a Rational Response to Uncertainty

Rajiv touches on the crux of the issue when he notes: “Selection of strategies necessarily implies selection of people, since individuals are not infinitely flexible with respect to the range of behavior that they can exhibit.” But is achieving infinite flexibility a worthwhile aim? The evidence suggests that it is not. In the face of true uncertainty, infinite flexibility is not only unrealistic due to finite cognitive resources but it is also counterproductive and may deliver results that are significantly inferior to a partially “rigid” framework. V.S. Ramachandran explains this brilliantly: “At any given moment in our waking lives, our brains are flooded with a bewildering variety of sensory inputs, all of which have to be incorporated into a coherent perspective based on what stored memories already tell us is true about ourselves and the world. In order to act, the brain must have some way of selecting from this superabundance of detail and ordering it into a consistent ‘belief system’, a story that makes sense of the available evidence. When something doesn’t quite fit the script, however, you very rarely tear up the entire story and start from scratch. What you do, instead, is to deny or confabulate in order to make the information fit the big picture. Far from being maladaptive, such everyday defense mechanisms keep the brain from being hounded into directionless indecision by the ‘combinational explosion’ of possible stories that might be written from the material available to the senses.”

This rigidity is far from being maladaptive and appears to be irrational only when measured against a utopian definition of rational choice. Behavioural Economics also frequently commits the same error – as Brian Loasby notes: “It is common to find apparently irrational behaviour attributed to ‘framing effects’, as if ‘framing’ were a remediable distortion. But any action must be taken within a framework.” This notion of true rationality being less than completely flexible is not a new one – Ramachandran’s work provides the neurological basis for the notion of ‘rigidity as a rational response to uncertainty’. I have already discussed Ronald Heiner’s framework, which bears a striking resemblance to Ramachandran’s thesis, in a previous post:

“Think of an omniscient agent with literally no uncertainty in identifying the most preferred action under any conceivable condition, regardless of the complexity of the environment which he encounters. Intuitively, such an agent would benefit from maximum flexibility to use all potential information or to adjust to all environmental conditions, no matter how rare or subtle those conditions might be. But what if there is uncertainty because agents are unable to decipher all of the complexity of the environment? Will allowing complete flexibility still benefit the agents?

I believe the general answer to this question is negative: that when genuine uncertainty exists, allowing greater flexibility to react to more information or administer a more complex repertoire of actions will not necessarily enhance an agent’s performance.”

Brian Loasby gives an excellent account of ‘rationality under uncertainty’ and its evolutionary implications in this book, which traces hints of the idea running through the work of Adam Smith, Alfred Marshall, George Kelly’s ‘Personal Construct Theory’ and Hayek’s ‘Sensory Order’. But perhaps the clearest exposition of the idea was provided by Kenneth Boulding in his description of subjective human knowledge as an ‘Image’. Most external information either conforms so closely to the image that it is ignored or it adds to the image in a well-defined manner. But occasionally, we receive information that is at odds with our image. Boulding recognised that such change is usually abrupt and explained it in the following manner: “The sudden and dramatic nature of these reorganizations is perhaps a result of the fact that our image is in itself resistant to change. When it receives messages which conflict with it, its first impulse is to reject them as in some sense untrue….As we continue to receive messages which contradict our image, however, we begin to have doubts, and then one day we receive a message which overthrows our previous image and we revise it completely.” He also recognises that this resistance is not “irrational” but merely a logical response to uncertainty in an “imperfect” market: “The buyer or seller in an imperfect market drives on a mountain highway where he cannot see more than a few feet around each curve; he drives it, moreover, in a dense fog. There is little wonder, therefore, that he tends not to drive it at all but to stay where he is. The well-known stability or stickiness of prices in imperfect markets may have much more to do with the uncertain nature of the image involved than with any ideal of maximizing behavior.”

Loasby describes the key principles of this framework as follows: “The first principle is that all action is decided in the space of representations. These representations include, for example, neural networks formed in the brain by processes which are outside our conscious control…None are direct copies of reality; all truncate complexity and suppress uncertainty……The second principle of this inquiry is that viable processes must operate within viable boundaries; in human affairs these boundaries limit our attention and our procedures to what is manageable without, we hope, being disastrously misleading – though no guarantees are available……The third principle is that these frameworks are useless unless they persist, even when they do not fit very well. Hahn’s definition of equilibrium as a situation in which the messages received by agents do not cause them to change the theories that they hold or the policies that they pursue offers a useful framework for the analysis both of individual behaviour and of the co-ordination of economic activity across a variety of circumstances precisely because it is not to be expected that theories and policies will be readily changed just because some evidence does not appear readily compatible with them.” (For a more detailed account, read Chapter 3 ‘Cognition and Institutions’ of the aforementioned book or his papers here and here.)

The above principles are similar to Ronald Heiner’s assertion that actions chosen under true uncertainty must satisfy a ‘reliability condition’. It also accounts for the existence of the stability-resilience trade-off. In Loasby’s words: “If behaviour is a selected adaptation and not a specific application of a general logic of choice, then the introduction of substantial novelty – a change not of weather but of climate – is liable to be severely disruptive, as Schumpeter also insisted. In biological systems it can lead to the extinction of species, sometimes on a very large scale.” Extended periods of stability narrow the scope of events that fit the script and correspondingly broaden the scope of events that appear to be anomalous and novel. When the inevitable anomalous event comes along, we either adapt too slowly or in extreme cases, not at all.


Written by Ashwin Parameswaran

April 11th, 2010 at 7:51 am