macroresilience

resilience, not stability

Archive for the ‘Resilience’ Category

The Cause and Impact of Crony Capitalism: the Great Stagnation and the Great Recession

with 23 comments

STABILITY AS THE PRIMARY CAUSE OF CRONY CAPITALISM

The core insight of the Minsky-Holling resilience framework is that stability and stabilisation breed fragility and loss of system resilience. TBTF protection and the moral hazard problem are best seen as a subset of the broader policy of stabilisation, within which policies such as the Greenspan Put are far more pervasive and dangerous.

By itself, stabilisation is not sufficient to cause cronyism and rent-seeking. Once a system has undergone a period of stabilisation, the system manager is always tempted to prolong the stabilisation for fear of short-term disruption or even collapse. However, not all crisis-mitigation strategies involve bailouts and transfers of wealth to the incumbent corporates. As Mancur Olson pointed out, society can confine its “distributional transfers to poor and unfortunate individuals” rather than bailing out incumbent firms and still hope to achieve the same results.

To fully explain the rise of crony capitalism, we need to combine the Minsky-Holling framework with Mancur Olson’s insight that extended periods of stability trigger a progressive increase in the power of special interests and rent-seeking activity. Olson also noted the self-preserving nature of this phenomenon. Once rent-seeking has achieved sufficient scale, “distributional coalitions have the incentive and… the power to prevent changes that would deprive them of their enlarged share of the social output”.

SYSTEMIC IMPACT OF CRONY CAPITALISM

Crony capitalism results in a homogeneous, tightly coupled and fragile macroeconomy. The key question is: via which channels does this systemic malformation occur? As I have touched upon in some earlier posts [1,2], the systemic implications of crony capitalism arise from its negative impact on new firm entry. In the context of the exploration vs exploitation framework, the absence of new firm entry tilts the system towards over-exploitation¹.

Exploration vs Exploitation: The Importance of New Firm Entry in Sustaining Exploration

In a seminal article, James March distinguished between “the exploration of new possibilities and the exploitation of old certainties. Exploration includes things captured by terms such as search, variation, risk taking, experimentation, play, flexibility, discovery, innovation. Exploitation includes such things as refinement, choice, production, efficiency, selection, implementation, execution.” True innovation is an act of exploration under conditions of irreducible uncertainty whereas exploitation is an act of optimisation under a known distribution.

The assertion that dominant incumbent firms find it hard to sustain exploratory innovation is not a controversial one. I do not intend to reiterate the popular arguments in the management literature, many of which I explored in a previous post. Moreover, the argument presented here is more subtle: I do not claim that incumbents cannot explore effectively, but simply that they can explore effectively only when pushed to do so by a constant stream of new entrants. This is of course the “invisible foot” argument of Joseph Berliner and Burton Klein, for which the exploration-exploitation framework provides an intuitive and rigorous rationale.

Let us assume a scenario where the entry of new firms has slowed to a trickle, the sector is dominated by a few dominant incumbents and the S-curve of growth is about to enter its maturity/decline phase. To trigger a new S-curve of growth, the incumbents need to explore. However, almost by definition, the odds that any given act of exploration will be successful are small. Moreover, the positive payoff from any exploratory search almost certainly lies far in the future. For an improbable shot at moving from a position of comfort to one of dominance in the distant future, an incumbent firm needs to divert resources from optimising and efficiency-increasing initiatives that will deliver predictable profits in the near future. Of course, if a significant proportion of its competitors adopt an exploratory strategy, even an incumbent firm will be forced to follow suit for fear of losing market share. But this critical mass of exploratory incumbents never comes about. In essence, the state where almost all incumbents are content to focus their energies on exploitation is a Nash equilibrium.

On the other hand, the incentives of any new entrant are almost entirely skewed in favour of exploratory strategies. Even an improbable shot at glory is enough to outweigh the minor consequences of failure². It cannot be emphasised enough that this argument does not depend upon the irrationality of the entrant. The same incremental payoff that represents a minor improvement for the incumbent is a life-changing event for the entrepreneur. When there exists a critical mass of exploratory new entrants, the dominant incumbents are compelled to follow suit and the Nash equilibrium of the industry shifts towards the appropriate mix of exploitation and exploration.
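
To make this payoff asymmetry concrete, here is a minimal sketch in Python. Every number in it is an illustrative assumption – a 5% success rate, log utility to capture risk aversion, and a small failure cost for the entrepreneur (footnote 2’s point about limited personal consequences) – but it suffices to show the same gamble failing the incumbent’s hurdle while clearing the entrant’s.

```python
# A minimal sketch (all numbers assumed) of the incumbent/entrant asymmetry:
# the same exploratory gamble evaluated under log utility, which captures
# the idea that a given prize matters far more to a small entrant.
import math

P_SUCCESS = 0.05   # odds that any given act of exploration succeeds
PRIZE = 100.0      # distant payoff if the exploration succeeds

def eu_explore(wealth, failure_cost):
    """Expected log-utility of exploring."""
    win = math.log(wealth - failure_cost + PRIZE)
    lose = math.log(wealth - failure_cost)
    return P_SUCCESS * win + (1 - P_SUCCESS) * lose

def eu_exploit(wealth, sure_profit):
    """Log-utility of sticking to predictable, efficiency-increasing work."""
    return math.log(wealth + sure_profit)

# Incumbent: a large franchise, fat exploitation profits, and real resources
# diverted from optimisation if it explores.
print(eu_explore(1000.0, failure_cost=2.0) > eu_exploit(1000.0, sure_profit=4.0))
# False -> exploitation is the rational choice; hence the Nash equilibrium

# Entrant: little to lose and no franchise worth exploiting.
print(eu_explore(5.0, failure_cost=0.2) > eu_exploit(5.0, sure_profit=0.2))
# True -> even an improbable shot at glory is worth taking
```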

The Crony Capitalist Boom-Bust Cycle: A Tradeoff between System Resilience and Full Employment

Due to insufficient exploratory innovation, a crony capitalist economy is not diverse enough. But this does not imply that the system is fragile, either at the firm/micro level or at the level of the macroeconomy. In the absence of any risk of being displaced by new entrants, incumbent firms can simply maintain significant financial slack³. If incumbents do maintain significant financial slack, sustainable full employment is impossible almost by definition. However, full employment can be achieved temporarily in two ways: either incumbent corporates can gradually give up their financial slack and lever up as the period of stability extends, as Minsky’s Financial Instability Hypothesis (FIH) would predict, or the household or government sector can lever up to compensate for the slack held by the corporate sector.

Most developed economies went down the route of increased household and corporate leverage, with the process aided and abetted by monetary and regulatory policy. But it is instructive that developing economies such as India faced exactly the same problem in their “crony socialist” days. In keeping with its pre-1990 ideological leanings, India tackled the unemployment problem via increased government spending. Whatever the chosen solution, full employment is unsustainable in the long run unless the core problem of cronyism is tackled. The current over-leveraged state of the consumer in the developed world can be papered over by increased government spending, but in the face of increased cronyism, this only kicks the can further down the road. Restoring corporate animal spirits depends upon corporate slack being utilised in exploratory investment, which as discussed above is inconsistent with a cronyist economy.

Micro-Fragility as the Key to a Resilient Macroeconomy and Sustainable Full Employment

At the appropriate mix of exploration and exploitation, individual incumbent and new entrant firms are both incredibly vulnerable. Most exploratory investments are destined to fail, as are most firms, sooner or later. Yet due to the diversity of firm-level strategies, this macroeconomy of vulnerable firms is incredibly resilient. At the same time, the transfer of wealth from incumbent corporates to the household sector via reduced corporate slack and increased investment means that sustainable full employment can be achieved without undue leverage. The only question is whether we can break out of the Olsonian special interest trap without having to suffer a systemic collapse in the process.
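
The fallacy of composition at work here can be illustrated with a toy Monte Carlo experiment – all parameters below are assumed purely for illustration. An economy of individually fragile but strategically diverse firms produces a steadier aggregate than an economy of individually safer firms that share a single strategy and hence a single failure mode:

```python
# A toy Monte Carlo sketch (parameters assumed): diverse micro-fragility
# versus homogeneous micro-robustness, compared at the aggregate level.
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 100, 10_000

# Diverse economy: each firm independently fails in any year with prob 0.2.
diverse = (rng.random((n_years, n_firms)) > 0.2).sum(axis=1)

# Homogeneous economy: firms fail far less often (prob 0.05), but together,
# because they are all exposed to the same shock.
common_shock = rng.random(n_years) < 0.05
homogeneous = np.where(common_shock, 0, n_firms)

for name, output in [("diverse", diverse), ("homogeneous", homogeneous)]:
    print(f"{name:>12}: mean output {output.mean():5.1f}, worst year {output.min():3d}")
# diverse:     mean ~80, worst year ~65 -> constant moderate firm turnover
# homogeneous: mean ~95, worst year   0 -> long calm, then total collapse
```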

  1. It cannot be emphasised enough that the absence of new firm entry is simply the channel through which crony capitalism malforms the macroeconomy. Attempts to artificially boost new firm entry are therefore likely to fail unless they tackle the ultimate cause of the problem, which is stabilisation.
  2. It is critical that the personal consequences of firm failure are minor for the entrepreneur – for cultural and legal reasons this is not the case in many countries around the world, but it is largely still true in the United States.
  3. It could be argued that incumbents could follow this strategy even when new entrants threaten them. This strategy however has its limits – an extended period of standing on the sidelines of exploratory activity can degrade the ability of the incumbent to rejoin the fray. As Brian Loasby remarked: “For many years, Arnold Weinstock chose to build up GEC’s reserves against an uncertain technological future in the form of cash rather than by investing in the creation of technological capabilities of unknown value. This policy, one might suggest, appears much more attractive in a financial environment where technology can often be bought by buying companies than in one where the market for corporate control is more tightly constrained; but it must be remembered that some, perhaps substantial, technological capability is likely to be needed in order to judge what companies are worth acquiring, and to make effective use of the acquisitions. As so often, substitutes are also in part complements.”

Written by Ashwin Parameswaran

November 24th, 2010 at 6:01 pm

The Resilience Stability Tradeoff: Drawing Analogies between River Flood Management and Macroeconomic Management

with 9 comments

In an earlier post, I drew an analogy between Minsky’s Financial Instability Hypothesis (FIH) and the ecologist Buzz Holling’s work on the resilience-stability tradeoff in ecosystems. Extended periods of stability reduce system resilience in complex adaptive systems such as ecologies and economies. By extension, policies that focus on stabilisation cause a loss of system resilience. Holling and Meffe called this the Pathology of Natural Resource Management, which they described as follows: “when the range of natural variation in a system is reduced, the system loses resilience. That is, a system in which natural levels of variation have been reduced through command-and-control activities will be less resilient than an unaltered system when subsequently faced with external perturbations.” This pathology is as relevant to macroeconomic systems as it is to ecosystems, and I briefly drew an analogy between forest fire management and economic management in the earlier post. In this post, I analyse the dilemmas faced in river flood management and their relevance to macroeconomic management.

A Case Study of River Flood Management: River Kosi

The Kosi is one of the most flood-prone rivers in India. The brunt of its fury is borne by the northern Indian state of Bihar, and the Kosi is also aptly known as the “Sorrow of Bihar”. Like many other flood-prone rivers, the root cause lies in the extraordinary amount of silt that the Kosi carries from the Himalayas to the plains of Bihar. The silt deposition raises the river bed and gravity causes the river to seek out a new course – in this manner, it has been estimated that the Kosi may have moved westwards by an incredible 210 km in the last 250 years. During the 1950s, in an effort to provide “permanent salvation from floods”, the Indian government embarked on a program of building embankments on the river to curb the periodic shifting of the Kosi’s course – the embankments were aimed at converting the unpredictable behaviour of the river into something more predictable and, by extension, more manageable. It was assumed that the people of Bihar would benefit from a stabilised and predictable river.

Unfortunately, the reality of the flood management program on the Kosi has turned out to be anything but beneficial. The culmination of the program’s failure was the 2008 Bihar flood, one of the most disastrous floods in the history of the state. So what went wrong? Was this just the result of an extraordinary natural event? Most certainly not – as Dinesh Mishra notes, in 2008 the Kosi carried only 1/7th of the capacity of the embankments, and at various points of time since the 50s the river had carried far greater quantities of water without causing anywhere near the damage it caused in 2008. This was a disaster caused by the loss of system resilience, highlighted by the inability of the system to “withstand even modest adverse shocks” after prolonged periods of stability.

So what caused this loss of system resilience? As Dinesh Mishra explains: “By building embankments on either side of a river and trying to confine it to its channel, its heavy silt and sand load is made to settle within the embanked area itself, raising the river bed and the flood water level. The embankments too are therefore raised progressively until a limit is reached when it is no longer possible to do so. The population of the surrounding areas is then at the mercy of an unstable river with a dangerous flood water level, which could any day flow over or make a disastrous breach.” As expected, the eventual breach was catastrophic – the course of the Kosi moved more than 120 kilometres eastwards in a matter of weeks. In the absence of the embankments, such a dramatic shift would have taken decades. With the passage of time, progressively greater resources were required to maintain system stability, and the eventual failure was catastrophic rather than moderate.
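
A toy model makes this ratchet visible. In the sketch below, every number is an assumption (a bed rising 0.1 m a year inside the embankments, dikes raised to keep 4 m of freeboard until a 6 m practical limit, a yearly flood stage drawn from an exponential distribution with mean 1 m), but the qualitative dynamic is the one Mishra describes:

```python
# Toy embankment ratchet (all parameters assumed): silt raises the bed, the
# dikes keep pace until they hit a height limit, and thereafter an ordinary
# flood becomes ever more likely to breach them.
import math

SILT_PER_YEAR = 0.1     # bed rise inside the embankments (m/yr)
FREEBOARD_TARGET = 4.0  # freeboard managers try to maintain (m)
DIKE_LIMIT = 6.0        # practical limit on dike height (m)

for year in (1, 20, 40, 50, 55):
    bed = SILT_PER_YEAR * year
    dike = min(bed + FREEBOARD_TARGET, DIKE_LIMIT)
    freeboard = dike - bed              # room left for a flood
    p_breach = math.exp(-freeboard)     # P(exponential flood stage > freeboard)
    print(f"year {year:2d}: freeboard {freeboard:.1f} m, "
          f"P(breach this year) = {p_breach:.0%}")
# year  1: freeboard 4.0 m, P(breach this year) = 2%
# year 20: freeboard 4.0 m, P(breach this year) = 2%
# year 40: freeboard 2.0 m, P(breach this year) = 14%
# year 50: freeboard 1.0 m, P(breach this year) = 37%
# year 55: freeboard 0.5 m, P(breach this year) = 61%
```

While the dikes can still be raised, the breach probability stays at the 2% of the proverbial rare flood; once they cannot, the same river tops them in most years – the stabilised system ends up at the mercy of ordinary disturbances.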

As the above analysis highlights, the stabilisation did not merely substitute an occasional catastrophic outcome for a series of regular moderately damaging outcomes (although this alone would be a cause for concern if a catastrophic outcome were capable of triggering systemic collapse). In fact, the stabilisation transformed the system into a state where eventually even minor and frequently observed disturbances would trigger a catastrophic outcome. As Jon Stewart put it, even “regular storms” would topple a fragile boat. When faced with the possibility of a catastrophic outcome, the managing agency has two choices, neither of which is attractive.

Either it can continue to stabilise the system with ever-increasing resources in an effort to avoid the catastrophic outcome – an option that is only viable if the managing agency has infinite resources, or if there is some absolute limit to this vicious cycle of cost escalation that lies within the agency’s resource capabilities. Or it can allow the catastrophic outcome to occur in an effort to restore the system to its unstabilised state – an option that risks systemic collapse: it is not just the unprecedented nature of the outcome that we have to fear, but the very fact that the adaptive agents of the complex system may have lost the ability to deal with even the occasional moderate failures that the unstabilised system would throw up. In other words, once the system has lost resilience, managing it is akin to choosing between the frying pan and the fire.

For example, in the pre-embankment era when the Kosi was allowed to meander and change course in a natural manner, the villagers on its banks had a deep understanding of the river’s patterns and its vagaries. The floods sustained the fertility of the soil and ensured that groundwater resources were plentiful. This is not to deny that the Kosi caused damage, but because the people had adapted to its regular flooding patterns, systemic damage only occurred during the proverbial 100-year flood. This highlights an important lesson of complex adaptive systems: the impact of disturbances cannot be analysed in isolation from the adaptive capacities of the agents in the system. If disturbances are regular and predictable, agents will likely be adapted to them; conversely, prolonged periods of stability will render agents vulnerable to even the smallest disturbance.

The problems of managing floods on the Kosi are not unique – many rivers around the world pose similar challenges, such as the Yellow River, aptly named the “Sorrow of China”, and the Mississippi river basin, whose story was captured so well by John McPhee. So is there any way to avoid this evolutionary arms race against nature? Are we to conclude that the only sustainable strategy is to avoid any intervention in the complex adaptive system? Not necessarily – but interventions must avoid tampering with the fundamental patterns and evolutionary dynamics of the system. Indeed, the best example of river management that works with the natural flow of the river rather than against it is the Dutch government’s aptly named “Room for the River” project in the Rhine river valley. Instead of building higher dikes, the Dutch have chosen to build lower dikes that allow the Rhine to flood over a larger area, thus easing the pressure on the dike system as a whole. This program has been adopted despite the fact that many farmers need to be relocated out of the newly expanded flood zones of the river.

Macroeconomic Parallels

Axel Leijonhufvud’s “Corridor Hypothesis” postulates that a macroeconomy will adapt well to small shocks but, “outside of a certain zone or “corridor” around its long-run growth path, it will only very sluggishly react to sufficiently large, infrequent shocks.” The adaptive nature of the macroeconomy implies that stability, and by extension stabilisation, reduces the width of the corridor to the point where even a small shock is enough to push the system outside it. Just as embankments induced fragility in the river Kosi, bailouts and other economic transfers to specific firms and industries induce fragility in the macroeconomic system. Economic policy must allow the “river” of the macroeconomy to flow in a natural manner and restrict its interventions to insuring individual economic agents against the occasional severe flood.

This sentiment was also expressed by that great evolutionary macroeconomist of our time, Mancur Olson. In his final work, “Power and Prosperity”, Olson notes: “subsidizing industries, firms and localities that lose money…at the expense of those that make money…is typically disastrous for the efficiency and dynamism of the economy, in a way that transfers unnecessarily to poor individuals…A society that does not shift resources from the losing activities to those that generate a social surplus is irrational, since it is throwing away useful resources in a way that ruins economic performance without the least assurance that it is helping individuals with low incomes. A rational and humane society, then, will confine its distributional transfers to poor and unfortunate individuals.” Olson understood the damage inflicted by rent-seeking not only from a systemic perspective but also from a perspective of social justice. The logical consequence of micro-stabilisation is a crony capitalist economy – rents invariably flow to the strong, and the result is a sluggish and inegalitarian economic system, not unlike many developing economies. Contrary to popular opinion, it is not limiting handouts to the poor that defines a free and dynamic economy but limiting the rents that flow to the privileged.

On the Damage Done by the Greenspan Put Variant of Monetary Policy

Clearly, some fiscal policies aimed at firm and industry stabilisation harm the economic system. But what about monetary policy? Isn’t monetary policy close to neutral and therefore exempt from the above criticism? On the contrary – the Greenspan Put variant of monetary policy damages macroeconomic resilience as well as being inegalitarian and unjust. Monetary policy during the Greenspan-Bernanke era has focused on stabilising incumbent banks and helping them shore up their capital in response to every economic shock, and on asset prices as a transmission channel of monetary policy, i.e. the Greenspan Put. Unlike a river system, where the buildup of silt is a clear indicator of growing fragility, there are no clear signs of loss of system resilience in a macroeconomy. However, we can infer a loss of macroeconomic resilience from the ever-increasing resources that are required to maintain system stability. Just as the embankments of the Kosi were raised higher and higher to combat even a minor flood, the resources needed to stabilise the financial system have grown over the last 25 years. In the early 90s, bank capital could be rebuilt by a few years of low rates, but now we need a panoply of “liquidity” facilities, near-zero rates and quantitative easing aimed at compressing the entire yield curve to achieve the same result.

As I mentioned earlier, such a stabilisation policy may be credible if there is a limit to the costs of stabilisation. For example, the rents that can be extracted by any small, isolated sector of the economy are limited. Unfortunately, and this is a point that cannot be emphasised enough, there is no limit to the rents that can be extracted by the financial sector. Every commitment by the central bank to insure the financial sector against bad outcomes will be arbitraged for all it’s worth until the cost of maintaining the commitment becomes so prohibitive that it is no longer tenable. Of course, as long as the stabilising policy is in operation it appears to be a “free lunch” – the costs of programs such as the TARP appear to be limited and well worth their macroeconomic benefits, just as flood protection appears to be a successful choice in the long period of calm before the eventual disaster. The loss of resilience and rent extraction is exacerbated as other financial market players are encouraged to mimic banks and take on similarly negatively skewed bets, such as investing the proceeds from securities lending in “safe” assets.
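
The arithmetic of such a negatively skewed bet shows why the commitment is arbitraged so relentlessly. The numbers below are assumed purely for illustration: capping the tail loss flips the sign of the expected value, turning a bet no one should take into one everyone wants to lever up.

```python
# Stylised arithmetic (numbers assumed): a negatively skewed carry trade,
# with and without a central-bank put that caps the tail loss.
P_BAD = 0.05         # probability of the bad state
CARRY = 1.0          # steady income in the 95% of states where nothing breaks
TAIL_LOSS = 30.0     # blow-up in the bad state, absent any insurance
CAPPED_LOSS = 5.0    # loss actually borne once the central bank steps in

ev_unprotected = (1 - P_BAD) * CARRY - P_BAD * TAIL_LOSS
ev_with_put = (1 - P_BAD) * CARRY - P_BAD * CAPPED_LOSS
print(f"EV without the put: {ev_unprotected:+.2f}")  # -0.55 -> decline the bet
print(f"EV with the put:    {ev_with_put:+.2f}")     # +0.70 -> lever it up
```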

In my last post, I noted the connection between inequality and the rents emanating from the moral hazard subsidy, but the larger culprit is the toxic combination of Greenspan Put monetary policy and a dynamically uncompetitive, cronyist financial sector. Even if the sector were more competitive, monetary policy focused on shoring up asset prices would inevitably benefit the primary asset-holders in the economy – in itself a regressive transfer of wealth to the rich. The idea that supporting asset prices is the best way to support the wider economy is not far from the notion of trickle-down economics (or as Will Rogers put it: “money was all appropriated for the top in hopes that it would trickle down to the needy.”).

Finally, although it goes without saying that even a fiat currency-issuing central bank does not have infinite resources, the move over the last century from a gold standard to a fiat money regime does have some important implications for system resilience. In evolving from a decentralised gold standard monetary system to a fiat-currency-issuing central bank regime, the flexibility and resources at the monetary authority’s disposal have increased significantly. In the hands of a responsible central bank, the ability to issue a fiat currency is beneficial; but in an excessively stabilised economy, it allows the process of stabilisation to be maintained for far longer than it otherwise would be. And just as with the river Kosi, the longer the period of stabilisation, the more catastrophic are the results of the inevitable normal disturbance.


Written by Ashwin Parameswaran

October 18th, 2010 at 11:35 am

Uncertainty and the Cyclical vs Structural Unemployment Debate

with 6 comments

There are two schools of thought on the primary cause of our current unemployment problem: some claim that the unemployment is cyclical (low aggregate demand) whereas others think it is structural (a mismatch in the labour market). The “Structuralists” point to the apparent shift in the Beveridge curve and the increased demand in healthcare and technology, whereas the “Cyclicalists” point to the fall in employment across all other sectors. So who’s right? In my opinion, neither explanation is entirely satisfactory. This post expands on some thoughts I touched upon in my last post, describing the “persistent unemployment” problem as a logical consequence of a dynamically uncompetitive “Post Minsky Moment” economy.

Narayana Kocherlakota explains the mismatch thesis as follows: “Firms have jobs, but can’t find appropriate workers. The workers want to work, but can’t find appropriate jobs. There are many possible sources of mismatch—geography, skills, demography—and they are probably all at work….the Fed does not have a means to transform construction workers into manufacturing workers.” Undoubtedly this argument has some merit – the real question is how much of our current unemployment can be attributed to the mismatch problem. Kocherlakota draws on work done by Robert Shimer and extrapolates from the post-2000 Beveridge curve relationship to arrive at an implied unemployment rate of 6.3% had that relationship not broken down, attributing the remainder to mismatch. Jan Hatzius of Goldman Sachs, on the other hand, attributes as little as 0.75% of the current unemployment problem to structural reasons. Murat Tasci and Dave Lindner, however, conclude that the recent behaviour of the Beveridge curve is not anomalous when viewed in the context of previous post-war recessions, and Shimer himself was wary of extrapolating too much from the limited data set since 2000 (see pg 12-13 here). This would imply that Kocherlakota’s estimate is an overestimate even if Jan Hatzius’ may be an underestimate.

Incorporating Uncertainty into the Mismatch Argument

It is likely therefore that there is a significant pool of unemployment that cannot be explained by the simple mismatch argument. But this does not mean that the “recalculation” thesis is invalid. The simple mismatch argument ignores the uncertainty involved in the “Post Minsky Moment” economy – it assumes that firms have known jobs that remain unfilled, whereas in reality firms need to engage in a process of exploration that will determine the nature of jobs consistent with the new economic reality before they search for suitable workers. The problem we face right now is that firms are unwilling to take on the risk inherent in such exploration. The central message of my previous posts on evolvability and organisational rigidity is that this process of exploration depends upon the maintenance of a dynamically competitive economy rather than a statically competitive one. Continuous entry of new firms is of critical importance in maintaining a dynamically competitive economy that retains the ability to evolve and reconfigure itself when faced with a dramatic change in circumstances.

The “Post Minsky Moment” Economy

In Minsky’s Financial Instability Hypothesis, the long period of stability before the crash creates a homogeneous and fragile ecosystem – the fragility arises from the fragility of the individual firms as well as the absence of diversity. After the inevitable crash, the system regains some of its robustness via the slack built up by the incumbent firms, usually in the form of financial liquidity. However, so long as this slack at firm level is maintained, the macro-system cannot possibly revert to a state where it attains conventional welfare optima such as full employment. The conventional Keynesian solution suggests that the state pick up the slack in economic activity, whereas some assume that sooner or later market forces will reorganise to utilise this firm-level slack. This post is an attempt to partially refute both explanations – as Burton Klein often noted, there is no hidden hand that can miraculously restore the “animal spirits” of an economy or an industry once it has lost its evolvability. Similarly, Keynesian policies that shore up the position of the incumbent firms can cause fatal damage to the evolvability of the macro-economy.

Corporate Profits and Unemployment

This thesis does not imply that incumbent firms leave money on the table. In fact, incumbents typically redouble their efforts at static optimisation – hence the rise in corporate profits. Some may argue that this rise in profitability is illusory and represents capital consumption, i.e. short-term gain at the expense of a long-term loss of competence and capabilities at firm level. But in the absence of new firm entry, it is unlikely that there is even a long-term threat to incumbents’ survival – firms are making a calculated bet that the loss of evolvability represents a minor risk. It is only the invisible foot of the threat of new firms that prevents incumbents from going down this route.

Small Business Financing Constraints as a Driver of Unemployment

The role of new firms in generating employment is well-established, and my argument implies that incumbent firms will effectively contribute to solving the unemployment problem only when prodded to do so by the invisible foot of new firm entry. The credit conditions faced by small businesses remain extremely tight despite funding costs for big incumbent firms having eased considerably since the peak of the crisis. Of course, this may be due to insufficient investment opportunities – some of which may be due to dominant large incumbents in specific sectors. But a more plausible explanation lies in the unevolvable and incumbent-dominated state of our banking sector. Expanding lending to new firms is an act of exploration, and incumbent banks are almost certainly content with exploiting their known and low-risk sources of income instead. One of Burton Klein’s key insights was that just a few key dynamically uncompetitive sectors can act as a deadweight drag on the entire economy, and banking certainly fits the bill.


Written by Ashwin Parameswaran

September 8th, 2010 at 9:21 am

Evolvability, Robustness and Resilience in Complex Adaptive Systems

with 14 comments

In a previous post, I asserted that “the existence of irreducible uncertainty is sufficient to justify an evolutionary approach for any social system, whether it be an organization or a macro-economy.” This is not a controversial statement – Nelson and Winter introduced their seminal work on evolutionary economics as follows: “Our evolutionary theory of economic change…is not an interpretation of economic reality as a reflection of supposedly constant “given data” but a scheme that may help an observer who is sufficiently knowledgeable regarding the facts of the present to see a little further through the mist that obscures the future.”

In microeconomics, irreducible uncertainty implies a world of bounded rationality where many heuristics become not signs of irrationality but rational and effective tools of decision-making. But it is the implications of human action under uncertainty for macro-economic outcomes that are the focus of this blog – in previous posts (1,2) I have elaborated upon the resilience-stability tradeoff and its parallels in economics and ecology. This post focuses on another issue critical to the functioning of all complex adaptive systems: the relationship between evolvability and robustness.

Evolvability and Robustness Defined

Hiroaki Kitano defines robustness as follows: “Robustness is a property that allows a system to maintain its functions despite external and internal perturbations….A system must be robust to function in unpredictable environments using unreliable components.” Kitano makes it explicit that robustness is concerned with the maintenance of functionality rather than specific components: “Robustness is often misunderstood to mean staying unchanged regardless of stimuli or mutations, so that the structure and components of the system, and therefore the mode of operation, is unaffected. In fact, robustness is the maintenance of specific functionalities of the system against perturbations, and it often requires the system to change its mode of operation in a flexible way. In other words, robustness allows changes in the structure and components of the system owing to perturbations, but specific functions are maintained.”

Evolvability is defined as the ability of the system to generate novelty and innovate, thus enabling the system to “adapt in ways that exploit new resources or allow them to persist under unprecedented environmental regime shifts” (Whitacre 2010). At first glance, evolvability and robustness appear to be incompatible: the generation of novelty involves a leap into the dark, an exploration rather than an act of “rational choice”, and the search for a beneficial innovation carries with it a significant risk of failure. It’s worth noting that in social systems, this dilemma vanishes in the absence of irreducible uncertainty. If all adaptations are merely a realignment to a known systemic configuration (“known” in either a deterministic or a probabilistic sense), then an inability to adapt requires other explanations, such as organisational rigidity.

Evolvability, Robustness and Resilience

Although it is typical to equate resilience with robustness, resilient complex adaptive systems also need to possess the ability to innovate and generate novelty. As Allen and Holling put it: “Novelty and innovation are required to keep existing complex systems resilient and to create new structures and dynamics following system crashes”. Evolvability also enables the system to undergo fundamental transformational change – it could be argued that such innovations are even more important in a modern capitalist economic system than they are in the biological or ecological arena. The rest of this post elaborates upon how macro-economic systems can be both robust and evolvable at the same time – the apparent conflict between evolvability and robustness arises from a fallacy of composition in which macro-resilience is assumed to arise from micro-resilience, when in fact it arises from the very absence of micro-resilience.

EVOLVABILITY, ROBUSTNESS AND RESILIENCE IN MACRO-ECONOMIC SYSTEMS

The pre-eminent reference on how a macro-economic system can be both robust and evolvable at the same time is the work of Burton Klein in his books “Dynamic Economics” and “Prices, Wages and Business Cycles: A Dynamic Theory”. But as with so many other topics in evolutionary economics, no one has summarised it better than Brian Loasby: “Any economic system which is to remain viable over a long period must be able to cope with unexpected change. It must be able to revise or replace policies which have worked well. Yet this ability is problematic. Two kinds of remedy may be tried, at two different system levels. One is to try to sensitize those working within a particular research programme to its limitations and to possible alternatives, thus following Menger’s principle of creating private reserves against unknown but imaginable dangers, and thereby enhancing the capacity for internal adaptation….But reserves have costs; and it may be better, from a system-wide perspective, to accept the vulnerability of a sub-system in order to exploit its efficiency, while relying on the reserves which are the natural product of a variety of sub-systems….
Research programmes, we should recall, are imperfectly specified, and two groups starting with the same research programme are likely to become progressively differentiated by their experience, if there are no strong pressures to keep them closely aligned. The long-run equilibrium of the larger system might therefore be preserved by substitution between sub-systems as circumstances change. External selection may achieve the same overall purpose as internal adaptation – but only if the system has generated adequate variety from which the selection may be made. An obvious corollary which has been emphasised by Klein (1977) is that attempts to preserve sub-system stability may wreck the larger system. That should not be a threatening notion to economists; it also happens to be exemplified by Marshall’s conception of the long-period equilibrium of the industry as a population equilibrium, which is sustained by continued change in the membership of that population. The tendency of variation is not only a chief cause of progress; it is also an aid to stability in a changing environment (Eliasson, 1991). The homogeneity which is conducive to the attainment of conventional welfare optima is a threat to the resilience which an economy needs.”

Uncertainty can be tackled at the micro-level by maintaining reserves and slack (liquidity, retained profits), but this comes at the price of slack at the macro-level in the form of lost output and employment. Note that this is an essentially Keynesian conclusion, similar to how individually rational saving decisions can lead to collectively sub-optimal outcomes. From a systemic perspective, it is preferable to substitute a diverse set of micro-fragilities for this micro-resilience. But how do we induce the loss of slack at firm level? And how do we ensure that this loss of micro-resilience occurs in a sufficiently diverse manner?

The “Invisible Foot”

The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy: “Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels….But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.” To maintain an evolvable macro-economy, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition.”

Entry of New Firms and the Invisible Foot

Burton Klein’s great contribution, along with other dynamic economists of the time (notably Gunnar Eliasson), was to highlight the critical importance of the entry of new firms in maintaining the efficacy of the invisible foot. Klein believed that “the degree of risk taking is determined by the robustness of dynamic competition, which mainly depends on the rate of entry of new firms. If entry into an industry is fairly steady, the game is likely to have the flavour of a highly competitive sport. When some firms in an industry concentrate on making significant advances that will bear fruit within several years, others must be concerned with making their long-run profits as large as possible, if they hope to survive. But after entry has been closed for a number of years, a tightly organised oligopoly will probably emerge in which firms will endeavour to make their environments highly predictable in order to make their short-run profits as large as possible….Because of new entries, a relatively concentrated industry can remain highly dynamic. But, when entry is absent for some years, and expectations are premised on the future absence of entry, a relatively concentrated industry is likely to evolve into a tight oligopoly. In particular, when entry is long absent, managers are likely to be more and more narrowly selected; and they will probably engage in such parallel behaviour with respect to products and prices that it might seem that the entire industry is commanded by a single general!”

Again, it can’t be emphasised enough that this argument does not depend on incumbent firms leaving money on the table – on the contrary, they may redouble their attempts at static optimisation. From the perspective of each individual firm, innovation is an incredibly risky process, even though the result of such dynamic competition, from the perspective of the industry or macro-economy, may be reasonably predictable. Of course, firms can and do mitigate this risk by various methods, but this argument only claims that any single firm, however dominant, cannot replicate the “risk-free” innovation dynamics of a vibrant industry in-house.

Micro-Fragility as the Hidden Hand of Macro-Resilience

In an environment free of irreducible uncertainty, evolvability suffers, leading to reduced macro-resilience. “If firms could predict each others’ advances they would not have to insure themselves against uncertainty by taking risks. And no smooth progress would occur” (Klein 1977). Conversely, “because firms cannot predict each other’s discoveries, they undertake different approaches towards achieving the same goal. And because not all of the approaches will turn out to be equally successful, the pursuit of parallel paths provides the options required for smooth progress.”
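
Klein’s “parallel paths” point reduces to a simple probability calculation: if each independent approach succeeds with probability p, the chance that at least one of n approaches succeeds is 1 − (1 − p)ⁿ. With an assumed p of 5%, odds that are hopeless for any single firm become near-certainty for a diverse population:

```python
# Minimal sketch of parallel exploratory paths (p = 0.05 is an assumption):
# independence across approaches is exactly Klein's point that firms cannot
# predict each other's discoveries.
p = 0.05
for n in (1, 10, 50, 100):
    p_any = 1 - (1 - p) ** n          # P(at least one approach succeeds)
    print(f"{n:3d} independent approaches -> {p_any:.0%} chance of progress")
# 1 -> 5%, 10 -> 40%, 50 -> 92%, 100 -> 99%
```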

The Aftermath of the Minsky Moment: A Problem of Micro-Resilience

Within the context of the current crisis, the pre-Minsky moment system was a homogeneous system with no slack, which enabled the attainment of “conventional welfare optima” but at the cost of an incredibly fragile and unevolvable condition. The logical evolution of such a system past the Minsky moment is of course still a homogeneous system, but one with significant firm-level slack built in, which is equally unsatisfactory. In such a situation, the kind of macro-economic intervention matters as much as its force. For example, in an ideal world, monetary policy aimed at reducing the borrowing rates of incumbent banks and corporates would flow through into reduced borrowing rates for new firms. In a dynamically uncompetitive world, such a policy will only serve the interests of the incumbents.

The “Invisible Foot” and Employment

Vivek Wadhwa argues that startups are the main source of net job growth in the US economy, and Mark Thoma links to research that confirms this thesis. But even if one disagrees, the “invisible foot” argument implies that if the old guard is to contribute to employment, it must be forced to give up its “slack” by the strength of dynamic competition – and dynamic competition is maintained by preserving conditions that encourage the entry of new firms.

MICRO-EVOLVABILITY AND MACRO-RESILIENCE IN BIOLOGY AND ECOLOGY

Note: The aim of this section is not to draw any falsely precise equivalences between economic resilience and ecological or biological resilience, but simply to highlight the commonality of the micro-macro fallacy of composition across complex adaptive systems – a detailed comparison will hopefully be the subject of a future post. I have tried to keep the section on biological resilience as brief and simple as possible, but an understanding of the genotype-phenotype distinction and neutral networks is essential to make sense of it.

Biology: Genotypic Variation and Phenotypic Robustness

In the specific context of biology, evolvability can be defined as “the capacity to generate heritable, selectable phenotypic variation. This capacity may have two components: (i) to reduce the potential lethality of mutations and (ii) to reduce the number of mutations needed to produce phenotypically novel traits” (Kirschner and Gerhart 1998). The apparent conflict between evolvability and robustness can be reconciled by distinguishing between genotypic and phenotypic robustness and evolvability. James Whitacre summarises Andreas Wagner’s work on RNA genotypes and their structure phenotypes as follows: “this conflict is unresolvable only when robustness is conferred in both the genotype and the phenotype. On the other hand, if the phenotype is robustly maintained in the presence of genetic mutations, then a number of cryptic genetic changes may be possible and their accumulation over time might expose a broad range of distinct phenotypes, e.g. by movement across a neutral network. In this way, robustness of the phenotype might actually enhance access to heritable phenotypic variation and thereby improve long-term evolvability.”

Ecology: Species-Level Variability and Functional Stability

The notion of micro-variability being consistent with and even being responsible for macro-resilience is an old one in ecology as Simon Levin and Jane Lubchenco summarise here: “That the robustness of an ensemble may rest upon the high turnover of the units that make it up is a familiar notion in community ecology. MacArthur and Wilson (1967), in their foundational work on island biogeography, contrasted the constancy and robustness of the number of species on an island with the ephemeral nature of species composition. Similarly, Tilman and colleagues (1996) found that the robustness of total yield in high-diversity assemblages arises not in spite of, but primarily because of, the high variability of individual population densities.”

The concept is also entirely consistent with the “Panarchy” thesis which views an ecosystem as a nested hierarchy of adaptive cycles: “Adaptive cycles are nested in a hierarchy across time and space which helps explain how adaptive systems can, for brief moments, generate novel recombinations that are tested during longer periods of capital accumulation and storage. These windows of experimentation open briefly, but the results do not trigger cascading instabilities of the whole because of the stabilizing nature of nested hierarchies. In essence, larger and slower components of the hierarchy provide the memory of the past and of the distant to allow recovery of smaller and faster adaptive cycles.”

Misc. Notes

1. It must be emphasised that micro-fragility is a necessary, but not a sufficient, condition for an evolvable and robust macro-system. The role of not just redundancy but degeneracy is critical, as is the size of the population.

2. Many commentators use resilience and robustness interchangeably. I draw a distinction primarily because my definitions of robustness and evolvability are borrowed from biology and my definition of resilience is borrowed from ecology which in my opinion defines a robust and evolvable system as a resilient one.


Written by Ashwin Parameswaran

August 30th, 2010 at 8:38 am

Raghuram Rajan on Monetary Policy and Macroeconomic Resilience

with 16 comments

Amongst economic commentators, Raghuram Rajan has stood out recently for his consistent calls to raise interest rates from “ultra-low to the merely low”. Predictably, this suggestion has been met with outright condemnation by many economists, of both Keynesian and monetarist persuasions. Rajan’s case against ultra-low rates draws on many arguments, but this post will focus on just one of them, which comes straight out of the “resilience” playbook. In 2008, Raghu Rajan and Doug Diamond co-authored a paper whose conclusion Rajan summarises in his FT article: “the pattern of Fed policy over time builds expectations. The market now thinks that whenever the financial sector’s actions result in unemployment, the Fed will respond with ultra-low rates and easy liquidity. So even as the Fed has maintained credibility as an inflation fighter, it has lost credibility in fighting financial adventurism. This cannot augur well for the future.”

Much as he accused the Austrians, Paul Krugman accuses Rajan of being a “liquidationist”. This is not a coincidence – Rajan and Diamond’s thesis is quite explicit about its connections to Austrian Business Cycle Theory: “a central bank that promises to cut interest rates conditional on stress, or that is biased towards low interest rates favouring entrepreneurs, will induce banks to promise higher payouts or take more illiquid projects. This in turn can make the illiquidity crisis more severe and require a greater degree of intervention, a view reminiscent of the Austrian theory of cycles.” But as the summary hints, Rajan and Diamond’s thesis is fundamentally different from ABCT. The conventional Austrian story identifies excessive credit inflation and interest rates below the “natural” rate of interest as the driver of the boom/bust cycle, but Rajan and Diamond identify the anticipation by economic agents of low rates and “liquidity” facilities every time there is an economic downturn as the driver of systemic fragility. The adaptation of banks and other market players to this regime makes the eventual bust all the more likely. As Rajan and Diamond note: “If the authorities are expected to reduce interest rates when liquidity is at a premium, banks will take on more short-term leverage or illiquid loans, thus bringing about the very states where intervention is needed.”

Rajan and Diamond’s thesis is limited to the impact of such policies on banks, but as I noted in a previous post, other market players also adapt to this implicit commitment from the central bank to follow easy money policies at the first hint of economic trouble. This thesis is essentially a story of the Greenspan-Bernanke era and the damage that the Greenspan Put has caused. It also explains the dramatically diminishing returns inherent in the Greenspan Put strategy as the stabilising policies of the central bank become entrenched in the expectations of market players and, crucially, banks – in each subsequent cycle, the central bank has to do more and more (lower rates, larger liquidity facilities) to achieve less and less.


Written by Ashwin Parameswaran

August 3rd, 2010 at 6:30 am

Critical Transitions in Markets and Macroeconomic Systems

with 6 comments

This post is the first in a series that takes an ecological and dynamic approach to analysing market/macroeconomic regimes and transitions between these regimes.

Normal, Pre-Crisis and Crisis Regimes

In a post on market crises, Rick Bookstaber identified three regimes that any model of the market must represent (normal, pre-crisis and crisis) and analysed the statistical properties (volatility, correlation, etc.) of each of these regimes. The framework below, however, characterises each regime by its varying combination of positive and negative feedback processes, with the variations and regime shifts determined by the adaptive and evolutionary processes operating within the system.

1. Normal regimes are resilient regimes. They are characterised by a balanced and diverse mix of positive and negative feedback processes. For every momentum trader who bets on the continuation of a trend, there is a contrarian who bets the other way (this feedback mix is simulated in the sketch after this list).

2. Pre-crisis regimes are characterised by an increasing dominance of positive feedback processes. An unusually high degree of stability or a persistent trend progressively weeds out negative feedback processes from the system, thus leaving it vulnerable to collapse even from disturbances that it could easily absorb in its previously resilient normal state. Such regimes can arise from bubbles, but this is not necessary. “Pre-crisis” only implies that a regime change into the crisis regime is increasingly likely – in ecological terms, the pre-crisis regime is fragile and has suffered a significant loss of resilience.

3. Crisis regimes are essentially transitional – the disturbance has occurred and the positive feedback processes that dominated the previous regime have now reversed direction. However, the final destination of this transition is uncertain – if the system is left alone, it will undergo a discontinuous transition to a normal regime. But if sufficient external stabilisation pressures are exerted upon the system, it may revert to the pre-crisis regime or even stay in the crisis regime for a longer period. It’s worth noting that I define a normal regime only by its resilience and not by its desirability – even a state of civilizational collapse can be incredibly resilient.
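
The feedback mix that distinguishes these regimes can be caricatured in a few lines of code. In the sketch below all coefficients are assumed, not estimated: contrarian demand is the negative feedback that anchors price to fundamental value, and weeding it out, as the pre-crisis regime does, leaves the same small shocks to accumulate instead of dying out.

```python
# Toy price process (coefficients assumed): momentum traders chase the last
# move (positive feedback); contrarians bet on reversion to fundamental
# value (negative feedback).
import random

def peak_deviation(momentum, contrarian, steps=2000, seed=7):
    random.seed(seed)
    dev = prev = peak = 0.0      # deviation of price from fundamental value
    for _ in range(steps):
        last_move = dev - prev
        prev = dev
        dev += momentum * last_move - contrarian * dev + random.gauss(0, 0.5)
        peak = max(peak, abs(dev))
    return peak

print(f"balanced mix : peak |deviation| = {peak_deviation(0.9, 0.5):6.1f}")
print(f"momentum only: peak |deviation| = {peak_deviation(0.9, 0.0):6.1f}")
# The balanced mix stays range-bound; with the contrarians weeded out the
# process acquires a unit root and shocks snowball instead of being absorbed.
```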

“Critical Transitions” from the Pre-Crisis to the Crisis Regime

In fragile systems even a minor disturbance can trigger a discontinuous move to an alternative regime – Marten Scheffer refers to such moves as “critical transitions”. Figures a, b, c and d below represent a continuum of ways in which the system can react to changing external conditions (ref. Scheffer et al.). Although I will frequently refer to “equilibria” and “states” in the discussion below, these are better described as “attractors” and “regimes” given the dynamic nature of the system – the static terminology is merely a simplification.

In Figure a, the system state reacts smoothly to perturbations – for example, a large external change will trigger a large move in the state of the system. The dotted arrows denote the direction in which the system moves when it is not on the curve, i.e. not in equilibrium. Any move away from equilibrium triggers forces that bring the system back to the curve. In Figure b, the transition is non-linear and a small perturbation can trigger a regime shift – however, a reversal of conditions of an equally small magnitude can reverse the regime shift. Clearly, such a system does not satisfactorily explain our current economic predicament, where monetary and fiscal interventions far in excess of the initial sub-prime shock have failed to bring the system back to its previous state.

Figure c, however, may be a more accurate description of the current state of the economy and the market – for a certain range of conditions, there exist two alternative stable states separated by an unstable equilibrium (marked by the dotted line). As the dotted arrows indicate, movement away from the unstable equilibrium can carry the system to either of the two alternative stable states. Figure d illustrates how a small perturbation past the point F2 triggers a “catastrophic” transition from the upper branch to the lower branch – moreover, unless conditions are reversed all the way back to the point F1, the system will not revert to the upper-branch stable state. The system therefore exhibits “hysteresis” – i.e. the path matters. The forward and backward switches occur at different points, F2 and F1 respectively, which implies that reversing such transitions is not easy. A comprehensive discussion of the conditions that determine the extent of hysteresis is beyond the scope of this post, but it is worth mentioning that cognitive and organisational rigidity in the absence of sufficient diversity is a sufficient condition for hysteresis in the macro-system.
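
The geometry of Figures c and d can be reproduced with the canonical saddle-node model dx/dt = a + x − x³ – a textbook normal form, not anything fitted to economic data. Sweeping the external condition a slowly down and then back up traces the hysteresis loop: the collapse and the recovery happen at different switch points, near the analytic folds at a = ±2/(3√3) ≈ ±0.385.

```python
# Hysteresis in the canonical fold model dx/dt = a + x - x**3 (illustrative
# normal form only). The system tracks its current stable branch until that
# branch disappears, so the forward and backward switches differ.

def settle(a, x, dt=0.01, steps=4000):
    """Relax to the nearby stable state -- staying 'on the curve'."""
    for _ in range(steps):
        x += dt * (a + x - x**3)
    return x

def sweep(a_values, x0):
    x, path = x0, []
    for a in a_values:
        x = settle(a, x)
        path.append((a, x))
    return path

worsen = [a / 100 for a in range(60, -61, -1)]   # conditions deteriorate
recover = [a / 100 for a in range(-60, 61)]      # ...and are then reversed

down = sweep(worsen, x0=1.0)                 # start on the upper branch
back = sweep(recover, x0=down[-1][1])

f2 = next(a for a, x in down if x < 0)       # catastrophic switch (F2)
f1 = next(a for a, x in back if x > 0)       # recovery switch (F1), much later
print(f"collapse (F2) as conditions fall past a = {f2:+.2f}")
print(f"recovery (F1) only once a rises past  a = {f1:+.2f}")
# Reversing conditions just past the collapse point is not enough to undo
# the collapse -- the path matters.
```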

Before I apply the above framework to some events in the market, it is worth clarifying how the states in Figure d correspond to those chosen by Rick Bookstaber. The “normal” regime refers to the parts of the upper and lower branch stable states that are far from the points F1 and F2, i.e. where the system is resilient to a change in external conditions. As I mentioned earlier, normal does not equate to desirable – the lower branch could be a state of collapse. If we designate the upper branch as a desirable normal state and the lower branch as an undesirable one, then the zone close to point F2 on the upper branch is the pre-crisis regime. The crisis regime is the short catastrophic transition from F2 to the lower branch if the system is left alone. If forces external to the system are applied to prevent a transition to the lower branch, then the system could either revert to the upper branch or even stay in the crisis regime, on the dotted-line unstable equilibrium, for a longer period.

The Magnetar Trade revisited

In an earlier post, I analysed how the infamous Magnetar Trade could be explained with a framework that incorporates catastrophic transitions between alternative stable states. As I noted: “The Magnetar trade would pay off in two scenarios – if there were no defaults in any of their CDOs, or if there were so many defaults that the tranches that they were short also defaulted along with the equity tranche. The trade would likely lose money if there were limited defaults in all the CDOs and the senior tranches did not default. Essentially, the trade was attractive if one believed that this intermediate scenario was improbable…Intermediate scenarios are unlikely when the system is characterised by multiple stable states and catastrophic transitions between these states. In adaptive systems such as ecosystems or macroeconomies, such transitions are most likely when the system is fragile and in a state of low resilience. The system tends to be dominated by positive feedback processes that amplify the impact of small perturbations, with no negative feedback processes present that can arrest this snowballing effect.”

In the language of critical transitions, Magnetar calculated that the real estate and MBS markets were in a fragile pre-crisis state and that no intervention would prevent the rapid critical transition from F2 to the lower branch.
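A stylised calculation makes this bimodal payoff profile explicit. The tranche sizes, coupons and premiums below are hypothetical round numbers, and the structure is drastically simplified relative to an actual CDO trade:

```python
# Long the equity tranche (notional 10, coupon 20%), short the mezzanine
# tranche via CDS protection (notional 30, premium 2%). All figures are
# hypothetical and chosen only to illustrate the shape of the payoff.

def trade_pnl(pool_loss):
    equity_notional, equity_coupon = 10.0, 0.20
    mezz_notional, mezz_premium = 30.0, 0.02

    # The equity tranche absorbs the first 10 of pool losses,
    # the mezzanine tranche the next 30.
    equity_loss = min(pool_loss, equity_notional)
    mezz_loss = min(max(pool_loss - equity_notional, 0.0), mezz_notional)

    carry = equity_notional * equity_coupon - mezz_notional * mezz_premium
    return carry - equity_loss + mezz_loss   # short mezz pays off on losses

for pool_loss, label in [(0, "no defaults"), (15, "intermediate"), (40, "systemic")]:
    print(f"{label:>12}: pool loss {pool_loss:>2} -> trade P&L {trade_pnl(pool_loss):+.1f}")
```

The trade makes money at both extremes (+1.4 of carry with no defaults, +21.4 in a systemic wipeout) and loses only in the intermediate scenario – exactly the scenario that a world of catastrophic transitions renders improbable.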

“Schizophrenic” Markets and the Long Crisis

Recently, many commentators have noted the apparently schizophrenic nature of the markets, which turn from risk-on to risk-off at the drop of a hat. For example, John Kemp argues that the markets are “trapped between euphoria and despair” and notes the U-shaped distribution of the Bank of England’s inflation forecasts (table 5.13). Although at first glance this sort of behaviour seems irrational, it may not be. As PIMCO’s Richard Clarida notes: “we are in a world in which average outcomes – for growth, inflation, corporate and sovereign defaults, and the investment returns driven by these outcomes – will matter less and less for investors and policymakers. This is because we are in a New Normal world in which the distribution of outcomes is flatter and the tails are fatter. As such, the mean of the distribution becomes an observation that is very rarely realized.”

Richard Clarida’s New Normal is analogous to the crisis regime (the dotted-line unstable equilibrium in Figures c and d). Any movement in either direction is self-fulfilling and leads to either a much stronger or a much weaker economy. So why has the current crisis regime lasted so long? As I mentioned earlier, external stabilisation (in this case monetary and fiscal policy) can keep the system from collapsing to the lower-branch normal regime – the “schizophrenia” only indicates that the market may make a decisive break to one of the stable states sooner rather than later.
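Clarida’s claim that the mean of such a distribution is “very rarely realized” is easy to verify numerically. In the sketch below, the two growth modes are made-up numbers; the point is only that almost no probability mass sits near the average outcome:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stylised bimodal "New Normal": outcomes cluster around a strong mode
# (+3% growth) and a weak mode (-2%), with very little mass in between.
n = 100_000
outcomes = np.where(rng.random(n) < 0.5,
                    rng.normal(3.0, 0.5, n),
                    rng.normal(-2.0, 0.5, n))

mean = outcomes.mean()
share_near_mean = np.mean(np.abs(outcomes - mean) < 0.5)
print(f"mean outcome: {mean:+.2f}%")                                # roughly +0.5%
print(f"outcomes within 0.5pp of the mean: {share_near_mean:.3%}")  # nearly zero
```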


Written by Ashwin Parameswaran

July 29th, 2010 at 3:27 am

A “Systems” Explanation of How Bailouts can Cause Business Cycles

with 3 comments

In a previous post, I quoted Richard Fisher’s views on how bailouts cause business cycles and financial crises: “The system has become slanted not only toward bigness but also high risk…..if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult…..It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.”

Fisher utilises the “incentives” argument, but the same argument could also be made in the language of natural selection – and Hannan and Freeman did exactly that in the seminal paper that launched the field of Organizational Ecology. Hannan and Freeman wrote the following in the context of the bailout of Lockheed in 1971, but it is as relevant today as it has ever been: “we must consider what one anonymous reader, caught up in the spirit of our paper, called the anti-eugenic actions of the state in saving firms such as Lockheed from failure. This is a dramatic instance of the way in which large dominant organizations can create linkages with other large and powerful ones so as to reduce selection pressures. If such moves are effective, they alter the pattern of selection. In our view, the selection pressure is bumped up to a higher level. So instead of individual organizations failing, entire networks fail. The general consequence of a large number of linkages of this sort is an increase in the instability of the entire system and therefore we should see boom and bust cycles of organizational outcomes.”


Written by Ashwin Parameswaran

June 8th, 2010 at 3:45 pm

Richard Fisher of the Dallas Fed on Financial Reform

with 6 comments

Richard Fisher of the Dallas Fed delivered a speech last week (h/t Zerohedge) on the topic of financial reform, containing some of the most brutally honest analysis of the problem at hand that I’ve seen from anyone at the Fed. It also made a few points that I felt deserved further analysis and elaboration.

The Dynamics of the TBTF Problem

In Fisher’s words: “Big banks that took on high risks and generated unsustainable losses received a public benefit: TBTF support. As a result, more conservative banks were denied the market share that would have been theirs if mismanaged big banks had been allowed to go out of business. In essence, conservative banks faced publicly backed competition…..It is my view that, by propping up deeply troubled big banks, authorities have eroded market discipline in the financial system.

The system has become slanted not only toward bigness but also high risk…..if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult…..

It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.”

Fisher correctly notes that TBTF support damages system resilience not only by encouraging higher leverage amongst large banks, but also by disadvantaging conservative banks that would otherwise have gained market share during the crisis. As I have noted many times on this blog, the dynamic, evolutionary view of moral hazard focuses not only on the protection provided to destabilising positive feedback forces, but also on how stabilising negative feedback forces that might have flourished in the absence of the stabilising actions are selected against and progressively weeded out of the system.
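This selection dynamic can be captured in a toy model. The boom/bust odds and payoff multipliers below are arbitrary illustrative choices; the only structural assumption is that a bailout truncates the bust losses of high-risk banks, losses which would otherwise have handed market share to their conservative competitors:

```python
import numpy as np

def market_shares(bailout, cycles=20, seed=1):
    """Toy selection model: final market shares of (risky, conservative) banks."""
    rng = np.random.default_rng(seed)
    risky, conservative = 1.0, 1.0          # equal starting "wealth"
    for _ in range(cycles):
        if rng.random() < 0.8:              # boom: leverage pays
            risky *= 1.25
            conservative *= 1.10
        else:                               # bust: leverage blows up...
            risky *= 0.90 if bailout else 0.30   # ...unless losses are socialised
            conservative *= 1.00            # conservative banks tread water
    total = risky + conservative
    return risky / total, conservative / total

for bailout in (False, True):
    r, c = market_shares(bailout)
    print(f"bailout={bailout!s:5}:  risky share {r:.0%},  conservative share {c:.0%}")
```

With the same boom/bust sequence in both runs, the conservative strategy dominates when busts are allowed to bite and is marginalised when they are not – the selection pressure, not any individual bank’s intent, does the work.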

Regulatory Discretion and the Time Consistency Problem

Fisher: “Language that includes a desire to minimize moral hazard—and directs the FDIC as receiver to consider “the potential for serious adverse effects”—provides wiggle room to perpetuate TBTF.” Fisher notes that it is difficult to credibly commit ex-ante not to bail out TBTF creditors – as long as regulators retain any amount of discretion aimed at maintaining systemic stability, they will be tempted to use it.

On the Ineffectiveness of Regulation Alone

Fisher: “While it is certainly true that ineffective regulation of systemically important institutions—like big commercial banking companies—contributed to the crisis, I find it highly unlikely that such institutions can be effectively regulated, even after reform…Simple regulatory changes in most cases represent a too-late attempt to catch up with the tricks of the regulated—the trickiest of whom tend to be large. In the U.S. financial system, what passed as “innovation” was in large part circumvention, as financial engineers invented ways to get around the rules of the road. There is little evidence that new regulations, involving capital and liquidity rules, could ever contain the circumvention instinct.”

This is a sentiment I don’t often hear expressed by a regulator. As I have opined before on this blog, regulations alone just don’t work. The history of banking is one of repeated circumvention of regulations by banks, a process that has only accelerated with the increased completeness of markets. The question is not whether deregulation accelerated the process of banks maximising the moral hazard subsidy – it almost certainly did, and this was understood even by the Fed as early as 1983. As John Kareken noted, “Deregulation Is the Cart, Not the Horse”. The question is whether re-regulation has any chance of succeeding without fixing the incentives guiding the actors in the system – it does not.

Bailouts Come in Many Shapes and Sizes

Fisher: “Even if an effective resolution regime can be written down, chances are it might not be used. There are myriad ways for regulators to forbear. Accounting forbearance, for example, could artificially boost regulatory capital levels at troubled big banks. Special liquidity facilities could provide funding relief. In this and similar manners, crisis-related events that might trigger the need for resolution could be avoided, making resolution a moot issue.”

A watertight resolution regime may only encourage regulators to aggressively utilise other forbearance mechanisms. Fisher mentions accounting and liquidity relief but fails to mention the most important “alternative bailout mechanism” – the “Greenspan Put” variant of monetary policy.

Preventing Systemic Risk Perpetuates the Too-Big-To-Fail Problem

Fisher: “Consider the idea of limiting any and all financial support strictly to the system as a whole, thus preventing any one firm from receiving individual assistance….If authorities wanted to support a big bank in trouble, they would need only institute a systemwide program. Big banks could then avail themselves of the program, even if nobody else needed it. Systemwide programs are unfortunately a perfect back door through which to channel big bank bailouts.”

“System-wide” programs by definition get activated only when big banks and non-bank financial institutions such as GE Capital are in trouble. Apart from perpetuating TBTF, they encourage smaller banks to mimic big banks and take on similar tail risk, thus reducing system diversity.

Shrink the TBTF Banks?

Fisher clearly prefers that the big banks be shrunk as a “second-best” solution to the incentive problems that both regulators and banks face in our current system. Although I’m not convinced that shrinking the banks is a sufficient response, even a “free market” solution to the crisis will almost certainly imply a more dispersed banking sector, due to the removal of the TBTF subsidy. The gist of the problem is not size but insufficient diversity. Fisher argues “there is considerable diversity in strategy and performance among banks that are not TBTF.” This is the strongest and possibly even the only valid argument for breaking up the big banks. My concern is that even a more dispersed banking sector will evolve towards a tightly coupled and homogenous outcome due to the protection against systemic risk provided by the “alternative bailout mechanisms”, particularly the Greenspan Put.

The fact that Richard Fisher’s comments echo themes popular with both left-wing and right-wing commentators is not a coincidence. In the fitness landscape of our financial system, our current choice is not so much a local peak as a deep valley – tinkering will get us nowhere and a significant move either to the left or to the right is likely to be an improvement.


Written by Ashwin Parameswaran

June 6th, 2010 at 1:30 pm

The “Crash of 2:45 p.m.” as a Consequence of System Fragility

with 7 comments

When the WSJ provides us with the least plausible explanation of the “Crash of 2:45 p.m.”, it is only fitting that Jon Stewart provides us with the most succinct and accurate diagnosis of the crash.

Most explanations of the crash either focus on its proximate cause or blame it all on a “perfect storm”. The “perfect storm” explanation absolves us from analysing the crash too closely, the implicit conclusion being that such an event doesn’t occur too often and that not much needs to or can be done to prevent its recurrence. There are two problems with this explanation. For one, it violates Occam’s Razor – it is easy to construct an ex-post explanation that depends upon a confluence of events that have never occurred together before. And more crucially, perfect storms seem to occur all too often. As Jon Stewart put it: “Why is it that whenever something happens that the people who should’ve seen it coming didn’t see coming, it’s blamed on one of these rare, once in a century, perfect storms that for some reason take place every f–king two weeks. I’m beginning to think these are not perfect storms. I’m beginning to think these are regular storms and we have a shty boat.”

The focus on proximate causes ignores the complexity and nonlinearity of market systems. Michael Mauboussin explained it best when he remarked: “Cause and effect thinking is dangerous. Humans like to link effects with causes, and capital markets activities are no different. For example, politicians created numerous panels after the market crash in 1987 to identify its “cause.” A nonlinear approach, however, suggests that large-scale changes can come from small-scale inputs. As a result, cause-and-effect thinking can be both simplistic and counterproductive.” The true underlying causes may be far removed from the effect, both in time and in space, and the proximate cause may only be the “straw that broke the camel’s back”.

So what is the true underlying cause of the crash? In my opinion, the crash was the inevitable consequence of a progressive loss of system resilience. Why and how has the system become fragile? A static view of markets frequently attributes the loss of resilience to the presence of positive feedback processes such as margin calls on levered bets, stop-loss orders, dynamic hedging of short-gamma positions and even plain-vanilla momentum trading strategies – Laura Kodres‘ paper here has an excellent discussion of “destabilizing” hedge fund strategies. However, in a dynamic conception of markets, a resilient market is characterised not by the absence of positive feedback processes but by the presence of a balanced and diverse mix of positive and negative feedback processes.

Policy measures that aim to stabilise the system by countering the impact of positive feedback processes select against and weed out negative feedback processes – stabilisation reduces system resilience. The decision to cancel errant trades is an example of such a measure. It is critical that all market participants who implement positive feedback strategies (such as stop-loss market orders) suffer losses, and that those who step in to buy in times of chaos, i.e. the negative-feedback providers, are not denied the profits that would accrue to them if markets recover. This is the real damage done by policy paradigms such as the “Greenspan/Bernanke Put” that implicitly protect asset markets. They leave us with a fragile market prone to collapse even in a “normal storm”, unless there is further intervention of the kind we saw from the EU/ECB. Of course, every subsequent intervention that aims to stabilise the system only further reduces its resilience.

As positive feedback processes become increasingly dominant, even normal storms that were easily absorbed earlier will cause a catastrophic transition in the system. There are many examples of the loss of system resilience being characterised by its vulnerability to a “normal” disturbance, such as in Minsky’s Financial Instability Hypothesis or Buzz Holling’s conception of ecological resilience, both of which I have discussed earlier.
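A toy simulation illustrates the point. The trader counts and the impact coefficient below are arbitrary; the same initial “storm” hits both markets, and only the mix of positive and negative feedback traders differs:

```python
def final_price(n_momentum, n_contrarian, shock=-0.02, steps=30):
    """Momentum traders chase the last return; contrarians lean against it."""
    price = 100.0 * (1 + shock)        # identical initial shock in every case
    ret = shock
    for _ in range(steps):
        # Net feedback: positive when momentum traders dominate, negative
        # otherwise. The 1.2 impact coefficient is purely illustrative.
        ret = 1.2 * ret * (n_momentum - n_contrarian) / (n_momentum + n_contrarian)
        price *= (1 + ret)
    return price

print(f"balanced mix (50/50):      final price {final_price(50, 50):6.1f}")  # shock absorbed
print(f"momentum-dominated (95/5): final price {final_price(95, 5):6.1f}")   # collapse
```

With a balanced mix the shock is absorbed immediately; once momentum traders dominate, the identical shock compounds into a crash. The disturbance is the same “normal storm” in both runs – only the resilience of the system differs.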

The Role of Waddell & Reed

In the framework outlined above, the appropriate question to ask of the Waddell & Reed affair is whether their sell order was a “normal” storm or an “abnormal” one. More specifically, pinning the blame on a single order requires us to show that each time an order of this size was executed in the past, the market crashed in a similar manner. It is also probable that the sell order was itself a component of a positive feedback hedging strategy – Waddell’s statement that it was selling the futures to “protect fund investors from downside risk” supports this assessment. In that case, the Waddell sell order was an endogenous event in the framework, not an exogenous shock. Mitigating the impact of such positive feedback strategies only makes the system less resilient in the long run.

As Taleb puts it: “When a bridge collapses, you don’t look at the last truck that was on it, you look at the engineer. You’re looking for the straw that broke the camel’s back. Let’s not worry about the straw, focus on the back.” Or as Jon Stewart would say, let’s figure out why we have a shty boat.


Written by Ashwin Parameswaran

May 16th, 2010 at 4:42 am

Organisational Rigidity, Crony Capitalism, Too-Big-To-Fail and Macro-Resilience

with 11 comments

In a previous post, I outlined why cognitive rigidity is not necessarily irrational even though it may lead to a loss of resilience. However, if the universe of agent strategies is sufficiently diverse, a macro-system comprising fragile, inflexible agents can be incredibly resilient. So a simple analysis of micro-fragility does not enable us to reach any definitive conclusions about macro-resilience – organisations and economies may retain significant resilience and an ability to cope with novelty despite the fragility of their component agents.

Yet there is significant evidence that organisations exhibit rigidity, and although some of this rigidity can be perceived as irrational or perverse, much of it arises as a rational response to uncertainty. In Hannan and Freeman’s work on Organizational Ecology, the presence of significant organisational rigidity is the basis of a selection-based rather than an adaptation-based explanation of organisational diversity. There are many factors driving organisational inertia, some of which have been summarised in this paper by Hannan and Freeman. These include internal considerations such as sunk costs, informational constraints and political constraints, as well as external considerations such as barriers to entry and exit. In a later paper, Hannan and Freeman also justify organisational inertia as a means to an end, the end being “reliability”. Just as in Ronald Heiner’s and V.S. Ramachandran’s frameworks discussed previously, inertia is a perfectly logical response to an uncertain environment.

Hannan and Freeman also hypothesise that older and larger organizations are more structurally inert and less capable of adapting to novel situations. In his book “Dynamic Economics”, Burton Klein analysed the historical record and found that the advances that “resulted in new S-shaped curves in relatively static industries” did not come from the established players in those industries. In an excellent post, Sean Park summarises exactly why large organizations find it so difficult to innovate and also points to the pre-eminent reference in the management literature on this topic – Clayton Christensen’s “The Innovator’s Dilemma”. Christensen’s work is particularly relevant as it elaborates how established firms can fail not because of any obvious weaknesses, but as a direct consequence of their focus on core clients’ demands.

The inability of older and larger firms to innovate and adapt to novelty can be understood within the framework of the exploration-exploitation tradeoff as an inability to “explore” in an effective manner. As Levinthal and March put it, “past exploitation in a given domain makes future exploitation in the same domain even more efficient….As they develop greater and greater competence at a particular activity, they engage in that activity more, thus further increasing competence and the opportunity cost of exploration.” Exploration is also anathema to large organisations as it seems to imply a degree of managerial indecision. David Ellerman captures the essence of this thought process: “The organization’s experts will decide on the best experiment or approach—otherwise the organization would appear “not to know what it’s doing.””
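Levinthal and March’s competency trap can be sketched in a few lines. The payoffs and the learning rate below are made-up numbers; the point is only that each round of exploitation raises the bar that any exploratory project must clear:

```python
# A minimal sketch of the competency trap. All parameters are illustrative.
familiar_payoff = 1.0      # improves 2% every period it is chosen
novel_payoff = 1.5         # true long-run payoff of exploring, unknown ex ante
perceived_novel = 0.5 * novel_payoff   # a myopic firm discounts the unknown

opportunity_cost = []
for period in range(100):
    if familiar_payoff >= perceived_novel:
        familiar_payoff *= 1.02        # exploitation compounds competence
    opportunity_cost.append(familiar_payoff - perceived_novel)

print(f"cost of exploring in period 1:   {opportunity_cost[0]:.2f}")
print(f"cost of exploring in period 100: {opportunity_cost[-1]:.2f}")
# The firm never explores: every period of exploitation makes exploration
# look even more expensive relative to the certain payoff.
```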

A crony capitalist economic system that protects the incumbent firms hampers the ability of the system to innovate and adapt to novelty. It is obvious how the implicit subsidy granted to our largest financial institutions via the Too-Big-To-Fail doctrine represents a transfer of wealth from the taxpayer to the financial sector. It is also obvious how the subsidy encourages a levered, homogenous and therefore fragile financial sector that is susceptible to collapse. What is less obvious is the paralysis that it induces in the financial sector and, by extension, in the macroeconomy long after the bailouts and the Minsky moment have passed.

We shouldn’t conflate this paralysis with an absence of competition between the incumbents – the competition between them may even be intense enough to ensure that they retain only a small portion of the rents they fight so desperately to preserve. What the paralysis does imply is a fierce and unified defence of the local peak that they compete for. Their defence is directed not so much against new entrants who want to play the incumbents at their own game as against those who seek to change the rules of the game.

The best example of this is the OTC derivatives market, where the benefits of TBTF to the big banks are most evident. Bob Litan notes that clients “wanted the comfort of knowing that they were dealing with large, well-capitalized financial institutions” when dealing in CDS, and this observation holds for most other OTC derivative markets. He also correctly identifies that the crucial component of effective reform is removing the advantage that the “Derivative Dealers’ Club” currently possesses: “Systemic risk also would be reduced with true derivatives market reforms that would have the effect of removing the balance sheet advantage of the incumbent dealers now most likely regarded as TBTF. If end-users know that when their trades are completed with a clearinghouse, they are free to trade with any market maker – not just the specific dealer with whom they now customarily do business – that is willing to provide the right price, the resulting trades are more likely to be to the end-users’ advantage. In short, in a reformed market, the incumbent dealers would face much greater competition.”

Innovation in the financial sector is also hampered by the outsized contribution the sector already makes to economic activity in the United States, which makes market-broadening innovations extremely unlikely. James Utterback identified how difficult it is for new entrants to displace incumbent players directly: “Innovations that broaden a market create room for new firms to start. Innovation-inspired substitutions may cause established firms to hang on all the more tenaciously, making it extremely difficult for an outsider to gain a foothold along with the cash flow needed to expand and become a player in the industry.” Of course, the incumbents may eventually break away from the local peak, but an extended period of stagnation is more likely.

Sustaining an environment conducive to the entry of new firms is critical to the maintenance of a resilient macroeconomy that is capable of innovating and dealing with novelty. The very least that financial sector reform must achieve is to eliminate the benefits of TBTF that currently make it all but impossible for a new entrant to challenge the status quo.


Written by Ashwin Parameswaran

May 2nd, 2010 at 3:48 pm