Archive for May, 2010
Most explanations of the crash either focus on its proximate cause or blame it all on a “perfect storm”. The “perfect storm” explanation absolves us from analysing the crash too closely, the implicit conclusion being that such an event doesn’t occur too often and not much needs to, or can, be done to prevent its recurrence. There are two problems with this explanation. For one, it violates Occam’s Razor – it is easy to construct an ex post facto explanation that depends upon a confluence of events that have not occurred together before. And more crucially, perfect storms seem to occur all too often. As Jon Stewart put it: “Why is it that whenever something happens that the people who should’ve seen it coming didn’t see coming, it’s blamed on one of these rare, once-in-a-century perfect storms that for some reason take place every f–king two weeks. I’m beginning to think these are not perfect storms. I’m beginning to think these are regular storms and we have a sh—ty boat.”
The focus on proximate causes ignores the complexity and nonlinearity of market systems. Michael Mauboussin explained it best when he remarked: “Cause and effect thinking is dangerous. Humans like to link effects with causes, and capital markets activities are no different. For example, politicians created numerous panels after the market crash in 1987 to identify its “cause.” A nonlinear approach, however, suggests that large-scale changes can come from small-scale inputs. As a result, cause-and-effect thinking can be both simplistic and counterproductive.” The true underlying causes may be far removed from the effect, both in time and in space and the proximate cause may only be the “straw that broke the camel’s back”.
So what is the true underlying cause of the crash? In my opinion, the crash was the inevitable consequence of a progressive loss of system resilience. Why and how has the system become fragile? A static view of markets frequently attributes loss of resilience to the presence of positive feedback processes such as margin calls on levered bets, stop-loss orders, dynamic hedging of short-gamma positions and even just plain vanilla momentum trading strategies – Laura Kodres‘ paper here has an excellent discussion on “destabilizing” hedge fund strategies. However, in a dynamic conception of markets, a resilient market is characterised not by the absence of positive feedback processes but by the presence of a balanced and diverse mix of positive and negative feedback processes.
Policy measures that aim to stabilise the system by countering the impact of positive feedback processes select against and weed out negative feedback processes – stabilisation reduces system resilience. The decision to cancel errant trades is an example of such a measure. It is critical that market participants who implement positive feedback strategies (such as stop-loss market orders) suffer losses, and that those who step in to buy in times of chaos, i.e. the negative-feedback providers, are not denied the profits that would accrue to them if markets recover. This is the real damage done by policy paradigms such as the “Greenspan/Bernanke Put” that implicitly protect asset markets. They leave us with a fragile market prone to collapse even in a “normal storm”, unless there is further intervention, as we saw from the EU/ECB. Of course, every subsequent intervention that aims to stabilise the system only further reduces its resilience.
As positive feedback processes become increasingly dominant, even normal storms that were easily absorbed earlier will cause a catastrophic transition in the system. There are many examples of the loss of system resilience being characterised by its vulnerability to a “normal” disturbance, such as in Minsky’s Financial Instability Hypothesis or Buzz Holling’s conception of ecological resilience, both of which I have discussed earlier.
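The feedback argument above can be made concrete with a minimal toy simulation – my own illustrative sketch, not a model of any actual market, and every parameter value is an arbitrary assumption. Momentum traders chase the last price move (positive feedback), while contrarian traders lean against deviations from a fixed fair value (negative feedback). The same one-off sell shock – the “normal storm” – is absorbed when the feedback mix is balanced, but amplified when positive feedback dominates:

```python
import random

def simulate(n_steps, momentum_weight, contrarian_weight, shock_size, seed=0):
    """Toy price path with one exogenous sell shock at t = 10.

    momentum traders buy in proportion to the last price move (positive
    feedback); contrarian traders buy in proportion to the gap between
    fair value and price (negative feedback). All numbers are
    hypothetical and chosen only to illustrate the mechanism.
    """
    random.seed(seed)
    fair_value = 100.0
    prev_price = price = fair_value
    path = [price]
    for t in range(n_steps):
        momentum_demand = momentum_weight * (price - prev_price)
        contrarian_demand = contrarian_weight * (fair_value - price)
        noise = random.gauss(0.0, 0.1)
        shock = -shock_size if t == 10 else 0.0   # the "normal storm"
        prev_price, price = price, price + momentum_demand + contrarian_demand + noise + shock
        path.append(price)
    return path

# Balanced mix of feedback processes vs. a market dominated by positive feedback.
balanced = simulate(100, momentum_weight=0.5, contrarian_weight=0.3, shock_size=2.0)
fragile = simulate(100, momentum_weight=1.2, contrarian_weight=0.02, shock_size=2.0)

# With a balanced mix the shock is damped back towards fair value; when
# positive feedback dominates, the identical shock is amplified over time.
print("peak deviation, balanced:", max(abs(p - 100) for p in balanced))
print("peak deviation, fragile :", max(abs(p - 100) for p in fragile))
```

The point of the sketch is not the specific numbers but the qualitative result: the disturbance is identical in both runs, and only the mix of feedback processes determines whether it is a non-event or a crash.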
The Role of Waddell & Reed
In the framework I have outlined above, the appropriate question to ask of the Waddell & Reed affair is whether their sell order was a “normal” storm or an “abnormal” one. More specifically, pinning the blame on a single order requires us to show that each time an order of this size was executed in the past, the market crashed in a similar manner. It is also probable that the sell order itself was a component of a positive feedback hedging strategy, and Waddell’s statement that it was selling the futures to “protect fund investors from downside risk” supports this assessment. In this case, the Waddell sell order was an endogenous event in the framework and not an exogenous shock. Mitigating the impact of such positive feedback strategies only makes the system less resilient in the long run.
As Taleb puts it: “When a bridge collapses, you don’t look at the last truck that was on it, you look at the engineer. You’re looking for the straw that broke the camel’s back. Let’s not worry about the straw, focus on the back.” Or as Jon Stewart would say, let’s figure out why we have a sh—ty boat.
In a previous post, I outlined why cognitive rigidity is not necessarily irrational even though it may lead to a loss of resilience. However, if the universe of agent strategies is sufficiently diverse, a macro-system comprising fragile, inflexible agents can be incredibly resilient. So a simple analysis of micro-fragility does not enable us to reach any definitive conclusions about macro-resilience – organisations and economies may retain significant resilience and an ability to cope with novelty despite the fragility of their component agents.
Yet there is significant evidence that organisations exhibit rigidity, and although some of this rigidity can be perceived as irrational or perverse, much of it arises as a rational response to uncertainty. In Hannan and Freeman’s work on “Organizational Ecology”, the presence of significant organisational rigidity is the basis of a selection-based rather than an adaptation-based explanation of organisational diversity. There are many factors driving organisational inertia, some of which have been summarised in this paper by Hannan and Freeman. These include internal considerations such as sunk costs, informational constraints and political constraints, as well as external considerations such as barriers to entry and exit. In a later paper, Hannan and Freeman also justify organisational inertia as a means to an end, the end being “reliability”. Just as in Ronald Heiner’s and V.S. Ramachandran’s frameworks discussed previously, inertia is a perfectly logical response to an uncertain environment.
Hannan and Freeman also hypothesise that older and larger organisations are more structurally inert and less capable of adapting to novel situations. In his book “Dynamic Economics”, Burton Klein analysed the historical record and found that advances that “resulted in new S-shaped curves in relatively static industries” do not come from the established players in an industry. In an excellent post, Sean Park summarises exactly why large organisations find it so difficult to innovate and also points to the pre-eminent reference in the management literature on this topic – Clayton Christensen’s “The Innovator’s Dilemma”. Christensen’s work is particularly relevant as it elaborates how established firms can fail not because of any obvious weaknesses, but as a direct consequence of their focus on core clients’ demands.
The inability of older and larger firms to innovate and adapt to novelty can be understood within the framework of the exploration-exploitation tradeoff as an inability to “explore” in an effective manner. As Levinthal and March put it, “past exploitation in a given domain makes future exploitation in the same domain even more efficient….As they develop greater and greater competence at a particular activity, they engage in that activity more, thus further increasing competence and the opportunity cost of exploration.” Exploration is also anathema to large organisations as it seems to imply a degree of managerial indecision. David Ellerman captures the essence of this thought process: “The organization’s experts will decide on the best experiment or approach—otherwise the organization would appear “not to know what it’s doing.””
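The Levinthal and March dynamic can be sketched as a toy model – my own illustration, not anything from their paper, with all numbers chosen arbitrarily. Payoff from an activity equals accumulated competence, and competence grows only with practice, so exploiting the current strength is self-reinforcing while a higher-ceiling alternative stays underdeveloped:

```python
import random

def competency_trap(n_rounds, explore_prob, seed=1):
    """Toy sketch of the Levinthal-March competency trap.

    Two activities: the firm's 'core' business and a 'new' one with a
    higher long-run ceiling. Each round the firm either explores the new
    activity or exploits whichever activity it is currently best at.
    Payoff equals current competence, and practising an activity raises
    competence in it (hypothetical numbers throughout).
    """
    random.seed(seed)
    competence = {"core": 1.0, "new": 0.0}  # skill accumulated by doing
    ceiling = {"core": 5.0, "new": 10.0}    # long-run potential payoff
    total_payoff = 0.0
    for _ in range(n_rounds):
        if random.random() < explore_prob:
            choice = "new"                   # deliberate exploration
        else:
            # exploit the activity the firm is currently most competent at
            choice = max(competence, key=competence.get)
        total_payoff += competence[choice]
        competence[choice] = min(competence[choice] + 0.1, ceiling[choice])
    return total_payoff, competence

payoff_lo, skills_lo = competency_trap(500, explore_prob=0.02)
payoff_hi, skills_hi = competency_trap(500, explore_prob=0.30)

# A firm that barely explores masters only its core business and never
# builds enough competence in the new activity to discover its higher ceiling.
print(round(payoff_lo), skills_lo)
print(round(payoff_hi), skills_hi)
```

The low-exploration firm is locally rational every round – the core business really does pay better at the moment of choice – which is exactly the sense in which greater competence raises the opportunity cost of exploration.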
A crony capitalist economic system that protects the incumbent firms hampers the ability of the system to innovate and adapt to novelty. It is obvious how the implicit subsidy granted to our largest financial institutions via the Too-Big-To-Fail doctrine represents a transfer of wealth from the taxpayer to the financial sector. It is also obvious how the subsidy encourages a levered, homogenous and therefore fragile financial sector that is susceptible to collapse. What is less obvious is the paralysis that it induces in the financial sector and by extension the macroeconomy long after the bailouts and the Minsky moment have passed.
We shouldn’t conflate this paralysis with an absence of competition between the incumbents – the competition between the incumbents may even be intense enough to ensure that they retain only a small portion of the rents that they fight so desperately to retain. What the paralysis does imply is a fierce and unified defence of the local peak that they compete for. Their defence is directed not so much against new entrants who want to play the incumbents at their own game, but at those who seek to change the rules of the game.
The best example of this is the OTC derivatives market, where the benefits of TBTF to the big banks are most evident. Bob Litan notes that clients “wanted the comfort of knowing that they were dealing with large, well-capitalized financial institutions” when dealing in CDS, and this observation holds for most other OTC derivative markets. He also correctly identifies that the crucial component of effective reform is removing the advantage that the “Derivative Dealers’ Club” currently possesses: “Systemic risk also would be reduced with true derivatives market reforms that would have the effect of removing the balance sheet advantage of the incumbent dealers now most likely regarded as TBTF. If end-users know that when their trades are completed with a clearinghouse, they are free to trade with any market maker – not just the specific dealer with whom they now customarily do business – that is willing to provide the right price, the resulting trades are more likely to be to the end-users’ advantage. In short, in a reformed market, the incumbent dealers would face much greater competition.”
Innovation in the financial sector is also hampered because of the outsized contribution it already makes to economic activity in the United States, which makes market-broadening innovations extremely unlikely. James Utterback identified how difficult it is for new entrants to immediately substitute incumbent players: “Innovations that broaden a market create room for new firms to start. Innovation-inspired substitutions may cause established firms to hang on all the more tenaciously, making it extremely difficult for an outsider to gain a foothold along with the cash flow needed to expand and become a player in the industry.” Of course, the incumbents may eventually break away from the local peak but an extended period of stagnation is more likely.
Sustaining an environment conducive to the entry of new firms is critical to the maintenance of a resilient macroeconomy that is capable of innovating and dealing with novelty. The very least that financial sector reform must achieve is to eliminate the benefits of TBTF that currently make it all but impossible for a new entrant to challenge the status quo.