macroresilience

resilience, not stability

Archive for the ‘Resilience’ Category

The Magnetar Trade


The Magnetar Trade, according to ProPublica’s recent article, is a long-short strategy that worked due to the perverse incentives operating in the CDO market during the boom. According to Jesse Eisinger and Jake Bernstein, Magnetar went long the equity tranche and short the senior tranches, and used their position as the buyer of the equity tranche to ensure that the asset quality of the CDO was poorer than it would otherwise have been. If ProPublica’s account is true, then this is a moral hazard trade i.e. Magnetar buys insurance against the burning down of a house and uses its influence as an equity buyer to significantly improve the odds of the house burning.

However, there are some hints in Magnetar’s response to the story that cast significant doubt on the accuracy of ProPublica’s narrative. To understand why this is the case, we need to understand what exactly the Magnetar trade as described in the story would look like. Magnetar’s portfolio was most likely a “close to carry neutral” portfolio consisting of long equity tranche positions and short senior/mezzanine tranche positions. In order to be carry-neutral, the notional value of senior tranches that are shorted needs to be an order of magnitude higher than the notional value of equity tranches purchased. In option parlance, this is equivalent to a zero-premium strategy consisting of short ATM options and long OTM options.
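
As a rough sketch of the carry arithmetic (the coupon and premium levels below are invented for illustration and are not Magnetar’s actual numbers):

```python
# Illustrative carry arithmetic for a long-equity / short-senior portfolio.
# Coupon and premium levels are invented for illustration only.

equity_coupon = 0.20    # running coupon on the long equity tranche (20% p.a.)
senior_premium = 0.015  # running premium paid for protection on senior tranches (1.5% p.a.)
equity_notional = 10_000_000

# Carry neutrality: equity_notional * equity_coupon == senior_notional * senior_premium
senior_notional = equity_notional * equity_coupon / senior_premium

print(f"senior notional for zero carry: ${senior_notional:,.0f}")
print(f"ratio of senior shorts to equity longs: {senior_notional / equity_notional:.1f}x")
# ~13x here: the senior short notional must be an order of magnitude larger
# than the equity notional, as described above.
```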

There are two reasons to execute such a strategy – one, simply to fund a “short options” strategy and the second, to execute a market-neutral “arbitrage” strategy. The significant advantage that such a long-short strategy has over a “naked short” strategy a la John Paulson is the absence of negative carry. As Taleb explains: “A butterfly position allows you to wait a lot longer for the wings to become profitable. In other words, a strategy that involves a butterfly allows you to be far more aggressive [when buying out-of-the-money options]. When you short near-the-money options, they bring in a lot of cash, so you can afford to spend more on out-of-the-money options. You can do a lot better as a spread trader.”

However, Magnetar describe their portfolio as market-neutral and “designed to have a positive return whether housing performed well or did poorly”. This implies that the portfolio was carry-positive i.e. the coupons on the long equity positions exceeded the running-premium cost of buying protection on the senior tranches. This ensures that the portfolio would be profitable in the event that there were no defaults in the portfolio.

If the Magnetar Trade was based upon moral hazard, then it would have to short the senior tranches of the same CDO that it bought equity in and the notional of this short position would have to be multiples of the notional value of the equity position. However, Magnetar in their response to ProPublica explicitly deny this and state: “focusing solely on the group of CDOs in which Magnetar was the initial purchaser of the equity, Magnetar had a net long notional position. To put this into perspective, Magnetar would earn materially more money if these CDOs in aggregate performed well than if these CDOs performed poorly.” The operative term here is “net long notional position” as opposed to “net long position”. A net long position measured in delta terms could easily imply a net short notional position in which case the portfolio would outperform if all the tranches in the CDO were wiped out. But Magnetar seem to make it clear in their response that in the deals where they were the initial purchaser of equity, the notional of the equity positions exceeded the notional of the senior positions that they were short. They also assert that “the majority of the notional value of Magnetar’s hedges referenced CDOs in which Magnetar had no long investment” i.e. of course the notional value of their short positions exceeded that of their long positions, but these short positions were in other CDOs in which they did not have a long position.

But what about the fact that Magnetar seemed to be influencing the portfolio composition of these CDOs to include riskier assets in them? Surely this proves conclusively that Magnetar would profit if the CDOs collapsed? To understand why this may not necessarily be true, we need to examine the payoff profile of the Magnetar trade.

As with most market-neutral “arbitrage” trades, it is unlikely that the trade would deliver a positive return in every conceivable scenario. Rather, it would deliver a positive return in every scenario that Magnetar deemed probable. The Magnetar trade would pay off in two scenarios – if there were no defaults in any of their CDOs, or if there were so many defaults that the tranches that they were short also defaulted along with the equity tranche. The trade would likely lose money if there were limited defaults in all the CDOs and the senior tranches did not default. Essentially, the trade was attractive if one believed that this intermediate scenario was improbable.
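
A stylised one-period payoff sketch illustrates this barbell profile (all tranche boundaries, coupons and premia below are hypothetical, chosen only to exhibit the shape):

```python
# Stylised one-period P&L of a long-equity / short-senior trade as a function
# of realised portfolio losses. Tranche boundaries and premia are hypothetical.

EQ_DETACH = 0.03    # equity tranche absorbs the first 3% of losses
SR_ATTACH = 0.07    # senior protection pays beyond 7% of losses
EQ_COUPON = 0.20    # coupon earned on the equity notional
SR_PREMIUM = 0.005  # premium paid on the senior protection notional
SR_NOTIONAL = 1.0 - SR_ATTACH

def trade_pnl(loss):
    equity = EQ_COUPON * EQ_DETACH - min(loss, EQ_DETACH)           # coupon minus writedown
    senior = max(loss - SR_ATTACH, 0.0) - SR_PREMIUM * SR_NOTIONAL  # payout minus premium
    return equity + senior

for loss in [0.00, 0.02, 0.05, 0.10, 0.20]:
    print(f"portfolio loss {loss:4.0%} -> trade P&L {trade_pnl(loss):+.4f}")
# Positive carry with no defaults, losses in the intermediate zone where the
# equity is wiped out but the senior survives, large gains in a collapse.
```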

A distribution where intermediate scenarios are improbable can arise from many underlying processes but there is one narrative that is particularly relevant to complex adaptive systems such as financial markets. Intermediate scenarios are unlikely when the system is characterised by multiple stable states and “catastrophic” transitions between these states. In adaptive systems such as ecosystems or macroeconomies, such transitions are most likely when the system is fragile and in a state of low resilience. The system tends to be dominated by positive feedback processes that amplify the impact of small perturbations, with no negative feedback processes present that can arrest this snowballing effect.

It turns out that such a framework was extremely well-suited to describing the housing market before the crash. Once house prices started falling and refinancing was no longer an option, the initial wave of defaults triggered a vicious cycle of house price declines and further defaults. Similarly, collateral requirements on leveraged investors, mark-to-market pressures and other positive feedback processes in the market created a vicious cycle of price declines in the market for mortgage-backed securities and CDOs.

So what does all this have to do with Magnetar’s desire to include riskier assets in their long equity portfolios? If one believes that only a small perturbation is required to tip the market over into a state of collapse, then the long position should be weighted towards the riskiest possible asset portfolio. Essentially, the above framework implies that there is no benefit to having “safer” long positions in the long-short portfolio. The fragility of the system means that either there is no perturbation and all assets perform no matter how low-quality they are, or there is a perturbation and even “high quality” assets default.
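
To make this concrete, consider a toy two-regime calculation for the long leg viewed in isolation (regime probabilities and coupons are invented purely for illustration):

```python
# Toy two-regime comparison of "safer" vs "riskier" collateral for the long
# equity leg. Probabilities and coupons are invented for illustration.

P_COLLAPSE = 0.3  # assumed probability that the system tips into collapse

for name, coupon in [("safer equity tranche", 0.15), ("riskier equity tranche", 0.25)]:
    # If the system holds, even low-quality assets perform and the coupon is
    # earned; if it collapses, even "high quality" assets default.
    expected = (1 - P_COLLAPSE) * coupon + P_COLLAPSE * (-1.0)
    print(f"{name:24s} expected return {expected:+.3f}")
# In a bimodal world the riskier collateral dominates: its extra coupon is
# earned in exactly the same states in which the safer collateral survives.
# Only an intermediate scenario would have rewarded the safer choice.
```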

The above framework of catastrophic shifts between multiple stable states is not uncommon, especially in fixed income markets. In fact, the Greek funding situation is a perfect example. If one had to sketch out a distribution of the yield on Greek debt, intermediate levels would be the least likely scenarios. In other words, either Greece funds at low sustainable rates or it moves rapidly to a state of default – it is unlikely that Greece raises, say, 50 billion Euros at an interest rate of 10%. The situation is of course made even more stark by Greece’s inability to inflate away its debt via the printing press. The bifurcation exists in fiat-currency-issuing countries as well, but at the point when hyperinflation kicks in.

Bank incentives are the real problem

Even if my arguments are valid and Magnetar did not execute the moral hazard trade, it is nevertheless obvious that someone else could quite easily have done so. But the moral hazard trade was only possible because there was sufficient investor demand for the rated tranches of the CDO and, even more crucially, because the originating bank was willing to hold onto the super-senior tranche. As I have discussed many times earlier in detail, bank demand for super-senior tranches is a logical consequence of the cheap leverage that banks are afforded via the moral hazard subsidy of the TBTF doctrine. If banks were less levered, many of these deals would not have been issued at all.

In fact, two of the hedging strategies that we know were implemented in banks – UBS’ “AMPS” strategy and Howie Hubler’s trade at Morgan Stanley – were mirror images of the Magnetar trade. It is not a coincidence that bank traders chose the negatively skewed payoff distribution and Magnetar chose the positively skewed one.


Disclaimer: The above note is just my analysis of the facts and assertions in ProPublica’s article. I have no additional knowledge of the facts of the case and it is entirely possible that Magnetar are being less than fully forthright in their responses to the story. The above analysis is more useful as an illustration of how the facts as described in the article can be reconciled to a narrative that does not imply moral hazard.


Written by Ashwin Parameswaran

April 11th, 2010 at 4:19 pm

Micro-Foundations of a Resilience Approach to Macro-Economic Analysis


Before assessing whether a resilience approach is relevant to macro-economic analysis, we need to define resilience. Resilience is best defined as “the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks.”

The assertion that an ecosystem can lose resilience and become fragile is not controversial. To claim that the same can occur in social systems such as macro-economies is nowhere near as obvious, not least due to our ability to learn, forecast the future and adapt to changes in our environment. Any analysis of how social systems can lose resilience is open to the objection that loss of resilience implies systematic error on the part of economic actors in assessing economic conditions accurately and an inability to adapt to the new reality. For example, one of the common objections to Minsky’s Financial Instability Hypothesis (FIH) is that it requires irrational behaviour on the part of economic actors. Rajiv Sethi’s post has a summary of this debate, with a notable objection coming from Bernanke’s paper on the subject which insists that “Hyman Minsky and Charles Kindleberger have in several places argued for the inherent instability of the financial system, but in doing so have had to depart from the assumption of rational behavior.”

One response to this objection is “So What?” and indeed the stability-resilience trade-off can be explained within the Kahneman-Tversky framework. Another response which I’ve invoked on this blog and Rajiv has also mentioned in a recent post focuses on the pervasive principal-agent relationship in the financial economy. However, I am going to focus on a third and a more broadly applicable rationale which utilises a “rationality” that incorporates Knightian uncertainty as the basis for the FIH. The existence of irreducible uncertainty is sufficient to justify an evolutionary approach for any social system, whether it be an organization or a macro-economy.

Cognitive Rigidity as a Rational Response to Uncertainty

Rajiv touches on the crux of the issue when he notes: “Selection of strategies necessarily implies selection of people, since individuals are not infinitely flexible with respect to the range of behavior that they can exhibit.” But is achieving infinite flexibility a worthwhile aim? The evidence suggests that it is not. In the face of true uncertainty, infinite flexibility is not only unrealistic due to finite cognitive resources but it is also counterproductive and may deliver results that are significantly inferior to a partially “rigid” framework. V.S. Ramachandran explains this brilliantly: “At any given moment in our waking lives, our brains are flooded with a bewildering variety of sensory inputs, all of which have to be incorporated into a coherent perspective based on what stored memories already tell us is true about ourselves and the world. In order to act, the brain must have some way of selecting from this superabundance of detail and ordering it into a consistent ‘belief system’, a story that makes sense of the available evidence. When something doesn’t quite fit the script, however, you very rarely tear up the entire story and start from scratch. What you do, instead, is to deny or confabulate in order to make the information fit the big picture. Far from being maladaptive, such everyday defense mechanisms keep the brain from being hounded into directionless indecision by the ‘combinational explosion’ of possible stories that might be written from the material available to the senses.”

This rigidity is far from being maladaptive and appears to be irrational only when measured against a utopian definition of rational choice. Behavioural Economics also frequently commits the same error – as Brian Loasby notes: “It is common to find apparently irrational behaviour attributed to ‘framing effects’, as if ‘framing’ were a remediable distortion. But any action must be taken within a framework.” This notion of true rationality being less than completely flexible is not a new one – Ramachandran’s work provides the neurological basis for the notion of ‘rigidity as a rational response to uncertainty’. In a previous post, I discussed Ronald Heiner’s framework, which bears a striking resemblance to Ramachandran’s thesis:

“Think of an omniscient agent with literally no uncertainty in identifying the most preferred action under any conceivable condition, regardless of the complexity of the environment which he encounters. Intuitively, such an agent would benefit from maximum flexibility to use all potential information or to adjust to all environmental conditions, no matter how rare or subtle those conditions might be. But what if there is uncertainty because agents are unable to decipher all of the complexity of the environment? Will allowing complete flexibility still benefit the agents?

I believe the general answer to this question is negative: that when genuine uncertainty exists, allowing greater flexibility to react to more information or administer a more complex repertoire of actions will not necessarily enhance an agent’s performance.”

Brian Loasby gives an excellent account of ‘rationality under uncertainty’ and its evolutionary implications in this book, which traces hints of this idea running through the work of Adam Smith and Alfred Marshall, George Kelly’s ‘Personal Construct Theory’ and Hayek’s ‘Sensory Order’. But perhaps the clearest exposition of the idea was provided by Kenneth Boulding in his description of subjective human knowledge as an ‘Image’. Most external information either conforms so closely to the image that it is ignored or it adds to the image in a well-defined manner. But occasionally, we receive information that is at odds with our image. Boulding recognised that such change is usually abrupt and explained it in the following manner: “The sudden and dramatic nature of these reorganizations is perhaps a result of the fact that our image is in itself resistant to change. When it receives messages which conflict with it, its first impulse is to reject them as in some sense untrue…. As we continue to receive messages which contradict our image, however, we begin to have doubts, and then one day we receive a message which overthrows our previous image and we revise it completely.” He also recognises that this resistance is not “irrational” but merely a logical response to uncertainty in an “imperfect” market: “The buyer or seller in an imperfect market drives on a mountain highway where he cannot see more than a few feet around each curve; he drives it, moreover, in a dense fog. There is little wonder, therefore, that he tends not to drive it at all but to stay where he is. The well-known stability or stickiness of prices in imperfect markets may have much more to do with the uncertain nature of the image involved than with any ideal of maximizing behavior.”

Loasby describes the key principles of this framework as follows: “The first principle is that all action is decided in the space of representations. These representations include, for example, neural networks formed in the brain by processes which are outside our conscious control…None are direct copies of reality; all truncate complexity and suppress uncertainty……The second principle of this inquiry is that viable processes must operate within viable boundaries; in human affairs these boundaries limit our attention and our procedures to what is manageable without, we hope, being disastrously misleading – though no guarantees are available……The third principle is that these frameworks are useless unless they persist, even when they do not fit very well. Hahn’s definition of equilibrium as a situation in which the messages received by agents do not cause them to change the theories that they hold or the policies that they pursue offers a useful framework for the analysis both of individual behaviour and of the co-ordination of economic activity across a variety of circumstances precisely because it is not to be expected that theories and policies will be readily changed just because some evidence does not appear readily compatible with them.” (For a more detailed account, read Chapter 3 ‘Cognition and Institutions’ of the aforementioned book or his papers here and here.)

The above principles are similar to Ronald Heiner’s assertion that actions chosen under true uncertainty must satisfy a ‘reliability condition’. It also accounts for the existence of the stability-resilience trade-off. In Loasby’s words: “If behaviour is a selected adaptation and not a specific application of a general logic of choice, then the introduction of substantial novelty – a change not of weather but of climate – is liable to be severely disruptive, as Schumpeter also insisted. In biological systems it can lead to the extinction of species, sometimes on a very large scale.” Extended periods of stability narrow the scope of events that fit the script and correspondingly broaden the scope of events that appear to be anomalous and novel. When the inevitable anomalous event comes along, we either adapt too slowly or in extreme cases, not at all.


Written by Ashwin Parameswaran

April 11th, 2010 at 7:51 am

Diversity and the Political Economy of Banking


From a system resilience viewpoint, there are many reasons why a reduction in diversity is harmful. But one of the lesser appreciated benefits of a diverse pool of firms in an industry is the impact it has in reducing the political clout that the industry wields. Diversity is one of the best defences against crony capitalism. As Luigi Zingales explains, commenting here on the political impact of Gramm-Leach-Bliley: “The real effect of Gramm-Leach-Bliley was political, not directly economic. Under the old regime, commercial banks, investment banks, and insurance companies had different agendas, and so their lobbying efforts tended to offset one another. But after the restrictions were lifted, the interests of all the major players in the financial industry became aligned, giving the industry disproportionate power in shaping the political agenda. The concentration of the banking industry only added to this power.”

There’s been a lot of discussion recently on the merits of breaking up the big banks, and one of the arguments in favour of this policy is the perceived reduction in the political clout that the banks would possess. Arnold Kling, for example, lays out the thesis in this recent article. Breaking up the banks may help, but I would argue that the impact of such a move on the political economy of banking will be limited unless the industry becomes less homogeneous.

The prime driver of this homogeneity is the combination of the moral hazard subsidy and regulatory capital guidelines which ensures that there is one optimal strategy that maximises this subsidy and outcompetes all other strategies. This strategy is of course to maintain a highly levered balance sheet invested in low capital-intensity, highly-rated assets.
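
A back-of-envelope calculation shows why this strategy dominates under Basel-style risk weights (all spreads and weights below are illustrative, not actual bank numbers):

```python
# Back-of-envelope ROE comparison under Basel-style risk weights.
# Spreads and weights are illustrative, not actual bank numbers.

CAPITAL_RATIO = 0.08  # equity required per unit of risk-weighted assets

assets = {
    # name: (risk weight, spread over the bank's subsidised funding cost)
    "AAA super-senior tranche": (0.20, 0.0030),
    "ordinary corporate loan":  (1.00, 0.0120),
}

for name, (risk_weight, spread) in assets.items():
    capital = CAPITAL_RATIO * risk_weight  # equity per $1 of assets
    roe = spread / capital                 # the whole spread accrues to a thin equity slice
    print(f"{name:25s} capital per $1: {capital:5.1%}  ROE: {roe:6.1%}")
# The low-risk-weight asset supports ~5x the leverage, so even a much thinner
# spread on "safe" assets can deliver the higher return on equity.
```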


Written by Ashwin Parameswaran

April 6th, 2010 at 4:21 pm

Stability and Macro-Stabilisation as a Profound Form of the Moral Hazard Problem


I have argued previously that the moral hazard explanation of the crisis fits the basic facts i.e. bank balance sheets were highly levered and invested in assets with severely negatively skewed payoffs. But this still leaves another objection to the moral hazard story unanswered – It was not only the banks with access to cheap leverage that were heavily invested in “safe” assets, but also asset managers, money market mutual funds and even ordinary investors. Why was this the case?

A partial explanation which I have discussed many times before relies on the preference of agents (in the principal-agent sense) for such bets. But this is an incomplete explanation. Apart from not being applicable to investors who are not agents, it neglects the principal’s option to walk away. A much better explanation that I mentioned here and here is the role of extended periods of stability in creating “moral hazard-like” outcomes. This is an altogether more profound and pervasive form of the moral hazard problem and lies at the heart of the Minsky-Holling thesis that stability breeds loss of resilience.

It is important to note that such an outcome can arise endogenously without any government intervention. Minsky argued that such an endogenous loss of resilience was inevitable but this is not obvious. As I noted here: “The assertion that an economy can move outside the corridor due to endogenous factors is difficult to reject. All it takes is a chance prolonged period of stability. However, this does not imply that the economy must move outside the corridor, which requires us to prove that prolonged periods of stability are the norm rather than the exception in a capitalist economy.”

But it can also arise as a result of macro-stabilising fiscal and monetary policies. Whether the current crisis was endogenous or not is essentially an empirical question. I have argued in previous posts that it was not, and that the “Greenspan Put” monetary policy did as much damage as all the explicit bailouts did. The evidence behind such a view has been put forth well by David Merkel here and by Barry Ritholtz in his book and in this excellent episode of EconTalk.


Written by Ashwin Parameswaran

March 7th, 2010 at 10:07 am

Mark-to-Market Accounting and the Financial Crisis


Mark-to-Market (MtM) Accounting is usually cast as a villain of the piece in most financial crises. This note aims to rebut this criticism from a “system resilience” perspective. It also expands on the role that MtM Accounting can play in mitigating agents’ preference for severely negatively skewed payoffs, a theme I touched upon briefly in an earlier note.


The “Downward Spiral” of Mark-to-Market Accounting


If there’s anything that can be predicted with certainty in a financial crisis, it is that sooner or later banks will plead to their regulators and/or FASB asking for relaxation of MtM accounting rules. The results are usually favourable. So in the S&L crisis, we got the infamous “Memorandum R-49” and in the current crisis, we got FAS 157-e.


The most credible argument for such a relaxation of MtM rules is the “downward spiral” theory. Opponents of MtM Accounting argue that it can trigger a downward spiral in asset prices in the midst of a liquidity crisis. As this IIF memorandum puts it: “often dramatic write-downs of sound assets required under the current implementation of fair-value accounting adversely affect market sentiment, in turn leading to further write-downs, margin calls and capital impacts in a downward spiral that may lead to large-scale fire-sales of assets, and destabilizing, pro-cyclical feedback effects. These damaging feedback effects worsen liquidity problems and contribute to the conversion of liquidity problems into solvency problems.” The initial fall in prices feeds upon itself in a “positive feedback” process.


I am not going to debate the conditions necessary for this positive feedback process to hold, not because the case is beyond debate but because MtM is just one in a long list of positive feedback processes in our financial markets. Laura Kodres at the IMF has an excellent discussion on “destabilizing” hedge fund strategies here which identifies some of the most common ones – margin calls on levered bets, stop-loss orders, dynamic hedging of short-gamma positions and even just plain vanilla momentum trading strategies.


The crucial assumption necessary for the downward spiral to hold is that the forces exerting negative feedback on this fall in asset prices are not strong enough to counter the positive feedback process. The relevant question from a system resilience perspective is why this is so. Why are there not enough investors with excess liquidity or banks with capital and liquidity reserves to buy up the “undervalued” assets and prevent collapse? One answer, which I discussed in my previous note, is the role of extended periods of stability in reducing system resilience. The narrowing of the “Leijonhufvud Corridor” reduces the margin of error before positive feedback processes kick in. The most obvious example is the reduction in collateral required to execute a leveraged bet. The period of stability also weeds out negative feedback strategies or forces them to adapt, thereby reducing their influence on the market.


A healthy market is characterised not by the absence of positive feedback processes but by the presence of a balanced mix of positive and negative feedback processes. Eliminating every single one of the positive feedback processes above would mean eliminating a healthy chunk of the market. A better solution is to ensure the persistence of negative feedback processes.
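
A toy simulation makes the point (the dynamics and parameters below are purely illustrative, not a calibrated market model):

```python
import random

# Toy price dynamics: forced margin-driven selling (positive feedback) versus
# value buyers (negative feedback). Purely illustrative parameters.

FORCED_SELLING = 0.20  # selling pressure per unit of dislocation below fair value

def trough_price(value_buyer_strength, steps=60, seed=7):
    random.seed(seed)
    price = fair_value = 100.0
    trough = price
    for _ in range(steps):
        gap = max(fair_value - price, 0.0)   # dislocation below fair value
        selling = FORCED_SELLING * gap       # margin calls amplify the fall
        buying = value_buyer_strength * gap  # value buyers lean against it
        price = max(price + random.gauss(0, 1.0) - selling + buying, 0.0)
        trough = min(trough, price)
    return trough

for vb in [0.30, 0.20, 0.10, 0.05]:
    print(f"value-buyer strength {vb:.2f} -> trough price {trough_price(vb):6.1f}")
# The same sequence of shocks produces a modest dip when negative feedback is
# strong, and a self-reinforcing collapse when it is too weak.
```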


Mark-to-Market Accounting as a Modest Mitigant to the Moral Hazard Problem


As I mentioned in a previous note, marking to a liquid market significantly reduces the attractiveness of severely negatively skewed bets for an agent. If the agent is evaluated on the basis of mark-to-market and not just the final payout, significant losses can be incurred well before the actual event of default on a super-senior bond.


The impact of true mark-to-market is best illustrated by highlighting the difference between Andrew Lo’s example of the Capital Decimation Partners and the super-senior tranches that were the source of losses in the current crisis. In Andrew Lo’s example, the agent sells out-of-the-money (OTM) options on an equity index of a very short tenor (less than three months). This means that there is significant time decay which mitigates the mark-to-market impact of a fall in the underlying. This rapid time decay due to the short tenor of the bet makes the negatively skewed bet worthwhile for the hedge fund manager even though he is subject to constant mark to market. On the other hand, loans/bonds are of a much longer tenor and if they were liquidly traded, the mark-to-market swings would make the negative skew of the final payout superfluous for the purposes of the agent who would be evaluated on the basis of the mark-to-market and not the final payout.
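
A rough numerical sketch of the contrast, using standard Black-Scholes arithmetic for the option leg (all the numbers below are hypothetical):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(spot, strike, vol, t, r=0.0):
    """Standard Black-Scholes European put price."""
    d1 = (log(spot / strike) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-r * t) * norm_cdf(-d2) - spot * norm_cdf(-d1)

# (a) Short a 3-month 10%-OTM index put; one month passes, spot falls 2%.
p_sold = bs_put(100, 90, vol=0.20, t=0.25)
p_now = bs_put(98, 90, vol=0.20, t=0.25 - 1 / 12)
print(f"short put MtM P&L: {p_sold - p_now:+.2f} (time decay offsets the adverse move)")

# (b) Long a 10-year bond; its credit spread widens 1% over the same month.
duration, spread, widening = 7.5, 0.015, 0.01
print(f"bond MtM P&L: {spread / 12 - duration * widening:+.2%} (a month of carry is no offset)")
```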


Many of the assets on bank balance sheets, however, are not subject to mark-to-market accounting or are only subject to mark-to-model on an irregular basis. This enables agents to invest in severely negatively skewed bets of long tenor, safe in the knowledge that the probability of an event of default in the first few years is extremely low. It’s worth noting that mark-to-model is almost as bad as not marking to market at all for such negatively skewed bets, especially if the model is based on parameters drawn from recent historical data during the “stable” period.


On Whether Money Market Mutual Funds (MMMFs) should Mark to Market


The SEC recently announced a new set of money market reforms aimed at fixing the flaws highlighted by the Reserve Primary Fund’s “breaking the buck” in September 2008. However, it stopped short of requiring money market funds to post market NAVs that may fluctuate. One of the arguments for why floating NAVs are a bad idea is that regulations that force money market funds to hold “safe” assets make mark-to-market superfluous. In fact, exactly the opposite is true. It is essential that assets with severely negatively skewed payoffs such as AAA bonds are marked to market precisely so that agents such as money market fund managers are not tempted to take on uneconomic bets in an attempt to pick up pennies in front of the bulldozer.


The S&L Crisis: A Case Study on the impact of avoiding MtM


Martin Mayer’s excellent book on the S&L crisis has many examples of the damage that can be done by avoiding MtM accounting especially when the sector has a liquidity backstop via the implicit or explicit guarantee of the FDIC or the Fed. In his words, “As S&L accounting was done, winners could be sold at a profit that the owners could take home as dividends, while the losers could be buried in the portfolio “at historic cost,” the price that had been paid for them, even though they were now worth less, and sometimes much less.”


As Mayer notes, this accounting freedom meant that S&L managers were eager consumers of the myriad varieties of mortgage backed securities that Wall Street conjured up in the 80s in search of extra yield, immune from the requirement to mark these securities to market.


Wall Street’s Opposition to the Floating NAV Requirement for MMMFs


Some commentators such as David Reilly and Felix Salmon pointed out the hypocrisy of investment banks such as Goldman Sachs recommending to the SEC that money market funds not be required to mark to market while rigorously enforcing MtM on their own balance sheets. In fact, the above analysis of the S&L crisis shows why their objections are perfectly predictable. Investment banks prefer that their customers not have to mark to market. This increases the demand from agents at these customer firms for “safe” highly rated assets that yield a little extra i.e. the very structured products that Wall Street sells, since those agents are safe in the knowledge that they are immune from MtM fluctuations.


Mark-to-Market and the OTC-Exchange Debate


Agents’ preference for avoiding marking to market also explains why apart from investment banks, even their clients may prefer to invest in illiquid, opaque OTC products rather than exchange-traded ones. Even if accounting allows one to mark a bond at par, it may be a lot harder to do so if the bond price were quoted in the daily newspaper!


Mark-to-Market and Excess Demand for “Safe” Assets


Many commentators have blamed the current crisis on an excess demand for “safe” assets (See for example Ricardo Caballero). However, a significant proportion of this demand may arise from agents who do not need to mark to market and is entirely avoidable. More widespread enforcement of mark to market should significantly decrease the demand from agents for severely negatively skewed bets i.e. “safe” assets.



Written by Ashwin Parameswaran

February 7th, 2010 at 1:14 pm

Knightian Uncertainty and the Resilience-Stability Trade-off


This note examines the implications of adaptation by economic agents under Knightian uncertainty for the resilience of the macroeconomic system. It expands on themes I touched upon here and here. To summarise the key conclusions:

  • Under Knightian uncertainty, homo economicus is an irrelevant construct. The “optimal” course of action is one that restricts the choice of actions available and depends on a small set of simple rules and heuristics.
  • The choice of actions is restricted to those that are applicable in reasonably likely or recurrent situations. Actions applicable to rare situations are ignored. Therefore, it is entirely rational to take on severely negatively skewed bets.
  • By the same logic, economic agents find it harder to adapt to severe macroeconomic shocks as compared to mild shocks. This is the rationale for Axel Leijonhufvud’s “Corridor Hypothesis”.
  • Minsky’s Financial Instability Hypothesis states that prolonged periods of stability reduce the width of the “corridor” until the point where a macroeconomic crisis is inevitable.
  • The only assumptions needed to draw the above conclusions are the existence of uncertainty and sufficient adaptive/selective forces operating upon economic agents.
  • Minsky believed that this loss of resilience in the macroeconomic system is endogenous and inevitable. Although such a loss of resilience can arise endogenously, the evidence suggests that a significant proportion of the blame for the current crisis can be attributed to the stabilising policies favoured during the Great Moderation.
  • Buzz Holling’s work on ecosystem resilience has highlighted the peril of stabilising complex adaptive systems and how increased stability reduces system resilience.

Uncertainty and Negatively Skewed Payoffs

In a previous note, I explained how the existence of Knightian uncertainty leads to a perceived preference for severely negatively skewed payoffs. Ronald Heiner explains exactly how this occurs in his seminal paper on decision making under uncertainty.

Heiner argues that in the presence of uncertainty, the “optimal” course of action is one that restricts the choice of actions available and depends on a small set of simple rules and heuristics. In his words,

“Think of an omniscient agent with literally no uncertainty in identifying the most preferred action under any conceivable condition, regardless of the complexity of the environment which he encounters. Intuitively, such an agent would benefit from maximum flexibility to use all potential information or to adjust to all environmental conditions, no matter how rare or subtle those conditions might be. But what if there is uncertainty because agents are unable to decipher all of the complexity of the environment? Will allowing complete flexibility still benefit the agents?

I believe the general answer to this question is negative: that when genuine uncertainty exists, allowing greater flexibility to react to more information or administer a more complex repertoire of actions will not necessarily enhance an agent’s performance.”

In Heiner’s framework, actions chosen must satisfy a “Reliability Condition” which he summarises as: “do so if the actual reliability in selecting the action exceeds the minimum required reliability necessary to improve performance.” This required reliability cannot be achieved in the tails of the distribution and economic agents therefore ignore actions that are appropriate only in such situations. This explains our reluctance to insure against rare disasters, as Heiner notes:

“Rare events are precisely those which are remote to a person’s normal experience, so that uncertainty in detecting which rare disasters to insure against increases as p (the probability of disaster) approaches zero. Such greater uncertainty will reduce the reliability of insurance decisions as disasters become increasingly remote to a person’s normal experience.”

“At some point as p approaches zero, the Reliability Condition will be violated. This implies people will switch from typically buying to typically ignoring insurance conditions, which is just the pattern documented in Kunreuther’s 1978 study.”

Note the similarity between Heiner’s analysis of tail risks under uncertainty and Kahneman and Tversky’s distinction between “possible” and “impossible” events. The reliability problem is also connected to the difficulty of ascertaining the properties of tail events through a statistical analysis of historical data.
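
A loose paraphrase of Heiner’s condition can be put into code (the notation and numbers below are mine, not Heiner’s; this is only a sketch of the 1983 paper’s logic):

```python
# Loose paraphrase of Heiner's Reliability Condition: select an action only if
# the odds of using it correctly exceed a tolerance limit that explodes as the
# relevant situation becomes rarer. Notation and numbers are mine, not Heiner's.

def passes_reliability(r, w, gain, loss, p):
    """r: P(act | acting is right); w: P(act | acting is wrong);
    gain/loss: payoff when right/wrong; p: probability the situation arises."""
    return (r / w) > (loss / gain) * (1 - p) / p

# Insuring against a disaster whose probability p shrinks toward zero:
for p in [0.10, 0.01, 0.001]:
    print(f"p = {p:5.3f}: insure? {passes_reliability(r=0.9, w=0.1, gain=10, loss=1, p=p)}")
# Even holding reliability fixed, the condition fails as p -> 0; Heiner's point
# is stronger still, since reliability itself degrades for rare events.
```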

In an uncertainty-driven framework, it may be more appropriate to refer to this pattern as a reluctance to insure against tail risks rather than a preference for “blowup risks”. This distinction is also relevant in the moral hazard debate, where the actions are often better characterised as a neglect of insurance of tail risks than as an explicit taking on of such risks.

Impossible Events and Axel Leijonhufvud’s “Corridor Hypothesis”

Heiner also extends this analysis of the reluctance to insure against “impossible” events to provide the rationale for Axel Leijonhufvud’s “Corridor Hypothesis” of macroeconomic shocks and recessions. In his words:

“Now suppose, analogous to the insurance case, that there are different types of shocks, some more severe than others, where larger shocks are possible but less and less likely to happen. In addition, the reliability of detecting when and how to prepare for large shocks decreases as their determinants and repercussions are more remote to agents’ normal experience.

In a similar manner to that discussed for the insurance case, we can derive that the economy’s structure will evolve so as to prepare for and react quickly to small shocks. However, outside of a certain zone or “corridor” around its long-run growth path, it will only very sluggishly react to sufficiently large, infrequent shocks.”

Minsky’s Financial Instability Hypothesis and Leijonhufvud’s Corridor

Minsky’s Financial Instability Hypothesis (FIH) asserts that stability breeds instability i.e. stability reduces the width of the corridor to the point where even a small shock is enough to push the system outside it. Leijonhufvud acknowledged Minsky’s insight that the width of the corridor was variable and depended upon the recency of past disturbances. In his own words: “Our theory implies a variable width of the corridor. Transactors who have once suffered through a displacement of unanticipated magnitude (on the order of the Great Depression, say) will be encouraged to maintain larger buffers thereafter-until the memory dims…”

The assertion that stability breeds instability is well established in ecology, especially in Buzz Holling’s work, as I discussed here. Heiner’s framework explains Minsky’s assertion as the logical consequence of agent adaptation under uncertainty. But the same can be explained via “natural selection”-like mechanisms as well. The most relevant is the principal-agent relationship. Principals that “select” agents under asymmetric information can effectively mimic the effect of natural selection in ecosystems.

Minsky also argues that sooner or later, a capitalist economy will move outside this corridor due to entirely endogenous reasons. This is a more controversial assertion and can only be evaluated through a careful analysis of the empirical evidence. The assertion that an economy can move outside the corridor due to endogenous factors is difficult to reject. All it takes is a chance prolonged period of stability. However, this does not imply that the economy must move outside the corridor, which requires us to prove that prolonged periods of stability are the norm rather than the exception in a capitalist economy.

Minsky’s Financial Instability Hypothesis and C.S. Holling’s conception of Resilience and Stability

Minsky’s idea that stability breeds instability is an important theme in the field of ecology. Buzz Holling however defined the problem as loss of resilience rather than instability. Resilience and stability are dramatically different concepts and Holling explained the difference in his seminal paper on the topic as follows:

“Resilience determines the persistence of relationships within a system and is a measure of the ability of these systems to absorb changes of state variables, driving variables, and parameters, and still persist. In this definition resilience is the property of the system and persistence or probability of extinction is the result. Stability, on the other hand, is the ability of a system to return to an equilibrium state after a temporary disturbance. The more rapidly it returns, and with the least fluctuation, the more stable it is. In this definition stability is the property of the system and the degree of fluctuation around specific states the result.”

The relevant insight in Holling’s work is that resilience and stability as goals for an ecosystem are frequently at odds with each other. In many ecosystems, “the very fact of low stability seems to produce high resilience”. Conversely, “the goal of producing a maximum sustained yield may result in a more stable system of reduced resilience”. Minsky’s hypothesis is thus better described as “stability breeds loss of resilience”, not “stability breeds instability”.
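
The distinction can be illustrated with the standard ball-in-a-well picture from dynamical systems (a toy model of my own, not taken from Holling’s paper): stability corresponds to the steepness of the well, resilience to its width.

```python
# Toy ball-in-a-well illustration: stability = how fast the system returns to
# equilibrium (steepness); resilience = how big a shock it can absorb without
# crossing into another basin (width). A toy model, not from Holling's paper.

def outcome(steepness, basin_width, shock):
    x = shock  # displacement from equilibrium after a perturbation
    for _ in range(1000):
        if abs(x) >= basin_width:
            return "flips to a new state"  # crossed the ridge of the basin
        x -= 0.01 * steepness * x          # restoring force pulls x back to 0
    return "persists"

for steepness, width in [(10.0, 1.0), (1.0, 3.0)]:
    print(f"steepness {steepness:4.1f}, basin width {width:.1f} -> "
          f"{outcome(steepness, width, shock=2.0)}")
# The highly stable but narrow system flips under the same shock that the
# less stable but wider (more resilient) system absorbs.
```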

The Pathology of Macroeconomic Stabilisation

The “Pathology of Natural Resource Management” is described by Holling and Meffe as follows:

“when the range of natural variation in a system is reduced, the system loses resilience. That is, a system in which natural levels of variation have been reduced through command-and-control activities will be less resilient than an unaltered system when subsequently faced with external perturbations.”

Similarly, the dominant macroeconomic policy paradigm explicitly aims to stabilise the macroeconomy. In particular, monetary policy during the Great Moderation was used as a blunt instrument to put out all but the most minor macroeconomic fires. Stabilising policies of this nature can and do cause the same kind of loss of resilience that Minsky describes. Indeed, as I mentioned in my previous note, agent adaptation to stabilising monetary and fiscal policies can be viewed as a more profound kind of moral hazard. Economic agents may take on severely negatively skewed bets not even as an adaptation to uncertainty but merely as a rational response to stabilising macroeconomic policies.


Written by Ashwin Parameswaran

January 30th, 2010 at 2:08 pm

Efficient Markets and Pattern Predictions


Markets can be “inefficient” and yet almost impossible to beat because of the existence of “Limits to Arbitrage”. It is essential not only to have the correct view but also to know when the view will be realised.

Why is it so difficult to time the market? Because the market is a complex adaptive system, and complex adaptive systems are amenable only to what Hayek called “pattern predictions”. Hayek introduced this concept in his essay “The Theory of Complex Phenomena”, where he analysed economic and other social phenomena as “phenomena of organised complexity” (a term introduced by Warren Weaver in this essay).

In such phenomena, according to Hayek, only pattern predictions are possible about the social structure as a whole. As he explained in an interview with Leo Rosten:

“We can build up beautiful theories which would explain everything, if we could fit into the blanks of the formulae the specific information; but we never have all the specific information. Therefore, all we can explain is what I like to call “pattern prediction.” You can predict what sort of pattern will form itself, but the specific manifestation of it depends on the number of specific data, which you can never completely ascertain. Therefore, in that intermediate field — intermediate between the fields where you can ascertain all the data and the fields where you can substitute probabilities for the data — you are very limited in your predictive capacities.”

“Our capacity of prediction in a scientific sense is very seriously limited. We must put up with this. We can only understand the principle on which things operate, but these explanations of the principle, as I sometimes call them, do not enable us to make specific predictions on what will happen tomorrow.”

Hayek was adamant however that theories of pattern prediction were useful and scientific and had “empirical significance”. The example he drew upon was the Darwinian theory of evolution by natural selection, which provided only predictions as to the patterns one could observe over evolutionary time at levels of analysis above the individual entity.

Hayek’s intention with his theory was to debunk the utility of statistics and econometrics in forecasting macroeconomic outcomes (see his Nobel lecture). The current neoclassical defence against the failure to predict the crisis takes the other extreme position i.e. our theories are right because no one could have predicted the crisis. This contention explicitly denies the possibility of “pattern predictions” and is not a valid defence. Any macroeconomic theory should be capable of explaining the patterns of our economic system – no more, no less.

One of the key reasons why timing and exact prediction is so difficult is the futility of conventional cause-effect thinking in complex adaptive systems. As Michael Mauboussin observed, “Cause and effect thinking is futile, if not dangerous”. The underlying causes may be far removed from the effect, both in time and in space, and the proximate cause may only be the “straw that broke the camel’s back”.

Many excellent examples of “pattern prediction” can be seen in ecology. For example, the proximate cause of the catastrophic degradation of Jamaica’s coral reefs since the 1980s was the mass mortality of the dominant species of urchin (reference). However, the real reason was the progressive loss of diversity due to overfishing since the 1950s.

As C.S. Holling observed in his analysis of a similar collapse in fisheries in the Great Lakes:

“Whatever the specific causes, it is clear that the precondition for the collapse was set by the harvesting of fish, even though during a long period there were no obvious signs of problems. The fishing activity, however, progressively reduced the resilience of the system so that when the inevitable unexpected event occurred, the populations collapsed. If it had not been the lamprey, it would have been something else: a change in climate as part of the normal pattern of fluctuation, a change in the chemical or physical environment, or a change in competitors or predators.”

The financial crisis of 2008-2009 can be analysed as the inevitable result of a progressive loss of system resilience. Whether the underlying cause was a buildup of debt, moral hazard or monetary policy errors is a different debate and can only be analysed by looking at the empirical evidence. However, just as is the case in ecology, the inability to predict the time of collapse or even the proximate cause of collapse does not equate to an inability to explain macroeconomic patterns.


Written by Ashwin Parameswaran

December 31st, 2009 at 10:52 am

Minsky’s Financial Instability Hypothesis and Holling’s conception of Resilience and Stability


Minsky’s Financial Instability Hypothesis

Minsky’s Financial Instability Hypothesis (FIH) is best summarised as the idea that “stability is destabilizing”. As Laurence Meyer put it:

“a period of stability induces behavioral responses that erode margins of safety, reduce liquidity, raise cash flow commitments relative to income and profits, and raise the price of risky relative to safe assets – all combining to weaken the ability of the economy to withstand even modest adverse shocks.”

Meyer’s interpretation highlights two important aspects of Minsky’s hypothesis:

  • It is the “behavioral responses” of economic agents that induce the fragility into the macroeconomic system.
  • After a prolonged period of stability, the economy cannot “withstand even modest adverse shocks”.

Holling’s “Pathology of Natural Resource Management”

Minsky’s idea that stability breeds instability is an important theme in the field of ecology. The “Pathology of Natural Resource Management” is described by Holling and Meffe as follows:

“when the range of natural variation in a system is reduced, the system loses resilience.”

Resilience and stability are dramatically different concepts. Holling explained the difference in his seminal paper on the topic as follows:

“Resilience determines the persistence of relationships within a system and is a measure of the ability of these systems to absorb changes of state variables, driving variables, and parameters, and still persist. In this definition resilience is the property of the system and persistence or probability of extinction is the result. Stability, on the other hand, is the ability of a system to return to an equilibrium state after a temporary disturbance. The more rapidly it returns, and with the least fluctuation, the more stable it is. In this definition stability is the property of the system and the degree of fluctuation around specific states the result.”

The relevant insight in Holling’s work is that resilience and stability as goals for an ecosystem are frequently at odds with each other. In many ecosystems, “the very fact of low stability seems to produce high resilience”. Conversely, “the goal of producing a maximum sustained yield may result in a more stable system of reduced resilience”.

Forest Fires: An Example of the Resilience-Stability Tradeoff

One of the most striking examples of the resilience-stability tradeoff in ecosystems is the impact of fire suppression over the last century on the dynamics of forest fires in the United States.

From Holling and Meffe:

“Suppression of fire in fire-prone ecosystems is remarkably successful in reducing the short-term probability of fire in the national parks of the United States and in fire-prone suburban regions. But the consequence is an accumulation of fuel over large areas that eventually produces fires of an intensity, extent, and human cost never before encountered (Kilgore 1976; Christensen et al. 1989). Fire suppression in systems that would frequently experience low-intensity fires results in the systems becoming severely affected by the huge fires that finally erupt; that is, the systems are not resilient to the major fires that occur with large fuel loads and may fundamentally change state after the fire.”

For example, fire suppression “selects” for tree species that are not adapted to frequent fires over species like the Ponderosa Pine that have adapted to survive frequent fires. Over time, the composition of the forest ecosystem tilts towards species that are less capable of withstanding even a minor disturbance that would have been absorbed easily in the absence of fire suppression.

The similarity to Meyer’s interpretation of the FIH is striking. In an ecosystem, it is natural selection rather than adaptation that induces the fragility, but the result in both the economic and the ecological system is an inability to absorb a modest shock i.e. a loss of resilience.


Written by Ashwin Parameswaran

December 6th, 2009 at 5:09 pm

Regulatory Arbitrage and the Efficiency-Resilience Tradeoff


On the subject of securitization and regulatory arbitrage, Daniel Tarullo notes:

“securitization appears to present a case in which efforts to plug gaps in regulatory coverage are quickly and repeatedly overtaken by innovative arbitraging measures.”

Arnold Kling noted the problem of adaptation of economic agents to changes in the regulatory regime in his paper on the financial crisis:

“The lesson is that financial regulation is not like a math problem, where once you solve it the problem stays solved. Instead, a regulatory regime elicits responses from firms in the private sector. As financial institutions adapt to regulations, they seek to maximize returns within the regulatory constraints. This takes the institutions in the direction of constantly seeking to reduce the regulatory “tax” by pushing to amend rules and by coming up with practices that are within the letter of the rules but contrary to their spirit. This natural process of seeking to maximize profits places any regulatory regime under continual assault, so that over time the regime’s ability to prevent crises degrades.”

Regulatory arbitrage follows from the application of Goodhart’s Law to financial regulation. One of Daniel Tarullo’s key recommendations to counter this arbitrage is the adoption of a “simple leverage ratio requirement”. Such blunt measures reduce efficiency – of course, we can make the system more resilient if we insist on blanket 25% bank capital ratios and ban all bonuses, but this would be a grossly inefficient solution.
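
A toy comparison shows both what the blunt instrument buys and what it costs (the numbers below are illustrative only):

```python
# Toy comparison of a risk-weighted capital rule and a simple leverage ratio.
# All numbers are illustrative only.

CAPITAL = 10.0       # units of equity capital
REQUIREMENT = 0.08   # capital required per unit of risk-weighted assets
LEVERAGE_CAP = 25.0  # maximum assets per unit of capital under a leverage ratio

print("risk-weighted rule:")
for name, risk_weight in [("corporate loans", 1.00), ("AAA securitisations", 0.20)]:
    print(f"  {name:20s} -> max assets {CAPITAL / (REQUIREMENT * risk_weight):6.0f}")
print(f"leverage ratio rule   -> max assets {CAPITAL * LEVERAGE_CAP:6.0f}, whatever the rating")
# Repackaging loans into highly rated tranches quintuples attainable balance
# sheet size under the risk-weighted rule; the blunt leverage ratio removes
# that arbitrage, at the cost of ignoring asset risk altogether: the
# efficiency-resilience tradeoff in miniature.
```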

The tradeoff between efficiency and resilience is a constant theme in fields as diverse as corporate risk management, ecosystem management and in this case, financial regulation.


Written by Ashwin Parameswaran

December 5th, 2009 at 7:19 am