macroresilience

resilience, not stability

Archive for January, 2010

Knightian Uncertainty and the Resilience-Stability Trade-off

with 11 comments

This note examines the implications of adaptation by economic agents under Knightian uncertainty for the resilience of the macroeconomic system. It expands on themes I touched upon here and here. To summarise the key conclusions,

  • Under Knightian uncertainty, homo economicus is an irrelevant construct. The “optimal” course of action is one that restricts the choice of actions available and depends on a small set of simple rules and heuristics.
  • The choice of actions is restricted to those that are applicable in reasonably likely or recurrent situations. Actions applicable to rare situations are ignored. Therefore, it is entirely rational to take on severely negatively skewed bets.
  • By the same logic, economic agents find it harder to adapt to severe macroeconomic shocks than to mild ones. This is the rationale for Axel Leijonhufvud’s “Corridor Hypothesis”.
  • Minsky’s Financial Instability Hypothesis states that prolonged periods of stability reduce the width of the “corridor” until the point where a macroeconomic crisis is inevitable.
  • The only assumptions needed to draw the above conclusions are the existence of uncertainty and sufficient adaptive/selective forces operating upon economic agents.
  • Minsky believed that this loss of resilience in the macroeconomic system is endogenous and inevitable. Although such a loss of resilience can arise endogenously, the evidence suggests that a significant proportion of the blame for the current crisis can be attributed to the stabilising policies favoured during the Great Moderation.
  • Buzz Holling’s work on ecosystem resilience has highlighted the peril of stabilising complex adaptive systems and how increased stability reduces system resilience.

Uncertainty and Negatively Skewed Payoffs

In a previous note, I explained how the existence of Knightian uncertainty leads to a perceived preference for severely negatively skewed payoffs. Ronald Heiner explains exactly how this occurs in his seminal paper on decision making under uncertainty.

Heiner argues that in the presence of uncertainty, the “optimal” course of action is one that restricts the choice of actions available and depends on a small set of simple rules and heuristics. In his words,

” Think of an omniscient agent with literally no uncertainty in identifying the most preferred action under any conceivable condition, regardless of the complexity of the environment which he encounters. Intuitively, such an agent would benefit from maximum flexibility to use all potential information or to adjust to all environmental conditions, no matter how rare or subtle those conditions might be. But what if there is uncertainty because agents are unable to decipher all of the complexity of the environment? Will allowing complete flexibility still benefit the agents?

I believe the general answer to this question is negative: that when genuine uncertainty exists, allowing greater flexibility to react to more information or administer a more complex repertoire of actions will not necessarily enhance an agent’s performance. “

In Heiner’s framework, actions chosen must satisfy a “Reliability Condition”, which he summarises as follows: only select an action “if the actual reliability in selecting the action exceeds the minimum required reliability necessary to improve performance”. This required reliability cannot be achieved in the tails of the distribution, and economic agents therefore ignore actions that are appropriate only in such situations. This explains our reluctance to insure against rare disasters, as Heiner notes:

” Rare events are precisely those which are remote to a person’s normal experience, so that uncertainty in detecting which rare disasters to insure against increases as p(probability of disaster) approaches zero. Such greater uncertainty will reduce the reliability of insurance decisions as disasters become increasingly remote to a person’s normal experience.”

” At some point as p approaches zero, the Reliability Condition will be violated. This implies people will switch from typically buying to typically ignoring insurance conditions, which is just the pattern documented in Kunreuther’s 1978 study.”

Note the similarity between Heiner’s analysis of tail risks under uncertainty and Kahneman and Tversky’s distinction between “possible” and “impossible” events. The reliability problem is also connected to the difficulty of ascertaining the properties of tail events through a statistical analysis of historical data.

In an uncertainty-driven framework, it may be more appropriate to describe this pattern as a reluctance to insure against tail risks rather than a preference for “blowup risks”. This distinction is also relevant in the moral hazard debate, where the actions in question are better characterised as a neglect to insure against tail risks than as an explicit taking on of such risks.
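To make this mechanism concrete, below is a minimal sketch of a Heiner-style Reliability Condition in Python. The notation, functional forms and parameter values are my own illustrative assumptions rather than Heiner’s exact formulation: the only point demonstrated is that as the probability of the disaster falls, the required reliability threshold explodes while actual detection reliability decays, so at some point the insurance action drops out of the agent’s repertoire.

```python
# Illustrative sketch of a Heiner-style Reliability Condition.
# All functional forms and parameter values are assumptions for illustration.
#   p : probability that the rare disaster occurs (insurance is the "right" action)
#   g : gain from being insured when the disaster does occur
#   l : cost of carrying insurance when it does not
#   r : probability of correctly choosing to insure when it is advantageous
#   w : probability of mistakenly insuring when it is not
# The action is admitted to the repertoire only if r/w exceeds the
# tolerance limit (l/g) * (1 - p) / p.

def passes_reliability_condition(p, g, l, r, w):
    tolerance_limit = (l / g) * (1.0 - p) / p
    return (r / w) > tolerance_limit

def detection_reliability(p):
    # Reliability of detecting which disasters to insure against decays as the
    # event becomes more remote to normal experience (purely illustrative form).
    return 0.5 + 0.45 * min(1.0, 20.0 * p)

for p in [0.2, 0.1, 0.05, 0.01, 0.005, 0.001]:
    r = detection_reliability(p)
    w = 1.0 - r
    insure = passes_reliability_condition(p, g=100.0, l=1.0, r=r, w=w)
    print(f"p = {p:6.3f}   insure: {insure}")
```

With these (arbitrary) numbers the agent insures against the likelier disasters and ignores the remote ones, which is exactly the switch from typically buying to typically ignoring insurance that Heiner describes.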

Impossible Events and Axel Leijonhufvud’s “Corridor Hypothesis”

Heiner also extends this analysis of the reluctance to insure against “impossible” events to provide the rationale for Axel Leijonhufvud’s “Corridor Hypothesis” of macroeconomic shocks and recessions. In his words:

“Now suppose, analogous to the insurance case, that there are different types of shocks, some more severe than others, where larger shocks are possible but less and less likely to happen. In addition, the reliability of detecting when and how to prepare for large shocks decreases as their determinants and repercussions are more remote to agents’ normal experience.

In a similar manner to that discussed for the insurance case, we can derive that the economy’s structure will evolve so as to prepare for and react quickly to small shocks. However, outside of a certain zone or “corridor” around its long-run growth path, it will only very sluggishly react to sufficiently large, infrequent shocks.”

Minsky’s Financial Instability Hypothesis and Leijonhufvud’s Corridor

Minsky’s Financial Instability Hypothesis (FIH) asserts that stability breeds instability, i.e. stability reduces the width of the corridor to the point where even a small shock is enough to push the system outside it. Leijonhufvud acknowledged Minsky’s insight that the width of the corridor was variable and depended upon the recency of past disturbances. In his own words: “Our theory implies a variable width of the corridor. Transactors who have once suffered through a displacement of unanticipated magnitude (on the order of the Great Depression, say) will be encouraged to maintain larger buffers thereafter – until the memory dims…”

The assertion that stability breeds instability is well established in ecology, especially in Buzz Holling’s work, as I discussed here. Heiner’s framework explains Minsky’s assertion as the logical consequence of agent adaptation under uncertainty. But the same result can also be explained via “natural selection”-like mechanisms. The most relevant of these is the principal-agent relationship: principals that “select” agents under asymmetric information can effectively mimic the effect of natural selection in ecosystems.

Minsky also argues that sooner or later, a capitalist economy will move outside this corridor for entirely endogenous reasons. This is a more controversial assertion and can only be evaluated through a careful analysis of the empirical evidence. The assertion that an economy can move outside the corridor due to endogenous factors is difficult to reject: all it takes is a chance prolonged period of stability. However, this does not imply that the economy must move outside the corridor; proving that would require showing that prolonged periods of stability are the norm rather than the exception in a capitalist economy.

Minsky’s Financial Instability Hypothesis and C.S. Holling’s conception of Resilience and Stability

Minsky’s idea that stability breeds instability is an important theme in the field of ecology. Buzz Holling however defined the problem as loss of resilience rather than instability. Resilience and stability are dramatically different concepts and Holling explained the difference in his seminal paper on the topic as follows:

“Resilience determines the persistence of relationships within a system and is a measure of the ability of these systems to absorb changes of state variables, driving variables, and parameters, and still persist. In this definition resilience is the property of the system and persistence or probability of extinction is the result. Stability, on the other hand, is the ability of a system to return to an equilibrium state after a temporary disturbance. The more rapidly it returns, and with the least fluctuation, the more stable it is. In this definition stability is the property of the system and the degree of fluctuation around specific states the result.”

The relevant insight in Holling’s work is that resilience and stability as goals for an ecosystem are frequently at odds with each other. In many ecosystems, “the very fact of low stability seems to produce high resilience“. Conversely, “the goal of producing a maximum sustained yield may result in a more stable system of reduced resilience”. Minsky’s hypothesis is thus better described as “stability breeds loss of resilience”, not “stability breeds instability”.

The Pathology of Macroeconomic Stabilisation

The “Pathology of Natural Resource Management” is described by Holling and Meffe as follows:

“when the range of natural variation in a system is reduced, the system loses resilience. That is, a system in which natural levels of variation have been reduced through command-and-control activities will be less resilient than an unaltered system when subsequently faced with external perturbations.”

Similarly, the dominant macroeconomic policy paradigm explicitly aims to stabilise the macroeconomy. In particular, monetary policy during the Great Moderation was used as a blunt instrument to put out all but the most minor macroeconomic fires. Stabilising policies of this nature can and do cause the same kind of loss of resilience that Minsky describes. Indeed, as I mentioned in my previous note, agent adaptation to stabilising monetary and fiscal policies can be viewed as a more profound kind of moral hazard. Economic agents may take on severely negatively skewed bets not even as an adaptation to uncertainty but merely as a rational response to stabilising macroeconomic policies.


Written by Ashwin Parameswaran

January 30th, 2010 at 2:08 pm

On The Futility of Banning Proprietary Risk-Taking by Banks: Redux

with 2 comments

It seems that Obama has come around to Paul Volcker’s position that “protected” financial institutions must not be allowed to take on proprietary risk. In this interview in Der Spiegel, Paul Volcker argues that banks must not be allowed to take on proprietary risk except for risk incidental to “client activities”. Quoting from the interview:

SPIEGEL: Banking should become boring again?

Volcker: Banking will never be boring. Banking is a risky business. They are going to have plenty of activity. They can do underwriting. They can do securitization. They can do a lot of lending. They can do merger and acquisition advice. They can do investment management. These are all client activities. What I don’t want them doing is piling on top of that risky capital market business. That also leads to conflicts of interest.”

This is a more nuanced version of the argument that calls for the reinstatement of the Glass-Steagall Act. But it suffers from two fatal flaws:

Regulatory Arbitrage: Separation of “client risk” and “proprietary risk” sounds good in theory but is almost impossible to enforce in practice. As I’ve discussed previously, a detailed and fine-tuned regulatory policy will be easy to arbitrage, while a blunt policy will result in a grossly inefficient financial system.

Losses on “Client Activities” were the major driver in the current crisis. My analysis of the UBS shareholder report highlighted how the accumulation of super-senior CDO tranches was justified primarily by their perceived importance in facilitating the sale of fee-generating junior tranches to clients. Quoting from the report: “within the CDO desk, the ability to retain these tranches was seen as a part of the overall CDO business, providing assistance to the structuring business more generally.” It is the losses on these tranches issued in the name of facilitating client business that were at the core of the crisis. It is these tranches that caused the majority of the losses on banks’ balance sheets. It is losses on insuring these tranches that brought down AIG. Segregated proprietary risk is monitored closely by almost all banks. The real villain of the piece was proprietary risk taken on under the cover of facilitating client business.

Implementation of the Ban

Clearly a simple ban on internal hedge funds and proprietary trading desks would not work. All banks trade the same products on their clients’ behalf as they do on a proprietary basis, and such a ban could be nullified simply by folding all proprietary operations into trading desks that also facilitate client business.

Another alternative would be to enforce market risk limits on banks, based for example on VaR. But even if VaR-based risk limits had been enforced on banks before the previous crisis, the crisis would not have been averted. The super-senior CDO tranches at the heart of the crisis were low-VaR assets on their own and “zero VaR” assets when merely delta-hedged without any hedging of higher-order risks.

Again quoting from the UBS report: “MRC VaR methodologies relied on the AAA rating of the Super Senior positions. The AAA rating determined the relevant product-type time series to be used in calculating VaR. In turn, the product-type time series determined the volatility sensitivities to be applied to Super Senior positions. Until Q3 2007, the 5-year time series had demonstrated very low levels of volatility sensitivities. As a consequence, even unhedged Super Senior positions contributed little to VaR utilisation.” “Once hedged, either through NegBasis or AMPS trades, the Super Senior positions were VaR and Stress Testing neutral (i.e., because they were treated as fully hedged, the Super Senior positions were netted to zero and therefore did not utilize VaR and Stress limits). The CDO desk considered a Super Senior hedged with 2% or more of AMPS protection to be fully hedged. In several MRC reports, the long and short positions were netted, and the inventory of Super Seniors was not shown, or was unclear. For AMPS trades, the zero VaR assumption subsequently proved to be incorrect as only a portion of the exposure was hedged as described in section 4.2.3, although it was believed at the time that such protection was sufficient.”
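The point is easy to reproduce with a toy example. Below is a bare-bones historical-simulation VaR (my own illustration, not UBS’s actual MRC methodology): a position whose look-back window contains only calm, AAA-like price history shows a negligible VaR, and once a notionally offsetting hedge is netted against it the measured risk is exactly zero, whatever the true tail exposure.

```python
# Toy historical-simulation VaR. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
notional = 100e6                                            # $100m "super-senior" proxy
calm_daily_pnl = rng.normal(0.0, 0.0002, 250) * notional    # ~2bp daily vol in the look-back window

def hist_var(pnl, confidence=0.99):
    # Historical-simulation VaR: loss at the chosen percentile of past P&L,
    # reported as a non-negative figure.
    return max(0.0, -np.percentile(pnl, 100 * (1 - confidence)))

unhedged_var = hist_var(calm_daily_pnl)
hedged_var = hist_var(calm_daily_pnl - calm_daily_pnl)      # long and short netted to zero

print(f"99% VaR, unhedged super-senior proxy:      ${unhedged_var:,.0f}")
print(f"99% VaR, 'fully hedged' (netted) position: ${hedged_var:,.0f}")
# Neither number says anything about a regime in which the thin protection is
# exhausted and correlations jump: that loss sits entirely outside the window.
```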

To summarise, it is extremely unlikely that there exists a way to ban proprietary risk-taking that cannot be circumvented by the banks.


Written by Ashwin Parameswaran

January 21st, 2010 at 6:43 pm

Do Investors Prefer Negative Skewness?

with 10 comments

Bootvis asks in a comment on my previous post:

“Financial theory says that rational investors should prefer positive skewness. This is proven under some weak assumptions in “On The Direction of Preference for Moments of Higher Order Than The Variance” by Scott and Horvath (1980) (I can only find it on jstor, behind a wall). What’s your view on this discrepancy?”

I have not read the above paper and do not have access to JSTOR either, so the response below is just my broad view on the topic.

Agents prefer Negative Skewness

My emphasis so far has been on the preference for maximising negative skewness from an agent’s perspective in a principal-agent relationship. This preference is exacerbated by the moral hazard subsidy. I conclude that the combination of the moral hazard subsidy and the principal-agent problem allows agents to simultaneously maximise negative skewness and improve the risk-return trade-off for owners by increasing leverage.

Whether investors who are not agents would prefer negative skewness is a trickier question. Taleb in this paper clearly concludes that investors prefer negatively skewed bets. But as Bootvis mentions, this contradicts the consensus opinion of financial theory that investors prefer positive skewness. An obvious example of the preference for positive skewness is the phenomenon of “longshot bias” or the popularity of lotteries.

Kahneman-Tversky on Longshots and Black Swans

Kahneman and Tversky offer one way to reconcile these two viewpoints in this paper where they argue that “impossible” events, i.e. black swans, are neglected whereas “possible” but low probability events, i.e. longshots, are overweighted. Preference for negative skewness is not operative for mildly skewed payoffs. It is operative for severely skewed payoffs. As expressed by Kahneman and Tversky: “A change from impossibility to possibility or from possibility to certainty has a bigger impact than a comparable change in the middle of the scale.” In other words, there is a “category-boundary effect” when an event deemed impossible becomes possible. The event is significantly underweighted when deemed impossible and overweighted when it is suddenly deemed possible i.e. the lottery effect only kicks in when the event is deemed possible.

This phenomenon also explains the violence of market reaction and the dramatic move in market prices around this boundary. In fact, it can be argued that the change in market prices itself can cause a move in investor views across this category-boundary in a positive feedback process. For example, if market prices suggest that a tail risk is not improbable, this alone may incentivise economic actors to purchase insurance against the event.

Any behavioural explanation that invokes Kahneman and Tversky does not apply to “rational” investors as defined in modern financial theory. For example, the underweighting of tail events can be explained as a result of investors utilising the “Availability Heuristic” and inducing the probability distribution from past experience. As Andrew Haldane notes: “The longer the period since an event occurred, the lower the subjective probability attached to it by agents (the so-called “availability heuristic”). And below a certain bound, this subjective probability will effectively be set at zero (the “threshold heuristic”).”
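A crude way to see both effects side by side is to combine a standard probability weighting function with a cut-off below which an event is treated as impossible. The sketch below uses the Tversky-Kahneman (1992) weighting function with their estimated γ = 0.61 for gains; the 0.5% threshold is purely my own illustrative assumption.

```python
# Subjective probability sketch: Tversky-Kahneman (1992) probability weighting
# plus a crude "threshold heuristic". The 0.5% threshold is an assumption.

def tk_weight(p, gamma=0.61):
    # Tversky-Kahneman (1992) probability weighting function.
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def perceived_probability(p, threshold=0.005):
    if p < threshold:        # deemed "impossible": neglected entirely
        return 0.0
    return tk_weight(p)      # deemed "possible": low probabilities are overweighted

for p in [0.001, 0.004, 0.006, 0.01, 0.05, 0.50]:
    print(f"true p = {p:5.3f}   perceived p = {perceived_probability(p):5.3f}")
```

The discontinuity at the threshold is the category-boundary effect: an event weighted at zero while deemed impossible jumps to being overweighted the moment it is deemed possible, which is consistent with the violence of the market reaction around that boundary.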

Is a Preference for Severe Negative Skewness Irrational?

I would argue that using such heuristics may even be rational when not judged against the unrealistic standards of homo economicus. Inducing probabilities from past experience may be entirely “rational” given bounded rationality and an uncertain environment. As WB Arthur puts it: “Agents “learn” which of their hypotheses work, and from time to time they may discard poorly performing hypotheses and generate new “ideas” to put in their place. A belief model is clung to not because it is “correct”—there is no way to know this—but rather because it has worked in the past, and must cumulate a record of failure before it is worth discarding.”

It can be extremely difficult to ascertain the true distribution of an extremely negatively skewed bet from historical data. A long run without an observed loss makes us less confident about any initial negative thesis. This is also the primary explanation for why the preference for longshots manifests in horse races and lotteries: both are fundamentally less uncertain than financial markets. At least we know the full set of outcomes that are possible in a horse race! Real-life markets are nothing like betting markets. They are dominated by true uncertainty, and practitioners derive shaky conclusions from historical data and experience. Statistically, it can be extremely difficult to differentiate between alpha and extreme negative skew.
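A back-of-the-envelope calculation (the blow-up probability, carry and loss size below are purely assumed) shows how hard this identification problem is: a bet that earns a small carry and blows up rarely will, more often than not, present a spotless multi-year track record that is indistinguishable from genuine alpha.

```python
# Illustrative only: all three parameters are assumptions.
annual_blowup_prob = 0.02   # one blow-up every 50 years on average
annual_carry = 0.02         # 2% earned while nothing happens
blowup_loss = 1.50          # 150% (leveraged) loss when the tail event hits
years = 10

p_spotless_record = (1 - annual_blowup_prob) ** years
true_expected_return = annual_carry * (1 - annual_blowup_prob) - blowup_loss * annual_blowup_prob

print(f"Probability of a spotless {years}-year track record: {p_spotless_record:.1%}")
print(f"What that record shows: +{annual_carry:.1%} p.a. with near-zero volatility")
print(f"True expected return:   {true_expected_return:+.2%} p.a.")
```

A decade of data therefore says almost nothing about whether such a strategy has genuine positive expected value or merely a severe negative skew.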

A More Profound “Moral Hazard”

Severely negatively skewed bets usually blow up under conditions of severe distress in the economy, when the government is likely to intervene strongly to prevent systemic collapse. As David Merkel mentions in this note, the Great Moderation has been characterised by a Fed that was willing to cut interest rates at the smallest hint of trouble, even in situations where systemic risk was far from severe.

The current “no more Lehmans” policy in practice means that the Fed and the Treasury will do anything to prevent negative tail scenarios. In the face of such an explicit insurance policy, selling insurance against tail events may be entirely rational.

Negative Skewness and Fixed Income Markets

Taleb essentially denies that even longshots are overpriced in financial markets. I am not convinced that moderate negative skewness is “preferred” at all, and most of the empirical evidence he presents pertains to severely skewed payoffs. But there is one point he raises in a reply to Tyler Cowen’s review that deserves more analysis: the vast majority of blowups that Taleb recounts are in the fixed income markets.

Indeed, I think the preference for negative skewness is most relevant in fixed income markets. First, the original fixed income instrument, the bond, has an extremely negatively skewed payoff by construction, as does the original “alpha” strategy, the carry trade. Second, fixed income markets are dominated to a much larger extent by banks and other agents who are compromised by moral hazard and/or the principal-agent problem. Third, structured product markets in fixed income are dominated by new methods of constructing negatively skewed payoffs: callable range accruals in interest rate products, the PRDC in currency products, and almost any credit structured product that aims to achieve a AAA rating, such as the leveraged super-senior.

This is not to deny the popularity of severely negatively skewed payoffs in equities (e.g. the reverse convertible note). But they are nowhere near as prevalent.

Conclusion

The moral hazard subsidy, the principal-agent problem and investor “irrationality” each incentivise economic actors to take on considerably negatively skewed bets. Assessing the relative contributions of each from historical market data is extremely difficult given that there is no plausible way to separate the effect of the three causes. The problem is exacerbated by the difficulty in drawing any conclusions about tail events from a study of historical data. However, the concentration of historical blowups in fixed income markets leads me to suspect that the combination of moral hazard and the principal-agent problem had a more prominent role in fuelling the crisis than genuine “irrationality”.


Written by Ashwin Parameswaran

January 13th, 2010 at 5:17 pm

Implications of Moral Hazard in Banking

with 8 comments

In my previous post, I explained how a moral hazard outcome can arise even in the absence of explicit agent intentionality to take on more risk. This post will focus on the practical implications of the moral hazard problem in banking. Much of what follows is a restatement of arguments made in my first post that I felt needed to be highlighted. For references and empirical evidence, please refer to the earlier post.

Moral hazard can persist even if the bailout is uncertain. Even a small probability of a partial bailout will reduce the rate of return demanded by bank creditors, and this reduction constitutes an increase in firm value. The implication is that there is no partial solution to the moral hazard problem. There must be a credible and time-consistent commitment that under no circumstances will there be even a partial creditor bailout.

In a simple Modigliani-Miller world, the optimal leverage for a bank is therefore infinite. Even without invoking Modigliani-Miller, the argument for this is intuitive: if each incremental unit of debt is issued at less than its true economic cost, it adds to firm value. In reality of course, there are many limits to leverage, the most important being regulatory capital requirements.
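A stylised numerical example makes the intuition explicit. The figures below are my own assumptions, not a calibration: a risk-neutral creditor prices a one-period loan given a default probability, a recovery rate and some probability of being made whole by a bailout. Any positive bailout probability lowers the required yield below its fair value, and the resulting interest saving accrues to shareholders and scales with the amount of debt issued.

```python
# Stylised bailout-subsidy sketch; every parameter value is an assumption.
risk_free = 0.03
default_prob = 0.05
recovery = 0.40   # fraction of principal recovered in default without a bailout

def required_yield(bailout_prob):
    # Creditor breaks even in expectation:
    # (1 - q)(1 + y) + q*b*(1 + y) + q*(1 - b)*recovery = 1 + risk_free
    q, b = default_prob, bailout_prob
    return (1 + risk_free - q * (1 - b) * recovery) / (1 - q * (1 - b)) - 1

fair_yield = required_yield(0.0)
for bailout_prob in [0.0, 0.25, 0.50, 1.00]:
    y = required_yield(bailout_prob)
    for debt in [10e9, 50e9]:
        annual_subsidy = (fair_yield - y) * debt
        print(f"bailout prob {bailout_prob:4.2f}  debt ${debt / 1e9:3.0f}bn  "
              f"required yield {y:6.2%}  annual subsidy ${annual_subsidy / 1e6:6,.0f}m")
```

Even a modest probability of a partial bailout is worth a material amount each year, and doubling the debt doubles the transfer, which is why the subsidy pushes towards maximum leverage.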

Increased leverage and a riskier asset portfolio are not substitutable. Most moral hazard explanations of the crisis claim that the implicit/explicit creditor protection from deposit insurance and the TBTF doctrine causes banks to “take on more risk”, risk being defined as a combination of higher leverage and a riskier asset portfolio. The above arguments show that risk taken on via increased leverage is distinctly superior to the choice of a riskier asset portfolio: unlike increased leverage, riskier assets do not include any free-lunch component.

Regulatory capital requirements force banks to choose from a continuum of combinations, ranging from low leverage with risky assets at one end to high leverage with “safe” assets at the other (this argument assumes that off-balance-sheet vehicles cannot fully remove the regulatory capital constraint). Given that high leverage maximises the moral hazard subsidy, banks are biased towards the high-leverage, “low risk” combination. The frequent divergence between market risk-reward and ratings-implied risk-reward of course means that riskier assets will still be invested in, but they need to clear a higher hurdle than AAA assets.

High-powered incentives encourage managers/traders to operate under high leverage. Bonuses and equity compensation help align the interests of the owner and the manager.

Risk from an agent’s perspective is defined by the skewness of asset returns as well as the volatility. Managers/Traders are motivated to minimise the probability of a negative outcome i.e. maximise negative skew. This tendency is exacerbated in the presence of high-powered incentives. Andrew Lo illustrated this in his example of the Capital Decimation Partners in the context of hedge funds (Hedge fund investors of course do not have an incentive to maximise leverage without limit).

The above is a short account of the consequences of moral hazard that explains the key facts of the crisis: high leverage combined with an apparently safe asset portfolio of AAA assets such as super-senior tranches of ABS CDOs. Contrary to conventional wisdom, a moral hazard outcome is characterised by negatively skewed bets, not volatile bets.

The dominance of negatively skewed bets means that it is extremely difficult to detect the outcome of moral hazard by statistical methods. As Nassim Taleb explains here, a large sample size is essential. If the analysis is limited to a “calm” period, the mean as well as the variance of the distribution will be significantly misestimated. Moreover, the problem is exacerbated if one has assumed a symmetric distribution as is often the case. The “low measured variance” is easily misunderstood as a refutation of the moral hazard outcome rather than the confirmation it really represents.
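A small simulation (with assumed, purely illustrative parameters) shows how misleading the calm-period statistics are: the strategy earns a small spread most months and takes a large loss with low probability, yet a sample drawn from a loss-free stretch overstates the mean, shows essentially no variance, and makes the eventual blow-up look impossible under a symmetric fitted distribution.

```python
# Illustrative Monte Carlo of a negatively skewed strategy; numbers are assumptions.
import numpy as np

rng = np.random.default_rng(42)
monthly_loss_prob = 0.005      # roughly one blow-up every 17 years
spread, crash = 0.01, -0.60    # +1% in a normal month, -60% in a blow-up month

full_sample = np.where(rng.random(100_000) < monthly_loss_prob, crash, spread)
calm_sample = np.full(60, spread)   # a five-year window in which no loss was observed

print(f"calm window : mean {calm_sample.mean():+.4f}, std {calm_sample.std():.4f}")
print(f"full history: mean {full_sample.mean():+.4f}, std {full_sample.std():.4f}")
# A symmetric distribution fitted to the calm window has essentially zero
# variance, so the -60% month is not even representable; the low measured
# variance is evidence of the skewed bet, not a refutation of it.
```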


Written by Ashwin Parameswaran

January 6th, 2010 at 1:49 am

Moral Hazard: A Wide Definition

with 19 comments

A common objection to the moral hazard explanation of the financial crisis runs as follows: No banker explicitly factored in the possibility of a bailout into his decision-making process.

The obvious answer to this objection is the one Andrew Haldane noted:

“There was a much simpler explanation according to one of those present. There was absolutely no incentive for individuals or teams to run severe stress tests and show these to management. First, because if there were such a severe shock, they would very likely lose their bonus and possibly their jobs. Second, because in that event the authorities would have to step-in anyway to save a bank and others suffering a similar plight.

All of the other assembled bankers began subjecting their shoes to intense scrutiny. The unspoken words had been spoken. The officials in the room were aghast. Did banks not understand that the official sector would not underwrite banks mismanaging their risks?

Yet history now tells us that the unnamed banker was spot-on. His was a brilliant articulation of the internal and external incentive problem within banks. When the big one came, his bonus went and the government duly rode to the rescue. The time-consistency problem, and its associated negative consequences for risk management, was real ahead of crisis. Events since will have done nothing to lessen this problem, as successively larger waves of institutions have been supported by the authorities.”

Bankers did not consciously take on more risk. They took on less protection against risk, particularly extreme event risk.

But this too is an unnecessarily limited definition of moral hazard. Moral hazard can persist without any explicit intention on the part of the agent to behave differently.

Spontaneous Order

It is not at all necessary that each economic agent is consciously aware of and trying to maximise the value of the moral hazard subsidy. A system that exploits the subsidy efficiently can arise through each agent merely adapting and reacting to the local incentives and information put in front of him. For example, the CEO is under pressure to improve return on equity and increases leverage at the firm level. Individual departments of the bank may be extended cheap internal funding and told to hit aggressive profitability targets without using capital. And so on and so forth. It is not at all necessary that each individual trader in the bank is aware of or working towards a common goal.

Nevertheless, the system adapts in a manner as if it was consciously directed towards the goal of maximising the subsidy. In other words, a Hayekian spontaneous order could achieve the same result as a constructed order.

Natural Selection

The system can also move towards a moral hazard outcome without even partial intent or adaptation by economic agents given a sufficiently diverse agent strategy pool, a stable environment and some selection mechanism. This argument is similar to Armen Alchian’s famous paper arguing for the natural selection of profit-maximising firms.

The obvious selection mechanism in banking is the principal-agent relationship at all levels, i.e. shareholders can fire CEOs, CEOs can fire managers, managers can fire traders etc. If we start out with a diverse pool of economic agents pursuing different strategies, only one of which is a high-leverage, bet-the-house strategy, sooner or later this strategy will outcompete and dominate all other strategies (provided that the environment is stable).

In the context of Andrew Haldane’s comment on banks’ neglect of risk management, banks that invested in risk insurance would have systematically underperformed their peer group during the boom, and any CEO who elected to operate with low leverage would have been fired long before the crisis hit.
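The selection argument can be illustrated with a stylised toy model (entirely my own construction, not Alchian’s): start with a diverse pool of strategies differing only in leverage, let return on equity scale with leverage in a stable environment, and let principals reallocate capital towards whatever has performed best. The high-leverage strategy crowds out everything else long before the rare shock that would wipe it out arrives.

```python
# Stylised selection-by-principals sketch; every parameter is an assumption.
import numpy as np

rng = np.random.default_rng(1)
leverage = np.array([2.0, 5.0, 10.0, 30.0])   # diverse pool of strategies
share = np.full(4, 0.25)                      # initial capital share of each strategy
asset_return, shock_prob, shock_return = 0.02, 0.02, -0.05

for year in range(1, 41):
    r = shock_return if rng.random() < shock_prob else asset_return
    roe = np.maximum(leverage * r, -1.0)      # ROE scales with leverage, floored at -100%
    share = share * (1.0 + roe)               # capital grows with performance...
    share = share / share.sum()               # ...as principals reallocate towards winners
    if year % 10 == 0 or r < 0:
        tag = "SHOCK" if r < 0 else "calm "
        print(f"year {year:2d} [{tag}] capital shares by leverage {leverage.astype(int)}: "
              + np.array2string(share, precision=2))
```

If and when the shock arrives, the highest-leverage strategy takes a -100% return on equity and its share collapses, which is precisely the bet-the-house outcome that a stable environment had been rewarding until then.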

To summarise, moral hazard outcomes can and indeed did drive the financial crisis through a variety of channels: explicit agent intentionality, adaptation of agents to local incentives or merely market pressures weeding out those firms/agents that refuse to maximise the moral hazard free lunch.


Written by Ashwin Parameswaran

January 1st, 2010 at 8:30 pm