resilience, not stability

Archive for July, 2010

Critical Transitions in Markets and Macroeconomic Systems


This post is the first in a series that takes an ecological and dynamic approach to analysing market/macroeconomic regimes and transitions between these regimes.

Normal, Pre-Crisis and Crisis Regimes

In a post on market crises, Rick Bookstaber identified three regimes that any model of the market must represent (normal, pre-crisis and crisis) and analysed the statistical properties (volatility, correlation etc) of each of these regimes. The framework below, however, characterises each regime by its particular combination of positive and negative feedback processes, with variations and regime shifts determined by the adaptive and evolutionary processes operating within the system.

1. Normal regimes are resilient regimes. They are characterised by a balanced and diverse mix of positive and negative feedback processes. For every momentum trader who bets on the continuation of a trend, there is a contrarian who bets the other way.

2. Pre-crisis regimes are characterised by an increasing dominance of positive feedback processes. An unusually high degree of stability or a persistent trend progressively weeds out negative feedback processes from the system, leaving it vulnerable to collapse even from disturbances that it could easily have absorbed in its previously resilient normal state. Such regimes can arise from bubbles, but this is not necessary. Pre-crisis only implies that a regime change into the crisis regime is increasingly likely – in ecological terms, the pre-crisis regime is fragile and has suffered a significant loss of resilience.

3. Crisis regimes are essentially transitional – the disturbance has occurred and the positive feedback processes that dominated the previous regime have now reversed direction. However, the final destination of this transition is uncertain – if the system is left alone, it will undergo a discontinuous transition to a normal regime; if sufficient external stabilisation pressures are exerted upon it, it may revert to the pre-crisis regime or even stay in the crisis regime for a longer period. It’s worth noting that I define a normal regime only by its resilience and not by its desirability – even a state of civilizational collapse can be incredibly resilient.
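The contrast between the balanced and the unbalanced mix of traders can be sketched with a toy linear model (the weights `m` and `c` below are purely illustrative parameters, not calibrated to any market): momentum traders amplify a deviation of price from fundamental value, contrarians damp it, and once the contrarians are weeded out the same small shock snowballs instead of dying away.

```python
import numpy as np

def simulate_deviation(m, c, shock=1.0, steps=50):
    """Evolve the price deviation x from fundamental value under
    momentum traders (weight m, amplifying) and contrarians
    (weight c, damping): x_{t+1} = (1 + m - c) * x_t."""
    x = shock
    path = [x]
    for _ in range(steps):
        x = (1 + m - c) * x
        path.append(x)
    return np.array(path)

# Normal regime: contrarians outweigh momentum, shocks die away
normal = simulate_deviation(m=0.1, c=0.2)
# Pre-crisis regime: contrarians weeded out, the same shock snowballs
fragile = simulate_deviation(m=0.1, c=0.05)

print(abs(normal[-1]), abs(fragile[-1]))
```

The point is not the specific dynamics but the qualitative shift: the regime's stability is a property of the mix of feedbacks, not of the size of the shock.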

“Critical Transitions” from the Pre-Crisis to the Crisis Regime

In fragile systems even a minor disturbance can trigger a discontinuous move to an alternative regime – Marten Scheffer refers to such moves as “critical transitions”. Figures a, b, c and d below represent a continuum of ways in which the system can react to changing external conditions (ref. Scheffer et al.). Although I will frequently refer to “equilibria” and “states” in the discussion below, these are better described as “attractors” and “regimes” given the dynamic nature of the system – the static terminology is merely a simplification.

In Figure a, the system state reacts smoothly to perturbations – for example, a large external change will trigger a large move in the state of the system. The dotted arrows denote the direction in which the system moves when it is not on the curve, i.e. not in equilibrium. Any move away from equilibrium triggers forces that bring it back to the curve. In Figure b, the transition is non-linear and a small perturbation can trigger a regime shift – however, a reversal of conditions of an equally small magnitude can reverse the regime shift. Clearly, such a system does not satisfactorily explain our current economic predicament, where monetary and fiscal interventions far in excess of the initial sub-prime shock have failed to bring the system back to its previous state.

Figure c, however, may be a more accurate description of the current state of the economy and the market – for a certain range of conditions, there exist two alternative stable states separated by an unstable equilibrium (marked by the dotted line). As the dotted arrows indicate, movement away from the unstable equilibrium can carry the system to either of the two alternative stable states. Figure d illustrates how a small perturbation past the point F2 triggers a “catastrophic” transition from the upper branch to the lower branch – moreover, unless conditions are reversed all the way back to the point F1, the system will not revert to the upper branch stable state. The system therefore exhibits “hysteresis” – i.e. the path matters. The forward and backward switches occur at different points, F2 and F1 respectively, which implies that reversing such transitions is not easy. A comprehensive discussion of the conditions that determine the extent of hysteresis is beyond the scope of this post – but it is worth mentioning that cognitive and organisational rigidity in the absence of sufficient diversity is a sufficient condition for hysteresis in the macro-system.
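The fold catastrophe in Figures c and d can be reproduced with the stylised model dx/dt = −x³ + x + c that is standard in the critical-transitions literature (a generic toy model, not a calibrated economic one). Slowly sweeping the condition c up and then back down exhibits the hysteresis directly: the forward and backward jumps between branches occur at different values of c, playing the roles of F2 and F1.

```python
import numpy as np

def simulate(c_path, x0, dt=0.01, steps=4000):
    """Relax the state x towards a stable equilibrium of
    dx/dt = -x**3 + x + c for each successive value of c."""
    x = x0
    states = []
    for c in c_path:
        for _ in range(steps):
            x += dt * (-x**3 + x + c)
        states.append(x)
    return np.array(states)

c_up = np.linspace(-0.6, 0.6, 61)        # slowly change conditions
c_down = c_up[::-1]                      # then reverse them

forward = simulate(c_up, x0=-1.0)        # start on the lower branch
backward = simulate(c_down, x0=forward[-1])

# The state jumps branches at different c values in each direction:
# reversing conditions just past the tipping point does not reverse
# the transition. Locate the jumps via the largest one-step moves.
f2 = c_up[np.argmax(np.diff(forward)) + 1]       # forward switch (F2)
f1 = c_down[np.argmax(-np.diff(backward)) + 1]   # backward switch (F1)
print(f2, f1)
```

The analytical fold points for this model sit at c = ±2/(3√3) ≈ ±0.385, so the simulated switches land near +0.4 and −0.4: to undo the forward transition at F2, conditions must be dragged all the way back past F1.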

Before I apply the above framework to some events in the market, it is worth clarifying how the states in Figure d correspond to those chosen by Rick Bookstaber. The “normal” regime refers to the parts of the upper and lower branch stable states that are far from the points F1 and F2, i.e. where the system is resilient to a change in external conditions. As I mentioned earlier, normal does not equate to desirable – the lower branch could be a state of collapse. If we designate the upper branch as a desirable normal state and the lower branch as an undesirable one, then the zone close to point F2 on the upper branch is the pre-crisis regime. The crisis regime is the short catastrophic transition from F2 to the lower branch if the system is left alone. If forces external to the system are applied to prevent a transition to the lower branch, then the system could either revert to the upper branch or remain in the crisis regime, on the dotted-line unstable equilibrium, for a longer period.

The Magnetar Trade revisited

In an earlier post, I analysed how the infamous Magnetar Trade could be explained with a framework that incorporates catastrophic transitions between alternative stable states. As I noted: “The Magnetar trade would pay off in two scenarios – if there were no defaults in any of their CDOs, or if there were so many defaults that the tranches that they were short also defaulted along with the equity tranche. The trade would likely lose money if there were limited defaults in all the CDOs and the senior tranches did not default. Essentially, the trade was attractive if one believed that this intermediate scenario was improbable…Intermediate scenarios are unlikely when the system is characterised by multiple stable states and catastrophic transitions between these states. In adaptive systems such as ecosystems or macroeconomies, such transitions are most likely when the system is fragile and in a state of low resilience. The system tends to be dominated by positive feedback processes that amplify the impact of small perturbations, with no negative feedback processes present that can arrest this snowballing effect.”
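The payoff profile described above can be sketched with a stylised tranche model (all attachment points, coupons and premia below are hypothetical round numbers, not Magnetar's actual terms): a long-equity/short-mezzanine position gains at both extremes of the default distribution and loses in the intermediate scenarios.

```python
import numpy as np

def trade_pnl(loss_rate,
              equity_attach=0.00, equity_detach=0.05,
              mezz_attach=0.05, mezz_detach=0.15,
              equity_coupon=0.20, mezz_premium=0.02):
    """Stylised P&L of a long-equity / short-mezzanine CDO trade
    as a function of the portfolio loss rate. All parameters are
    illustrative, not the actual terms of the Magnetar trade."""
    # Losses absorbed by each tranche
    equity_loss = np.clip(loss_rate - equity_attach, 0,
                          equity_detach - equity_attach)
    mezz_loss = np.clip(loss_rate - mezz_attach, 0,
                        mezz_detach - mezz_attach)
    # Long equity: collect the coupon on surviving notional, bear its losses
    equity_pnl = equity_coupon * (equity_detach - equity_attach
                                  - equity_loss) - equity_loss
    # Short mezzanine (protection bought): pay premium, receive losses
    mezz_pnl = mezz_loss - mezz_premium * (mezz_detach - mezz_attach)
    return equity_pnl + mezz_pnl

for loss in [0.00, 0.03, 0.08, 0.30]:
    print(loss, round(float(trade_pnl(loss)), 4))
```

With these illustrative numbers the trade is profitable at zero defaults and at catastrophic defaults, and loses in between – exactly the bet that intermediate scenarios are improbable.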

In the language of critical transitions, Magnetar calculated that the real estate and MBS markets were in a fragile pre-crisis state and no intervention would prevent the rapid critical transition from F2 to the lower branch.

“Schizophrenic” Markets and the Long Crisis

Recently, many commentators have noted the apparently schizophrenic nature of the markets, turning from risk-on to risk-off at the drop of a hat. For example, John Kemp argues that the markets are “trapped between euphoria and despair” and notes the U-shaped distribution of the Bank of England’s inflation forecasts (table 5.13). Although at first glance this sort of behaviour seems irrational, it may not be – as PIMCO’s Richard Clarida notes: “we are in a world in which average outcomes – for growth, inflation, corporate and sovereign defaults, and the investment returns driven by these outcomes – will matter less and less for investors and policymakers. This is because we are in a New Normal world in which the distribution of outcomes is flatter and the tails are fatter. As such, the mean of the distribution becomes an observation that is very rarely realized.”

Richard Clarida’s New Normal is analogous to the crisis regime (the dotted-line unstable equilibrium in Figures c and d). Any movement in either direction is self-fulfilling and leads to either a much stronger economy or a much weaker economy. So why is the current crisis regime such a long one? As I mentioned earlier, external stabilisation (in this case monetary and fiscal policy) can keep the system from collapsing to the lower-branch normal regime – the “schizophrenia” only indicates that the market may make a decisive break to a stable state sooner rather than later.
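Clarida's point – that in a flat, fat-tailed distribution the mean is “very rarely realized” – is easy to see in a two-regime simulation (the regime means and weights below are purely illustrative): when outcomes cluster around a self-fulfilling recovery or a self-fulfilling collapse, almost no mass sits near the average.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A stylised "New Normal": a 50/50 mixture of a strong-recovery regime
# and a collapse regime, with almost no mass in between.
regime = rng.random(n) < 0.5
outcomes = np.where(regime,
                    rng.normal(+3.0, 0.5, n),   # self-fulfilling recovery
                    rng.normal(-3.0, 0.5, n))   # self-fulfilling collapse

mean = outcomes.mean()
# Fraction of outcomes that land anywhere near the mean
near_mean = np.abs(outcomes - mean) < 0.5
print(mean, near_mean.mean())
```

The mean sits near zero, yet essentially none of the simulated outcomes fall close to it – forecasting the mean of such a distribution tells you almost nothing about what will actually happen.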


Written by Ashwin Parameswaran

July 29th, 2010 at 3:27 am

Bank Capital and the Monetary Transmission Channel: The Importance of New Firm Entry


A popular line of argument blames the lack of bank lending, despite the Fed’s extended ZIRP, on the impaired capital position of the banking sector. For example, one of the central tenets of MMT is the thesis that “banks are capital constrained, not reserve constrained”. Understandably, commentators extrapolate from the importance of bank capital to argue that banks must somehow be recapitalised if the lending channel is to function properly, as Michael Pettis does here.

The capital constraint that is an obvious empirical reality for individual banks does not imply that bank bailouts are the only way to prevent a collapse of the monetary transmission channel. Although individual banks are capital constrained, the argument that an impairment in capital will induce a bank to turn away profitable lending opportunities assumes that the bank is unable to attract a fresh injection of capital. Again, this is not far from the truth: as I have explained many times on this blog, banks are motivated to minimise capital, and given the “liquidity” support extended to them by the central bank during the crisis, they are incentivised to turn away offers of recapitalisation and instead recapitalise slowly by borrowing from the central bank and investing in low-risk assets such as T-Bonds or AAA bonds. This of course means that they are able to avoid injecting new capital unless forced to do so by their regulator. Potential investors know of this incentive structure and are wary of offering new equity. Moreover, injecting new capital into existing banks can be a riskier proposition than capitalising a new bank, due to the opacity of bank balance sheets.

So the bank capital “limitation” that faces individual banks is real, in no small part due to the incestuous nature of their relationship with the central bank. But does this imply that the banking sector as a whole is capital constrained? The financial intermediation channel as a whole is capital constrained only if there is no entry of new firms into the banking sector despite the presence of profitable lending opportunities. Again, this is empirically true, but I would argue that changing this empirical reality is critical if we want to achieve a resilient financial system. The opacity of bank balance sheets means that even in the most perfectly competitive of markets, it is unlikely that old banks will find willing new investors when dramatic financial crises hit. However, investors most certainly can and should start up new, unimpaired financial intermediary firms if the opportunity is profitable enough.

The onerous regulations and the time required to set up a new bank clearly discourage new entry – see for example the experience of potential new banks in the UK here. But even if we accelerate the regulatory approval process, the fundamental driver that discourages the entry of new startup banks is the Too-Big-To-Fail (TBTF) subsidy extended to the large incumbent banks, which ensures that startup banks are forced to operate with significantly higher funding costs than the TBTF banks. This may be the most damaging aspect of TBTF – not only does it discriminate against existing small banks, it discourages new entry into the sector, thus crippling the monetary transmission mechanism via the bank capital constraint.


Written by Ashwin Parameswaran

July 12th, 2010 at 7:57 am

Posted in Financial Crisis

Heuristics and Robustness in Asset Allocation: The 1/N Rule, “Hard” Constraints and Fractional Kelly Strategies


Harry Markowitz received the Nobel Prize in Economics in 1990 for his work on mean-variance optimisation that provided the foundations for Modern Portfolio Theory (MPT). Yet as Gerd Gigerenzer notes, when it came to investing his own money, Markowitz relied on a simple heuristic, the “1/N Rule”, which simply allocates equally across all N funds under consideration. At first glance, this may seem to be an incredibly irrational strategy. Yet there is compelling empirical evidence backing even such a simple heuristic as the 1/N Rule. Gigerenzer points to a study conducted by DeMiguel, Garlappi and Uppal (DMU) which, after comparing many asset-allocation strategies including Markowitz mean-variance optimisation, concludes that “there is no single model that consistently delivers a Sharpe ratio or a CEQ return that is higher than that of the 1/N portfolio, which also has a very low turnover.”

Before exploring exactly what the DMU study and Gigerenzer’s work imply, it is worth emphasising what they do not imply. First, as both DMU and Gigerenzer stress, the aim is not to argue for the superiority of the 1/N Rule over all other asset-allocation strategies, but to illustrate how simple heuristics can outperform apparently complex optimisation strategies under certain circumstances. Second, the 1/N Rule does not apply when allocating across securities with excessive idiosyncratic risk, e.g. single stocks. In the DMU study, for example, the N assets are equity portfolios constructed on the basis of industry classification, countries, firm characteristics etc.

So in what circumstances does the 1/N Rule outperform? Gigerenzer provides a few answers here as do DMU in the above-mentioned study but in my opinion, all of them come down to “the predictive uncertainty of the problem“. When faced with significant irreducible uncertainty, the robustness of the approach is more relevant to its future performance than its optimality. As Gigerenzer notes, this is not about computational intractability – indeed, a more uncertain environment requires a simpler approach, not a more complex one. In his words: “The optimization models performed better than the simple heuristic in data fitting but worse in predicting the future.”
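The “fits better, predicts worse” result can be reproduced in a few lines. In the sketch below (a deliberately simple setup, not the DMU methodology), every asset has the same true mean and unit variance, so 1/N is in fact optimal out of sample; the mean-variance optimiser sees only a noisy sample, and its estimation error shows up directly as a lower true Sharpe ratio.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 10, 60          # 10 assets, 60 periods of history

# True world: identical modest means, unit variances, no correlation.
# The optimiser, however, only observes a noisy sample.
true_mu = np.full(N, 0.05)
sample = rng.normal(true_mu, 1.0, size=(T, N))

mu_hat = sample.mean(axis=0)
cov_hat = np.cov(sample, rowvar=False)

# In-sample mean-variance weights (unconstrained, normalised to sum to 1)
w_mv = np.linalg.solve(cov_hat, mu_hat)
w_mv = w_mv / w_mv.sum()
w_1n = np.full(N, 1.0 / N)

def true_sharpe(w):
    # The true covariance is the identity, so portfolio vol is ||w||
    return (w @ true_mu) / np.sqrt(w @ w)

print(true_sharpe(w_1n), true_sharpe(w_mv))
```

Both portfolios earn the same true expected return here, so any noise in the optimised weights only adds variance: the optimiser fits the sample, while the heuristic, by ignoring the sample entirely, is robust to the estimation error.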

Again, it’s worth reiterating that neither study implies that we should abandon all attempts at asset allocation – the DMU study essentially evaluates the 1/N Rule and all the other strategies by their risk-adjusted returns as defined under MPT, i.e. by their Sharpe ratio. Given that most active asset management implies a certain absence of faith in the canonical assumptions underlying MPT, some strategies could outperform if evaluated differently. Nevertheless, the fundamental conclusion regarding the importance of a robust approach holds, and robust asset allocation can be achieved in other ways. For example, when allocating across 20 asset categories, any preferred asset-allocation algorithm could be used with a constraint that the maximum allocation to any category cannot exceed 10%. Such “hard limits” are commonly used by fund managers, and although they may have no justifying rationale under MPT, this does not mean that they are “irrational”.
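Such a hard limit is straightforward to implement. The sketch below (an illustrative helper, not any fund's actual methodology) caps each weight and redistributes the excess pro-rata across the uncapped assets, iterating in case the redistribution pushes another asset over the limit:

```python
import numpy as np

def cap_weights(weights, cap=0.10):
    """Apply a hard per-asset weight limit, redistributing the excess
    pro-rata across the assets still below the cap."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalise to sum to 1
    capped = np.zeros(len(w), dtype=bool)
    while True:
        over = (w > cap) & ~capped
        if not over.any():
            return w
        excess = (w[over] - cap).sum()
        w[over] = cap
        capped |= over
        free = ~capped
        if not free.any():
            return w                      # cap infeasible: all at limit
        w[free] += excess * w[free] / w[free].sum()

# A concentrated allocation forced under a 25% hard limit
w = cap_weights([0.40, 0.30, 0.10, 0.10, 0.05, 0.05], cap=0.25)
print(w)
```

Note the feasibility condition: a cap of c over N categories only works if c ≥ 1/N (a 10% cap needs at least 10 categories, as in the 20-category example above).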

The need to favour robustness over optimisation when faced with uncertainty is also one of the reasons why the Kelly Criterion is so often implemented in practice as a “Fractional Kelly” strategy. The Kelly Criterion is used to determine the optimal size of sequential bets/investments that maximises the expected growth rate of the portfolio. It depends crucially upon the estimate of the “edge” that the trader possesses. In an uncertain environment, this estimate is less reliable and, as Ed Thorp explains here, the edge will most likely be overestimated. In Ed Thorp’s words: “Estimates….in the stock market have many uncertainties and, in cases of forecast excess return, are more likely to be too high than too low. The tendency is to regress towards the mean….The economic situation can change for companies, industries, or the economy as a whole. Systems that worked may be partly or entirely based on data mining….Systems that do work attract capital, which tends to push exceptional [edge] down towards average values.”
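The value of Fractional Kelly under an overestimated edge can be seen in the standard binary-bet setting (the probabilities below are hypothetical): a trader who believes the win probability is 56% when it is really 53% bets twice the true optimum at full Kelly, and the true growth rate of half Kelly comes out higher.

```python
import numpy as np

def kelly_fraction(p, b):
    """Optimal bet fraction for a binary bet paying b:1 with win
    probability p (the classic Kelly formula f* = p - q/b)."""
    return p - (1 - p) / b

def growth_rate(f, p, b):
    """True expected log-growth per bet at fraction f."""
    return p * np.log(1 + f * b) + (1 - p) * np.log(1 - f)

b = 1.0            # even-money bet
p_est = 0.56       # the trader's (over)estimated win probability
p_true = 0.53      # the actual win probability

f_full = kelly_fraction(p_est, b)     # full Kelly on the estimate: 0.12
f_half = 0.5 * f_full                 # half Kelly: 0.06

g_full = growth_rate(f_full, p_true, b)
g_half = growth_rate(f_half, p_true, b)
print(g_full, g_half)
```

In this example the half-Kelly fraction happens to coincide with the true Kelly optimum (0.53 − 0.47 = 0.06), so the fractional strategy captures essentially all of the achievable growth while the full-Kelly bet, sized on the inflated estimate, earns almost nothing – a concrete sense in which betting a fraction of Kelly buys robustness against an overestimated edge.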


Written by Ashwin Parameswaran

July 8th, 2010 at 5:31 am