macroresilience

resilience, not stability

Archive for the ‘Financial Crisis’ Category

Amar Bhide on “Robotic Finance”: An Adaptive Explanation


In the HBR, Amar Bhide notes that models have replaced discretion in many areas of finance, particularly in banks’ mortgage lending decisions: “Over the past several decades, centralized, mechanistic finance elbowed aside the traditional model….Mortgages are granted or denied (and new mortgage products like option ARMs are designed) using complex models that are conjured up by a small number of faraway rocket scientists and take little heed of the specific facts on the ground.” For the most part, the description of the damage done by “robotic finance” is accurate but the article ignores why this mechanisation came about. It is easy to assume that the dominance of models over discretion may have been a grand error by the banking industry. But in reality, the “excessive” dependence on models was an entirely rational and logical evolution of the banking industry given the incentives and the environment that bankers faced.

An over-reliance on models over discretion cripples the adaptive capabilities of the firm: “No contract can anticipate all contingencies. But securitized financing makes ongoing adaptations infeasible; because of the great difficulty of renegotiating terms, borrowers and lenders must adhere to the deal that was struck at the outset. Securitized mortgages are more likely than mortgages retained by banks to be foreclosed if borrowers fall behind on their payments, as recent research shows.” But why would firms choose such rigid and inflexible solutions? There are many answers to this question but all of them depend on the obvious fact that adaptable solutions entail a higher cost than rigid solutions. It is far less expensive to analyse the creditworthiness of mortgages with standardised models than with people on the ground.

This increased efficiency comes at the cost of catastrophic losses in a crisis but long periods of stability inevitably select for efficient and rigid solutions rather than adaptable and flexible solutions. This may be a consequence of moral hazard or principal-agent problems as I have analysed many times on this blog but it does not depend on either. A preference for rigid routines may be an entirely rational response to a long period of stability under uncertainty – both from an individual’s perspective and an organisation’s perspective. Probably the best exposition of this problem was given by Brian Loasby in his book “Equilibrium and Evolution” (pages 56-7): “Success has its opportunity costs. People who know how to solve their problems can get to work at once, without considering whether some other method might be more effective; they thereby become increasingly efficient, but also increasingly likely to encounter problems which are totally unexpected and which are not amenable to their efficient routines…The patterns which people impose on phenomena have necessarily a limited range of application, and the very success with which they exploit that range tends to make them increasingly careless about its limits. This danger is likely to be exacerbated by formal information systems, which are typically designed to cope with past problems, and which therefore may be worse than useless in signalling new problems. If any warning messages do arrive, they are likely to be ignored, or force-fitted into familiar categories; and if a crisis breaks, the information needed to deal with it may be impossible to obtain.”

Now it is obvious why banks stuck with such rigid models during the “Great Moderation” but it is less obvious why banks don’t discard them voluntarily after the “Minsky Moment”. The answer lies in the difficulty that organisations and other social systems face in making dramatic systemic U-turns even when the logic for doing so is clear, hence the importance of mitigating the TBTF problem and enabling the entry of new firms. As I have asserted before: “A crony capitalist economic system that protects the incumbent firms hampers the ability of the system to innovate and adapt to novelty. It is obvious how the implicit subsidy granted to our largest financial institutions via the Too-Big-To-Fail doctrine represents a transfer of wealth from the taxpayer to the financial sector. It is also obvious how the subsidy encourages a levered, homogenous and therefore fragile financial sector that is susceptible to collapse. What is less obvious is the paralysis that it induces in the financial sector and by extension the macroeconomy long after the bailouts and the Minsky moment have passed.”


Written by Ashwin Parameswaran

August 23rd, 2010 at 4:34 am

Raghuram Rajan on Monetary Policy and Macroeconomic Resilience


Amongst economic commentators, Raghuram Rajan has stood out recently for his consistent calls to raise interest rates from “ultra-low to the merely low”. Predictably, this suggestion has been met with outright condemnation by many economists, both of Keynesian and monetarist persuasion. Rajan’s case against ultra-low rates draws on many arguments, but this post will focus on just one of them, an argument straight out of the “resilience” playbook. In 2008, Raghu Rajan and Doug Diamond co-authored a paper, the conclusion of which Rajan summarises in his FT article: “the pattern of Fed policy over time builds expectations. The market now thinks that whenever the financial sector’s actions result in unemployment, the Fed will respond with ultra-low rates and easy liquidity. So even as the Fed has maintained credibility as an inflation fighter, it has lost credibility in fighting financial adventurism. This cannot augur well for the future.”

Much like he accused the Austrians, Paul Krugman accuses Rajan of being a “liquidationist”. This is not a coincidence – Rajan and Diamond’s thesis is quite explicit about its connections to Austrian Business Cycle Theory: “a central bank that promises to cut interest rates conditional on stress, or that is biased towards low interest rates favouring entrepreneurs, will induce banks to promise higher payouts or take more illiquid projects. This in turn can make the illiquidity crisis more severe and require a greater degree of intervention, a view reminiscent of the Austrian theory of cycles.” But as the summary hints, Rajan and Diamond’s thesis is fundamentally different from ABCT. The conventional Austrian story identifies excessive credit inflation and interest rates below the “natural” rate of interest as the driver of the boom/bust cycle but Rajan and Diamond’s thesis identifies the anticipation by economic agents of low rates and “liquidity” facilities every time there is an economic downturn as the driver of systemic fragility. The adaptation of banks and other market players to this regime makes the eventual bust all the more likely. As Rajan and Diamond note: “If the authorities are expected to reduce interest rates when liquidity is at a premium, banks will take on more short-term leverage or illiquid loans, thus bringing about the very states where intervention is needed.”

Rajan and Diamond’s thesis is limited to the impact of such policies on banks but as I noted in a previous post, market players also adapt to this implicit commitment from the central bank to follow easy money policies at the first hint of economic trouble. This thesis is essentially a story of the Greenspan-Bernanke era and the damage that the Greenspan Put has caused. It also explains the dramatically diminishing returns inherent in the Greenspan Put strategy as the stabilising policies of the central bank become entrenched in the expectations of market players and crucially banks – in each subsequent cycle, the central bank has to do more and more (lower rates, larger liquidity facilities) to achieve less and less.


Written by Ashwin Parameswaran

August 3rd, 2010 at 6:30 am

Critical Transitions in Markets and Macroeconomic Systems


This post is the first in a series that takes an ecological and dynamic approach to analysing market/macroeconomic regimes and transitions between these regimes.

Normal, Pre-Crisis and Crisis Regimes

In a post on market crises, Rick Bookstaber identified three regimes that any model of the market must represent (normal, pre-crisis and crisis) and analysed the statistical properties (volatility, correlation, etc.) of each of these regimes. The framework below, however, characterises each regime by its particular combination of positive and negative feedback processes, with the variations and regime shifts determined by the adaptive and evolutionary processes operating within the system.

1. Normal regimes are resilient regimes. They are characterised by a balanced and diverse mix of positive and negative feedback processes. For every momentum trader who bets on the continuation of a trend, there is a contrarian who bets the other way.

2. Pre-crisis regimes are characterised by an increasing dominance of positive feedback processes. An unusually high degree of stability or a persistent trend progressively weeds out negative feedback processes from the system thus leaving it vulnerable to collapse even as a result of disturbances that it could easily absorb in its previously resilient normal state. Such regimes can arise from bubbles but this is not necessary. Pre-crisis only implies that a regime change into the crisis regime is increasingly likely – in ecological terms, the pre-crisis regime is fragile and has suffered a significant loss of resilience.

3. Crisis regimes are essentially transitional  – the disturbance has occurred and the positive feedback processes that dominated the previous regime have now reversed direction. However, the final destination of this transition is uncertain – if the system is left alone, it will undergo a discontinuous transition to a normal regime. However, if sufficient external stabilisation pressures are exerted upon the system, it may revert to the pre-crisis regime or even stay in the crisis regime for a longer period. It’s worth noting that I define a normal regime only by its resilience and not by its desirability – even a state of civilizational collapse can be incredibly resilient.

“Critical Transitions” from the Pre-Crisis to the Crisis Regime

In fragile systems even a minor disturbance can trigger a discontinuous move to an alternative regime – Marten Scheffer refers to such moves as “critical transitions”. Figures a, b, c and d below represent a continuum of ways in which the system can react to changing external conditions (ref Scheffer et al). Although I will frequently refer to “equilibria” and “states” in the discussion below, these are better described as “attractors” and “regimes” given the dynamic nature of the system – the static terminology is merely a simplification.

In Figure a, the system state reacts smoothly to perturbations – for example, a large external change will trigger a large move in the state of the system. The dotted arrows denote the direction in which the system moves when it is not on the curve, i.e. not in equilibrium. Any move away from equilibrium triggers forces that bring it back to the curve. In Figure b, the transition is non-linear and a small perturbation can trigger a regime shift – however, a reversal of conditions of an equally small magnitude can reverse the regime shift. Clearly, such a system does not satisfactorily explain our current economic predicament, where monetary and fiscal interventions far in excess of the initial sub-prime shock have failed to bring the system back to its previous state.

Figure c however may be a more accurate description of the current state of the economy and the market – for a certain range of conditions, there exist two alternative stable states separated by an unstable equilibrium (marked by the dotted line). As the dotted arrows indicate, movement away from the unstable equilibrium can carry the system to either of the two alternative stable states. Figure d illustrates how a small perturbation past the point F2 triggers a “catastrophic” transition from the upper branch to the lower branch – moreover, unless conditions are reversed all the way back to the point F1, the system will not revert back to the upper branch stable state. The system therefore exhibits “hysteresis” – i.e. the path matters. The forward and backward switches occur at different points F2 and F1 respectively, which implies that reversing such transitions is not easy. A comprehensive discussion of the conditions that will determine the extent of hysteresis is beyond the scope of this post – however it is worth mentioning that cognitive and organisational rigidity in the absence of sufficient diversity is a sufficient condition for hysteresis in the macro-system.
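This fold geometry is easy to reproduce numerically. The sketch below is purely illustrative – it uses the simplest toy system with two stable branches (dx/dt = c + x - x^3, where c stands for the slowly changing external conditions) rather than any specific model from Scheffer et al – but it exhibits exactly the forward and backward switches at different points that define hysteresis:

```python
# Illustrative sketch of a fold bifurcation with hysteresis (a toy cubic, not the
# model in Scheffer et al). State x evolves as dx/dt = c + x - x**3; c is the
# slowly-varying "external condition". For |c| < 2/(3*sqrt(3)) ~ 0.385 there are
# two stable branches separated by an unstable one (the dotted line in Figures c/d).

import numpy as np

def simulate_sweep(c_path, x0=-1.0, dt=0.01, relax_steps=2000):
    """Track the system state as the external condition c is swept slowly."""
    x = x0
    trace = []
    for c in c_path:
        for _ in range(relax_steps):          # let x settle near an attractor
            x += dt * (c + x - x**3)
        trace.append(x)
    return np.array(trace)

c_up = np.linspace(-0.6, 0.6, 121)            # forward sweep of conditions
c_down = c_up[::-1]                           # reverse sweep

x_up = simulate_sweep(c_up, x0=-1.0)          # starts on the lower branch
x_down = simulate_sweep(c_down, x0=x_up[-1])  # starts on the upper branch

# The forward switch (F2) and backward switch (F1) occur at different values of c
# -- the signature of hysteresis: a small reversal of conditions does not reverse
# the regime shift.
jump_up = c_up[np.argmax(np.diff(x_up))]
jump_down = c_down[np.argmin(np.diff(x_down))]
print(f"forward transition near c = {jump_up:.2f}, reverse transition near c = {jump_down:.2f}")
```

Sweeping c upwards, the state tracks the lower branch until it is forced to jump near c of about +0.38; sweeping back down, it does not return until c falls below roughly -0.38. The range in between is where the two regimes coexist.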

Before I apply the above framework to some events in the market, it is worth clarifying how the states in Figure d correspond to those chosen by Rick Bookstaber. The “normal” regime refers to the parts of the upper and lower branch stable states that are far from the points F1 and F2 i.e. the system is resilient to a change in external conditions. As I mentioned earlier, normal does not equate to desirable – the lower branch could be a state of collapse. If we designate the upper branch as a desirable normal state and the lower branch as an undesirable one, then the zone close to point F2 on the upper branch is the pre-crisis regime. The crisis regime is the short catastrophic transition from F2 to the lower branch if the system is left alone. If forces external to the system are applied to prevent a transition to the lower branch, then the system could either revert back to the upper branch or even stay in the crisis regime on the dotted line unstable equilibrium for a longer period.

The Magnetar Trade revisited

In an earlier post, I analysed how the infamous Magnetar Trade could be explained with a framework that incorporates catastrophic transitions between alternative stable states. As I noted: “The Magnetar trade would pay off in two scenarios – if there were no defaults in any of their CDOs, or if there were so many defaults that the tranches that they were short also defaulted along with the equity tranche. The trade would likely lose money if there were limited defaults in all the CDOs and the senior tranches did not default. Essentially, the trade was attractive if one believed that this intermediate scenario was improbable…Intermediate scenarios are unlikely when the system is characterised by multiple stable states and catastrophic transitions between these states. In adaptive systems such as ecosystems or macroeconomies, such transitions are most likely when the system is fragile and in a state of low resilience. The system tends to be dominated by positive feedback processes that amplify the impact of small perturbations, with no negative feedback processes present that can arrest this snowballing effect.”

In the language of critical transitions, Magnetar calculated that the real estate and MBS markets were in a fragile pre-crisis state and no intervention would prevent the rapid critical transition from F2 to the lower branch.
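A back-of-envelope sketch illustrates the bimodal payoff described in that post. The notionals, coupons and attachment points below are entirely hypothetical – they are not Magnetar’s actual positions – but they reproduce the shape of a trade that is long the equity tranche and short the mezzanine tranches:

```python
# Hypothetical illustration of the bimodal payoff of a long-equity / short-mezzanine
# CDO trade. All numbers are invented and chosen only to show the shape of the trade.

def trade_pnl(pool_loss, years=2.0):
    """P&L for a given loss on a pool of notional 100."""
    eq_notional, eq_coupon = 5.0, 0.20        # 0-5% first-loss tranche, rich running coupon
    mezz_notional, mezz_premium = 10.0, 0.04  # protection bought on the 5-15% tranche
    eq_attach, mezz_attach, mezz_detach = 0.0, 5.0, 15.0

    carry = years * (eq_notional * eq_coupon - mezz_notional * mezz_premium)
    eq_writedown = min(max(pool_loss - eq_attach, 0.0), eq_notional)          # loss on the long
    mezz_writedown = min(max(pool_loss - mezz_attach, 0.0), mezz_detach - mezz_attach)  # gain on the short
    return carry - eq_writedown + mezz_writedown

for loss in (0.0, 5.0, 30.0):   # benign, intermediate and systemic loss scenarios
    print(f"pool loss {loss:>4.1f} -> trade P&L {trade_pnl(loss):+5.1f}")
```

The trade makes money in the benign scenario (the equity coupon exceeds the cost of mezzanine protection) and in the systemic scenario (the mezzanine short pays out far more than the equity loses), but loses in the intermediate scenario where the equity tranche is wiped out and the mezzanine survives.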

“Schizophrenic” Markets and the Long Crisis

Recently, many commentators have noted the apparently schizophrenic nature of the markets, turning from risk-on to risk-off at the drop of a hat. For example, John Kemp argues that the markets are “trapped between euphoria and despair” and notes the U-shaped distribution of Bank of England’s inflation forecasts (table 5.13). Although at first glance this sort of behaviour seems irrational, it may not be – as PIMCO’s Richard Clarida notes: “we are in a world in which average outcomes – for growth, inflation, corporate and sovereign defaults, and the investment returns driven by these outcomes – will matter less and less for investors and policymakers. This is because we are in a New Normal world in which the distribution of outcomes is flatter and the tails are fatter. As such, the mean of the distribution becomes an observation that is very rarely realized.”

Richard Clarida’s New Normal is analogous to the crisis regime (the dotted line unstable equilibrium in Figures c and d). Any movement in either direction is self-fulfilling and leads to either a much stronger economy or a much weaker economy. So why is the current crisis regime such a long one? As I mentioned earlier, external stabilisation (in this case monetary and fiscal policy) can keep the system from collapsing down to the lower branch normal regime – the “schizophrenia” only indicates that the market may make a decisive break to a stable state sooner rather than later.


Written by Ashwin Parameswaran

July 29th, 2010 at 3:27 am

Bank Capital and the Monetary Transmission Channel: The Importance of New Firm Entry


A popular line of argument blames the lack of bank lending, despite the Fed’s extended ZIRP policy, on the impaired capital position of the banking sector. For example, one of the central tenets of MMT is the thesis that “banks are capital constrained, not reserve constrained”. Understandably, commentators extrapolate from the importance of bank capital to argue that banks must somehow be recapitalised if the lending channel is to function properly, as Michael Pettis does here.

The capital constraint that is an obvious empirical reality for individual banks does not imply that bank bailouts are the only way to prevent a collapse of the monetary transmission channel. Although individual banks are capital constrained, the argument that an impairment in capital will induce the bank to turn away profitable lending opportunities assumes that the bank is unable to attract a fresh injection of capital. Again, this is not far from the truth: as I have explained many times on this blog, banks are motivated to minimise capital, and given the “liquidity” support extended to them by the central bank during the crisis, they are incentivised to turn away offers of recapitalisation and instead slowly recapitalise by borrowing from the central bank and investing in low-risk assets such as T-Bonds or AAA bonds. This of course means that they are able to avoid injecting new capital unless forced to do so by their regulator. Potential investors know of this incentive structure facing the bank and are wary of offering new equity. Moreover, injecting new capital into existing banks can be a riskier proposition than capitalising a new bank due to the opacity of bank balance sheets.
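To see why “earning your way back” looks attractive, here is some stylised arithmetic – the numbers are entirely hypothetical and serve only to illustrate the incentive, not to estimate any actual bank’s position:

```python
# Stylised arithmetic with hypothetical numbers: why a capital-impaired bank may
# prefer to rebuild equity out of carry rather than accept a dilutive equity injection.

capital_hole = 25.0        # equity shortfall, in $bn
carry_assets = 250.0       # low-risk assets funded largely at the central bank, in $bn
funding_rate = 0.0025      # near-zero policy rate
safe_asset_yield = 0.030   # yield on T-Bonds / AAA paper

annual_carry = carry_assets * (safe_asset_yield - funding_rate)   # earnings from the spread
years_to_heal = capital_hole / annual_carry

print(f"annual carry: {annual_carry:.1f}bn")
print(f"capital hole closed in roughly {years_to_heal:.1f} years with no new equity issued")
```

The wider the spread that central bank support makes available, the faster the hole closes and the weaker the incentive to accept a dilutive injection of outside equity.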

So the bank capital “limitation” that faces individual banks is real, in no small part due to the incestuous nature of their relationship with the central bank. But does this imply that the banking sector as a whole is capital constrained? The financial intermediation channel as a whole is capital constrained only if there is no entry of new firms into the banking sector despite the presence of profitable lending opportunities. Again this is empirically true but I would argue that changing this empirical reality is critical if we want to achieve a resilient financial system. The opacity of bank balance sheets means that even in the most perfectly competitive of markets, it is unlikely that old banks will find willing new investors when dramatic financial crises hit. However, investors most certainly can and should start up new unimpaired financial intermediary firms if the opportunity is profitable enough.

The onerous regulations and the time required to set up a new bank clearly discourage new entry – see for example the experience of potential new banks in the UK here. But even if we accelerate the regulatory approval process, the fundamental driver that discourages the entry of new startup banks is the Too-Big-To-Fail (TBTF) subsidy extended to the large incumbent banks, which ensures that startup banks are forced to operate with significantly higher funding costs than the TBTF banks. This may be the most damaging aspect of TBTF – not only does it discriminate against existing small banks, it discourages new entry into the sector, thus crippling the monetary transmission mechanism via the bank capital constraint.
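The arithmetic of the funding-cost disadvantage is worth spelling out. The sketch below uses invented spreads and leverage purely for illustration – it is not an estimate of the actual TBTF subsidy – but it shows how a modest funding gap, once levered, becomes a return-on-equity gap that no startup can bridge:

```python
# Hypothetical illustration of how a TBTF funding advantage, levered up, becomes a
# return-on-equity gap. Spreads and leverage are invented, not estimates of the subsidy.

def roe(asset_yield, funding_spread, leverage, base_rate=0.02):
    """Return on equity when (leverage - 1) units of debt fund each unit of equity."""
    debt_cost = base_rate + funding_spread
    return asset_yield * leverage - debt_cost * (leverage - 1)

incumbent_roe = roe(asset_yield=0.030, funding_spread=0.002, leverage=20)   # implicit guarantee
startup_roe = roe(asset_yield=0.030, funding_spread=0.010, leverage=20)     # no guarantee

print(f"TBTF incumbent ROE: {incumbent_roe:.1%}")
print(f"startup bank ROE  : {startup_roe:.1%}")
```

In this illustration an 80 basis point funding gap, levered twenty times, is the difference between a viable franchise and one that cannot attract capital at all – which is why new entry fails to occur despite the presence of profitable lending opportunities.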


Written by Ashwin Parameswaran

July 12th, 2010 at 7:57 am


A “Systems” Explanation of How Bailouts can Cause Business Cycles


In a previous post, I quoted Richard Fisher’s views on how bailouts cause business cycles and financial crises: “The system has become slanted not only toward bigness but also high risk…..if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult…..It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.”

Fisher utilises the “incentives” argument but the same argument could also be made via the language of natural selection, and Hannan and Freeman did exactly that in the seminal paper that launched the field of “Organizational Ecology”. Hannan and Freeman wrote the following in the context of the bailout of Lockheed in 1971 but it is as relevant today as it has ever been: “we must consider what one anonymous reader, caught up in the spirit of our paper, called the anti-eugenic actions of the state in saving firms such as Lockheed from failure. This is a dramatic instance of the way in which large dominant organizations can create linkages with other large and powerful ones so as to reduce selection pressures. If such moves are effective, they alter the pattern of selection. In our view, the selection pressure is bumped up to a higher level. So instead of individual organizations failing, entire networks fail. The general consequence of a large number of linkages of this sort is an increase in the instability of the entire system and therefore we should see boom and bust cycles of organizational outcomes.”


Written by Ashwin Parameswaran

June 8th, 2010 at 3:45 pm

Richard Fisher of the Dallas Fed on Financial Reform


Richard Fisher of the Dallas Fed delivered a speech last week (h/t Zerohedge) on the topic of financial reform, which contained some of the most brutally honest analysis of the problem at hand that I’ve seen from anyone at the Fed. It also made a few points that I felt deserved further analysis and elaboration.

The Dynamics of the TBTF Problem

In Fisher’s words: “Big banks that took on high risks and generated unsustainable losses received a public benefit: TBTF support. As a result, more conservative banks were denied the market share that would have been theirs if mismanaged big banks had been allowed to go out of business. In essence, conservative banks faced publicly backed competition…..It is my view that, by propping up deeply troubled big banks, authorities have eroded market discipline in the financial system.

The system has become slanted not only toward bigness but also high risk…..if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult…..

It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.”

Fisher correctly notes that TBTF support damages system resilience not only by encouraging higher leverage amongst large banks, but by disadvantaging conservative banks that would otherwise have gained market share during the crisis. As I have noted many times on this blog, the dynamic, evolutionary view of moral hazard focuses not only on the protection provided to destabilising positive feedback forces, but on how stabilising negative feedback forces that might have flourished in the absence of the stabilising actions are selected against and progressively weeded out of the system.

Regulatory Discretion and the Time Consistency Problem

Fisher: “Language that includes a desire to minimize moral hazard—and directs the FDIC as receiver to consider “the potential for serious adverse effects”—provides wiggle room to perpetuate TBTF.” Fisher notes that it’s difficult to credibly commit ex-ante not to bail out TBTF creditors – as long as regulators retain any amount of discretion for the purpose of maintaining systemic stability, they will be tempted to use it.

On the Ineffectiveness of Regulation Alone

Fisher: “While it is certainly true that ineffective regulation of systemically important institutions—like big commercial banking companies—contributed to the crisis, I find it highly unlikely that such institutions can be effectively regulated, even after reform…Simple regulatory changes in most cases represent a too-late attempt to catch up with the tricks of the regulated—the trickiest of whom tend to be large. In the U.S. financial system, what passed as “innovation” was in large part circumvention, as financial engineers invented ways to get around the rules of the road. There is little evidence that new regulations, involving capital and liquidity rules, could ever contain the circumvention instinct.”

This is a sentiment I don’t often hear expressed by a regulator – as I have opined before on this blog, regulations alone just don’t work. The history of banking is one of repeated circumvention of regulations by banks, a process that has only accelerated with the increased completeness of markets. The question is not whether deregulation accelerated the process of banks’ maximising the moral hazard subsidy – it almost certainly did, and this was understood even by the Fed as early as 1983. As John Kareken noted, “Deregulation Is the Cart, Not the Horse”. The question is whether re-regulation has any chance of succeeding without fixing the incentives guiding the actors in the system – it does not.

Bailouts Come in Many Shapes and Sizes

Fisher: “Even if an effective resolution regime can be written down, chances are it might not be used. There are myriad ways for regulators to forbear. Accounting forbearance, for example, could artificially boost regulatory capital levels at troubled big banks. Special liquidity facilities could provide funding relief. In this and similar manners, crisis-related events that might trigger the need for resolution could be avoided, making resolution a moot issue.”

A watertight resolution regime may only encourage regulators to aggressively utilise other forbearance mechanisms. Fisher mentions accounting and liquidity relief but fails to mention the most important “alternative bailout mechanism” – the “Greenspan Put” variant of monetary policy.

Preventing Systemic Risk perpetuates the Too-Big-To-Fail Problem

Fisher: “Consider the idea of limiting any and all financial support strictly to the system as a whole, thus preventing any one firm from receiving individual assistance….If authorities wanted to support a big bank in trouble, they would need only institute a systemwide program. Big banks could then avail themselves of the program, even if nobody else needed it. Systemwide programs are unfortunately a perfect back door through which to channel big bank bailouts.”

“System-wide” programs by definition get activated only when big banks and non-banking financial institutions such as GE Capital are in trouble. Apart from perpetuating TBTF, they encourage smaller banks to mimic big banks and take on similar tail risk thus reducing system diversity.

Shrink the TBTF Banks?

Fisher clearly prefers that the big banks be shrunk as a “second-best” solution to the incentive problems that both regulators and banks face in our current system. Although I’m not convinced that shrinking the banks is a sufficient response, even a “free market” solution to the crisis will almost certainly imply a more dispersed banking sector, due to the removal of the TBTF subsidy. The gist of the problem is not size but insufficient diversity. Fisher argues “there is considerable diversity in strategy and performance among banks that are not TBTF.” This is the strongest and possibly even the only valid argument for breaking up the big banks. My concern is that even a more dispersed banking sector will evolve towards a tightly coupled and homogenous outcome due to the protection against systemic risk provided by the “alternative bailout mechanisms”, particularly the Greenspan Put.

The fact that Richard Fisher’s comments echo themes popular with both left-wing and right-wing commentators is not a coincidence. In the fitness landscape of our financial system, our current choice is not so much a local peak as a deep valley – tinkering will get us nowhere and a significant move either to the left or to the right is likely to be an improvement.


Written by Ashwin Parameswaran

June 6th, 2010 at 1:30 pm

Ratings Reform: The Franken Amendment and Structured Products


The Franken Amendment draws upon Richardson and White’s idea of a centralised clearing platform, which I criticised earlier. The proposal is based upon a flawed understanding of the structured products ratings process and of the incentives guiding the agencies during that process, and it arises from a false extrapolation of the corporate and sovereign bond ratings process into the realm of structured products.

The fatal flaw in our ratings regime is not the issuer-pays model but the fact that ratings agencies only get paid if the bond is issued. In the structured products space, the difference between a potential AAA rating and a AA rating is not just that a higher spread is paid to the investor on the bond. The lower rating usually means that the bond will not be issued at all, which means that the ratings agency will not earn any fees. This problem cannot be solved even if we have a single monopolistic ratings agency paid by the SEC, so long as the fees are payable only upon issuance of the bond. As I have discussed earlier in more detail, ratings agencies are incentivised not only to expand market share but to expand the size of the market for rateable securities.

Let me explain the logic with a simple example. A pension fund approaches a bank for a bespoke AAA tranche on a portfolio of mortgage-backed securities. The bank constructs an appropriate tranche paying Libor + 100 bps and asks for a rating, upon which the clearing platform allocates it an agency. The agency comes back with a AA rating instead – so what does the bank do in this instance? It cannot change the tranching without damaging its own economics and the client will not accept a AA tranche paying the same coupon. So the deal just does not get done and the ratings agency is left without any fee for its opinion.
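Some rough arithmetic shows why the deal dies rather than being repriced – the spreads below are hypothetical, and the calculation ignores the subordinate tranches and all fees, but it illustrates the mechanism:

```python
# Rough arithmetic with hypothetical spreads (ignoring subordinate tranches and fees)
# for why a AA outcome kills the deal above rather than merely repricing it.

portfolio_spread = 0.016     # spread earned on the underlying MBS pool, per unit of pool notional
aaa_market_spread = 0.010    # spread investors demand on a AAA tranche (the Libor + 100 bps above)
aa_market_spread = 0.022     # spread the same investors demand for AA risk
tranche_share = 0.80         # the bespoke tranche as a share of the pool notional

income = portfolio_spread
margin_if_aaa = income - aaa_market_spread * tranche_share
margin_if_aa = income - aa_market_spread * tranche_share

print(f"structuring margin if rated AAA: {10000 * margin_if_aaa:+.0f} bps of pool notional p.a.")
print(f"structuring margin if rated AA : {10000 * margin_if_aa:+.0f} bps of pool notional p.a.")
```

With a AAA rating the structure carries a healthy margin; at the spread investors demand for AA risk the margin turns negative, so the bond is simply not issued and no rating fee is paid.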

Let us go a little further along this chain of thought – suppose all the competing agencies are similarly stringent in their ratings and discover after six months that their earnings and dealflow have collapsed! At this point, they will of course gradually start easing their ratings requirements and sooner or later we will end up in the same position we were in before the crisis hit us. It’s worth noting that this outcome does not change if someone other than the issuer pays the agency or even if we have a monopolistic ratings agency. Provided that the agency is a profit-maximising entity, the removal of direct competition may slow the process of easing of ratings criteria, but it will not change the end result.

In fact, the above example is too generous as it ignores the ease with which the centralised platform process can be gamed by banks. The central problem here is that there is a multitude of structured bonds that can fulfill a typical client request, such as the one above. For example, let us assume that the bank above constructs a tranche from a portfolio of MBS and applies to the platform, which allocates it to Moody’s. If Moody’s comes back with an unsatisfactory rating, the bank cancels the issuance, makes a small modification to the portfolio and tranching and tries its luck again. The process can continue until the bank gets allocated to a more friendly ratings agency and the desired rating is achieved.
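A quick simulation makes the point – the parameters below are illustrative (three agencies, one of which happens to be lenient for the structure in question), but the conclusion is robust: random allocation merely converts a one-shot choice of agency into a short waiting game.

```python
# Illustrative simulation of the resubmission game described above: three agencies,
# one of which is lenient for this type of structure, with deals allocated at random.

import random

def tries_until_friendly(n_agencies=3, n_friendly=1):
    """Number of slightly-modified submissions before a lenient agency is drawn."""
    tries = 1
    while random.randrange(n_agencies) >= n_friendly:   # indices below n_friendly are lenient
        tries += 1
    return tries

random.seed(0)
samples = [tries_until_friendly() for _ in range(100_000)]
print(f"average number of submissions needed: {sum(samples) / len(samples):.2f}")   # ~3 when p = 1/3
```

With one lenient agency out of three, the bank needs roughly three slightly-modified submissions on average before the deal clears – hardly a deterrent.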

The fundamental issue here is that tinkering with the system in this manner is futile – the problems inherent in our current financial system are too fundamental and we have only two choices, as I hinted at in an earlier post. We can either put in place blunt and almost certainly efficiency-reducing regulations or we can move towards a free-market system where the implicit and explicit protection provided to the banking sector is removed in a credible and time-consistent manner. To give a simple example of a blunt regulation that would reduce the potential for ratings arbitrage, we could legislate that a portfolio of sub-investment-grade assets cannot be tranched to produce a AAA tranche. The price we pay for such regulations is that we eliminate a significant proportion of legitimate tranching, but this trade-off is unavoidable.


Written by Ashwin Parameswaran

June 3rd, 2010 at 4:02 pm

The “Crash of 2:45 p.m.” as a Consequence of System Fragility


When the WSJ provides us with the least plausible explanation of the “Crash of 2:45 p.m.”, it is only fitting that Jon Stewart provides us with the most succinct and accurate diagnosis of the crash.

Most explanations of the crash either focus on the proximate cause of the crash or blame it all on the “perfect storm”. The “perfect storm” explanation absolves us from analysing the crash too closely, the implicit conclusion being that such an event doesn’t occur too often and not much needs to or can be done to prevent its recurrence. There are two problems with this explanation. For one, it violates Occam’s Razor – it is easy to construct an ex-post facto explanation that depends upon a confluence of events that have not occurred together before. And more crucially, perfect storms seem to occur all too often. As Jon Stewart put it: “Why is it that whenever something happens that the people who should’ve seen it coming didn’t see coming, it’s blamed on one of these rare, once in a century, perfect storms that for some reason take place every f–king two weeks? I’m beginning to think these are not perfect storms. I’m beginning to think these are regular storms and we have a sh*tty boat.”

The focus on proximate causes ignores the complexity and nonlinearity of market systems. Michael Mauboussin explained it best when he remarked: “Cause and effect thinking is dangerous. Humans like to link effects with causes, and capital markets activities are no different. For example, politicians created numerous panels after the market crash in 1987 to identify its “cause.” A nonlinear approach, however, suggests that large-scale changes can come from small-scale inputs. As a result, cause-and-effect thinking can be both simplistic and counterproductive.” The true underlying causes may be far removed from the effect, both in time and in space and the proximate cause may only be the “straw that broke the camel’s back”.

So what is the true underlying cause of the crash? In my opinion, the crash was the inevitable consequence of a progressive loss of system resilience. Why and how has the system become fragile? A static view of markets frequently attributes loss of resilience to the presence of positive feedback processes such as margin calls on levered bets, stop-loss orders, dynamic hedging of short-gamma positions and even just plain vanilla momentum trading strategies – Laura Kodres’ paper here has an excellent discussion on “destabilizing” hedge fund strategies. However, in a dynamic conception of markets, a resilient market is characterised not by the absence of positive feedback processes but by the presence of a balanced and diverse mix of positive and negative feedback processes.
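The distinction can be made concrete with a toy simulation – this is not a model of the actual crash, and the trader counts and impact parameter below are invented purely for illustration. The same one-off sell shock hits two markets: one with a balanced mix of momentum (positive feedback) and contrarian (negative feedback) traders, and one from which the contrarians have been largely weeded out:

```python
# Toy illustration (not a model of the actual crash): the same sell shock hits a
# market with balanced feedback and a market where contrarians have been weeded out.

import numpy as np

def simulate(n_momentum, n_contrarian, shock=-2.0, steps=50, impact=0.01, seed=1):
    rng = np.random.default_rng(seed)
    prev_price, price = 100.0, 100.0
    path = [price]
    for t in range(steps):
        ret = (price - prev_price) / prev_price
        momentum_flow = n_momentum * ret        # chase the last move
        contrarian_flow = -n_contrarian * ret   # lean against the last move
        exogenous = shock if t == 0 else rng.normal(0.0, 0.1)
        prev_price, price = price, price * (1 + impact * (momentum_flow + contrarian_flow + exogenous))
        path.append(price)
    return np.array(path)

balanced = simulate(n_momentum=80, n_contrarian=80)   # negative feedback intact
one_sided = simulate(n_momentum=80, n_contrarian=5)   # negative feedback weeded out
print(f"trough with balanced feedback: {balanced.min():.1f}")
print(f"trough with momentum dominance: {one_sided.min():.1f}")
```

In the balanced market the shock is absorbed almost immediately; in the momentum-dominated market the same shock is amplified into a decline several times its original size.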

Policy measures that aim to stabilise the system by countering the impact of positive feedback processes select against and weed out negative feedback processes – stabilisation reduces system resilience. The decision to cancel errant trades is an example of such a measure. It is critical that all market participants who implement positive feedback strategies (such as stop-loss market orders) suffer losses, and that those who step in to buy in times of chaos, i.e. the negative-feedback providers, are not denied the profits that would accrue to them if markets recover. This is the real damage done by policy paradigms such as the “Greenspan/Bernanke Put” that implicitly protect asset markets. They leave us with a fragile market prone to collapse even in a “normal storm”, unless there is further intervention, as we saw from the EU/ECB. Of course, every subsequent intervention that aims to stabilise the system only further reduces its resilience.

As positive feedback processes become increasingly dominant, even normal storms that were easily absorbed earlier will cause a catastrophic transition in the system. There are many examples of the loss of system resilience being characterised by its vulnerability to a “normal” disturbance, such as in Minsky’s Financial Instability Hypothesis or Buzz Holling’s conception of ecological resilience, both of which I have discussed earlier.

The Role of Waddell & Reed

In the framework I have outlined above, the appropriate question to ask of the Waddell & Reed affair is whether their sell order was a “normal” storm or an “abnormal” one. More specifically, pinning the blame on a single order requires us to prove that each time in the past an order of this size was executed, the market crashed in a similar manner. It is also probable that the sell order itself was a component of a positive feedback hedging strategy; Waddell’s statement that it was selling the futures to “protect fund investors from downside risk” confirms this assessment. In this case, the Waddell sell order was an endogenous event in the framework and not an exogenous shock. Mitigating the impact of such positive feedback strategies only makes the system less resilient in the long run.

As Taleb puts it: “When a bridge collapses, you don’t look at the last truck that was on it, you look at the engineer. You’re looking for the straw that broke the camel’s back. Let’s not worry about the straw, focus on the back.” Or as Jon Stewart would say, let’s figure out why we have a sh*tty boat.


Written by Ashwin Parameswaran

May 16th, 2010 at 4:42 am

Organisational Rigidity, Crony Capitalism, Too-Big-To-Fail and Macro-Resilience


In a previous post, I outlined why cognitive rigidity is not necessarily irrational even though it may lead to a loss of resilience. However, if the universe of agent strategies is sufficiently diverse, a macro-system comprising fragile, inflexible agents can be incredibly resilient. So a simple analysis of micro-fragility does not enable us to reach any definitive conclusions about macro-resilience – organisations and economies may retain significant resilience and an ability to cope with novelty despite the fragility of their component agents.

Yet, there is significant evidence that organisations exhibit rigidity and although some of this rigidity can be perceived as irrational or perverse, much of it arises as a rational response to uncertainty. In Hannan and Freeman’s work on “Organizational Ecology”, the presence of significant organisational rigidity is the basis of a selection-based rather than an adaptation-based explanation of organisational diversity. There are many factors driving organisational inertia, some of which have been summarised in this paper by Hannan and Freeman. These include internal considerations such as sunk costs, informational constraints, political constraints etc., as well as external considerations such as barriers to entry and exit. In a later paper, Hannan and Freeman also justify organisational inertia as a means to an end, the end being “reliability”. Just as was the case in Ronald Heiner’s and V.S. Ramachandran’s frameworks discussed previously, inertia is a perfectly logical response to an uncertain environment.

Hannan and Freeman also hypothesise that older and larger organizations are more structurally inert and less capable of adapting to novel situations. In his book “Dynamic Economics”, Burton Klein analysed the historical record and found that advances that “resulted in new S-shaped curves in relatively static industries” do not come from the established players in an industry. In an excellent post, Sean Park summarises exactly why large organizations find it so difficult to innovate and also points to the pre-eminent reference in the management literature on this topic – Clayton Christensen’s “The Innovator’s Dilemma”. Christensen’s work is particularly relevant as it elaborates how established firms can fail not because of any obvious weaknesses, but as a direct consequence of their focus on core clients’ demands.

The inability of older and larger firms to innovate and adapt to novelty can be understood within the framework of the exploration-exploitation tradeoff as an inability to “explore” in an effective manner. As Levinthal and March put it, “past exploitation in a given domain makes future exploitation in the same domain even more efficient….As they develop greater and greater competence at a particular activity, they engage in that activity more, thus further increasing competence and the opportunity cost of exploration.” Exploration is also anathema to large organisations as it seems to imply a degree of managerial indecision. David Ellerman captures the essence of this thought process: “The organization’s experts will decide on the best experiment or approach—otherwise the organization would appear “not to know what it’s doing.””
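The Levinthal and March dynamic is easy to sketch numerically. The little simulation below is purely illustrative – the payoffs, learning rate and exploration probability are invented – but it captures the competency trap: because payoff grows with practice and the organisation picks whatever currently pays best, early exploitation of the familiar domain crowds out exploration of an intrinsically superior alternative:

```python
# A minimal, purely illustrative sketch of the Levinthal-March "competency trap":
# payoff grows with practice, and effort goes to whichever activity currently pays best.

import random

def run(n_periods=200, explore_prob=0.05, seed=3):
    random.seed(seed)
    potential = {"old": 1.0, "new": 2.0}     # the neglected domain is actually twice as good
    competence = {"old": 0.5, "new": 0.1}    # but the firm starts out practised in the old one
    new_count = 0
    for _ in range(n_periods):
        if random.random() < explore_prob:
            choice = "new"                   # an occasional forced experiment
        else:                                # otherwise pick what currently pays best
            choice = max(competence, key=lambda a: potential[a] * competence[a])
        competence[choice] = min(1.0, competence[choice] + 0.02)   # learning by doing
        new_count += (choice == "new")
    return new_count / n_periods, {k: round(v, 2) for k, v in competence.items()}

share_new, final_competence = run()
print(f"share of effort spent on the better 'new' domain: {share_new:.0%}")
print(f"final competence levels: {final_competence}")
```

Even though the neglected domain has twice the potential, the organisation ends up devoting almost all of its effort to the familiar one.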

A crony capitalist economic system that protects the incumbent firms hampers the ability of the system to innovate and adapt to novelty. It is obvious how the implicit subsidy granted to our largest financial institutions via the Too-Big-To-Fail doctrine represents a transfer of wealth from the taxpayer to the financial sector. It is also obvious how the subsidy encourages a levered, homogenous and therefore fragile financial sector that is susceptible to collapse. What is less obvious is the paralysis that it induces in the financial sector and by extension the macroeconomy long after the bailouts and the Minsky moment have passed.

We shouldn’t conflate this paralysis with an absence of competition between the incumbents – the competition between the incumbents may even be intense enough to ensure that they retain only a small portion of the rents that they fight so desperately to preserve. What the paralysis does imply is a fierce and unified defence of the local peak that they compete for. Their defence is directed not so much against new entrants who want to play the incumbents at their own game, but at those who seek to change the rules of the game.

The best example of this is the OTC derivatives market, where the benefits of TBTF to the big banks are most evident. Bob Litan notes that clients “wanted the comfort of knowing that they were dealing with large, well-capitalized financial institutions” when dealing in CDS and this observation holds for most other OTC derivative markets. He also correctly identifies that the crucial component of effective reform is removing the advantage that the “Derivative Dealers’ Club” currently possesses: “Systemic risk also would be reduced with true derivatives market reforms that would have the effect of removing the balance sheet advantage of the incumbent dealers now most likely regarded as TBTF. If end-users know that when their trades are completed with a clearinghouse, they are free to trade with any market maker – not just the specific dealer with whom they now customarily do business – that is willing to provide the right price, the resulting trades are more likely to be to the end-users’ advantage. In short, in a reformed market, the incumbent dealers would face much greater competition.”

Innovation in the financial sector is also hampered because of the outsized contribution it already makes to economic activity in the United States, which makes market-broadening innovations extremely unlikely. James Utterback identified how difficult it is for new entrants to immediately substitute incumbent players: “Innovations that broaden a market create room for new firms to start. Innovation-inspired substitutions may cause established firms to hang on all the more tenaciously, making it extremely difficult for an outsider to gain a foothold along with the cash flow needed to expand and become a player in the industry.” Of course, the incumbents may eventually break away from the local peak but an extended period of stagnation is more likely.

Sustaining an environment conducive to the entry of new firms is critical to the maintenance of a resilient macroeconomy that is capable of innovating and dealing with novelty. The very least that financial sector reform must achieve is to eliminate the benefits of TBTF that currently make it all but impossible for a new entrant to challenge the status quo.


Written by Ashwin Parameswaran

May 2nd, 2010 at 3:48 pm

Ratings Reform: The “Centralised Clearing Platform” Proposal


In an article “berating the raters”, Paul Krugman points to a proposal by Matthew Richardson and Lawrence White who suggest that the SEC create a centralised clearing platform for ratings agencies. Each issuer that would like its debt rated would have to approach this platform which would then choose an agency to rate the debt. The issuer would still have to pay for the rating but the agency would be chosen by the regulator. In their words, “This model has the advantage of simultaneously solving (i) the free rider problem because the issuer still pays, (ii) the conflict of interest problem because the agency is chosen by the regulating body, and (iii) the competition problem because the regulator’s choice can be based on some degree of excellence, thereby providing the rating agency with incentives to invest resources, innovate, and perform high quality work.”

The critical assumption behind the idea of a centralised clearing platform is that the total notional of bonds that exist and need to be rated is constant i.e. it assumes that the actions of the ratings agencies only divide up the ratings market and do not expand or contract it. So for example, a trillion dollars worth of bonds would be issued each year and the regulator would choose who rates which bond thus ensuring that none of the rating agencies are incentivised to give favourable ratings to junk assets. This assumption may hold for corporate bonds, but it is nowhere close to being true for structured bonds like CDOs which were the source of the losses in the crisis.

There are some fundamental differences between the ratings process for corporate bonds and the process for structured products such as CDOs. Whether a corporate bond is issued or not is usually not critically dependent on the rating assigned to it. For example, the fact that a corporate bond is rated as BBB instead of single-A will most likely not prevent the bond from being issued. A firm usually decides to undertake a bond issuance depending on its financing needs and unless the achieved rating is dramatically different from expectations, the bond will be issued and fees will be paid to the ratings agency.

On the other hand, whether a structured bond is issued or not is critically dependent on the ratings methodology applied to it. A structured bond is constructed via an iterative process involving the bank, the investor and the ratings agency. If the ratings methodology for a structured product is not generous enough to provide the investor with a yield comparable to equivalently rated assets and to enable the bank to earn a reasonable fee, the bond will just not get issued. When it comes to structured bonds, bonds are not created first and rated next. Instead the rating given to the bond is critical in determining whether it is issued and correspondingly, whether the ratings agency gets paid.

If each one of the ratings agencies that are part of the centralised platform adopts a stringent ratings methodology that destroys the economics of the trade, the bond is not issued and none of the agencies earns the fee. In this manner, the agencies are still incentivised to loosen their standards even in the absence of competition from other agencies. When it comes to structured bonds, ratings agencies have historically been focused as much on expanding the market as on competing with each other. Indeed, the biggest catalyst in expanding rating agency profits over the last two decades has been the steady expansion of the universe of products that they were willing to rate using a generous methodology, from vanilla bonds/loans/mortgages to tranches of bond portfolios to tranches of synthetic exposures to even complex algorithms and trading strategies – the crowning example of the last variety being the CPDO. Even when one of the agencies was the first-mover in rating a new product, it could not adopt too stringent a methodology for fear of killing the deal altogether.

Moreover, the very notion of restricting the choice of agencies that can rate a given structured bond is an oxymoron given the iterative process – let us assume that a particular type of structured bond is leniently rated by only one of the three ratings agencies. If the platform assigns the bond at random to another agency, the bank can merely make a small modification to the underlying portfolio and try its luck again till it gets allocated to the right agency. The inherently iterative ratings process also explains why the furore over ratings agencies making their models public is a red herring, as I have explained in a previous post.

I am not claiming that competition between the agencies did not make things worse at the margin in the financial crisis. But a tangible difference to the outcome in the crisis would have been achieved only if the ratings agencies had adopted a methodology so stringent that subprime CDOs and many other leveraged/risky structures would not have been issued at all. Given the demand for AAA assets with extra yield (driven by internal and external regulations), this could have been achieved only by an explicit ban on rating such structures. Otherwise, even if there was just one monopolistic rating agency that was paid by the regulator, the agency would have been almost as aggressive in rating new structures simply because of the indisputable fact that the agency got paid only when a deal got done, and lenient ratings standards got more deals done.


Written by Ashwin Parameswaran

April 27th, 2010 at 4:33 pm
