macroresilience

resilience, not stability

Archive for the ‘Moral Hazard’ Category

A “Systems” Explanation of How Bailouts can Cause Business Cycles

In a previous post, I quoted Richard Fisher’s views on how bailouts cause business cycles and financial crises: “The system has become slanted not only toward bigness but also high risk…..if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult…..It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.”

Fisher utilises the “incentives” argument, but the same case can be made in the language of natural selection, and Hannan and Freeman did exactly that in the seminal paper that launched the field of “Organizational Ecology”. They wrote the following in the context of the 1971 bailout of Lockheed, but it is as relevant today as it has ever been: “we must consider what one anonymous reader, caught up in the spirit of our paper, called the anti-eugenic actions of the state in saving firms such as Lockheed from failure. This is a dramatic instance of the way in which large dominant organizations can create linkages with other large and powerful ones so as to reduce selection pressures. If such moves are effective, they alter the pattern of selection. In our view, the selection pressure is bumped up to a higher level. So instead of individual organizations failing, entire networks fail. The general consequence of a large number of linkages of this sort is an increase in the instability of the entire system and therefore we should see boom and bust cycles of organizational outcomes.”

Written by Ashwin Parameswaran

June 8th, 2010 at 3:45 pm

Richard Fisher of the Dallas Fed on Financial Reform

Richard Fisher of the Dallas Fed delivered a speech last week (h/t Zerohedge) on the topic of financial reform that contained some of the most brutally honest analysis of the problem at hand that I have seen from anyone at the Fed. It also made a few points that deserve further analysis and elaboration.

The Dynamics of the TBTF Problem

In Fisher’s words: “Big banks that took on high risks and generated unsustainable losses received a public benefit: TBTF support. As a result, more conservative banks were denied the market share that would have been theirs if mismanaged big banks had been allowed to go out of business. In essence, conservative banks faced publicly backed competition…..It is my view that, by propping up deeply troubled big banks, authorities have eroded market discipline in the financial system.

The system has become slanted not only toward bigness but also high risk…..if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult…..

It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.”

Fisher correctly notes that TBTF support damages system resilience not only by encouraging higher leverage amongst large banks, but by disadvantaging conservative banks that would otherwise have gained market share during the crisis. As I have noted many times on this blog, the dynamic, evolutionary view of moral hazard focuses not only on the protection provided to destabilising positive feedback forces, but on how stabilising negative feedback forces that might have flourished in the absence of the stabilising actions are selected against and progressively weeded out of the system.

Regulatory Discretion and the Time Consistency Problem

Fisher: “Language that includes a desire to minimize moral hazard—and directs the FDIC as receiver to consider “the potential for serious adverse effects”—provides wiggle room to perpetuate TBTF.” Fisher notes that it is difficult to credibly commit ex ante not to bail out TBTF creditors – as long as the regulator retains any discretion exercised in the name of systemic stability, it will be tempted to use it.

On the Ineffectiveness of Regulation Alone

Fisher: “While it is certainly true that ineffective regulation of systemically important institutions—like big commercial banking companies—contributed to the crisis, I find it highly unlikely that such institutions can be effectively regulated, even after reform…Simple regulatory changes in most cases represent a too-late attempt to catch up with the tricks of the regulated—the trickiest of whom tend to be large. In the U.S. financial system, what passed as “innovation” was in large part circumvention, as financial engineers invented ways to get around the rules of the road. There is little evidence that new regulations, involving capital and liquidity rules, could ever contain the circumvention instinct.”

This is a sentiment I rarely hear expressed by a regulator. As I have argued before on this blog, regulations alone just don’t work: the history of banking is one of repeated circumvention of regulations by banks, a process that has only accelerated with the increased completeness of markets. The question is not whether deregulation accelerated banks’ maximisation of the moral hazard subsidy – it almost certainly did, and this was understood even within the Fed as early as 1983, when John Kareken noted that “Deregulation Is the Cart, Not the Horse”. The question is whether re-regulation has any chance of succeeding without fixing the incentives guiding the actors in the system – it does not.

Bailouts Come in Many Shapes and Sizes

Fisher: “Even if an effective resolution regime can be written down, chances are it might not be used. There are myriad ways for regulators to forbear. Accounting forbearance, for example, could artificially boost regulatory capital levels at troubled big banks. Special liquidity facilities could provide funding relief. In this and similar manners, crisis-related events that might trigger the need for resolution could be avoided, making resolution a moot issue.”

A watertight resolution regime may only encourage regulators to aggressively utilise other forbearance mechanisms. Fisher mentions accounting and liquidity relief but fails to mention the most important “alternative bailout mechanism” – the “Greenspan Put” variant of monetary policy.

Preventing Systemic Risk Perpetuates the Too-Big-To-Fail Problem

Fisher: “Consider the idea of limiting any and all financial support strictly to the system as a whole, thus preventing any one firm from receiving individual assistance….If authorities wanted to support a big bank in trouble, they would need only institute a systemwide program. Big banks could then avail themselves of the program, even if nobody else needed it. Systemwide programs are unfortunately a perfect back door through which to channel big bank bailouts.”

“System-wide” programs by definition get activated only when big banks and non-bank financial institutions such as GE Capital are in trouble. Apart from perpetuating TBTF, they encourage smaller banks to mimic big banks and take on similar tail risks, thus reducing system diversity.

Shrink the TBTF Banks?

Fisher clearly prefers that the big banks be shrunk as a “second-best” solution to the incentive problems that both regulators and banks face in our current system. Although I’m not convinced that shrinking the banks is a sufficient response, even a “free market” solution to the crisis will almost certainly imply a more dispersed banking sector, due to the removal of the TBTF subsidy. The gist of the problem is not size but insufficient diversity. Fisher argues “there is considerable diversity in strategy and performance among banks that are not TBTF.” This is the strongest and possibly even the only valid argument for breaking up the big banks. My concern is that even a more dispersed banking sector will evolve towards a tightly coupled and homogenous outcome due to the protection against systemic risk provided by the “alternative bailout mechanisms”, particularly the Greenspan Put.

The fact that Richard Fisher’s comments echo themes popular with both left-wing and right-wing commentators is not a coincidence. In the fitness landscape of our financial system, our current choice is not so much a local peak as a deep valley – tinkering will get us nowhere and a significant move either to the left or to the right is likely to be an improvement.

Written by Ashwin Parameswaran

June 6th, 2010 at 1:30 pm

Maturity Transformation and the Yield Curve

Maturity Transformation (MT) enables all firms, not just banks, to borrow short-term money to invest in long-term projects. Banks are of course the most effective maturity transformers, enabled by deposit insurance and TBTF protection, which discourage their creditors from demanding their money back all at once, and by a liquidity backstop from a fiat-currency-issuing central bank if panic sets in despite the guarantee. Given this definition, it is obvious that the presence of MT results in a flatter yield curve than would otherwise be the case (Mencius Moldbug explains it well, and the insight is also implicit in Austrian Business Cycle Theory). This post tries to delineate the exact mechanisms via which the yield curve flattens and how the impact of MT has evolved over the last half-century, particularly due to changes in banks’ asset-liability management (ALM) practices.

Let’s take a simple example of a bank that funds itself via demand deposits and lends these funds out in the form of 30-year fixed-rate mortgages. If left unhedged, this loan exposes the bank to three risks: liquidity risk, interest rate risk and credit risk. The liquidity risk is essentially unhedgeable – it can be and is mitigated by, for example, converting the mortgage into a securitised form that can be sold on to other banks, but the gap inherent in borrowing short and lending long cannot be hedged away. The credit risk of the loan can be hedged but often is not, as compensation for bearing credit risk is one of the fundamental functions of a bank. The interest rate risk, however, can be and often is hedged out in the interest rate swaps market.
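
To make the interest rate leg concrete, here is a toy sketch (all numbers are invented for illustration, not taken from the post): the present value of a 30-year fixed-rate annuity falls sharply when rates rise, while demand deposits are repayable at par, so the unhedged gap hits equity.

```python
# Toy illustration with invented numbers: a bank funds a 30-year
# fixed-rate loan with demand deposits. When rates rise, the present
# value of the fixed-rate asset falls while the deposits reprice
# immediately at par, so the loss lands entirely on equity.

def pv_annuity(payment, rate, years):
    """Present value of a level annual payment stream."""
    return payment * (1 - (1 + rate) ** -years) / rate

loan_payment = 6.5  # annual payment on the mortgage, per 100 of notional

asset_at_5pct = pv_annuity(loan_payment, 0.05, 30)  # roughly par
asset_at_6pct = pv_annuity(loan_payment, 0.06, 30)  # ~10 points lower

# Demand deposits are worth par regardless of the rate move, so the
# entire fall in asset value is borne by the bank's equity.
equity_hit = asset_at_5pct - asset_at_6pct
```

A 1% parallel rise in rates costs this hypothetical bank around a tenth of the loan’s value – exactly the kind of loss the post describes banks suffering before interest rate hedging became routine.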

Interest Rate Risk Management in Bank ALM

Prior to the advent of interest rate derivatives as hedging tools, banks had limited avenues to hedge out interest rate risk, and most suffered significant losses whenever interest rates rose. After World War II, for example, US banks were predominantly invested in fixed-rate government bonds they had bought during the war. Martin Mayer’s excellent book ‘The Fed’ documents a Chase banker saying to him, in reaction to a Fed rate hike in 1952, that “he never thought he would live to see the day when the government would deliberately make the banking system technically insolvent.” The situation had not changed much even by the 1980s – the initial trigger for the S&L crisis was the dramatic rise in interest rates in 1981, which rendered the industry insolvent.

By the 1990s, however, many banks had started hedging their duration gap with the aim of mitigating the damage that a sudden move in interest rates could do to their balance sheets. One of the earlier examples is Banc One, and the HBS case study on the bank’s ALM strategy is a great introduction to the essence of interest rate hedging. More recently, the net interest income (NII) sensitivity of Bank of America, according to slide 35 of this investor presentation, is exactly the opposite of that of the typical maturity-transforming unhedged bank – it makes money when rates go up or when the curve steepens. More importantly, the sensitivity is negligible compared to the size of the bank, which suggests a largely duration-matched position.

In the above analysis, I am not suggesting that the banking system does not play the interest rate carry trade at all. The FDIC’s decision to release an interest rate risk advisory in January certainly suggests that some banks do. I am only suggesting that if a bank plays the carry trade, it is because it chooses to do so, not because it is forced to by the nature of its asset-liability profile. Moreover, the indications are that many of the larger banks are reasonably insensitive to changes in interest rates and currently choose not to play the carry game (see also Wells Fargo’s interest-rate-neutral stance).

What does this mean for the impact of MT on the yield curve? It means that the contribution of the interest rate carry trade inherent in MT to the flattening of the yield curve is indeterminate and, at the very least, much smaller than one would suspect. Taking the earlier example of the bank invested in a 30-year fixed-rate mortgage, the bank would simply enter into a 30-year interest rate swap in which it pays a fixed rate and receives a floating rate, hedging away its interest rate risk. There are many counterparties who want to receive fixed rates at long durations – two obvious examples are corporates who want to swap their fixed-rate issuance back into floating, and pension funds and life insurers who need to invest in long-tenor instruments to match their liabilities.
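
The hedge can be sketched as follows, under the simplifying assumptions that the mortgage behaves like a level annuity and that the swap’s floating leg reprices at par (the numbers are mine, not market data):

```python
# Sketch of a pay-fixed swap hedge, with invented numbers: the bank's
# rate sensitivity (DV01) on the mortgage is offset by an equal and
# opposite sensitivity on a matched interest rate swap.

def pv_annuity(payment, rate, years):
    """Present value of a level annual payment stream."""
    return payment * (1 - (1 + rate) ** -years) / rate

def dv01(payment, rate, years, bp=1e-4):
    """Value change of the annuity for a one-basis-point rise in rates."""
    return pv_annuity(payment, rate + bp, years) - pv_annuity(payment, rate, years)

mortgage_dv01 = dv01(6.5, 0.05, 30)   # negative: the asset falls as rates rise

# Paying fixed / receiving floating on a matched swap produces the
# equal-and-opposite sensitivity (floating leg assumed to reprice at par).
swap_dv01 = -mortgage_dv01

net_dv01 = mortgage_dv01 + swap_dv01  # ~0: interest rate risk hedged away
```

The point of the sketch is that once the swap is in place, the bank’s profit no longer depends on the level of rates – which is why, as argued above, the interest rate carry trade is a choice rather than an inevitable by-product of MT.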

So if interest rate carry is not the source of the curve flattening caused by MT, what is? The answer lies in the other unhedged risk – credit risk. Credit spread curves are also usually upward-sloping (except when the credit is distressed), and banks take advantage by funding themselves at very short tenors where credit spreads are low and lending at long tenors where spreads are much higher. This strategy of course exposes them to the risk of a repricing of the credit spread on their liabilities, and this was exactly the problem that banks and corporate maturity transformers such as GE faced during the crisis: credit was still available, but spreads had widened so much that refinancing at those spreads would alone cause insolvency. This is not dissimilar to the problem that Greece faces at present.
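
A stylised numerical sketch of the credit carry trade and its failure mode, with invented spreads:

```python
# Illustrative spreads only (basis points, invented): the maturity
# transformer earns the slope of the credit curve in normal times, but a
# repricing of its own short-tenor funding turns the carry deeply negative.

lending_spread_bp = 150   # spread earned on long-tenor assets
funding_spread_bp = 20    # spread paid on short-tenor liabilities, normal times
crisis_funding_bp = 400   # funding spread after a crisis repricing

normal_carry = lending_spread_bp - funding_spread_bp   # +130bp running profit
crisis_carry = lending_spread_bp - crisis_funding_bp   # -250bp: merely rolling
                                                       # the funding is ruinous
```

The asset side of the book is unchanged in both scenarios – it is the repricing of the liabilities at rollover that converts a liquidity problem into a solvency problem.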

The real benefit of the central bank’s liquidity backstop is realised in this situation. When interbank repo markets and commercial paper markets lock up as they did during the crisis, banks and influential corporates like GE can always repo their assets with the central bank on terms not available to any other private player. The ECB’s 12-month repo program is probably the best example of such a quasi-fiscal liquidity backstop.

Conclusion

Given my view that the interest rate carry trade is a limited phenomenon, I do not believe that the sudden removal of MT will produce a “smoking heap of rubble” (Mencius Moldbug’s view). The yield curve will steepen to the extent that the credit carry trade vanishes but even this will be limited by increased demand from long-term investors, most notably pension funds. The conventional story that MT is the only way to fund long-term projects ignores the increasing importance of pension funds and life insurers who have natural long-tenor liabilities that need to be matched against long-tenor assets.

Written by Ashwin Parameswaran

April 4th, 2010 at 5:54 am

Modigliani-Miller and Banking

Alan Greenspan’s paper on the financial crisis calls for regulatory capital requirements on banks to be increased but also warns that there are limits to how far they can be raised. In his words: “Without adequate leverage, markets do not provide a rate of return on financial assets high enough to attract capital to that activity. Yet at too great a degree of leverage, bank solvency is at risk.” Greg Mankiw wonders whether this assertion violates the Modigliani-Miller theorem, and he is right to do so. Although Greenspan’s conclusion is correct, his argument is incomplete: it misses the key reason why leverage matters for banks – the implicit and explicit creditor guarantee.

I explained the impact of creditor protection on banks’ optimal leverage in my first note. The conclusions, which I summarised more concisely in this note, are as follows: even a small probability of a partial bailout will reduce the rate of return demanded by bank creditors, and this reduction constitutes an increase in firm value. In a simple Modigliani-Miller world, the optimal leverage for a bank is therefore infinite. Even without invoking Modigliani-Miller, the argument is intuitive: if each incremental unit of debt is issued at less than its true economic cost thanks to deposit insurance or the TBTF doctrine, it “increases the size of the pie” and adds to firm value. In reality, of course, there are many limits to leverage, the most important being regulatory capital requirements.
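
The argument can be made concrete with a toy calculation (the rates and leverage ratios below are my assumptions, not figures from the notes): if guaranteed debt is issued below its fair economic cost, the annual subsidy scales linearly with the amount of debt, so firm value keeps rising with leverage.

```python
# Toy version of the argument, with invented rates: creditors who expect
# a partial bailout accept less than the actuarially fair rate, so every
# extra unit of debt transfers value to the bank.

fair_rate = 0.08        # rate creditors would demand absent any bailout prospect
guaranteed_rate = 0.05  # rate they accept given some bailout probability
equity = 10.0

def annual_subsidy(debt):
    """Yearly value transferred to the bank via underpriced debt."""
    return (fair_rate - guaranteed_rate) * debt

# The subsidy rises monotonically with leverage, so absent regulatory
# capital requirements the value-maximising leverage is unbounded.
subsidies = [annual_subsidy(equity * lev) for lev in (5, 10, 30)]  # ~1.5, 3.0, 9.0
```

Since each extra turn of leverage adds to the subsidy and nothing in the capital structure itself pushes back, only an external constraint – the regulatory capital requirement – caps leverage, which is the point made in the next paragraph.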

Indeed, the above is the main reason why we have any regulatory capital requirements at all. In the absence of regulation, a bank with blanket creditor protection will likely choose to operate with minimal equity capital especially when it has negligible franchise value or is insolvent. This is exactly what happened during the S&L crisis when bankrupt S&Ls with negligible franchise value bet the farm on the back of a capital structure almost completely funded by insured deposits.

Written by Ashwin Parameswaran

March 30th, 2010 at 3:17 pm

Notes on the Evolutionary Approach to the Moral Hazard Explanation of the Financial Crisis

In arguing the case for the moral hazard explanation of the financial crisis, I have frequently utilised evolutionary metaphors. This approach is not without controversy and this post is a partial justification as well as an explication of the conditions under which such an approach is valid. In particular, the simple story of selective forces maximising the moral hazard subsidy that I have outlined is dependent upon the specific circumstances and facts of our current financial system.

The “Natural Selection” Analogy

One point of dispute is whether selective forces are relevant in economic systems at all. The argument against selection usually invokes the possibility of firms or investors surviving for long periods despite losses, i.e. bankruptcy is not strong enough as a selective force. My argument relies not on firm survival as the selective force but on the principal-agent relationship between investors and asset managers, between shareholders and CEOs, and so on. In the modern economy, selection kicks in well before the point of bankruptcy, and the increased prevalence of shareholder activism over the last 25 years has only strengthened this mechanism. Moreover, the natural selection argument serves as a more robust justification for the moral hazard story: it does not depend upon explicit agent intentionality but is nevertheless strengthened by it.

The “Optimisation” Analogy

The argument that selective forces lead to optimisation is of course an old argument, most famously put by Milton Friedman and Armen Alchian. However, evolutionary economic processes only lead to optimisation if some key assumptions are satisfied. A brief summary of the key conditions under which an evolutionary process equates to neoclassical outcomes can be found on pages 26-27 of this paper by Nelson and Winter. Below is a partial analysis of these conditions with some examples relevant to the current crisis.

Diversity

Genetic diversity is the raw material upon which Darwinian natural selection operates. Similarly, to achieve anything close to an “optimal” outcome, the strategies available to economic agents must be sufficiently diverse. The “natural selection” explanation of the moral hazard problem, which I elaborated upon in my previous post, therefore depends upon the toolset of banks’ strategies being sufficiently varied. This toolset – the means available to banks to exploit the moral hazard subsidy – is primarily determined by two factors: technology/innovation and regulation. The development of new financial products via securitisation, tranching and, most importantly, synthetic issuance with a CDS rather than a bond as the underlying, which I discussed here, has significantly expanded it.

Stability

The story of one optimal strategy outcompeting all others is also dependent on environmental conditions being stable. Quoting from Nelson and Winter: “If the analysis concerns a hypothetical static economy, where the underlying economic problem is standing still, it is reasonable to ask whether the dynamics of an evolutionary selection process can solve it in the long run. But if the economy is undergoing continuing exogenous change, and particularly if it is changing in unanticipated ways, then there really is no “long run” in a substantive sense. Rather, the selection process is always in a transient phase, groping toward its temporary target. In that case, we should expect to find firm behavior always maladapted to its current environment and in characteristic ways—for example, out of date because of learning and adjustment lags, or “unstable” because of ongoing experimentation and trial-and-error learning.”

This follows logically from the ‘Law of Competitive Exclusion’. In an environment free of disturbances, the diversity of competing strategies must fall dramatically as the optimal strategy outcompetes all others. In fact, disturbances are a key reason why competitive exclusion is rarely observed in ecosystems. When Evelyn Hutchinson examined the ‘Paradox of the Plankton’, one of the explanations he offered was the “permanent failure to achieve equilibrium”. Indeed, one of the most widely accepted explanations of the paradox is the ‘Intermediate Disturbance Hypothesis’, which implies that ecosystem diversity may be low when the environment is free of disturbances.
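
A minimal replicator-style simulation illustrates competitive exclusion (this is my simplification for illustration, not Hutchinson’s model): when relative profitability is fixed, the share of the fittest strategy converges to one.

```python
# Replicator-style sketch with invented fitness values: strategy shares
# grow in proportion to relative profitability. In a stable environment
# the single fittest strategy eventually takes over (competitive
# exclusion); recurrent disturbances that reshuffle relative fitness are
# what would preserve diversity.

def step(shares, fitness):
    """One generation: each share grows by its fitness relative to the mean."""
    mean_fitness = sum(s * f for s, f in zip(shares, fitness))
    return [s * f / mean_fitness for s, f in zip(shares, fitness)]

shares = [1 / 3, 1 / 3, 1 / 3]        # three strategies, equal initial shares
stable_fitness = [1.00, 1.02, 1.05]   # fixed environment: one strategy is best

for _ in range(500):
    shares = step(shares, stable_fitness)

# After enough undisturbed generations, only the fittest strategy retains
# a non-trivial share of the population.
survivors = sum(1 for s in shares if s > 0.01)
```

Even small, persistent fitness differences are enough: the 1.05 strategy ends up with essentially the entire population, which is the economic analogue of one balance-sheet strategy crowding out all others during a long stable period.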

Stability here means stability with respect to the criteria of selection. In the principal-agent selective process, the criterion analogous to Darwinian “fitness” is profitability. Nelson and Winter’s objection is absolutely relevant when the strategy that maximises profitability is a moving target and there is significant uncertainty regarding its exact contours. But the kind of strategy that maximises profitability in a bank has not changed for a while, in no small part because of the size of the moral hazard free lunch available: a CEO who wants to maximise return on equity for his shareholders should maximise balance sheet leverage, as I explained in my first post. The stability of the parameters of the strategy that maximises the moral hazard subsidy, and hence profitability, ensures that this strategy outcompetes all others.

Written by Ashwin Parameswaran

March 13th, 2010 at 5:22 am

Stability and Macro-Stabilisation as a Profound Form of the Moral Hazard Problem

I have argued previously that the moral hazard explanation of the crisis fits the basic facts: bank balance sheets were highly levered and invested in assets with severely negatively skewed payoffs. But this still leaves another objection to the moral hazard story unanswered – it was not only banks with access to cheap leverage that were heavily invested in “safe” assets, but also asset managers, money market mutual funds and even ordinary investors. Why was this the case?

A partial explanation, which I have discussed many times before, relies on the preference of agents (in the principal-agent sense) for such bets. But this explanation is incomplete: apart from not applying to investors who are not agents, it neglects the principal’s option to walk away. A much better explanation, which I mentioned here and here, is the role of extended periods of stability in creating “moral hazard-like” outcomes. This is an altogether more profound and pervasive form of the moral hazard problem and lies at the heart of the Minsky-Holling thesis that stability breeds loss of resilience.

It is important to note that such an outcome can arise endogenously without any government intervention. Minsky argued that such an endogenous loss of resilience was inevitable but this is not obvious. As I noted here: “The assertion that an economy can move outside the corridor due to endogenous factors is difficult to reject. All it takes is a chance prolonged period of stability. However, this does not imply that the economy must move outside the corridor, which requires us to prove that prolonged periods of stability are the norm rather than the exception in a capitalist economy.”

But it can also arise as a result of macro-stabilising fiscal and monetary policies. Whether the current crisis was endogenous is essentially an empirical question. I have argued in previous posts that it was not, and that the “Greenspan Put” monetary policy did as much damage as all the explicit bailouts combined. The evidence for this view has been laid out well by David Merkel here and by Barry Ritholtz in his book and in this excellent episode of EconTalk.

Written by Ashwin Parameswaran

March 7th, 2010 at 10:07 am

Natural Selection, Self-Deception and the Moral Hazard Explanation of the Financial Crisis

Moral Hazard and Agent Intentionality

A common objection to the moral hazard explanation of the financial crisis runs as follows: bankers did not explicitly factor in the possibility of being bailed out; in fact, they genuinely believed that their firms could not possibly collapse under any circumstances. For example, Megan McArdle says: “I went to business school with these people, and talked to them when they were at the banks, and the operating assumption was not that they could always get the government to bail them out if something went wrong. The operating assumption was that they had gotten a whole lot smarter, and would not require a bailout.” And Jeffrey Friedman has this to say about the actions of Ralph Cioffi and Matthew Tannin, the managers of the Bear Stearns fund whose collapse was the canary in the coal mine for the crisis: “These are not the words, nor were Tannin and Cioffi’s actions the behavior, of people who had deliberately taken what they knew to be excessive risks. If Tannin and Cioffi were guilty of anything, it was the mistake of believing the triple-A ratings.”

This objection errs in assuming that the moral hazard problem requires an explicit intention on the part of economic agents to take on more risk and maximise the free lunch available courtesy of the taxpayer. The essential idea which I outlined at the end of this post is as follows: The current regime of explicit and implicit bank creditor protection and regulatory capital requirements means that a highly levered balance sheet invested in “safe” assets with severely negatively skewed payoffs is the optimal strategy to maximise the moral hazard free lunch. Reaching this optimum does not require explicit intentionality on the part of economic actors. The same may be achieved via a Hayekian spontaneous order of agents reacting to local incentives or even more generally through “natural selection”-like mechanisms.

Let us analyse the “natural selection” argument a little further. If we assume that there is a sufficient diversity of balance-sheet strategies being followed by various bank CEOs, those CEOs who follow the above-mentioned strategy of high leverage and assets with severely negatively skewed payoffs will be “selected” by their shareholders over other competing CEOs. As I have explained in more detail in this post, the cheap leverage afforded by the creditor guarantee means that this strategy can be levered up to achieve extremely high rates of return. Even better, the assets will most likely not suffer any loss in the extended stable period before a financial crisis. The principal, in this case the bank shareholder, will most likely mistake the returns to be genuine alpha rather than the severe blowup risk trade it truly represents. The same analysis applies to all levels of the principal-agent relationship in banks where an asymmetric information problem exists.
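
A back-of-the-envelope sketch (all parameters invented for illustration) of why the severely negatively skewed strategy masquerades as alpha:

```python
# Invented parameters: a 98% chance of a small gain each year and a 2%
# chance of a large blowup. The typical observed track record is an
# unbroken run of gains, which the principal is likely to read as skill.

p_gain, gain, loss = 0.98, 0.03, -0.60

expected_return = p_gain * gain + (1 - p_gain) * loss  # ~1.7% unlevered
prob_clean_decade = p_gain ** 10                       # ~82% of ten-year track
                                                       # records show no loss at all

# Levered 10x on cheap guaranteed debt, the good years return ~30% on
# equity, while the blowup year wipes shareholders out -- the excess loss
# falls on creditors and, via the guarantee, on the taxpayer.
levered_gain = 10 * gain             # 0.30 per good year
levered_loss = max(10 * loss, -1.0)  # equity loss is capped at -100%
```

The asymmetry in the last two lines is the free lunch: leverage scales up the frequent small gains in full, while the rare catastrophic loss is truncated at the shareholders’ stake.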

Self-Deception and Natural Selection

But this argument still leaves one empirical question unanswered – given that such a free lunch is on offer, why don’t we see more examples of active and intentional exploitation of the moral hazard subsidy? In other words, why do most bankers seem to be true believers like Tannin and Cioffi? To answer this question, we need to take the natural selection analogy a little further. In the evolutionary race between true believers and knowing deceivers, who wins? The work of Robert Trivers on the evolutionary biology of self-deception tells us that the true believer has a significant advantage in this contest.

Trivers’ work is well summarised by Ramachandran: “According to Trivers, there are many occasions when a person needs to deceive someone else. Unfortunately, it is difficult to do this convincingly since one usually gives the lie away through subtle cues, such as facial expressions and tone of voice. Trivers proposed, therefore, that maybe the best way to lie to others is to first lie to yourself. Self-deception, according to Trivers, may have evolved specifically for this purpose, i.e. you lie to yourself in order to enable you to more effectively deceive others.” Or as Conor Oberst put it more succinctly here: “I am the first one I deceive. If I can make myself believe, the rest is easy.” Trivers’ work is not as relevant for the true believers as it is for the knowing deceivers. It shows that active deception is an extremely hard task to pull off especially when attempted in competition with a true believer who is operating with the same strategy as the deceiver.

Between a CEO who is consciously trying to maximise the free lunch and a CEO who genuinely believes that a highly levered balance sheet of “safe” assets is the best strategy, who is likely to be more convincing to his shareholders and regulator? Bob Trivers’ work shows that it is the latter. Bankers who drink their own Kool-Aid are more likely to convince their bosses, shareholders or regulators that there is nothing to worry about. Given a sufficiently strong selective mechanism such as the principal-agent relationship, it is inevitable that such bankers would end up being the norm rather than the exception. The real deviation from the moral hazard explanation would be if it were any other way!

There is another question which although not necessary for the above analysis to hold is still intriguing: How and why do people transform into true believers? Of course we can assume a purely selective environment where a small population of true believers merely outcompete the rest. But we can do better. There is ample evidence from many fields of study that we tend to cling onto our beliefs even in the face of contradictory pieces of information. Only after the anomalous information crosses a significant threshold do we revise our beliefs. For a neurological explanation of this phenomenon, the aforementioned paper by V.S. Ramachandran analyses how and why patients with right hemisphere strokes vehemently deny their paralysis with the aid of numerous self-deceiving defence mechanisms.

Jeffrey Friedman’s analysis of how Cioffi and Tannin clung to their beliefs in the face of mounting evidence to the contrary until the “threshold” was cleared and they finally threw in the towel is a perfect example of this phenomenon. In Ramachandran’s words, “At any given moment in our waking lives, our brains are flooded with a bewildering variety of sensory inputs, all of which have to be incorporated into a coherent perspective based on what stored memories already tell us is true about ourselves and the world. In order to act, the brain must have some way of selecting from this superabundance of detail and ordering it into a consistent ‘belief system’, a story that makes sense of the available evidence. When something doesn’t quite fit the script, however, you very rarely tear up the entire story and start from scratch. What you do, instead, is to deny or confabulate in order to make the information fit the big picture. Far from being maladaptive, such everyday defense mechanisms keep the brain from being hounded into directionless indecision by the ‘combinational explosion’ of possible stories that might be written from the material available to the senses.” However, once a threshold is passed, the brain finds a way to revise the model completely. Ramachandran’s analysis also provides a neurological explanation for Thomas Kuhn’s phases of science, where the “normal” period is overturned once anomalies accumulate beyond a threshold. It also provides further backing for the thesis that we follow simple rules and heuristics in the face of significant uncertainty, which I discussed here.

Fix The System, Don’t Blame the Individuals

The “selection” argument provides the rationale for how the extraction of the moral hazard subsidy can be maximised despite the lack of any active deception on the part of economic agents. Therefore, as I have asserted before, we need to fix the system rather than blame the individuals. This does not mean that we should not pursue those guilty of fraud. But merely pursuing instances of fraud without fixing the incentive system will get us nowhere.


Written by Ashwin Parameswaran

February 17th, 2010 at 10:30 am

Mark-to-Market Accounting and the Financial Crisis

with 5 comments

Mark-to-Market (MtM) Accounting is usually cast as a villain of the piece in most financial crises. This note aims to rebut this criticism from a “system resilience” perspective. It also expands on the role that MtM Accounting can play in mitigating agents’ preference for severely negatively skewed payoffs, a theme I touched upon briefly in an earlier note.


The “Downward Spiral” of Mark-to-Market Accounting


If there’s anything that can be predicted with certainty in a financial crisis, it is that sooner or later banks will plead with their regulators and/or FASB for a relaxation of MtM accounting rules. The results are usually favourable: in the S&L crisis we got the infamous “Memorandum R-49”, and in the current crisis we got FAS 157-e.


The most credible argument for such a relaxation of MtM rules is the “downward spiral” theory. Opponents of MtM Accounting argue that it can trigger a downward spiral in asset prices in the midst of a liquidity crisis. As this IIF memorandum puts it: “often dramatic write-downs of sound assets required under the current implementation of fair-value accounting adversely affect market sentiment, in turn leading to further write-downs, margin calls and capital impacts in a downward spiral that may lead to large-scale fire-sales of assets, and destabilizing, pro-cyclical feedback effects. These damaging feedback effects worsen liquidity problems and contribute to the conversion of liquidity problems into solvency problems.” The initial fall in prices feeds upon itself in a “positive feedback” process.
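The mechanics of this downward spiral can be sketched in a few lines. The sketch below is purely illustrative and all its parameters (maintenance margin, price impact, initial leverage) are hypothetical, chosen only to exhibit the feedback loop: a leveraged holder must sell whenever its equity cushion falls below a maintenance margin, and each forced sale depresses the price, which triggers further forced selling.

```python
# Illustrative fire-sale spiral: a leveraged fund marks to market, and
# forced sales depress the price further. All parameters are hypothetical.

def fire_sale_path(price, units, debt, maintenance=0.2, impact=0.02,
                   shock=0.1, rounds=20):
    """Return the price path after an initial exogenous shock."""
    price *= 1 - shock                      # exogenous initial fall
    path = [price]
    for _ in range(rounds):
        equity = units * price - debt
        if equity / (units * price) >= maintenance:
            break                           # margin satisfied: spiral stops
        # sell just enough to restore the maintenance margin (ignoring,
        # for simplicity, the price impact of the sale on the proceeds)
        target_units = equity / (maintenance * price)
        sold = units - target_units
        if sold < 1e-6:
            break                           # further forced sales are negligible
        units = target_units
        debt -= sold * price                # sale proceeds repay debt
        price *= 1 - impact * sold          # price impact of the forced sale
        path.append(price)
    return path

path = fire_sale_path(price=100.0, units=10.0, debt=850.0)
print(f"price after spiral: {path[-1]:.2f}, forced-sale rounds: {len(path) - 1}")
```

With these numbers the spiral is damped and the price settles well below the post-shock level; with a thinner equity cushion or a larger impact coefficient the same loop diverges, which is the “downward spiral” case.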


I am not going to debate the conditions necessary for this positive feedback process to hold, not because the case is beyond debate but because MtM is just one in a long list of positive feedback processes in our financial markets. Laura Kodres at the IMF has an excellent discussion of “destabilizing” hedge fund strategies here, which identifies some of the most common ones: margin calls on levered bets, stop-loss orders, dynamic hedging of short-gamma positions and even plain-vanilla momentum trading strategies.


The crucial assumption necessary for the downward spiral to hold is that the forces exerting negative feedback on this fall in asset prices are not strong enough to counter the positive feedback process. The relevant question from a system resilience perspective is why this is so. Why are there not enough investors with excess liquidity or banks with capital and liquidity reserves to buy up the “undervalued” assets and prevent collapse?  One answer which I discussed in my previous note is the role of extended periods of stability in reducing system resilience. The narrowing of the “Leijonhufvud Corridor” reduces the margin of error before positive feedback processes kick in. The most obvious example is reduction in collateral required to execute a leveraged bet. The period of stability also weeds out negative feedback strategies or forces them to adapt thereby reducing their influence on the market.


A healthy market is characterised not by the absence of positive feedback processes but by the presence of a balanced mix of positive and negative feedback processes. Eliminating every single one of the positive feedback processes above would mean eliminating a healthy chunk of the market. A better solution is to ensure the persistence of negative feedback processes.


Mark-to-Market Accounting as a Modest Mitigant to the Moral Hazard Problem


As I mentioned in a previous note, marking to a liquid market significantly reduces the attractiveness of severely negatively skewed bets for an agent. If the agent is evaluated on the basis of mark-to-market and not just the final payout, significant losses can be incurred well before the actual event of default on a super-senior bond.


The impact of true mark-to-market is best illustrated by the difference between Andrew Lo’s example of the Capital Decimation Partners and the super-senior tranches that were the source of losses in the current crisis. In Lo’s example, the agent sells out-of-the-money (OTM) options on an equity index of a very short tenor (less than three months). This means that there is significant time decay which mitigates the mark-to-market impact of a fall in the underlying; this rapid time decay makes the negatively skewed bet worthwhile for the hedge fund manager even though he is subject to constant mark-to-market. Loans and bonds, on the other hand, are of a much longer tenor. If they were liquidly traded, the mark-to-market swings would render the negative skew of the final payout irrelevant to an agent who is evaluated on the basis of mark-to-market rather than the final payout.
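The tenor effect can be made concrete with a rough Black-Scholes sketch. All parameters below are hypothetical (a 10%-out-of-the-money put, 20% volatility, zero rates): the same strike decays rapidly at a three-month tenor, cushioning the option seller’s mark-to-market, but barely at all at a ten-year tenor, where price swings dominate.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(spot, strike, vol, tenor, rate=0.0):
    """Black-Scholes price of a European put (no dividends)."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * tenor) \
         / (vol * math.sqrt(tenor))
    d2 = d1 - vol * math.sqrt(tenor)
    return strike * math.exp(-rate * tenor) * norm_cdf(-d2) - spot * norm_cdf(-d1)

# Hypothetical parameters: spot 100, 10%-OTM strike 90, 20% vol, zero rates.
for tenor in (0.25, 10.0):
    now = bs_put(100, 90, 0.20, tenor)
    one_month_on = bs_put(100, 90, 0.20, tenor - 1 / 12)  # spot unchanged
    decay = 1 - one_month_on / now
    print(f"tenor {tenor:>5}y: premium {now:6.2f}, one-month decay {decay:5.1%}")
```

The three-month put loses roughly half its value in a flat month, while the ten-year put loses well under one percent: the seller of short-tenor skew earns his premium visibly through the marks, while the seller of long-tenor skew is fully exposed to them.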


Many of the assets on bank balance sheets, however, are not subject to mark-to-market accounting, or are only marked to model on an irregular basis. This enables agents to invest in severely negatively skewed bets of long tenor, safe in the knowledge that the probability of an event of default in the first few years is extremely low. It’s worth noting that mark-to-model is almost as bad as not marking to market at all for such negatively skewed bets, especially if the model is calibrated to parameters drawn from recent historical data during the “stable” period.


On Whether Money Market Mutual Funds (MMMFs) should Mark to Market


The SEC recently announced a new set of money market reforms aimed at fixing the flaws highlighted by the Reserve Primary Fund’s “breaking the buck” in September 2008. However, it stopped short of requiring money market funds to post market NAVs that may fluctuate. One of the arguments against floating NAVs is that regulations forcing money market funds to hold “safe” assets make mark-to-market superfluous. In fact, exactly the opposite is true. It is essential that assets with severely negatively skewed payoffs such as AAA bonds are marked to market precisely so that agents such as money market fund managers are not tempted to take on uneconomic bets in an attempt to pick up pennies in front of the bulldozer.


The S&L Crisis: A Case Study on the Impact of Avoiding MtM


Martin Mayer’s excellent book on the S&L crisis has many examples of the damage that can be done by avoiding MtM accounting especially when the sector has a liquidity backstop via the implicit or explicit guarantee of the FDIC or the Fed. In his words, “As S&L accounting was done, winners could be sold at a profit that the owners could take home as dividends, while the losers could be buried in the portfolio “at historic cost,” the price that had been paid for them, even though they were now worth less, and sometimes much less.”
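Mayer’s “gains trading” mechanism is easy to make concrete with a toy example (the numbers are hypothetical): sell the winner and book the gain, keep the loser at historic cost, and reported income diverges sharply from economic reality.

```python
# Toy version of S&L "gains trading" under historic-cost accounting.
# Hypothetical numbers: two bonds bought at 100 each; one now worth 110,
# the other worth 70. Selling the winner books a +10 profit while the
# loser stays buried in the portfolio "at historic cost".

portfolio = [{"cost": 100, "market": 110}, {"cost": 100, "market": 70}]

# income under historic-cost rules: only realised gains on winners
reported_income = sum(b["market"] - b["cost"]
                      for b in portfolio if b["market"] > b["cost"])
# what mark-to-market would show: the change in total economic value
economic_change = sum(b["market"] - b["cost"] for b in portfolio)

print(f"reported income: +{reported_income}")
print(f"economic change: {economic_change}")
```

The thrift reports a profit of 10, available to pay out as dividends, while the portfolio has actually lost 20 of economic value.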


As Mayer notes, this accounting freedom meant that S&L managers were eager consumers of the myriad varieties of mortgage backed securities that Wall Street conjured up in the 80s in search of extra yield, immune from the requirement to mark these securities to market.


Wall Street’s Opposition to the Floating NAV Requirement for MMMFs


Some commentators, such as David Reilly and Felix Salmon, pointed out the hypocrisy of investment banks such as Goldman Sachs recommending to the SEC that money market funds not be required to mark to market while rigorously enforcing MtM on their own balance sheets. In fact, the above analysis of the S&L crisis shows why their objections are perfectly predictable. Investment banks prefer that their customers not have to mark to market: it increases the demand from agents at these customer firms for “safe”, highly rated assets that yield a little extra, i.e. the very structured products that Wall Street sells, safe in the knowledge that they are immune from MtM fluctuations.


Mark-to-Market and the OTC-Exchange Debate


Agents’ preference for avoiding marking to market also explains why apart from investment banks, even their clients may prefer to invest in illiquid, opaque OTC products rather than exchange-traded ones. Even if accounting allows one to mark a bond at par, it may be a lot harder to do so if the bond price were quoted in the daily newspaper!


Mark-to-Market and Excess Demand for “Safe” Assets


Many commentators have blamed the current crisis on an excess demand for “safe” assets (See for example Ricardo Caballero). However, a significant proportion of this demand may arise from agents who do not need to mark to market and is entirely avoidable. More widespread enforcement of mark to market should significantly decrease the demand from agents for severely negatively skewed bets i.e. “safe” assets.



Written by Ashwin Parameswaran

February 7th, 2010 at 1:14 pm

Knightian Uncertainty and the Resilience-Stability Trade-off

with 11 comments

This note examines the implications of adaptation by economic agents under Knightian uncertainty for the resilience of the macroeconomic system. It expands on themes I touched upon here and here. To summarise the key conclusions,

  • Under Knightian uncertainty, homo economicus is an irrelevant construct. The “optimal” course of action is one that restricts the choice of actions available and depends on a small set of simple rules and heuristics.
  • The choice of actions is restricted to those that are applicable in reasonably likely or recurrent situations. Actions applicable to rare situations are ignored. Therefore, it is entirely rational to take on severely negatively skewed bets.
  • By the same logic, economic agents find it harder to adapt to severe macroeconomic shocks as compared to mild shocks. This is the rationale for Axel Leijonhufvud’s “Corridor Hypothesis”.
  • Minsky’s Financial Instability Hypothesis states that prolonged periods of stability reduce the width of the “corridor” until the point where a macroeconomic crisis is inevitable.
  • The only assumptions needed to draw the above conclusions are the existence of uncertainty and sufficient adaptive/selective forces operating upon economic agents.
  • Minsky believed that this loss of resilience in the macroeconomic system is endogenous and inevitable. Although such a loss of resilience can arise endogenously, the evidence suggests that a significant proportion of the blame for the current crisis can be attributed to the stabilising policies favoured during the Great Moderation.
  • Buzz Holling’s work on ecosystem resilience has highlighted the peril of stabilising complex adaptive systems and how increased stability reduces system resilience.

Uncertainty and Negatively Skewed Payoffs

In a previous note, I explained how the existence of Knightian uncertainty leads to a perceived preference for severely negatively skewed payoffs. Ronald Heiner explains exactly how this occurs in his seminal paper on decision making under uncertainty.

Heiner argues that in the presence of uncertainty, the “optimal” course of action is one that restricts the choice of actions available and depends on a small set of simple rules and heuristics. In his words,

“Think of an omniscient agent with literally no uncertainty in identifying the most preferred action under any conceivable condition, regardless of the complexity of the environment which he encounters. Intuitively, such an agent would benefit from maximum flexibility to use all potential information or to adjust to all environmental conditions, no matter how rare or subtle those conditions might be. But what if there is uncertainty because agents are unable to decipher all of the complexity of the environment? Will allowing complete flexibility still benefit the agents?

I believe the general answer to this question is negative: that when genuine uncertainty exists, allowing greater flexibility to react to more information or administer a more complex repertoire of actions will not necessarily enhance an agent’s performance.”

In Heiner’s framework, actions chosen must satisfy a “Reliability Condition”, which he summarises as: “do so if the actual reliability in selecting the action exceeds the minimum required reliability necessary to improve performance.” This required reliability cannot be achieved in the tails of the distribution, and economic agents therefore ignore actions that are appropriate only in such situations. This explains our reluctance to insure against rare disasters, which Heiner notes:

“Rare events are precisely those which are remote to a person’s normal experience, so that uncertainty in detecting which rare disasters to insure against increases as p (the probability of disaster) approaches zero. Such greater uncertainty will reduce the reliability of insurance decisions as disasters become increasingly remote to a person’s normal experience.”

“At some point as p approaches zero, the Reliability Condition will be violated. This implies people will switch from typically buying to typically ignoring insurance conditions, which is just the pattern documented in Kunreuther’s 1978 study.”
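A stylised rendering of this logic (my own simplification, not Heiner’s exact notation) makes the violation at small p visible. Suppose buying disaster insurance is worthwhile only if the odds of correctly identifying a situation that needs it exceed a tolerance T = (l/g) · (1 − p)/p, where p is the disaster probability, g the gain from being insured when disaster strikes, and l the loss (wasted premium) otherwise:

```python
# Stylised version of Heiner's Reliability Condition (a simplification for
# illustration, not his exact notation). Buying insurance improves
# performance only if the agent's odds of correctly detecting the need
# for it exceed the tolerance below.

def required_odds(loss_to_gain, p):
    """Minimum reliability odds for 'buy insurance' to improve performance."""
    return loss_to_gain * (1 - p) / p

# hypothetical loss-to-gain ratio of 0.05 (cheap premium, large payout)
for p in (0.10, 0.01, 0.001, 0.0001):
    print(f"p = {p:<7} required odds > {required_odds(0.05, p):,.0f}")
```

The required odds grow without bound as p falls. Since any real agent’s detection reliability is bounded, and on Heiner’s account actually deteriorates as events become more remote from normal experience, the condition must fail below some p, and insurance against sufficiently rare disasters is rationally ignored.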

Note the similarity between Heiner’s analysis of tail risks under uncertainty and Kahneman and Tversky’s distinction between “possible” and “impossible” events. The reliability problem is also connected to the difficulty of ascertaining the properties of tail events through a statistical analysis of historical data.

In an uncertainty-driven framework, it may be more appropriate to refer to this pattern as a reluctance to insure against tail risks rather than a preference for “blowup risks”. This distinction is also relevant in the moral hazard debate where the actions are often characterised better as a neglect of insurance of tail risks than an explicit taking on of such risks.

Impossible Events and Axel Leijonhufvud’s “Corridor Hypothesis”

Heiner also extends this analysis of the reluctance to insure against “impossible” events to provide the rationale for Axel Leijonhufvud’s “Corridor Hypothesis” of macroeconomic shocks and recessions. In his words:

“Now suppose, analogous to the insurance case, that there are different types of shocks, some more severe than others, where larger shocks are possible but less and less likely to happen. In addition, the reliability of detecting when and how to prepare for large shocks decreases as their determinants and repercussions are more remote to agents’ normal experience.

In a similar manner to that discussed for the insurance case, we can derive that the economy’s structure will evolve so as to prepare for and react quickly to small shocks. However, outside of a certain zone or “corridor” around its long-run growth path, it will only very sluggishly react to sufficiently large, infrequent shocks.”

Minsky’s Financial Instability Hypothesis and Leijonhufvud’s Corridor

Minsky’s Financial Instability Hypothesis (FIH) asserts that stability breeds instability, i.e. stability reduces the width of the corridor to the point where even a small shock is enough to push the system outside it. Leijonhufvud acknowledged Minsky’s insight that the width of the corridor was variable and depended upon the recency of past disturbances. In his own words: “Our theory implies a variable width of the corridor. Transactors who have once suffered through a displacement of unanticipated magnitude (on the order of the Great Depression, say) will be encouraged to maintain larger buffers thereafter, until the memory dims…”

The assertion that stability breeds instability is well established in ecology, especially in Buzz Holling’s work, as I discussed here. Heiner’s framework explains Minsky’s assertion as the logical consequence of agent adaptation under uncertainty. But the same can also be explained via “natural selection”-like mechanisms: the most relevant is the principal-agent relationship, where principals that “select” agents under asymmetric information can effectively mimic the effect of natural selection in ecosystems.

Minsky also argues that sooner or later, a capitalist economy will move outside this corridor due to entirely endogenous reasons. This is a more controversial assertion and can only be evaluated through a careful analysis of the empirical evidence. The assertion that an economy can move outside the corridor due to endogenous factors is difficult to reject. All it takes is a chance prolonged period of stability. However, this does not imply that the economy must move outside the corridor, which requires us to prove that prolonged periods of stability are the norm rather than the exception in a capitalist economy.

Minsky’s Financial Instability Hypothesis and C.S. Holling’s conception of Resilience and Stability

Minsky’s idea that stability breeds instability is an important theme in the field of ecology. Buzz Holling however defined the problem as loss of resilience rather than instability. Resilience and stability are dramatically different concepts and Holling explained the difference in his seminal paper on the topic as follows:

“Resilience determines the persistence of relationships within a system and is a measure of the ability of these systems to absorb changes of state variables, driving variables, and parameters, and still persist. In this definition resilience is the property of the system and persistence or probability of extinction is the result. Stability, on the other hand, is the ability of a system to return to an equilibrium state after a temporary disturbance. The more rapidly it returns, and with the least fluctuation, the more stable it is. In this definition stability is the property of the system and the degree of fluctuation around specific states the result.”

The relevant insight in Holling’s work is that resilience and stability as goals for an ecosystem are frequently at odds with each other. In many ecosystems, “the very fact of low stability seems to produce high resilience“. Conversely, “the goal of producing a maximum sustained yield may result in a more stable system of reduced resilience”. Minsky’s hypothesis is thus better described as “stability breeds loss of resilience”, not “stability breeds instability”.

The Pathology of Macroeconomic Stabilisation

The “Pathology of Natural Resource Management” is described by Holling and Meffe as follows:

“when the range of natural variation in a system is reduced, the system loses resilience. That is, a system in which natural levels of variation have been reduced through command-and-control activities will be less resilient than an unaltered system when subsequently faced with external perturbations.”

Similarly, the dominant macroeconomic policy paradigm explicitly aims to stabilise the macroeconomy. In particular, monetary policy during the Great Moderation was used as a blunt instrument to put out all but the most minor macroeconomic fires. Stabilising policies of this nature can and do cause the same kind of loss of resilience that Minsky describes. Indeed, as I mentioned in my previous note, agent adaptation to stabilising monetary and fiscal policies can be viewed as a more profound kind of moral hazard. Economic agents may take on severely negatively skewed bets not even as an adaptation to uncertainty but merely as a rational response to stabilising macroeconomic policies.


Written by Ashwin Parameswaran

January 30th, 2010 at 2:08 pm

Do Investors Prefer Negative Skewness?

with 10 comments

Bootvis asks in a comment on my previous post:

“Financial theory says that rational investors should prefer positive skewness. This is proven under some weak assumptions in “On The Direction of Preference for Moments of Higher Order Than The Variance” by Scott and Horvath (1980) (I can only find it on JSTOR, behind a wall). What’s your view on this discrepancy?”

I have not read the above paper and do not have access to JSTOR either. So the below response is just my broad view on the topic.

Agents prefer Negative Skewness

My emphasis so far has been on the preference for maximising negative skewness from an agent’s perspective in a principal-agent relationship. This preference is exacerbated by the moral hazard subsidy. I conclude that the combination of the moral hazard subsidy and the principal-agent problem allows agents to simultaneously maximise negative skewness and improve the risk-return trade-off for owners by increasing leverage.

Whether investors who are not agents would prefer negative skewness is a trickier question. Taleb in this paper clearly concludes that investors prefer negatively skewed bets. But as Bootvis mentions, this contradicts the consensus opinion of financial theory that investors prefer positive skewness. An obvious example of the preference for positive skewness is the phenomenon of “longshot bias” or the popularity of lotteries.

Kahneman-Tversky on Longshots and Black Swans

Kahneman and Tversky offer one way to reconcile these two viewpoints in this paper where they argue that “impossible” events, i.e. black swans, are neglected whereas “possible” but low probability events, i.e. longshots, are overweighted. Preference for negative skewness is not operative for mildly skewed payoffs. It is operative for severely skewed payoffs. As expressed by Kahneman and Tversky: “A change from impossibility to possibility or from possibility to certainty has a bigger impact than a comparable change in the middle of the scale.” In other words, there is a “category-boundary effect” when an event deemed impossible becomes possible. The event is significantly underweighted when deemed impossible and overweighted when it is suddenly deemed possible i.e. the lottery effect only kicks in when the event is deemed possible.
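The overweighting of “possible” longshots can be seen in the probability weighting function from Tversky and Kahneman’s 1992 cumulative prospect theory, w(p) = p^γ / (p^γ + (1 − p)^γ)^(1/γ), with γ around 0.61. Note what the smooth function deliberately leaves out: it says nothing about the p = 0 boundary itself, where an event coded as “impossible” is simply ignored, which is precisely the category-boundary effect discussed above.

```python
# Tversky-Kahneman (1992) probability weighting function. Small "possible"
# probabilities are overweighted, moderate ones underweighted. The gamma
# value of 0.61 is their estimate for gains.

def weight(p, gamma=0.61):
    """Decision weight attached to a stated probability p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.001, 0.01, 0.1, 0.5, 0.9):
    tag = "over" if weight(p) > p else "under"
    print(f"p = {p:<6} w(p) = {weight(p):.3f}  ({tag}weighted)")
```

A 1% probability receives a decision weight of roughly 5.5%, the lottery effect, while a 50% probability is discounted to about 42%.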

This phenomenon also explains the violence of market reaction and the dramatic move in market prices around this boundary. In fact, it can be argued that the change in market prices itself can cause a move in investor views across this category-boundary in a positive feedback process. For example, if market prices suggest that a tail risk is not improbable, this alone may incentivise economic actors to purchase insurance against the event.

Any behavioural explanation that invokes Kahneman and Tversky does not apply to “rational” investors as defined in modern financial theory. For example, the underweighting of tail events can be explained as a result of investors utilising the “Availability Heuristic” and inducing the probability distribution from past experience. As Andrew Haldane notes: “The longer the period since an event occurred, the lower the subjective probability attached to it by agents (the so-called “availability heuristic”). And below a certain bound, this subjective probability will effectively be set at zero (the “threshold heuristic”).”

Is a Preference for Severe Negative Skewness Irrational?

I would argue that using such heuristics may even be rational when not judged against the unrealistic standards of homo economicus. Inducing probabilities from past experience may be entirely “rational” given bounded rationality and an uncertain environment. As WB Arthur puts it: “Agents “learn” which of their hypotheses work, and from time to time they may discard poorly performing hypotheses and generate new “ideas” to put in their place. A belief model is clung to not because it is “correct”—there is no way to know this—but rather because it has worked in the past, and must cumulate a record of failure before it is worth discarding.”

It can be extremely difficult to ascertain the true distribution of an extremely negatively skewed bet from historical data. A long run without an observed loss makes us less confident about any initial negative thesis. This is also the primary explanation for why we prefer longshots in horse races or play the lottery: both are fundamentally less uncertain than financial markets. At least we know the full set of outcomes that are possible in a horse race! Real-life markets are nothing like betting markets. They are dominated by true uncertainty, and practitioners derive shaky conclusions from historical data and experience. Statistically, it can be extremely difficult to differentiate between alpha and extreme negative skew.
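A toy simulation makes the point (all numbers are hypothetical): a bet that gains 1 in 99 months out of 100 and loses 99 in the remaining month has exactly zero expected value, yet the majority of five-year monthly track records of such a strategy show nothing but steady gains.

```python
import random

# Illustrative only: a zero-edge, severely negatively skewed bet
# (win 1 with probability 0.99, lose 99 otherwise), examined over
# five-year (60-month) track records.
random.seed(7)

def five_year_record():
    """One simulated 60-month monthly P&L history."""
    return [1 if random.random() < 0.99 else -99 for _ in range(60)]

TRIALS = 10_000
clean = sum(1 for _ in range(TRIALS) if min(five_year_record()) > 0)
print(f"{clean / TRIALS:.1%} of 5-year records show a gain every single month")
# analytically, 0.99**60 (about 55%) of records contain no loss at all
```

An allocator screening managers on five-year histories would rate most runners of this strategy as consistent alpha generators, despite the strategy having no edge whatsoever.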

A More Profound “Moral Hazard”

Severely negatively skewed bets usually blow up under conditions of severe distress in the economy when the government is likely to intervene strongly to prevent systemic collapse. As David Merkel mentions in this note, the Great Moderation has been characterised by a Fed that is willing to cut interest rates at the smallest hint of trouble, even in situations where systemic risk was far from severe.

The current “no more Lehmans” policy in practice means that the Fed and the Treasury will do anything to prevent negative tail scenarios. In the face of such an explicit insurance policy, selling tail events may be entirely rational.

Negative Skewness and Fixed Income Markets

Taleb essentially denies that even longshots are overpriced in financial markets. I am not convinced that moderate negative skewness is “preferred” at all; most of the empirical evidence he presents pertains to severely skewed payoffs. But one point he raises in a reply to Tyler Cowen’s review deserves more analysis: the vast majority of blowups that Taleb recounts are in the fixed income markets.

Indeed, I think the preference for negative skewness is most relevant in fixed income markets. First, the original fixed income instrument, the bond, has an extremely negatively skewed payoff by construction, as does the original “alpha” strategy, the carry trade. Second, fixed income markets are dominated to a much larger extent by banks and other agents who are compromised by the moral hazard and/or principal-agent problem. Third, structured product markets in fixed income are dominated by ever-new methods of constructing negatively skewed payoffs: to give a few examples, callable range accruals in interest rate products, the PRDC in currency products, and almost any credit structured product that aims to achieve a AAA rating, like the leveraged super-senior.
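The bond’s built-in skew is easy to verify with a quick moment calculation. The default probability, coupon and recovery below are hypothetical, chosen only to show the sign and size of the skew:

```python
# Skewness of a stylised one-period bond payoff: receive par plus a 5%
# coupon with probability 98%; recover 40% of par in default (2%).
# Numbers are hypothetical.

def moments(outcomes):
    """Mean, variance and skewness of a discrete (probability, payoff) list."""
    mean = sum(p * x for p, x in outcomes)
    var = sum(p * (x - mean) ** 2 for p, x in outcomes)
    third = sum(p * (x - mean) ** 3 for p, x in outcomes)
    return mean, var, third / var**1.5   # skew = third central moment / sd^3

bond = [(0.98, 105.0), (0.02, 40.0)]
mean, var, skew = moments(bond)
print(f"expected payoff {mean:.2f}, skewness {skew:.2f}")
```

The skewness comes out strongly negative (around −7 with these inputs): the holder collects a small, near-certain coupon and bears a rare, large default loss, which is the canonical negatively skewed payoff.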

This is not to deny the popularity of severely negatively skewed payoffs in equities (e.g. the reverse convertible note), but they are nowhere near as predominant.

Conclusion

The moral hazard subsidy, the principal-agent problem and investor “irrationality” each incentivise economic actors to take on severely negatively skewed bets. Assessing the relative contribution of each from historical market data is extremely difficult, given that there is no plausible way to separate the effects of the three causes. The problem is exacerbated by the difficulty of drawing any conclusions about tail events from a study of historical data. However, the concentration of historical blowups in fixed income markets leads me to suspect that the combination of moral hazard and the principal-agent problem had a more prominent role in fuelling the crisis than genuine “irrationality”.


Written by Ashwin Parameswaran

January 13th, 2010 at 5:17 pm