resilience, not stability


How to commit fraud and get away with it: A Guide for CEOs


Shorter Version

A strategy to maximise bonuses and avoid personal culpability:

  • Don’t commit the fraud yourself.
  • Minimise information received about the actions of your employees.
  • Control employees through automated, algorithmic systems based on plausible metrics like Value at Risk.
  • Pay high bonuses to employees linked to “stretch” revenue/profit targets.
  • Fire employees when targets are not met.
  • … Wait.

Longer Version

CEOs and senior managers of modern corporations possess the ability to engineer fraud on an organisational scale and capture the upside without running the risk of doing any jail time. In other words, they can reliably commit fraud and get away with it.

Imagine that you are the newly hired CEO of a large bank and by some improbable miracle your bank is squeaky clean and free of fraudulent practices. But you are unhappy about this. Your competitors are making more profits than you are by embracing fraud, and they are coming out ahead of you even after paying tens of billions of dollars in fines to the regulators. You want a piece of the action. But you’re a risk-averse person and don’t want to spend any time in jail for committing fraud. So how can you achieve this outcome?

Obviously you should not commit any fraudulent acts yourself. You want your junior managers to commit fraud in the pursuit of higher profits. One way to incentivise this behaviour is to adopt what are known as ‘high-powered incentives’. Pay your employees high bonuses tied to revenue/profits and maintain hard-to-meet ‘stretch’ targets. Fire ruthlessly if these targets are not met. And finally, ensure that you minimise the flow of information up to you about exactly how your employees meet these targets.

There is one problem with this approach. It allows you, the CEO, to use the “I knew nothing!” defense and claim ignorance about all the “deplorable” fraud taking place lower down the organisational food chain. But it may fall foul of another legal principle that has been tailored for such situations – the principle of ‘wilful blindness’: “if there is information that you could have known, and should have known, but somehow managed not to know, the law treats you as though you did know it”. In a recent essay, Judge Rakoff uses exactly this principle to criticise the failure of regulators in the United States to prosecute senior bankers.

But wait – all hope is not lost yet. There is one way by which you, the CEO, can argue that adequate controls and supervision were in place while at the same time making it easier for your employees to commit fraud. Simply perform the monitoring and control function through an automated system and restrict your role to signing off on the risk metrics that are the output of this automated system.

It is hard to explain how this can be done in the abstract so let me take a hypothetical example from the mortgage origination and securitisation industry. As a CEO of a mortgage originator in 2005, you are under a lot of pressure from your shareholders to increase subprime originations. You realise that the task would be a lot easier if your salespeople originated fraudulent loans where ineligible borrowers are given loans they can’t afford. You’ve followed all the steps laid out above but as discussed this is not enough. You may be accused of not having any controls in the organisation. Even if you try hard to ensure that no information regarding fraud filters through to you, you can never be certain. At the first sign of something unusual, a mortgage approval officer may raise an exception to his supervisor. Given that every person in the management hierarchy wants to cover his own back, how can you ensure that nothing filters up to you whilst at the same time providing a plausible argument that you aren’t wilfully blind?

The answer is somewhat counterintuitive – you should codify and automate the mortgage approval process. Have your salespeople input potential borrower details into a system that approves or rejects the loan application based on an algorithm without any human intervention. The algorithm does not have to be naive. In fact it would ideally be a complex algorithm, maybe even ‘learned from data’. Why so? Because the more complex the algorithm, the more opportunities it provides to the salespeople to ‘game’ and arbitrage the system in order to commit fraud. And the more complex the algorithm, the easier it is for you, the CEO, to argue that your control systems were adequate and that you cannot be accused of wilful blindness or even the ‘failure to supervise’.
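To make the gaming concrete, here is a toy sketch of such an automated approval rule. Every name, weight, and threshold below is my own invention, not any real originator’s system. A salesperson who knows the cutoff can nudge the stated income just far enough to clear it, while the CEO sees only that every booked loan passed the automated control:

```python
# Hypothetical score-based mortgage approval rule (all names and
# thresholds are illustrative, not from any real underwriting system).
def approval_score(stated_income, loan_amount, credit_score):
    """Toy underwriting score: higher is better."""
    dti = loan_amount / stated_income          # debt-to-income proxy
    return 0.6 * (credit_score / 850) + 0.4 * max(0.0, 1.0 - dti / 5.0)

CUTOFF = 0.55

def approve(stated_income, loan_amount, credit_score):
    return approval_score(stated_income, loan_amount, credit_score) >= CUTOFF

# An honest application that fails the automated check...
honest = approve(stated_income=40_000, loan_amount=300_000, credit_score=600)

# ...and the same borrower pushed past the cutoff by inflating stated
# income just enough. The control "worked" on every booked loan.
gamed = approve(stated_income=90_000, loan_amount=300_000, credit_score=600)
```

The more opaque the scoring function, the harder it is for an outsider to say where legitimate optimisation ends and gaming begins – which is precisely the point of the strategy.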

In complex domains, this argument is impossible to refute. No regulator/prosecutor is going to argue that you should have installed a more manual control system. And no regulator can argue that you, the CEO, should have micro-managed the mortgage approval process.

Let me take another example – the use of Value at Risk (VaR) as a risk measure for control purposes in banks. VaR is not ubiquitous because traders and CEOs are unaware of its flaws. It is ubiquitous because it allows senior managers to project the facade of effective supervision without taking on the trouble or the legal risks of actually monitoring what their traders are up to. It is sophisticated enough to protect against the charge of wilful blindness and it allows ample room for traders to load up on the tail risks that fund the senior managers’ bonuses during the good times. When the risk blows up, the senior manager can simply claim that he was deceived and fire the trader.
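A quick simulation makes the VaR point concrete (the probabilities and loss sizes are purely illustrative). A book that collects a small premium almost every day while harbouring a rare catastrophic loss can report a 95% VaR that is actually a gain, even though its true expected P&L is negative:

```python
import random

random.seed(0)

# Toy daily P&L of a book that is short tail risk: earn 1 with
# probability 0.99, lose 150 with probability 0.01 (illustrative).
n = 100_000
pnl = [1.0 if random.random() < 0.99 else -150.0 for _ in range(n)]

# 95% one-day VaR: the loss not exceeded on 95% of days.
pnl_sorted = sorted(pnl)
var_95 = -pnl_sorted[int(0.05 * n)]

# The metric reports a *gain* at the 95% level, yet the book loses
# money on average once the rare blow-up is priced in.
mean_pnl = sum(pnl) / n
```

The 1%-probability blow-up sits entirely beyond the 95% cutoff, so the metric a senior manager signs off on is silent about exactly the trade that funds the bonuses.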

What makes this strategy so easy to implement today compared to even a decade ago is the ubiquity of fully algorithmic control systems. When the control function is performed by genuine human domain experts, obvious gaming of the control mechanism is a lot harder to achieve. Let me take another example to illustrate this. One of the positions that lost UBS billions of dollars during the 2008 financial crisis was called ‘AMPS’, where billions of dollars in super-senior tranche bonds were hedged with a tiny sliver of equity tranche bonds so that the portfolio showed a zero VaR and a delta-neutral risk position. Even the most novice of controllers could have identified the catastrophic tail risk embedded in hedging a position where one can lose billions with another position where one can only gain millions.

There is nothing new in what I have laid out in this essay – for example, Kenneth Bamberger has made much the same point on the interaction between technology and regulatory compliance:

automated systems—systems that governed loan originations, measured institutional risk, prompted investment decisions, and calculated capital reserve levels—shielded irresponsible decisions, unreasonably risky speculation, and intentional manipulation, with a façade of regularity….
Invisibility by design allows the engineering of fraudulent outcomes without anyone being held responsible for them – the “I knew nothing!” defense. And of course, to the extent that the managers have also deceived themselves, the defense is genuinely true.

But although the automation that enables this risk-free fraud is a recent phenomenon, the principle behind this strategy is one that is familiar to managers throughout the modern era – “How do I get things done the way I want to without being held responsible for them?”.

Just as the algorithmic revolution is simply a continuation of the control revolution, the ‘accountability gap’ due to automation is simply an acceleration of trends that have been with us throughout the modern era. Theodore Porter has shown how the rise of objectivity and bureaucracy was as much driven by the desire to avoid responsibility as by the desire for superior results. Many features of the modern corporate world only make sense when we understand that one of their primary aims is the avoidance of responsibility and culpability. Why are external consulting firms so popular even when the CEO knows exactly what he wants to do? So that the CEO can avoid responsibility if the ‘strategic restructuring’ goes badly. Why do so many firms delegate their critical control processes to a hotchpotch of outsourced software contractors? So that they can blame any failures on external counterparties who have explicitly been granted exemption from any liability1.

Due to my experience in banking, my examples and illustrations are necessarily drawn from the world of finance. But it should be clear that nothing in what I’ve said is limited to banking. ‘Strategic ignorance’ is equally effective in many other domains. My arguments are also not a justification for not prosecuting bankers for fraud. They are an argument that CEOs of modern corporations can reap the benefits of fraud and get away with it. And they can do so very easily. Fraud is embedded within the very fabric of the modern economy.

Note: Venkat makes a similar point in his series on the ‘Gervais Principle’ on how sociopathic managers avoid responsibility for their actions. Much of what I have written above may make more sense if read in conjunction with his essay.

  1. Helen Nissenbaum makes this and many other relevant points in her paper about ‘accountability in a computerised society’.  ↩

Written by Ashwin Parameswaran

December 4th, 2013 at 4:19 pm

Derivatives: The Negative Skewness in Dynamic Hedging and the Moral Hazard Problem


In a recent article, John Kay discovered the temptations of negative skewness, even for non-bank investors. Although some may label this irrational or even a scam, seeking out negative skewness may be entirely rational in the presence of policies such as the Greenspan/Bernanke Put that seek to avoid tail outcomes at all costs. The product that John Kay describes is an equity reverse convertible bond with an auto-call feature and European barriers. A cursory internet search shows that, at least in Europe, these products are not uncommon and most are not dissimilar to the specific bond that he describes:

“If the FTSE index is higher in a year’s time than it is today, you receive a 10 per cent return and your money back (no doubt with an invitation to apply for a new kickout bond). If the FTSE has fallen, the bond runs for another year. If the index has then risen above its initial level, you receive your money back with a 20 per cent return. Otherwise the bond runs for another year. And so on. The race ends – sorry, the investment matures – after five years. If the FTSE index, having been below its initial level at the end of years one, two, three and four, now lies above it, then bingo! you get a 50 per cent bonus.

There is, of course, a catch. If you miss out on the five-year jackpot the manager will review whether or not the FTSE index ever closed at more than 50 per cent below its starting level. If it hasn’t, then you will get back your initial stake, without bonus or interest. If the index breached that 50 per cent barrier your capital will be scaled down, perhaps substantially.”

The distribution of returns of this bond is negatively skewed: In return for taking on a small probability of a significant loss (if equities fall by 50%), the investor is compensated via a highly probable but likely modest profit – it is probable that the investor only gets his principal back and the most probable profitable scenario is redemption in one year with a return of 10%.
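This skewness is easy to verify numerically. The sketch below simulates the kickout bond described above under a simple lognormal model of the index. The drift, volatility, and monthly path frequency are my own assumptions, and for simplicity the 50% barrier is checked on monthly closes rather than daily ones:

```python
import math, random, statistics

random.seed(1)

def simulate_path(months=60, mu=0.05, sigma=0.18):
    """Monthly closes of a lognormal index over five years (illustrative)."""
    level, path = 100.0, [100.0]
    for _ in range(months):
        level *= math.exp((mu - sigma ** 2 / 2) / 12
                          + sigma * math.sqrt(1 / 12) * random.gauss(0, 1))
        path.append(level)
    return path

def kickout_payoff(path):
    """Payoff per 100 invested, per the terms quoted above."""
    initial = path[0]
    for year in range(1, 6):
        if path[12 * year] > initial:
            return 100.0 * (1 + 0.10 * year)   # auto-call: 10% per year elapsed
    if min(path) > 0.5 * initial:              # 50% barrier never breached
        return 100.0                           # principal back, no interest
    return 100.0 * path[-1] / initial          # capital scaled down

payoffs = [kickout_payoff(simulate_path()) for _ in range(20_000)]

mean = statistics.fmean(payoffs)
m2 = statistics.fmean((p - mean) ** 2 for p in payoffs)
m3 = statistics.fmean((p - mean) ** 3 for p in payoffs)
skew = m3 / m2 ** 1.5    # negative: a long left tail of barrier breaches
```

Most paths cluster at the modest kickout coupons, while the rare barrier-breach scenarios drag the left tail far below par – exactly the negatively skewed shape described above.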

But if the investor takes on a negatively skewed payoff, doesn’t the bank by definition take on a positively skewed payoff? And does that not invalidate my entire thesis on moral hazard? No – In fact, structured products which provide negatively skewed payoffs to bank clients frequently allow banks to take on negative skewness. Banks do not simply hold the other side of the bond – they dynamically hedge the risk exposure of the bond and it is this dynamically hedged exposure that has a negatively skewed payoff.

Dynamic hedging differs from static hedging in that the hedges put in place need to be continuously rebalanced throughout the life of the transaction. Most banks restrict their hedging to first-order and second-order risks such as delta, gamma, vega etc and only rarely hedge higher order risks. How often this rebalancing needs to be done depends on the stability of the risks themselves (how stable the risks are with regards to movements in the market and movements in time) and the realised movements in the market itself. At the extremes, a product with stable risks in a stable market environment will require only infrequent rebalancing of the hedge and a product with unstable risks in an unstable market environment will need to be rebalanced often. In a world without transaction costs and slippage, none of this matters. But in the real world, increased slippage costs dramatically reduce the profitability of a dynamically hedged structured product when markets are unstable and/or the tenor of the product increases.

For many structured products such as the auto-call reverse convertible, the risk exposure of the dynamically hedged position is as follows: a high probability of a stable and/or short lifespan combined with a small probability of an extremely unstable and long lifespan. Typically, the bank would hedge the delta and vega of the bond, sometimes utilising out-of-the-money puts and calls to replicate the skew exposure. In most probable scenarios, the risk exposure of the dynamically hedged position is fairly stable. If the market simply goes up and stays there, the bond redeems with a 10% return after one year and the hedge would have to be rebalanced only a few times in a smooth manner. If the market simply goes down significantly, the risk exposure simplifies into one resembling a put option owned by the bank. But what if the market goes down a little bit and stays there? Or even worse, what if the market goes down dramatically and then reverses course in an equally swift manner but stops short of the redemption level? It is not difficult to visualise that in some scenarios, the losses due to slippage can quite easily swamp the profits and fees made at inception.

The losses are exacerbated because it is precisely in these unstable market conditions, when hedges need to be rebalanced frequently, that transaction costs and slippage spiral out of control – the bank then faces the choice of running the risk of an unhedged position or locking in a certain and significant loss. Although many traders would argue that remaining unhedged is the more profitable strategy (sometimes correctly), senior managers almost always choose the option of locking in a known loss even if it wipes out the past profits of the business. Moreover, the oligopolistic nature of the market and the homogeneous “same-way” exposure of the banks imply that all market participants will need to hedge at the same time in the same manner. The execution of such hedging may itself affect the fragile fundamentals of the related market in a reflexive feedback loop.

The simplistic argument against TBTF banks owning a derivatives business is as follows: bankers accumulate large positions of a long tenor yet get paid bonuses based on annual performance. If the positions accumulated in this manner blow up afterwards, the bank – and often the taxpayer – is left holding the can. Banks counter this argument by pointing out that these positions are typically hedged. In a world of static hedging, this may be an acceptable argument. But in a derivative book that needs to be dynamically hedged, the argument falls apart. Most existing books of dynamically hedged derivative positions are a negative-NPV asset if the likely slippage in future market disruptions is incorporated into their valuation, especially if these slippages are computed over the “real” distribution rather than a “normal” one. Warren Buffett found this out the hard way when Berkshire Hathaway lost $400 million in the process of unwinding General Re’s derivatives book, even though the unwind was executed in the benign market conditions of 2004–2005 and General Re was only a minor player in the derivatives market.

Even in a calm market environment, most long-tenor dynamically hedged positions are marked significantly above their true NPV net of expected future slippage. In the good times, this dynamic is hidden by the profits that flow in from new business. But sooner or later, the negative dynamics of the book (the “stock”) overwhelm the profits on the new business (the “flow”) especially as the flow of new deals dries up. And when the bank in question is too big to fail, it is not the stockholder or the retail investor but the taxpayer who will ultimately foot the bill.


Written by Ashwin Parameswaran

November 18th, 2010 at 1:17 pm

Notes on the Evolutionary Approach to the Moral Hazard Explanation of the Financial Crisis


In arguing the case for the moral hazard explanation of the financial crisis, I have frequently utilised evolutionary metaphors. This approach is not without controversy and this post is a partial justification as well as an explication of the conditions under which such an approach is valid. In particular, the simple story of selective forces maximising the moral hazard subsidy that I have outlined is dependent upon the specific circumstances and facts of our current financial system.

The “Natural Selection” Analogy

One point of dispute is whether selective forces are relevant in economic systems. The argument against selection usually invokes the possibility of firms or investors surviving for long periods of time despite losses, i.e. bankruptcy is not strong enough as a selective force. My arguments rely not on firm survival as the selective force but on the principal-agent relationship between investors and asset managers, between shareholders and CEOs etc. Selection kicks in much before the point of bankruptcy in the modern economy. In this respect, it is relevant to note the increased prevalence of shareholder activism in the last 25 years, which has strengthened this argument. Moreover, the natural selection argument serves as a more robust justification for the moral hazard story: it does not depend upon explicit agent intentionality but is nevertheless strengthened by it.

The “Optimisation” Analogy

The argument that selective forces lead to optimisation is of course an old argument, most famously put by Milton Friedman and Armen Alchian. However, evolutionary economic processes only lead to optimisation if some key assumptions are satisfied. A brief summary of the key conditions under which an evolutionary process equates to neoclassical outcomes can be found on pages 26-27 of this paper by Nelson and Winter. Below is a partial analysis of these conditions with some examples relevant to the current crisis.


Genetic diversity is the raw material upon which Darwinian natural selection operates. Similarly, to achieve anything close to an “optimal” outcome, the strategies available to be chosen by economic agents must be sufficiently diverse. The “natural selection” explanation of the moral hazard problem which I had elaborated upon in my previous post, therefore depends upon the toolset of banks’ strategies being sufficiently varied. The toolset available to banks to exploit the moral hazard subsidy is primarily determined by two factors: technology/innovation and regulation. The development of new financial products via securitisation, tranching and most importantly synthetic issuances with a CDS rather than a bond as an underlying which I discussed here, has significantly expanded this toolset.


The story of one optimal strategy outcompeting all others is also dependent on environmental conditions being stable. Quoting from Nelson and Winter: “If the analysis concerns a hypothetical static economy, where the underlying economic problem is standing still, it is reasonable to ask whether the dynamics of an evolutionary selection process can solve it in the long run. But if the economy is undergoing continuing exogenous change, and particularly if it is changing in unanticipated ways, then there really is no “long run” in a substantive sense. Rather, the selection process is always in a transient phase, groping toward its temporary target. In that case, we should expect to find firm behavior always maladapted to its current environment and in characteristic ways—for example, out of date because of learning and adjustment lags, or “unstable” because of ongoing experimentation and trial-and-error learning.”

This follows logically from the ‘Law of Competitive Exclusion’. In an environment free of disturbances, the diversity of competing strategies must reduce dramatically as the optimal strategy outcompetes all others. In fact, disturbances are a key reason why competitive exclusion is rarely observed in ecosystems. When Evelyn Hutchinson examined the ‘Paradox of the Plankton’, one of the explanations he offered was the “permanent failure to achieve equilibrium”. Indeed, one of the most accepted explanations of the paradox is the ‘Intermediate Disturbance Hypothesis’, which concludes that ecosystem diversity may be low when the environment is free of disturbances.

Stability here is defined as “stability with respect to the criteria of selection”. In the principal-agent selective process, the analogous criteria to Darwinian “fitness” is profitability. Nelson and Winter’s objection is absolutely relevant when the strategy that maximises profitability is a moving target and there is significant uncertainty regarding the exact contours of this strategy. On the other hand, the kind of strategies that maximise profitability in a bank have not changed for a while, in no small part because of the size of the moral hazard free lunch available. A CEO who wants to maximise Return on Equity for his shareholders would maximise balance sheet leverage, as I explained in my first post. The stability of the parameters of the strategy that would maximise the moral hazard subsidy and accordingly profitability, ensures that this strategy outcompetes all others.
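The leverage arithmetic behind that stable, subsidy-maximising strategy can be stated in a few lines. With a creditor guarantee holding the cost of debt fixed, ROE grows linearly in balance-sheet leverage whenever assets yield more than the funding rate (the rates below are illustrative, not estimates):

```python
def roe(asset_return, leverage, cost_of_debt):
    """Return on equity for a balance sheet holding `leverage` of assets
    per unit of equity, funded with `leverage - 1` of guaranteed debt."""
    return asset_return * leverage - cost_of_debt * (leverage - 1)

# A 1% asset spread over a 4% guaranteed funding rate (illustrative):
low = roe(asset_return=0.05, leverage=5, cost_of_debt=0.04)    # 9% ROE
high = roe(asset_return=0.05, leverage=25, cost_of_debt=0.04)  # 29% ROE
```

Because the guarantee stops the cost of debt from rising with leverage, nothing in the formula penalises levering up – the discipline has to come from somewhere outside the shareholders’ own profit calculation.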


Written by Ashwin Parameswaran

March 13th, 2010 at 5:22 am

Stability and Macro-Stabilisation as a Profound Form of the Moral Hazard Problem


I have argued previously that the moral hazard explanation of the crisis fits the basic facts i.e. bank balance sheets were highly levered and invested in assets with severely negatively skewed payoffs. But this still leaves another objection to the moral hazard story unanswered – It was not only the banks with access to cheap leverage that were heavily invested in “safe” assets, but also asset managers, money market mutual funds and even ordinary investors. Why was this the case?

A partial explanation which I have discussed many times before relies on the preference of agents (in the principal-agent sense) for such bets. But this is an incomplete explanation. Apart from not being applicable to investors who are not agents, it neglects the principal’s option to walk away. A much better explanation that I mentioned here and here is the role of extended periods of stability in creating “moral hazard-like” outcomes. This is an altogether more profound and pervasive form of the moral hazard problem and lies at the heart of the Minsky-Holling thesis that stability breeds loss of resilience.

It is important to note that such an outcome can arise endogenously without any government intervention. Minsky argued that such an endogenous loss of resilience was inevitable but this is not obvious. As I noted here: “The assertion that an economy can move outside the corridor due to endogenous factors is difficult to reject. All it takes is a chance prolonged period of stability. However, this does not imply that the economy must move outside the corridor, which requires us to prove that prolonged periods of stability are the norm rather than the exception in a capitalist economy.”

But it can also arise as a result of macro-stabilising fiscal and monetary policies. Whether the current crisis was endogenous or not is essentially an empirical question. I have argued in previous posts that it was not and that the “Greenspan Put” monetary policy did as much damage as all the explicit bailouts did. The evidence behind such a view has been put forth well by David Merkel here and by Barry Ritholtz in his book or in this excellent episode of Econtalk.


Written by Ashwin Parameswaran

March 7th, 2010 at 10:07 am

Natural Selection, Self-Deception and the Moral Hazard Explanation of the Financial Crisis


Moral Hazard and Agent Intentionality

A common objection to the moral hazard explanation of the financial crisis is the following: bankers did not explicitly factor in the possibility of being bailed out. In fact, they genuinely believed that their firms could not possibly collapse under any circumstances. For example, Megan McArdle says: “I went to business school with these people, and talked to them when they were at the banks, and the operating assumption was not that they could always get the government to bail them out if something went wrong. The operating assumption was that they had gotten a whole lot smarter, and would not require a bailout.” And Jeffrey Friedman has this to say about the actions of Ralph Cioffi and Matthew Tannin, the managers of the Bear Stearns fund whose collapse was the canary in the coal mine for the crisis: “These are not the words, nor were Tannin and Cioffi’s actions the behavior, of people who had deliberately taken what they knew to be excessive risks. If Tannin and Cioffi were guilty of anything, it was the mistake of believing the triple-A ratings.”

This objection errs in assuming that the moral hazard problem requires an explicit intention on the part of economic agents to take on more risk and maximise the free lunch available courtesy of the taxpayer. The essential idea which I outlined at the end of this post is as follows: The current regime of explicit and implicit bank creditor protection and regulatory capital requirements means that a highly levered balance sheet invested in “safe” assets with severely negatively skewed payoffs is the optimal strategy to maximise the moral hazard free lunch. Reaching this optimum does not require explicit intentionality on the part of economic actors. The same may be achieved via a Hayekian spontaneous order of agents reacting to local incentives or even more generally through “natural selection”-like mechanisms.

Let us analyse the “natural selection” argument a little further. If we assume that there is a sufficient diversity of balance-sheet strategies being followed by various bank CEOs, those CEOs who follow the above-mentioned strategy of high leverage and assets with severely negatively skewed payoffs will be “selected” by their shareholders over other competing CEOs. As I have explained in more detail in this post, the cheap leverage afforded by the creditor guarantee means that this strategy can be levered up to achieve extremely high rates of return. Even better, the assets will most likely not suffer any loss in the extended stable period before a financial crisis. The principal, in this case the bank shareholder, will most likely mistake the returns to be genuine alpha rather than the severe blowup risk trade it truly represents. The same analysis applies to all levels of the principal-agent relationship in banks where an asymmetric information problem exists.
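The selective dynamic can be caricatured in a few lines of code. Assume two balance-sheet strategies with purely illustrative returns, and shareholders who each year fire below-median CEOs and replace them with imitators of the best recent performer. In a run of calm years the tail-risk strategy takes over the entire population, even though the crash-year return never enters the selection criterion:

```python
def annual_return(strategy, crashed):
    """Per-year return of each balance-sheet strategy (illustrative)."""
    if strategy == "prudent":
        return 0.08                      # steady, honestly priced
    return -0.60 if crashed else 0.15    # levered, negatively skewed book

# 20 banks, half of each type.
ceos = ["prudent"] * 10 + ["blowup"] * 10

for year in range(5):                    # five calm years before any crash
    perf = {c: annual_return(c, crashed=False) for c in set(ceos)}
    best = max(set(ceos), key=lambda c: perf[c])
    median = sorted(perf[c] for c in ceos)[len(ceos) // 2]
    # shareholders fire below-median CEOs and hire imitators of the best
    ceos = [c if perf[c] >= median else best for c in ceos]

share_blowup = ceos.count("blowup") / len(ceos)     # -> 1.0 before the bust
crash_return = annual_return("blowup", crashed=True)  # what selection ignored
```

No CEO in this toy world needs to intend anything: the shareholders’ perfectly sensible retention rule does all the work of maximising the subsidy.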

Self-Deception and Natural Selection

But this argument still leaves one empirical question unanswered – given that such a free lunch is on offer, why don’t we see more examples of active and intentional exploitation of the moral hazard subsidy? In other words, why do most bankers seem to be true believers like Tannin and Cioffi? To answer this question, we need to take the natural selection analogy a little further. In the evolutionary race between true believers and knowing deceivers, who wins? The work of Robert Trivers on the evolutionary biology of self-deception tells us that the true believer has a significant advantage in this contest.

Trivers’ work is well summarised by Ramachandran: “According to Trivers, there are many occasions when a person needs to deceive someone else. Unfortunately, it is difficult to do this convincingly since one usually gives the lie away through subtle cues, such as facial expressions and tone of voice. Trivers proposed, therefore, that maybe the best way to lie to others is to first lie to yourself. Self-deception, according to Trivers, may have evolved specifically for this purpose, i.e. you lie to yourself in order to enable you to more effectively deceive others.” Or as Conor Oberst put it more succinctly here: “I am the first one I deceive. If I can make myself believe, the rest is easy.” Trivers’ work is not as relevant for the true believers as it is for the knowing deceivers. It shows that active deception is an extremely hard task to pull off especially when attempted in competition with a true believer who is operating with the same strategy as the deceiver.

Between a CEO who is consciously trying to maximise the free lunch and a CEO who genuinely believes that a highly levered balance sheet of “safe” assets is the best strategy, who is likely to be more convincing to his shareholders and regulator? Bob Trivers’ work shows that it is the latter. Bankers who drink their own Kool-Aid are more likely to convince their bosses, shareholders or regulators that there is nothing to worry about. Given a sufficiently strong selective mechanism such as the principal-agent relationship, it is inevitable that such bankers would end up being the norm rather than the exception. The real deviation from the moral hazard explanation would be if it were any other way!

There is another question which although not necessary for the above analysis to hold is still intriguing: How and why do people transform into true believers? Of course we can assume a purely selective environment where a small population of true believers merely outcompete the rest. But we can do better. There is ample evidence from many fields of study that we tend to cling onto our beliefs even in the face of contradictory pieces of information. Only after the anomalous information crosses a significant threshold do we revise our beliefs. For a neurological explanation of this phenomenon, the aforementioned paper by V.S. Ramachandran analyses how and why patients with right hemisphere strokes vehemently deny their paralysis with the aid of numerous self-deceiving defence mechanisms.

Jeffrey Friedman’s analysis of how Cioffi and Tannin clung to their beliefs in the face of mounting evidence to the contrary until the “threshold” was cleared and they finally threw in the towel is a perfect example of this phenomenon. In Ramachandran’s words, “At any given moment in our waking lives, our brains are flooded with a bewildering variety of sensory inputs, all of which have to be incorporated into a coherent perspective based on what stored memories already tell us is true about ourselves and the world. In order to act, the brain must have some way of selecting from this superabundance of detail and ordering it into a consistent ‘belief system’, a story that makes sense of the available evidence. When something doesn’t quite fit the script, however, you very rarely tear up the entire story and start from scratch. What you do, instead, is to deny or confabulate in order to make the information fit the big picture. Far from being maladaptive, such everyday defense mechanisms keep the brain from being hounded into directionless indecision by the ‘combinational explosion’ of possible stories that might be written from the material available to the senses.” However, once a threshold is passed, the brain finds a way to revise the model completely. Ramachandran’s analysis also provides a neurological explanation for Thomas Kuhn’s phases of science where the “normal” period is overturned once anomalies accumulate beyond a threshold. It also provides further backing for the thesis that we follow simple rules and heuristics in the face of significant uncertainty which I discussed here.

Fix The System, Don’t Blame the Individuals

The “selection” argument provides the rationale for how the extraction of the moral hazard subsidy can be maximised despite the lack of any active deception on the part of economic agents. Therefore, as I have asserted before, we need to fix the system rather than blaming the individuals. This does not mean that we should not pursue those guilty of fraud. But merely pursuing instances of fraud without fixing the incentive system in place will get us nowhere.


Written by Ashwin Parameswaran

February 17th, 2010 at 10:30 am

Mark-to-Market Accounting and the Financial Crisis

with 5 comments

Mark-to-Market (MtM) Accounting is usually cast as a villain of the piece in most financial crises. This note aims to rebut this criticism from a “system resilience” perspective. It also expands on the role that MtM Accounting can play in mitigating agents’ preference for severely negatively skewed payoffs, a theme I touched upon briefly in an earlier note.

The “Downward Spiral” of Mark-to-Market Accounting

If there’s anything that can be predicted with certainty in a financial crisis, it is that sooner or later banks will plead to their regulators and/or FASB asking for relaxation of MtM accounting rules. The results are usually favourable. So in the S&L crisis, we got the infamous “Memorandum R-49” and in the current crisis, we got FAS 157-e.

The most credible argument for such a relaxation of MtM rules is the “downward spiral” theory. Opponents of MtM Accounting argue that it can trigger a downward spiral in asset prices in the midst of a liquidity crisis. As this IIF memorandum puts it: “often dramatic write-downs of sound assets required under the current implementation of fair-value accounting adversely affect market sentiment, in turn leading to further write-downs, margin calls and capital impacts in a downward spiral that may lead to large-scale fire-sales of assets, and destabilizing, pro-cyclical feedback effects. These damaging feedback effects worsen liquidity problems and contribute to the conversion of liquidity problems into solvency problems.” The initial fall in prices feeds upon itself in a “positive feedback” process.

I am not going to debate the conditions necessary for this positive feedback process to hold, not because the case is beyond debate but because MtM is just one in a long list of positive feedback processes in our financial markets. Laura Kodres at the IMF has an excellent discussion on “destabilizing” hedge fund strategies here which identifies some of the most common ones – margin calls on levered bets, stop-loss orders, dynamic hedging of short-gamma positions and even just plain vanilla momentum trading strategies.

The crucial assumption necessary for the downward spiral to hold is that the forces exerting negative feedback on this fall in asset prices are not strong enough to counter the positive feedback process. The relevant question from a system resilience perspective is why this is so. Why are there not enough investors with excess liquidity or banks with capital and liquidity reserves to buy up the “undervalued” assets and prevent collapse?  One answer which I discussed in my previous note is the role of extended periods of stability in reducing system resilience. The narrowing of the “Leijonhufvud Corridor” reduces the margin of error before positive feedback processes kick in. The most obvious example is reduction in collateral required to execute a leveraged bet. The period of stability also weeds out negative feedback strategies or forces them to adapt thereby reducing their influence on the market.

A healthy market is characterised not by the absence of positive feedback processes but by the presence of a balanced mix of positive and negative feedback processes. Eliminating every single one of the positive feedback processes above would mean eliminating a healthy chunk of the market. A better solution is to ensure the persistence of negative feedback processes.
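As a toy illustration of this balance, the sketch below simulates a levered fund that must sell assets to restore a leverage cap after a price shock, with “value buyers” absorbing part of each round’s sales as the negative feedback. The balance-sheet figures, impact coefficient and absorption capacity are all hypothetical, chosen only to make the mechanism visible:

```python
def fire_sale(shock, absorb_per_round=0.0, leverage_cap=10.0,
              impact=0.002, rounds=15):
    """Levered fund: 100 units of an asset at price 1.0, funded with 90 of
    debt. After an exogenous price shock, the fund sells just enough each
    round to restore its leverage cap; sales that value buyers do not
    absorb push the price down further. Returns the final price."""
    units, price, debt = 100.0, 1.0, 90.0
    price *= 1.0 - shock
    for _ in range(rounds):
        equity = units * price - debt
        if equity <= 0:
            break  # insolvent: nothing left to arrest the spiral
        if units * price / equity <= leverage_cap:
            break  # leverage constraint satisfied: forced selling stops
        # sell s units so that post-sale leverage equals the cap, i.e.
        # (units - s) * price * (cap - 1) = cap * (debt - s * price)
        s = (leverage_cap * debt - units * price * (leverage_cap - 1)) / price
        s = min(s, units)
        debt -= s * price        # proceeds pay down debt at the pre-impact price
        units -= s
        unabsorbed = max(s - absorb_per_round, 0.0)
        price *= 1.0 - impact * unabsorbed  # market impact of unabsorbed sales
    return price
```

With no absorbers, the forced sales feed on themselves and the price ratchets down round after round; a modest absorption capacity stops the loop almost immediately. The point is not the particular numbers but that the severity of the spiral depends on the strength of the surviving negative feedback.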

Mark-to-Market Accounting as a Modest Mitigant to the Moral Hazard Problem

As I mentioned in a previous note, marking to a liquid market significantly reduces the attractiveness of severely negatively skewed bets for an agent. If the agent is evaluated on the basis of mark-to-market and not just the final payout, significant losses can be incurred well before the actual event of default on a super-senior bond.

The impact of true mark-to-market is best illustrated by highlighting the difference between Andrew Lo’s example of the Capital Decimation Partners and the super-senior tranches that were the source of losses in the current crisis. In Andrew Lo’s example, the agent sells out-of-the-money (OTM) options on an equity index of a very short tenor (less than three months). This means that there is significant time decay, which mitigates the mark-to-market impact of a fall in the underlying. This rapid time decay due to the short tenor of the bet makes the negatively skewed bet worthwhile for the hedge fund manager even though he is subject to constant mark-to-market. Loans and bonds, on the other hand, are of much longer tenor; if they were liquidly traded, the mark-to-market swings would make the negative skew of the final payout irrelevant to the agent, who would be evaluated on the basis of the mark-to-market and not the final payout.

Many of the assets on bank balance sheets, however, are not subject to mark-to-market accounting, or are only subject to mark-to-model on an irregular basis. This enables agents to invest in severely negatively skewed bets of long tenor, safe in the knowledge that the probability of an event of default in the first few years is extremely low. It’s worth noting that mark-to-model is almost as bad as not marking to market at all for such negatively skewed bets, especially if the model is based on parameters drawn from historical data during the recent “stable” period.
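A back-of-the-envelope example of why this matters: the sketch below prices a hypothetical 10-year, 5%-coupon bond at par and then reprices it after a 300bp widening of its credit spread. Under historic-cost accounting the position still shows 100 and the coupon keeps accruing; under mark-to-market the agent wears a roughly 20-point loss long before any default. The bond terms and spread move are invented for illustration:

```python
def bond_price(coupon_rate, years, discount_rate, face=100.0):
    """Present value of an annual-pay bond discounted at a flat rate."""
    coupons = sum(coupon_rate * face / (1 + discount_rate) ** t
                  for t in range(1, years + 1))
    return coupons + face / (1 + discount_rate) ** years

at_issue = bond_price(0.05, 10, 0.05)  # priced at par (≈ 100)
widened = bond_price(0.05, 10, 0.08)   # spread widens by 300bp (≈ 79.9)
```

The 20-point swing is the information that mark-to-model with stale parameters suppresses: the negatively skewed bet looks serene right up until it doesn’t.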

On Whether Money Market Mutual Funds (MMMFs) should Mark to Market

The SEC recently announced a new set of money market reforms aimed at fixing the flaws highlighted by the Reserve Primary Fund’s “breaking the buck” in September 2008. However, it stopped short of requiring money market funds to post market NAVs that may fluctuate. One of the arguments for why floating NAVs are a bad idea is that regulations forcing money market funds to hold “safe” assets make mark-to-market superfluous. In fact, exactly the opposite is true. It is essential that assets with severely negatively skewed payoffs, such as AAA bonds, are marked to market precisely so that agents such as money market fund managers are not tempted to take on uneconomic bets in an attempt to pick up pennies in front of the bulldozer.

The S&L Crisis: A Case Study on the Impact of Avoiding MtM

Martin Mayer’s excellent book on the S&L crisis has many examples of the damage that can be done by avoiding MtM accounting especially when the sector has a liquidity backstop via the implicit or explicit guarantee of the FDIC or the Fed. In his words, “As S&L accounting was done, winners could be sold at a profit that the owners could take home as dividends, while the losers could be buried in the portfolio “at historic cost,” the price that had been paid for them, even though they were now worth less, and sometimes much less.”

As Mayer notes, this accounting freedom meant that S&L managers were eager consumers of the myriad varieties of mortgage backed securities that Wall Street conjured up in the 80s in search of extra yield, immune from the requirement to mark these securities to market.

Wall Street’s Opposition to the Floating NAV Requirement for MMMFs

Some commentators, such as David Reilly and Felix Salmon, pointed out the hypocrisy of investment banks such as Goldman Sachs recommending to the SEC that money market funds not be required to mark to market while rigorously enforcing MtM on their own balance sheets. In fact, the above analysis of the S&L crisis shows why their objections are perfectly predictable. Investment banks prefer that their customers not have to mark to market: it increases the demand from agents at these customer firms for “safe”, highly rated assets that yield a little extra, i.e. the very structured products that Wall Street sells, safe in the knowledge that the positions are immune from MtM fluctuations.

Mark-to-Market and the OTC-Exchange Debate

Agents’ preference for avoiding marking to market also explains why, apart from investment banks, even their clients may prefer to invest in illiquid, opaque OTC products rather than exchange-traded ones. Even if accounting allows one to mark a bond at par, it would be a lot harder to do so if the bond price were quoted in the daily newspaper!

Mark-to-Market and Excess Demand for “Safe” Assets

Many commentators have blamed the current crisis on an excess demand for “safe” assets (See for example Ricardo Caballero). However, a significant proportion of this demand may arise from agents who do not need to mark to market and is entirely avoidable. More widespread enforcement of mark to market should significantly decrease the demand from agents for severely negatively skewed bets i.e. “safe” assets.


Written by Ashwin Parameswaran

February 7th, 2010 at 1:14 pm

Implications of Moral Hazard in Banking

with 8 comments

In my previous post, I explained how a moral hazard outcome can arise even in the absence of explicit agent intentionality to take on more risk. This post focuses on the practical implications of the moral hazard problem in banking. Much of what follows is a restatement of arguments made in my first post that I felt needed to be highlighted. For references and empirical evidence, please refer to the earlier post.

Moral hazard can persist even if the bailout is uncertain. Even a small probability of a partial bailout will reduce the rate of return demanded by bank creditors, and this reduction constitutes an increase in firm value. The implication is that there is no partial solution to the moral hazard problem. There must be a credible and time-consistent commitment that under no circumstances will there be even a partial creditor bailout.
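To see the size of this effect, here is a minimal one-period calculation with made-up numbers (5% default probability, 60% loss given default, 3% risk-free rate). The yield a risk-neutral creditor demands falls materially as soon as a bailout becomes even a 50/50 proposition, and that reduction in funding cost is the subsidy:

```python
def required_yield(p_default, loss_given_default, p_bailout, risk_free=0.03):
    """One-period yield a risk-neutral creditor demands so that expected
    repayment matches the risk-free return. In default the creditor
    recovers (1 - loss_given_default) of the claim, unless a bailout
    (probability p_bailout) makes him whole. All parameters illustrative."""
    expected_fraction_repaid = (1 - p_default) + p_default * (
        p_bailout + (1 - p_bailout) * (1 - loss_given_default))
    return (1 + risk_free) / expected_fraction_repaid - 1

no_bailout = required_yield(0.05, 0.6, p_bailout=0.0)   # ≈ 6.2%
half_chance = required_yield(0.05, 0.6, p_bailout=0.5)  # ≈ 4.6%
subsidy = no_bailout - half_chance  # ≈ 1.6% p.a. shaved off funding costs
```

Note that the subsidy is positive for any p_bailout above zero, which is the sense in which even an uncertain, partial bailout expectation is enough to sustain the moral hazard.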

In a simple Modigliani-Miller world, the optimal leverage for a bank is therefore infinite. Even without invoking Modigliani-Miller, the argument for this is intuitive. If each incremental unit of debt issued is issued at less than its true economic cost, it adds to firm value. In reality of course, there are many limits to leverage, the most important being regulatory capital requirements.

Increased leverage and a riskier asset portfolio are not substitutable. Most moral hazard explanations of the crisis claim that the implicit/explicit creditor protection from deposit insurance and the TBTF doctrine causes banks to “take on more risk”, risk being defined as a combination of higher leverage and a riskier asset portfolio. The above arguments show that risk taken on via increased leverage is distinctly superior to the choice of a riskier asset portfolio: unlike increased leverage, riskier assets do not include any free-lunch component.

Regulatory capital requirements force banks to choose from a continuum of combinations, from low leverage with risky assets at one end to high leverage with “safe” assets at the other (this argument assumes that off-balance-sheet vehicles cannot fully remove the regulatory capital constraint). Given that high leverage maximises the moral hazard subsidy, banks are biased towards the high-leverage, “low-risk” end. The frequent divergence between market risk-reward and ratings-implied risk-reward of course means that riskier assets will still be invested in. But they need to clear a higher hurdle than AAA assets.

High-powered incentives encourage managers/traders to operate under high leverage. Bonuses and equity compensation help align the interests of the owner and the manager.

Risk from an agent’s perspective is defined by the skewness of asset returns as well as the volatility. Managers/Traders are motivated to minimise the probability of a negative outcome i.e. maximise negative skew. This tendency is exacerbated in the presence of high-powered incentives. Andrew Lo illustrated this in his example of the Capital Decimation Partners in the context of hedge funds (Hedge fund investors of course do not have an incentive to maximise leverage without limit).

The above is a short explanation of the consequences of moral hazard that explains the key facts of the crisis – high leverage combined with an apparently safe asset portfolio of AAA assets such as super-senior tranches of ABS CDOs. Contrary to conventional wisdom, a moral hazard outcome is characterised by negatively skewed bets, not volatile bets.

The dominance of negatively skewed bets means that it is extremely difficult to detect the outcome of moral hazard by statistical methods. As Nassim Taleb explains here, a large sample size is essential. If the analysis is limited to a “calm” period, the mean as well as the variance of the distribution will be significantly misestimated. Moreover, the problem is exacerbated if one has assumed a symmetric distribution as is often the case. The “low measured variance” is easily misunderstood as a refutation of the moral hazard outcome rather than the confirmation it really represents.
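A small calculation makes the point concrete. Take a hypothetical negatively skewed bet that earns 1 in 98% of periods and loses 100 in the remaining 2%. Its true mean is negative and its true standard deviation is large, but any sample drawn entirely from a “calm” stretch containing no blow-up shows a positive mean and zero variance:

```python
def moments(outcomes):
    """Exact mean and variance of a discrete payoff distribution,
    given as (payoff, probability) pairs."""
    mean = sum(x * p for x, p in outcomes)
    var = sum(p * (x - mean) ** 2 for x, p in outcomes)
    return mean, var

# Hypothetical negatively skewed bet: gain 1 with 98% probability,
# lose 100 with 2% probability.
full_mean, full_var = moments([(1.0, 0.98), (-100.0, 0.02)])
# What a "calm" sample containing no blow-up would report:
calm_mean, calm_var = moments([(1.0, 1.0)])
```

The full distribution has mean -1.02 and standard deviation of about 14, while the calm sample reports +1 with no variance at all: low measured variance is exactly what a moral hazard outcome looks like from inside the stable period.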


Written by Ashwin Parameswaran

January 6th, 2010 at 1:49 am

Complete Markets and the Principal-Agent Problem in Banking

without comments

In an earlier note, I discussed how monitoring and incentive contracts can alleviate the asymmetric information problem in the principal-agent relationship. Perfect monitoring, apart from being impossible in many cases, is also too expensive. As a result, most principals will monitor to the extent that the expense is justified by the reduced incentive mismatch. In most industries, this approach is good enough. The menu of choices available to an agent is usually narrow and the principal only needs to monitor for the most egregious instances of abuse.

In fact, this was the case in banking as well until the advent of derivatives. Goodhart’s Law by itself does not guarantee arbitrage by the agent – the agent also needs a sufficiently wide menu of choices that the principal cannot completely monitor or contract for.

As discussed in an earlier note, agents in banking have a strong incentive to enter into bets with negatively skewed payoffs. The limiting factor was always the supply of such financial instruments. For example, the supply of AAA corporate bonds has always been limited. Securitisation and tranching technology raised this limit substantially by using a diverse pool of lower-rated credits to produce a substantial senior AAA tranche. But the supply was still limited by the number of mortgages or bonds that were available.

The innovation that effectively removed any limit on the agent’s ability to arbitrage was the growth of the CDS market and the development of the synthetic CDO. As the UBS shareholder report notes:

“Key to the growth of the CDO structuring business was the development of the credit default swap (“CDS”) on ABS in June 2005 (when ISDA published its CDS on ABS credit definitions). This permitted simple referencing of ABS through a CDS. Prior to this, cash ABS had to be sourced for inclusion in the CDO Warehouse.”


Written by Ashwin Parameswaran

December 28th, 2009 at 9:18 am

Information Asymmetry and the Principal-Agent Problem

with 2 comments

Information asymmetry is often held up as the cause of many agency problems. The most famous such study is Akerlof’s “Market for Lemons”. Many recent studies have pinned the blame for aspects of the financial crisis on information asymmetry between various market participants. On the face of it, this view is hard to dispute. The principal-agent problem is pervasive in financial institutions and markets – between shareholders and CEOs, CEOs and traders, shareholders and bank creditors, and between banks and their clients.

Monitoring and Incentive Contracts

In most circumstances, market participants find ways to mitigate this principal-agent problem. In the case of simple tasks, monitoring by the principal may be enough. Unfortunately, many tasks are too complex to be monitored effectively by the principal. Comprehensive monitoring can also be too expensive.

Another option is to amend the contract between the principal and the agent so as to align their interests. Examples include second-hand car dealers offering warranties, bonds carrying covenants, and bank bonuses being paid in deferred equity rather than cash. This approach is not perfect either. There are limits to how well a contract can align interests, and agents will arbitrage imperfect contracts to maximise their own interests – again, Goodhart’s Law in operation. In fact, firms frequently discover that contracts that seem to align principal and agent interests have exactly the opposite effect. As a seminal paper in management theory puts it, there is a “folly of rewarding A, while hoping for B”. In the absence of a “perfectly aligned” contract, rewarding a close proxy (A) of the real objective (B) may make things worse.

At this point, it is worth noting that the imperfect nature of contracting and monitoring does not necessarily mean that the principal-agent relationship will break down. In many contracts, a small amount of moral hazard may not significantly reduce the economic benefit derived by the principal.

The Option To Walk Away

If the loss due to information asymmetry is too large despite all available contractual arrangements, then the principal always retains the option to walk away. This is of course Akerlof’s conclusion as well. In his example on health insurance for people over the age of 65, he notes that “no insurance sales may take place at any price.”

In the context of a repeated game, where the principal and agent transact regularly, the mere existence of the option to walk away can mitigate the principal-agent problem. For example, let’s replace the used-car seller in Akerlof’s analysis with a fruit seller. Even if a buyer has no knowledge of fruit quality, the seller will not sell him “lemons”, because the buyer can always walk away. The seller is incentivised to maximise profits over a series of sales rather than a single sale.
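The fruit seller’s calculus can be sketched in a few lines. In this toy repeated game (all numbers invented), a lemon costs the seller nothing to supply and earns the full price once, but the buyer then walks away for good; honest sales earn a smaller margin every period:

```python
def seller_profit(sell_lemons, periods=20, price=10.0, cost_good=6.0):
    """Repeated trade with a buyer who cannot judge quality before buying
    but walks away for good after one bad sale. Numbers are illustrative."""
    profit, buyer_present = 0.0, True
    for _ in range(periods):
        if not buyer_present:
            break
        if sell_lemons:
            profit += price          # a lemon costs nothing to supply...
            buyer_present = False    # ...but the buyer never returns
        else:
            profit += price - cost_good  # honest margin, repeated every period
    return profit
```

Over 20 periods the honest strategy earns 80 against the lemon strategy’s 10, which is why the option to walk away disciplines the seller without the buyer ever needing to learn to judge fruit.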

It is puzzling that many recent studies of the crisis neglect this option to walk away. For example:

  • Richard Squire analyses the asymmetric risk incentive facing shareholders in AIG and concludes that because it is not cost-effective for creditors to monitor shareholders, the problem persists. But if this is indeed the case, the optimal course of action for creditors is simply to walk away.
  • Arora, Barak et al conclude that banks that structure and sell CDOs can cherry-pick portfolios to include lemons in a manner undetectable by the buyer of the CDO.

I must stress that I am not disputing the arguments presented in either paper. But the information asymmetry problem cannot be the basis of a persistent, repeated principal-agent problem. Market mechanisms do not guarantee that no mistakes are made. However, they do ensure that repeated mistakes are unlikely. As the saying goes, “Fool me once, shame on you; fool me twice, shame on me.”

Indeed the break-down in the CDO market may simply be a case of the buyer having walked away. In the case of AIG, creditors have not walked away primarily due to the near-explicit guarantee accorded to them by the United States government.

Clients, for their part, have walked away from complex products in many cases. But what about bank shareholders who have suffered so much in the crisis? Why have they not walked away from the sector? The answer lies in the implicit/explicit guarantee provided to bank creditors, which is essentially a free lunch courtesy of the taxpayer. As I discussed in the conclusion to an earlier note:

“Principal-agent problems and conflicts between the interests of shareholders, managers and creditors are inherent in each organisation to some degree but usually, the stakeholders develop ways to mitigate such problems. If no such avenues for mitigation are feasible, they always retain the option to walk away from the relationship.

This dynamic changes significantly in the presence of a “free lunch” such as the one provided by creditor protection. In such a case, not walking away even after suffering losses is an entirely rational strategy. Each stakeholder has a positive probability of capturing part of the free lunch in the future even if he has not been able to do so in the past. In fact, shareholder optimism may well be proven correct if significant compensation restrictions are imposed on the entire industry and this increases the share of the “free lunch” flowing to them.”

The free-lunch subsidy of “Too Big to Fail” and deposit insurance blunts the incentive to walk away, not only for bank creditors but also for the parties to other principal-agent relationships within the industry. The problems of asymmetric information are thus allowed to persist at all levels of the industry for far longer than would otherwise be the case.


Written by Ashwin Parameswaran

December 28th, 2009 at 6:21 am

The Chicago Pit on Negatively Skewed Bets

without comments

The Chicago pit has a saying that captures exactly the perils of entering into a negatively skewed bet:

“Traders who sell volatility eat like chickens and shit like elephants.”

Taleb has shown that negatively skewed bets are tempting enough even when we are risking our own capital. The moral hazard problem and the resultant cheap leverage make the trade a no-brainer for a bank.


Written by Ashwin Parameswaran

December 16th, 2009 at 5:20 pm