Archive for the ‘Goodhart’s Law’ Category
A strategy to maximise bonuses and avoid personal culpability:
- Don’t commit the fraud yourself.
- Minimise information received about the actions of your employees.
- Control employees through automated, algorithmic systems based on plausible metrics like Value at Risk.
- Pay high bonuses to employees linked to “stretch” revenue/profit targets.
- Fire employees when targets are not met.
CEOs and senior managers of modern corporations possess the ability to engineer fraud on an organisational scale and capture the upside without running the risk of doing any jail time. In other words, they can reliably commit fraud and get away with it.
Imagine that you are the newly hired CEO of a large bank and by some improbable miracle your bank is squeaky clean and free of fraudulent practices. But you are unhappy about this. Your competitors are making more profits than you are by embracing fraud, and they are coming out ahead of you even after paying tens of billions of dollars in fines to the regulators. You want a piece of the action. But you are risk-averse and do not want to spend any time in jail for committing fraud. So how can you achieve this outcome?
Obviously you should not commit any fraudulent acts yourself. You want your junior managers to commit fraud in the pursuit of higher profits. One way to incentivise this behaviour is to adopt what are known as ‘high-powered incentives’. Pay your employees high bonuses tied to revenue/profits and maintain hard-to-meet ‘stretch’ targets. Fire ruthlessly if these targets are not met. And finally, ensure that you minimise the flow of information up to you about exactly how your employees meet these targets.
There is one problem with this approach. It allows you, as CEO, to use the “I knew nothing!” defense and claim ignorance of all the “deplorable” fraud taking place lower down the organisational food chain. But it may fall foul of another legal principle that has been tailored for exactly such situations – the principle of ‘wilful blindness’: “if there is information that you could have known, and should have known, but somehow managed not to know, the law treats you as though you did know it”. In a recent essay, Judge Rakoff uses exactly this principle to criticise the failure of regulators in the United States to prosecute senior bankers.
But wait – all hope is not lost yet. There is one way by which you, as CEO, can argue that adequate controls and supervision were in place while at the same time making it easier for your employees to commit fraud. Simply perform the monitoring and control function through an automated system and restrict your role to signing off on the risk metrics that this automated system outputs.
It is hard to explain how this can be done in the abstract so let me take a hypothetical example from the mortgage origination and securitisation industry. As a CEO of a mortgage originator in 2005, you are under a lot of pressure from your shareholders to increase subprime originations. You realise that the task would be a lot easier if your salespeople originated fraudulent loans where ineligible borrowers are given loans they can’t afford. You’ve followed all the steps laid out above but as discussed this is not enough. You may be accused of not having any controls in the organisation. Even if you try hard to ensure that no information regarding fraud filters through to you, you can never be certain. At the first sign of something unusual, a mortgage approval officer may raise an exception to his supervisor. Given that every person in the management hierarchy wants to cover his own back, how can you ensure that nothing filters up to you whilst at the same time providing a plausible argument that you aren’t wilfully blind?
The answer is somewhat counterintuitive – you should codify and automate the mortgage approval process. Have your salespeople input potential borrower details into a system that approves or rejects the loan application based on an algorithm without any human intervention. The algorithm does not have to be naive. In fact it would ideally be a complex algorithm, maybe even ‘learned from data’. Why so? Because the more complex the algorithm, the more opportunities it provides to the salespeople to ‘game’ and arbitrage the system in order to commit fraud. And the more complex the algorithm, the easier it is for you, the CEO, to argue that your control systems were adequate and that you cannot be accused of wilful blindness or even the ‘failure to supervise’.
In complex domains, this argument is impossible to refute. No regulator/prosecutor is going to argue that you should have installed a more manual control system. And no regulator can argue that you, the CEO, should have micro-managed the mortgage approval process.
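To make the gaming mechanism concrete, here is a deliberately toy sketch of the sort of automated approval rule described above. Every field name, weight and threshold here is invented purely for illustration – no real origination system is being described:

```python
# Toy sketch of an automated approval rule; all field names and
# thresholds are invented for illustration.

def approve(income, stated_debt, fico):
    """Approve the loan if a simple score clears a fixed threshold."""
    dti = stated_debt / income               # debt-to-income ratio
    score = 0.5 * (fico / 850) - 0.5 * dti   # higher FICO good, higher DTI bad
    return score >= 0.25

# An honestly stated application is rejected...
print(approve(income=40_000, stated_debt=22_000, fico=600))  # False

# ...so the salesperson shades the *stated* (unverified) debt figure
# downwards until the algorithm says yes. No human reviews the inputs,
# and the CEO's control report shows only loans approved by the model.
print(approve(income=40_000, stated_debt=6_000, fico=600))   # True
```

The more inputs and interactions the real algorithm has, the more such dials a motivated salesperson can turn – which is exactly why complexity serves the CEO's purpose.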
Let me take another example – the use of Value at Risk (VaR) as a risk measure for control purposes in banks. VaR is not ubiquitous because traders and CEOs are unaware of its flaws. It is ubiquitous because it allows senior managers to project the facade of effective supervision without taking on the trouble or the legal risks of actually monitoring what their traders are up to. It is sophisticated enough to protect against the charge of wilful blindness and it allows ample room for traders to load up on the tail risks that fund the senior managers’ bonuses during the good times. When the risk blows up, the senior manager can simply claim that he was deceived and fire the trader.
What makes this strategy so easy to implement today compared to even a decade ago is the ubiquity of fully algorithmic control systems. When the control function is performed by genuine human domain experts, obvious gaming of the control mechanism is a lot harder to achieve. Let me take another example to illustrate this. One of the positions that lost UBS billions of dollars during the 2008 financial crisis was called ‘AMPS’, in which billions of dollars of super-senior tranche bonds were hedged with a tiny sliver of equity tranche bonds so that the portfolio showed a zero VaR and a delta-neutral risk position. Even the most novice of controllers could have identified the catastrophic tail risk embedded in hedging a position where one could lose billions with another position where one could gain only millions.
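The VaR blind spot that AMPS-style positions exploit can be shown with a few lines of arithmetic. The numbers below are invented for illustration, but the shape of the payoff is the one described above: a small carry in almost all scenarios and a catastrophic loss in the rest.

```python
# Toy illustration (all numbers invented): a position that earns a small
# carry in 99.5% of scenarios and blows up in the remaining 0.5%
# reports a 99% VaR of zero.

def var_99(pnl_scenarios):
    """99% VaR: the loss at the 1st percentile of the P&L distribution."""
    ranked = sorted(pnl_scenarios)
    cutoff = ranked[int(0.01 * len(ranked))]
    return max(0.0, -cutoff)

# 1,000 equally likely scenarios: 995 small gains, 5 catastrophic losses.
scenarios = [1.0] * 995 + [-1000.0] * 5

print(var_99(scenarios))  # 0.0     -- the control metric sees no risk
print(min(scenarios))     # -1000.0 -- the tail it is blind to
```

Because the catastrophic scenarios sit beyond the 99th percentile, the metric certifies the book as riskless while the trader collects the carry – and the senior manager signs off on a clean risk report.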
There is nothing new in what I have laid out in this essay – for example, Kenneth Bamberger has made much the same point on the interaction between technology and regulatory compliance:
automated systems—systems that governed loan originations, measured institutional risk, prompted investment decisions, and calculated capital reserve levels—shielded irresponsible decisions, unreasonably risky speculation, and intentional manipulation, with a façade of regularity….
Invisibility by design allows the engineering of fraudulent outcomes without anyone being held responsible for them – the “I knew nothing!” defense. And of course, to the extent that senior managers are also self-deceived, the defense is literally true.
But although the automation that enables this risk-free fraud is a recent phenomenon, the principle behind this strategy is one that is familiar to managers throughout the modern era – “How do I get things done the way I want to without being held responsible for them?”.
Just as the algorithmic revolution is simply a continuation of the control revolution, the ‘accountability gap’ due to automation is simply an acceleration of trends that have been with us throughout the modern era. Theodore Porter has shown how the rise of objectivity and bureaucracy was as much driven by the desire to avoid responsibility as by the desire for superior results. Many features of the modern corporate world only make sense when we understand that one of their primary aims is the avoidance of responsibility and culpability. Why are external consulting firms so popular even when the CEO knows exactly what he wants to do? So that the CEO can avoid responsibility if the ‘strategic restructuring’ goes badly. Why do so many firms delegate their critical control processes to a hotchpotch of outsourced software contractors? So that they can blame any failures on external counterparties who have explicitly been granted exemption from any liability1.
Due to my experience in banking, my examples and illustrations are necessarily drawn from the world of finance. But it should be clear that nothing in what I’ve said is limited to banking – ‘strategic ignorance’ is equally effective in many other domains. Nor are my arguments a justification for not prosecuting bankers for fraud. They are an argument that CEOs of modern corporations can reap the benefits of fraud and get away with it – and that they can do so very easily. Fraud is embedded within the very fabric of the modern economy.
Note: Venkat makes a similar point in his series on the ‘Gervais Principle’ on how sociopathic managers avoid responsibility for their actions. Much of what I have written above may make more sense if read in conjunction with his essay.
Many economists want to turn back the clock on the American economic system to that of the 50s and 60s. This is understandable – the ‘Golden Age’ of the 50s and 60s was characterised by healthy productivity growth, significant real wage growth and financial stability. Similarly, many commentators see the banking system during that time as the ideal state. In this vein, Amar Bhide offers his solution for the chronic fragility of the financial system:
governments should fully guarantee all bank deposits — and impose much tighter restrictions on risk-taking by banks. Banks should be forced to shed activities like derivatives trading that regulators cannot easily examine…..Banks must therefore be restricted to those activities, like making traditional loans and simple hedging operations, that a regulator of average education and intelligence can monitor.
There are a couple of problems with his idea – for one, it may not be possible to effectively regulate bank risk-taking. On many previous occasions, I have asserted that regulations cannot restrain banks from extracting moral hazard rents from the guarantee provided by the state/central bank to bank creditors and depositors. The primary reason for this is the spread of financial innovation during the last fifty years, which has given banks an almost infinite variety of ways in which to construct an opaque and precisely tailored payoff that provides a steady stream of profits in good times in exchange for a catastrophic loss in bad times. As I have shown, the moral hazard trade is not a “riskier” trade but a combination of high leverage and a severely negatively skewed payoff with a catastrophic tail risk.
Minsky himself understood the essentially ephemeral nature of the financial system of the 50s from his work on the early stages of the process of financial innovation that allowed the financial system to unshackle itself from the effective control of the central bank and the regulator. As he observes:
The banking system came out of the war with a portfolio heavily weighted with government debt, and it was not until the 1960s that banks began to speculate actively with respect to their liabilities. It was a unique period in which finance mattered relatively little; at least, finance did not interpose its destabilizing ways……The apparent stability and robustness of the financial system of the 1950s and early 1960s can now be viewed as an accident of history, which was due to the financial residue of World War 2 following fast upon a great depression.
Amar Bhide’s idea essentially seeks to turn back the clock and forbid much of the innovation that has taken place in the last few decades. In particular, derivatives businesses would be forbidden for deposit-taking banks. This is a radical idea and a significant improvement on the status quo. But it is not enough to mitigate the moral hazard problem. To illustrate why this is the case, let me take an example of how, as a banker, I would construct such a payoff within a “narrow banking”-like mandate. Let us assume that banks can only take deposits and make loans to corporations and households. They cannot hedge their loans or engage in any activities related to financial market positions, even as market makers, and they cannot carry any off-balance-sheet exposures, commitments etc. Although this would seem to be a sufficiently narrow mandate to prevent rent extraction, it is not. Banks can simply lend to other firms that take on negatively skewed bets. You may counter that banks should only be allowed to lend to real economy firms. But do we expect regulators to audit not only the banks under their watch but also the firms to whom they lend money? In the first post on this blog, I outlined how the synthetic super-senior CDO tranche was the quintessential rent-extraction product of the derivatives revolution. But at its core, the super-senior tranche is simply a severely negatively skewed bond – a product that pays a small positive spread in good times and loses you all your money in bad times. There is no shortage of ways in which such a negatively skewed payoff can be constructed with simple structured bank loans.
What the synthetic OTC derivatives revolution made possible was for the banking system to structure such payoffs in an essentially infinite amount without even going through the trouble of making new loans or mortgages – all that was needed was a derivatives counterparty. Without derivatives, banks would have to lend money to generate such a payoff – this only makes it a little harder to extract rents but it still does not change the essence of the problem. Even more crucially, the potential for such rent extraction is unlimited compared to other avenues for extracting rent. If the state pays a higher price for an agricultural crop compared to the market, at least the losses suffered by the taxpayer are limited by physical constraints such as arable land available. But when the rent extraction opportunity goes hand in hand with the very process that creates credit and broad money, the potential for rent extraction is virtually unlimited.
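The arithmetic of the moral hazard trade can be made concrete with a toy calculation. All probabilities and payoffs below are invented, but the structure is the one described above: a severely negatively skewed position that is a losing bet on its own merits becomes a winning one once a creditor guarantee caps the downside at the bank's equity.

```python
# Toy calculation (all numbers invented): why the negatively skewed
# trade is the moral hazard trade. The bank earns a small spread in
# good states and suffers a catastrophic loss in the bad state -- but
# with a creditor guarantee, losses beyond equity fall on the state.

p_bad = 0.02        # probability of the bad state in a given year
spread = 1.0        # carry earned in a good year
tail_loss = 100.0   # loss in the bad state
equity = 5.0        # the most the bank's shareholders can lose

unguaranteed = (1 - p_bad) * spread - p_bad * tail_loss
guaranteed = (1 - p_bad) * spread - p_bad * min(tail_loss, equity)

print(round(unguaranteed, 2))  # -1.02: negative NPV without the guarantee
print(round(guaranteed, 2))    #  0.88: positive NPV once losses are capped
```

The trade only makes sense because of the guarantee – which is why, as argued above, the rent extraction cannot be cured without removing the guarantee itself.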
Even if we assume that rent extraction can be controlled by more stringent regulations, there remains one problem. There is simply no way that incumbent large banks, especially those with large OTC derivatives franchises, can shed their derivatives businesses and still remain solvent. The best indication of how hard it is to unwind complex derivatives portfolios is the experience of Warren Buffett in unwinding the derivatives portfolio he inherited with the General Re acquisition. As Buffett notes, unwinding the portfolio of a relatively minor player in the derivatives market, under benign market conditions and with no internal financial pressure, took years and cost him $404 million. Asking any of the large banks, let alone all of them at once, to do the same in the current fragile market conditions would comfortably bankrupt the entire banking sector. The modern TBTF bank with its huge OTC derivatives business is akin to a suicide bomber with his finger on the button, holding us hostage – this is why regulators handle these banks with kid gloves.
In other words, even if our dream of limited and safe banking is viable we have a ‘can’t get there from here’ problem. This does not mean that there are no viable solutions but we need to be more creative. Amar Bhide makes a valid point when he argues that “Why not also make all short-term deposits, which function much like currency, the explicit liability of the government?” But the solution is not to allow private banks to reap the rents from cheap deposit financing but to allow each citizen and corporation access to a public deposit account. The simplest implementation of this would be a system similar to the postal savings system where all deposits are necessarily backed by short-term treasury bills. If the current stock of T-bills is not sufficient to back the demand for such deposits, the Treasury should shift the maturity profile of its debt until the demand is met. In such a system, there would be no deposit insurance i.e. all investment/deposit alternatives except for the state system will be explicitly risky and unprotected.
One criticism of such a system would be that the benefits of maturity transformation would be lost to the economy i.e. unless short-term deposits are deployed to match long-term investment projects, such projects would not find adequate funding. But as I have argued and the data shows, household long-term savings (which includes pensions and life insurance) is more than sufficient to meet the long-term borrowing needs of the corporate and the household sector in both the United States and Europe.
The “regulate and insure” model ignores the ability of banks to arbitrage any regulatory framework. But the status quo is also unacceptable. However the system is sufficiently levered and fragile that allowing market forces to operate or simply forcing a drastic structural change upon incumbent banks by regulatory fiat implies an almost certain collapse of the incumbent banks. Creating a public deposit option is the first step in implementing a sustainable transition to a resilient financial system, one in which instead of shackling incumbent banks we separate them from the risk-free depository system.
Note: My views on this topic and some other related topics which I hope to explore soon have been significantly influenced by uber-commenter K. For a taste of his broader ideas which are similar to mine, try this comment which he made in response to a Nick Rowe post.
“The interaction between the market participants, and for that matter between the market participants and the regulators, is not a game, but a war.”
Rick Bookstaber recently compared the complexity of the financial marketplace to that observed in military warfare. Bookstaber focuses primarily on the interaction between market participants but as he mentions, the same analogy also holds for the interaction between market participants and the regulator. In this post, I analyse the role of the financial market regulator within this context. Bookstaber primarily draws upon the work of John Boyd but I will focus on Sun Tzu’s ‘Art of War’.
Much like John Boyd, Sun Tzu emphasised the role of deception in war: “All warfare is based on deception”. In the context of regulation, “deception” is best understood as the need for the regulator to be unpredictable. This is not uncommon in other war-like economic domains. Google, for example, must maintain the secrecy and ambiguity of its search algorithms in order to stay one step ahead of the SEO firms’ attempts to game them. An unpredictable regulator may seem like a crazy idea but in fact it is a well-researched option in the central banking policy arsenal. In a paper for the Federal Reserve Bank of Richmond in 1999, Jeffrey Lacker and Marvin Goodfriend analysed the merits of a regulator adopting a stance of ‘constructive ambiguity’. They concluded that a stance of constructive ambiguity was unworkable and could not prevent the moral hazard that arose from the central bank’s commitment to backstop banks in times of crisis. The reasoning was simple: constructive ambiguity is not time-consistent. As Lacker and Goodfriend note: “The problem with adding variability to central bank lending policy is that the central bank would have trouble sticking to it, for the same reason that central banks tend to overextend lending to begin with. An announced policy of constructive ambiguity does nothing to alter the ex post incentives that cause central banks to lend in the first place. In any particular instance the central bank would want to ignore the spin of the wheel.” Steve Waldman summed up the time-consistency problem in regulation well when he noted: “Given the discretion to do so, financial regulators will always do the wrong thing.” In fact, Lacker has argued that it was this stance of constructive ambiguity, combined with the creditor bailouts since Continental Illinois, that the market understood to be an implicit commitment to bail out TBTF banks.
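The time-consistency problem can be reduced to a very small backward-induction sketch. The payoffs below are invented, but the logic is the one in the Goodfriend and Lacker quote: whatever refusal probability is announced, once the crisis arrives the central bank faces an ex post choice, and banks optimise against that choice rather than against the announcement.

```python
# Minimal sketch of the time-consistency problem (payoffs invented).
# The central bank announces it will refuse to lend with some
# probability ("the spin of the wheel"), but ex post, refusing costs
# more than lending -- so the only credible policy is to always lend.

cost_of_lending = 1.0    # ex post cost to the central bank of a bailout
cost_of_failure = 10.0   # ex post cost of letting the bank fail

def ex_post_choice():
    # Whatever was announced, once the crisis arrives the central bank
    # simply picks the cheaper option.
    return "lend" if cost_of_lending < cost_of_failure else "refuse"

announced_refusal_prob = 0.5   # the announced "constructive ambiguity"
credible_refusal_prob = 1.0 if ex_post_choice() == "refuse" else 0.0

print(ex_post_choice())        # lend
print(credible_refusal_prob)   # 0.0 -- ambiguity collapses into a full backstop
```

Since the announcement does nothing to change the ex post payoffs, rational banks treat the backstop as certain, which is exactly the implicit commitment Lacker describes.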
As is clear from the war analogy, a predictable adversary is easily defeated. This of course is why Goodhart’s Law is such a big problem in regulation. Lacker’s suggestion that the regulator follow a “simple decision rule” is fatally flawed for the same reason. Lacker also suggests that “legal constraints limiting policymakers’ actions” could be imposed to mitigate the moral hazard problem. But attempting to lay out a comprehensive list of constraints suffers from the same problem i.e. they can be easily circumvented by a determined regulator. If the relationship between a regulator and the regulated is akin to war, then so is the relationship between the rule-making legislative body and the regulator. Bank bailouts can and have been carried out over the last thirty years under many different guises: explicit creditor bailouts, asset backstops a la Bear Stearns, “liquidity” support via expanded and lenient collateral standards, interest rate cuts as a bank recapitalisation mechanism etc.
Bookstaber asserts quite rightly that the military analogy stems from a view of human rationality that is at odds with both neoclassical and behavioural economics, a point that Gerd Gigerenzer has repeatedly emphasised. Homo economicus relies on a strangely simplistic version of the ‘computational theory of the mind’ that assumes man to be an optimising computer. Behavioural economics then compares the reality of human rationality to this computational ideal and finds man to be an inferior version of a computer, riddled with biases and errors. As Gigerenzer has argued, many heuristics and biases that appear to be irrational or illogical are entirely rational responses to an uncertain world. But clearly deception and unpredictability go beyond simply substituting the rationality of homo economicus with simple heuristics. In the ‘Art of War’, Sun Tzu insists that a successful general must “respond to circumstances in an infinite variety of ways”. Each battle must be fought in its unique context and “when victory is won, one’s tactics are not repeated”. To Sun Tzu, the expert general must be “serene and inscrutable”. In one of the most fascinating passages in the book, he describes the actions and decisions of the expert general: “How subtle and insubstantial, that the expert leaves no trace. How divinely mysterious, that he is inaudible.”
As Robert Wilkinson notes, in order to make any sense of these comments, one needs to appreciate the Taoist underpinnings of the ‘Art of War’. The “infinite variety” of tactics is not the variety that comes from making decisions based on the “spin of a roulette wheel” that Goodfriend and Lacker take to provide constructive ambiguity. It comes from an appreciation of the unique context in which each situation is placed and the flexibility, adaptability and novelty required to succeed. The “inaudibility” refers to the inability to translate such expertise into rules, algorithms or even heuristics. The ‘Taoist adept’ relies on the same intuitive tacit understanding that lies at the heart of what Hubert and Stuart Dreyfus call “expert know-how”1. In fact, rules and algorithms may paralyse the expert rather than aid him. The Dreyfus brothers observed of expert pilots that “rather than being aware that they are flying an airplane, they have the experience that they are flying. The magnitude and importance of this change from analytic thought to intuitive response is evident to any expert pilot who has had the experience of suddenly reflecting upon what he is doing, with an accompanying degradation of his performance and the disconcerting realization that rather than simply flying, he is controlling a complicated mechanism.” The same sentiment was expressed rather more succinctly by Laozi when he said:
“Having some knowledge
When walking the Great Tao
Only brings fear.”
I’m not suggesting that financial markets regulation would work well if only we could hire “expert” regulators. The regulatory capture and the revolving door between the government and Wall Street that is typical of late-stage Olsonian demosclerosis means that the real relationship between the regulator and the regulated is anything but adversarial. I’m simply asserting that there is no magical regulatory recipe or formula that will prevent Wall Street from gaming and arbitraging the system. This is the unresolvable tension in financial markets regulation: Discretionary policy falls prey to the time-consistency problem. The alternative, a systematic and predictable set of rules, is the worst possible way to fight a war.
- This Taoist slant to Hubert Dreyfus’ work is not a coincidence. Dreyfus was deeply influenced by the philosophy of Martin Heidegger who, although he never acknowledged it, was almost certainly influenced by Taoist thought [↩]
Richard Fisher of the Dallas Fed delivered a speech last week (h/t Zerohedge) on the topic of financial reform, which contained some of the most brutally honest analysis of the problem at hand that I’ve seen from anyone at the Fed. It also made a few points that I felt deserved further analysis and elaboration.
The Dynamics of the TBTF Problem
In Fisher’s words: “Big banks that took on high risks and generated unsustainable losses received a public benefit: TBTF support. As a result, more conservative banks were denied the market share that would have been theirs if mismanaged big banks had been allowed to go out of business. In essence, conservative banks faced publicly backed competition…..It is my view that, by propping up deeply troubled big banks, authorities have eroded market discipline in the financial system.
The system has become slanted not only toward bigness but also high risk…..if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult…..
It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.”
Fisher correctly notes that TBTF support damages system resilience not only by encouraging higher leverage amongst large banks, but by disadvantaging conservative banks that would otherwise have gained market share during the crisis. As I have noted many times on this blog, the dynamic, evolutionary view of moral hazard focuses not only on the protection provided to destabilising positive feedback forces, but on how stabilising negative feedback forces that might have flourished in the absence of the stabilising actions are selected against and progressively weeded out of the system.
Regulatory Discretion and the Time Consistency Problem
Fisher: “Language that includes a desire to minimize moral hazard—and directs the FDIC as receiver to consider “the potential for serious adverse effects”—provides wiggle room to perpetuate TBTF.” Fisher notes that it’s difficult to credibly commit ex-ante not to bail out TBTF creditors – as long as the regulator retains any amount of discretion with the purpose of maintaining systemic stability, they will be tempted to use it.
On the Ineffectiveness of Regulation Alone
Fisher: “While it is certainly true that ineffective regulation of systemically important institutions—like big commercial banking companies—contributed to the crisis, I find it highly unlikely that such institutions can be effectively regulated, even after reform…Simple regulatory changes in most cases represent a too-late attempt to catch up with the tricks of the regulated—the trickiest of whom tend to be large. In the U.S. financial system, what passed as “innovation” was in large part circumvention, as financial engineers invented ways to get around the rules of the road. There is little evidence that new regulations, involving capital and liquidity rules, could ever contain the circumvention instinct.”
This is a sentiment I don’t often hear expressed by a regulator – As I have opined before on this blog, regulations alone just don’t work. The history of banking is one of repeated circumvention of regulations by banks, a process that has only accelerated with the increased completeness of markets. The question is not whether deregulation accelerated the process of banks’ maximising the moral hazard subsidy – it almost certainly did and this was understood even by the Fed as early as 1983. As John Kareken noted, “Deregulation Is the Cart, Not the Horse”. The question is whether re-regulation has any chance of succeeding without fixing the incentives guiding the actors in the system – it does not.
Bailouts Come in Many Shapes and Sizes
Fisher: “Even if an effective resolution regime can be written down, chances are it might not be used. There are myriad ways for regulators to forbear. Accounting forbearance, for example, could artificially boost regulatory capital levels at troubled big banks. Special liquidity facilities could provide funding relief. In this and similar manners, crisis-related events that might trigger the need for resolution could be avoided, making resolution a moot issue.”
A watertight resolution regime may only encourage regulators to aggressively utilise other forbearance mechanisms. Fisher mentions accounting and liquidity relief but fails to mention the most important “alternative bailout mechanism” – the “Greenspan Put” variant of monetary policy.
Preventing Systemic Risk Perpetuates the Too-Big-To-Fail Problem
Fisher: “Consider the idea of limiting any and all financial support strictly to the system as a whole, thus preventing any one firm from receiving individual assistance….If authorities wanted to support a big bank in trouble, they would need only institute a systemwide program. Big banks could then avail themselves of the program, even if nobody else needed it. Systemwide programs are unfortunately a perfect back door through which to channel big bank bailouts.”
“System-wide” programs by definition get activated only when big banks and non-banking financial institutions such as GE Capital are in trouble. Apart from perpetuating TBTF, they encourage smaller banks to mimic big banks and take on similar tail risk thus reducing system diversity.
Shrink the TBTF Banks?
Fisher clearly prefers that the big banks be shrunk as a “second-best” solution to the incentive problems that both regulators and banks face in our current system. Although I’m not convinced that shrinking the banks is a sufficient response, even a “free market” solution to the crisis will almost certainly imply a more dispersed banking sector, due to the removal of the TBTF subsidy. The gist of the problem is not size but insufficient diversity. Fisher argues “there is considerable diversity in strategy and performance among banks that are not TBTF.” This is the strongest and possibly even the only valid argument for breaking up the big banks. My concern is that even a more dispersed banking sector will evolve towards a tightly coupled and homogenous outcome due to the protection against systemic risk provided by the “alternative bailout mechanisms”, particularly the Greenspan Put.
The fact that Richard Fisher’s comments echo themes popular with both left-wing and right-wing commentators is not a coincidence. In the fitness landscape of our financial system, our current choice is not so much a local peak as a deep valley – tinkering will get us nowhere and a significant move either to the left or to the right is likely to be an improvement.
The Franken Amendment draws upon Richardson and White’s idea of a centralised clearing platform which I had criticised earlier. This proposal is based upon a flawed understanding of the structured products’ ratings process and the incentives guiding the agencies during this process and arises from a false extrapolation of the corporate and sovereign bond ratings process into the realm of structured products.
The fatal flaw in our ratings regime is not the issuer-pays model but the fact that ratings agencies only get paid if the bond is issued. In the structured products space, the difference between a potential AAA rating and a AA rating is not just that a higher spread is paid to the investor on the bond. The lower rating usually means that the bond will not be issued at all, which means that the ratings agency will not earn any fees. This problem cannot be solved even if we have a single monopolistic ratings agency paid by the SEC, so long as the fees are payable only upon issuance of the bond. As I have discussed earlier in more detail, ratings agencies are incentivised not only to expand market share but to expand the size of the market for rateable securities.
Let me explain the logic with a simple example. A pension fund approaches a bank for a bespoke AAA tranche on a portfolio of mortgage-backed securities. The bank constructs an appropriate tranche paying Libor + 100 bps and asks for a rating, upon which the clearing platform allocates it an agency. The agency comes back with a AA rating instead – so what does the bank do in this instance? It cannot change the tranching without damaging its own economics and the client will not accept a AA tranche paying the same coupon. So the deal just does not get done and the ratings agency is left without any fee for its opinion.
Let us go a little further along this chain of thought – suppose all competing agencies are similarly stringent in their ratings and discover after six months that their earnings and deal flow have collapsed. At this point, they will of course gradually start easing their ratings requirements and sooner or later we will end up in the same position we were in before the crisis hit us. It’s worth noting that this outcome does not change if someone other than the issuer pays the agency, or even if we have a monopolistic ratings agency. Provided that the agency is a profit-maximising entity, the removal of direct competition may slow the easing of ratings criteria, but it will not change the end result.
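The fee incentive described above can be reduced to a one-line expected-value calculation. The sketch below uses invented numbers (100 rating requests, a fixed fee per issued deal) purely to illustrate why a fee payable only upon issuance rewards the lenient agency regardless of who pays it:

```python
# Toy expected-fee comparison between a stringent and a lenient ratings agency.
# Assumption (not from the source): the agency earns a fixed fee only when the
# deal is issued, and the AAA needed for issuance is granted for a fraction
# `p_pass` of the deals it rates.

def expected_fees(deals, p_pass, fee=1.0):
    """Expected fee income when only rated-and-issued deals pay."""
    return deals * p_pass * fee

stringent = expected_fees(deals=100, p_pass=0.40)  # strict criteria kill most deals
lenient = expected_fees(deals=100, p_pass=0.95)    # eased criteria let most deals close

print(stringent, lenient)  # the lenient agency out-earns the stringent one
```

Nothing in this arithmetic depends on the identity of the payer, which is why swapping the issuer-pays model for an SEC-pays or monopoly model leaves the incentive intact so long as fees remain contingent on issuance.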
In fact, the above example is too generous as it ignores the ease with which the centralised platform process can be gamed by banks. The central problem is that a multitude of structured bonds can fulfill a typical client request such as the one above. For example, let us assume that the bank above constructs a tranche from a portfolio of MBS and applies to the platform, which allocates it to Moody’s. If Moody’s comes back with an unsatisfactory rating, the bank cancels the issuance, makes a small modification to the portfolio and tranching, and tries its luck again. The process can continue until the bank gets allocated a more friendly ratings agency and the desired rating is achieved.
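This resubmission game is easy to simulate. The sketch below assumes, purely for illustration, that a quarter of the platform’s agencies would grant the desired AAA; even then, a bank that can modify and resubmit at will succeeds after only a handful of draws:

```python
import random

# Sketch of gaming a centralised clearing platform by resubmission.
# Hypothetical assumption: a fraction `p_friendly` of agencies would grant
# the AAA; each resubmission draws an agency at random.

def tries_until_aaa(p_friendly, rng):
    """Number of submissions until a friendly agency is drawn."""
    tries = 1
    while rng.random() >= p_friendly:
        tries += 1
    return tries

rng = random.Random(42)
trials = [tries_until_aaa(p_friendly=0.25, rng=rng) for _ in range(10_000)]
print(sum(trials) / len(trials))  # ~4 submissions on average; success is certain eventually
```

The number of retries follows a geometric distribution, so the desired rating arrives with probability one given enough resubmissions – the friendliness of the average agency only affects how long it takes.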
The fundamental issue is that tinkering with the system in this manner is futile – the problems inherent in our current financial system are too fundamental, and we have only two choices, as I hinted at in an earlier post. We can either put in place blunt and almost certainly efficiency-reducing regulations, or we can move towards a free-market system where the implicit and explicit protection provided to the banking sector is removed in a credible and time-consistent manner. To give a simple example of a blunt regulation that would reduce the potential for ratings arbitrage, we could legislate that a portfolio of sub-investment-grade assets cannot be tranched to produce a AAA tranche. The price we pay for such regulations is that we eliminate a significant proportion of legitimate tranching, but this trade-off is unavoidable.
It was only a matter of time given the focus on the Goldman-SEC case before someone decided to apportion some of the blame onto the ratings agencies. And sure enough, the New York Times has a story out on how the ratings agencies were an integral part of the problem because they gave banks free access to their models and ratings methodology. But this is true of all banking regulations – banking regulators too make their rules, models and methodology freely available to banks who then proceed to arbitrage these rules, primarily to minimise the capital that they are required to hold. This is not surprising given that ratings agencies are essentially an outsourced function of the banking regulatory apparatus. And the problem of arbitrage is also well-known – I have referred to it as the Goodhart’s Law of financial regulation.
The NYT article implicitly suggests that increasing the opacity and ambiguity around the ratings methodology would have resulted in a better outcome. This is similar to how Google tries to discourage people from trying to arbitrage its search algorithm by keeping it opaque. Just keeping the algorithm private is not enough as search-engine optimisers soon figure out the key features of the algorithm by experimenting with what works and what does not, which means that Google needs to continuously modify the algorithm to stay one step ahead of the arbitrageurs.
Maintaining a continuously updated, opaque algorithm is not a suitable strategy for ratings agencies. Even if a banker does not know the exact ratings methodology, he can easily figure out the key features just by running a large number of sample portfolios through the ratings system and analysing the results. Moreover, ratings methodologies that are unpredictable by design can create unnecessary ratings volatility and friction in financial markets. And last but not least, ratings agencies have no incentive to engage in such an arms race with the banks given that they get paid by the bank only when a deal gets done.
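The probing strategy described above – running sample portfolios through the ratings system and analysing the results – is just black-box search. In the sketch below, `agency_model` is an entirely made-up stand-in for an opaque methodology (AAA above a hidden subordination threshold); a simple bisection recovers the hidden threshold without ever seeing the model:

```python
# Reverse-engineering an opaque ratings model by probing it as a black box.
# `agency_model` is a hypothetical stand-in, not any agency's real methodology.

def agency_model(subordination):
    """Hypothetical black box: grants AAA iff subordination >= 18%."""
    return "AAA" if subordination >= 0.18 else "AA"

def find_aaa_boundary(model, lo=0.0, hi=1.0, tol=1e-4):
    """Bisection: find the minimum subordination the black box accepts for AAA."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if model(mid) == "AAA":
            hi = mid
        else:
            lo = mid
    return hi

print(round(find_aaa_boundary(agency_model), 3))  # recovers ~0.18 without seeing the model
```

A banker running this loop needs only a few dozen probes per structuring parameter, which is why opacity alone cannot protect a methodology whose outputs are freely observable.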
The role of ratings agencies in exacerbating the financial crisis has been exaggerated. As David Merkel puts it, “Don’t blame the rating agencies for the failure of the regulators, because they ceded their statutory role to the rating agencies.” The mad rush to buy AAA bonds in the boom wasn’t as much a function of the irrational faith in ratings agencies as it was a function of the rational desire to obtain extra yield whilst not falling foul of internal and external rules and regulations. Even internal control functions in firms often limit the scope of investments by specifying minimum required ratings and then assume that this requirement makes all further supervision of the manager redundant. Unsurprisingly, the manager prefers even an expensive AAA to a cheap BBB bond.
It seems that Obama has come around to Paul Volcker’s position that “protected” financial institutions must not be allowed to take on proprietary risk. In this interview in Der Spiegel, Paul Volcker argues that banks must not be allowed to take on proprietary risk except for risk incidental to “client activities”. Quoting from the interview:
“SPIEGEL: Banking should become boring again?
Volcker: Banking will never be boring. Banking is a risky business. They are going to have plenty of activity. They can do underwriting. They can do securitization. They can do a lot of lending. They can do merger and acquisition advice. They can do investment management. These are all client activities. What I don’t want them doing is piling on top of that risky capital market business. That also leads to conflicts of interest.”
This is a more nuanced version of the argument that calls for the reinstatement of the Glass-Steagall Act. But it suffers from two fatal flaws:
Regulatory Arbitrage: Separation of “client risk” and “proprietary risk” sounds good in theory but is almost impossible to enforce in practice. As I’ve discussed previously, a detailed and fine-tuned regulatory policy will be easy to arbitrage, and a blunt policy will result in a grossly inefficient financial system.
Losses on “Client Activities” were the major driver in the current crisis. My analysis of the UBS shareholder report highlighted how the accumulation of super-senior CDO tranches was justified primarily by their perceived importance in facilitating the sale of fee-generating junior tranches to clients. Quoting from the report: “within the CDO desk, the ability to retain these tranches was seen as a part of the overall CDO business, providing assistance to the structuring business more generally.” It is the losses on these tranches issued in the name of facilitating client business that were at the core of the crisis. It is these tranches that caused the majority of the losses on banks’ balance sheets. It is losses on insuring these tranches that brought down AIG. Segregated proprietary risk is monitored closely by almost all banks. The real villain of the piece was proprietary risk taken on under the cover of facilitating client business.
Implementation of the Ban
Clearly a simple ban on internal hedge funds and proprietary trading desks would not work. Banks trade the same products on their clients’ behalf that they trade on a proprietary basis, and such a ban could be nullified simply by folding all proprietary operations into trading desks that also facilitate client business.
Another alternative would be to enforce market risk limits on banks, based for example on VaR. But even if VaR had been the criterion for enforcing risk limits on banks before the previous crisis, the crisis would not have been averted. The super-senior CDO tranches at the heart of the crisis were low-VaR assets on their own and “zero VaR” assets when merely delta-hedged without any hedging of higher-order risks.
Again quoting from the UBS report: “MRC VaR methodologies relied on the AAA rating of the Super Senior positions. The AAA rating determined the relevant product-type time series to be used in calculating VaR. In turn, the product-type time series determined the volatility sensitivities to be applied to Super Senior positions. Until Q3 2007, the 5-year time series had demonstrated very low levels of volatility sensitivities. As a consequence, even unhedged Super Senior positions contributed little to VaR utilisation.” “Once hedged, either through NegBasis or AMPS trades, the Super Senior positions were VaR and Stress Testing neutral (i.e., because they were treated as fully hedged, the Super Senior positions were netted to zero and therefore did not utilize VaR and Stress limits). The CDO desk considered a Super Senior hedged with 2% or more of AMPS protection to be fully hedged. In several MRC reports, the long and short positions were netted, and the inventory of Super Seniors was not shown, or was unclear. For AMPS trades, the zero VaR assumption subsequently proved to be incorrect as only a portion of the exposure was hedged as described in section 4.2.3, although it was believed at the time that such protection was sufficient.”
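The mechanics of the “zero VaR” trap the UBS report describes can be made concrete with a toy historical-simulation VaR. All the numbers below are invented for illustration: once the model treats the hedge as a full offset, the netted P&L series is identically zero and the position vanishes from VaR, even though only a thin 2% AMPS-style slice of the exposure is actually protected:

```python
# Toy illustration of the "zero VaR" trap: a position deemed fully hedged
# nets to zero in the risk model, so VaR registers nothing. All figures
# are hypothetical.

def historical_var(pnl_series, confidence=0.99):
    """Historical-simulation VaR: the loss at the given confidence level."""
    losses = sorted(-p for p in pnl_series)
    idx = int(confidence * len(losses)) - 1
    return max(losses[idx], 0.0)

# Modelled daily P&L of the position and a hedge assumed to offset it exactly.
position = [-0.5, 0.2, -0.1, 0.3, -0.4] * 50
hedge = [-p for p in position]            # "fully hedged" in the model's eyes
netted = [p + h for p, h in zip(position, hedge)]

print(historical_var(netted))             # 0.0 -- the position drops out of VaR

# But if the hedge in fact absorbs only the first 2% of losses, a 30%
# write-down on a hypothetical 10bn notional leaves 2.8bn of unmodelled loss:
notional, writedown, protection = 10e9, 0.30, 0.02
print(notional * (writedown - protection))
```

The point is not that historical VaR was computed incorrectly, but that the netting assumption fed into it made the answer zero by construction – exactly the failure mode the report attributes to the AMPS trades.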
To summarise, it is extremely unlikely that there exists a way to ban proprietary risk-taking that cannot be circumvented by the banks.
Much of the debate regarding the causes of the financial crisis ignores the fact that we live in a “second best” world. The “Theory of the Second Best” states that in a world that is far from a textbook “free market”, any move towards the theoretical free market optimum does not necessarily increase welfare.
Our current financial system is clearly far from a free market. The implicit and explicit guarantee to bank creditors via deposit insurance and the TBTF doctrine is a fundamental deviation from free market principles. On the other hand, derivatives markets are among the least regulated markets in any sector.
This second-best, hybrid nature of our financial system means that any discussion of the crisis must be strongly empirical in nature. Deductive logic is essential but a logical argument with incomplete facts can be made to fit almost any conclusion. So the Keynesians blame the free market and deregulation, the libertarians blame government action and the behavioural economists blame irrationality. But no one stops to consider any facts that don’t fit their preferred thesis.
The key conclusion of my work is that it is the combination of the moral hazard problem driven by bank creditor guarantees and the deregulated nature of key components of the financial system that caused the crisis. This is not a new argument. The argument for regulation itself rests on the need to protect the taxpayer in the presence of this creditor guarantee. The Fed recognised this argument as early as 1983. As John Kareken noted, “Deregulation Is the Cart, Not the Horse”. The growth of the CDS and other derivatives markets was not a problem by itself. It caused damage by enabling the banks to maximise the value of the free lunch derived from the taxpayer. The same could be said for bank compensation practices.
If re-regulation could work, then I’d be in favour of it. But I don’t think it can. As I’ve discussed before (1,2,3), almost any regulation will be arbitraged away by the banks. The only regulations that may be difficult to arbitrage are blunt and draconian regulations which will dramatically reduce the efficiency of the system. Even then, the odds of arbitrage are not low enough.
In an earlier note, I discussed how monitoring and incentive contracts can alleviate the asymmetric information problem in the principal-agent relationship. Perfect monitoring, apart from being impossible in many cases, is also too expensive. As a result, most principals will monitor to the extent that the expense is justified by the reduced incentive mismatch. In most industries, this approach is good enough. The menu of choices available to an agent is usually narrow and the principal only needs to monitor for the most egregious instances of abuse.
In fact, this was the case in banking as well until the advent of derivatives. Goodhart’s Law by itself does not guarantee arbitrage by the agent – the agent also needs a sufficiently wide menu of choices that the principal cannot completely monitor or contract for.
As discussed in an earlier note, agents in banking have a strong incentive to enter into bets with negatively skewed payoffs. The limiting factor was always the supply of such financial instruments. For example, supply of AAA corporate bonds has always been limited. Securitisation and tranching technology increased this limit substantially by using a diverse pool of credits with a lower rating to produce a substantial senior AAA tranche. But the supply was still limited by the number of mortgages or bonds that were available.
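The mechanics of manufacturing a AAA senior tranche from lower-rated collateral come down to a binomial calculation. The sketch below uses illustrative, uncalibrated numbers and the (unrealistic) assumption of independent defaults; under those assumptions, 30% subordination makes the senior tranche of a pool of 5%-default-probability credits almost impossible to touch:

```python
from math import comb

# Sketch of pooling and tranching: probability that defaults in a pool of
# independent, identical credits eat through the subordination and hit the
# senior tranche. Numbers are illustrative, not calibrated to any real deal.

def prob_senior_hit(n_credits, p_default, subordination):
    """P(defaults exceed the junior tranches protecting the senior)."""
    attach = int(n_credits * subordination)  # defaults absorbed below senior
    return sum(
        comb(n_credits, k) * p_default**k * (1 - p_default)**(n_credits - k)
        for k in range(attach + 1, n_credits + 1)
    )

# 100 BBB-style credits, 5% default probability each, 30% subordination:
p = prob_senior_hit(n_credits=100, p_default=0.05, subordination=0.30)
print(p)  # vanishingly small under the independence assumption
```

The independence assumption is of course exactly what failed in practice – correlated mortgage defaults made the true attachment risk far larger than this calculation suggests – but it shows why, on paper, tranching could conjure a large AAA slice out of collateral that was anything but.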
The innovation that effectively removed any limit on the agent’s ability to arbitrage was the growth of the CDS market and the development of the synthetic CDO. As the UBS shareholder report notes:
“Key to the growth of the CDO structuring business was the development of the credit default swap (”CDS”) on ABS in June 2005 (when ISDA published its CDS on ABS credit definitions). This permitted simple referencing of ABS through a CDS. Prior to this, cash ABS had to be sourced for inclusion in the CDO Warehouse.”
Steve Waldman’s recent post explains why giving financial regulators discretion in their choice of policy is almost always a bad idea. In his words:
“An enduring truth about financial regulation is this: Given the discretion to do so, financial regulators will always do the wrong thing.”
The reason of course is the time consistency problem. The temptation for the regulator and central bank to use their “discretion” to bail out the banks is overwhelming, and the market will correctly equate a discretionary regulatory environment with a bailout-prone one. As Lacker and Goodfriend observed in their paper on central bank lending policies in times of crisis:
“The problem with adding variability to central bank lending policy is that the central bank would have trouble sticking to it, for the same reason that central banks tend to overextend lending to begin with. An announced policy of constructive ambiguity does nothing to alter the ex post incentives that cause the central banks to lend in the first place.”
But what about the alternative? Would a regulatory environment that is written in stone perform any better? Most likely it would not – regulations that are written in stone suffer from Goodhart’s Law. The clearer and more detailed the regulation, the easier it is for market participants to arbitrage it.
Goodhart’s Law is the reason why algorithm-based technology services such as Google and Digg prefer to keep their algorithms private and opaque. However, as discussed above, discretion and opacity are not an option in financial regulation.
So how do we avoid arbitrage without having to resort to discretion and ambiguity in the regulatory framework? Goodhart’s Law is applicable only when we focus on intermediate targets that we presume are good proxies for our objective. The answer is to shift focus from intermediate proxy indicators of excessive risk, such as executive compensation or capital requirements, to the ultimate objective itself.
But is this even achievable? Google and Digg, for example, have no option but to focus on a reasonably accurate proxy. The same may be true for financial regulation.