macroresilience

resilience, not stability

Archive for April, 2010

Ratings Reform: The “Centralised Clearing Platform” Proposal

with 5 comments

In an article “berating the raters”, Paul Krugman points to a proposal by Matthew Richardson and Lawrence White who suggest that the SEC create a centralised clearing platform for ratings agencies. Each issuer that would like its debt rated would have to approach this platform which would then choose an agency to rate the debt. The issuer would still have to pay for the rating but the agency would be chosen by the regulator. In their words, “This model has the advantage of simultaneously solving (i) the free rider problem because the issuer still pays, (ii) the conflict of interest problem because the agency is chosen by the regulating body, and (iii) the competition problem because the regulator’s choice can be based on some degree of excellence, thereby providing the rating agency with incentives to invest resources, innovate, and perform high quality work.”

The critical assumption behind the idea of a centralised clearing platform is that the total notional of bonds that exist and need to be rated is constant i.e. it assumes that the actions of the ratings agencies only divide up the ratings market and do not expand or contract it. So for example, a trillion dollars worth of bonds would be issued each year and the regulator would choose who rates which bond thus ensuring that none of the rating agencies are incentivised to give favourable ratings to junk assets. This assumption may hold for corporate bonds, but it is nowhere close to being true for structured bonds like CDOs which were the source of the losses in the crisis.

There are some fundamental differences between the ratings process for corporate bonds and the process for structured products such as CDOs. Whether a corporate bond is issued or not is usually not critically dependent on the rating assigned to it. For example, the fact that a corporate bond is rated as BBB instead of single-A will most likely not prevent the bond from being issued. A firm usually decides to undertake a bond issuance depending on its financing needs and unless the achieved rating is dramatically different from expectations, the bond will be issued and fees will be paid to the ratings agency.

On the other hand, whether a structured bond is issued or not is critically dependent on the ratings methodology applied to it. A structured bond is constructed via an iterative process involving the bank, the investor and the ratings agency. If the ratings methodology for a structured product is not generous enough to provide the investor with a yield comparable to equivalently rated assets and to enable the bank to earn a reasonable fee, the bond will just not get issued. When it comes to structured bonds, bonds are not created first and rated next. Instead the rating given to the bond is critical in determining whether it is issued and correspondingly, whether the ratings agency gets paid.
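To make this concrete, here is a minimal sketch with entirely made-up numbers (the `deal_is_viable` helper and every input are illustrative assumptions, not any agency's actual model): a structured deal only gets issued if the collateral spread, after paying tranche investors under the subordination the methodology demands, still leaves room for the arranging bank's fee.

```python
# A minimal, purely illustrative sketch (all numbers and the helper itself are
# made up) of why a structured deal only gets issued -- and the rating agency
# only gets paid -- if the methodology is generous enough.

def deal_is_viable(pool_spread, aaa_subordination, aaa_investor_spread,
                   junior_cost, min_bank_fee):
    """Return True if the deal economics work under a given methodology."""
    aaa_size = 1.0 - aaa_subordination
    # running spread left over after paying all tranche investors
    excess = (pool_spread
              - aaa_size * aaa_investor_spread
              - aaa_subordination * junior_cost)
    return excess >= min_bank_fee

# Generous methodology: only 10% subordination required below the AAA tranche
print(deal_is_viable(0.012, 0.10, 0.005, 0.045, 0.002))  # True  -> deal gets done

# Stringent methodology: 35% subordination required -- the same pool can no
# longer pay the investors and the bank's fee, so no deal and no rating fee
print(deal_is_viable(0.012, 0.35, 0.005, 0.045, 0.002))  # False -> no deal
```

The point of the sketch is simply that the agency's fee is contingent on the answer being True, which is precisely the incentive problem discussed in the next paragraph.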

If each one of the ratings agencies that are part of the centralised platform adopts a stringent ratings methodology that destroys the economics of the trade, the bond is not issued and none of the agencies earns the fee. In this manner, the agencies are still incentivised to loosen their standards even in the absence of competition from other agencies. When it comes to structured bonds, ratings agencies have historically been focused as much on expanding the market as competing with each other. Indeed, the biggest catalyst in expanding rating agency profits over the last two decades has been the steady expansion of the universe of products that they were willing to rate using a generous methodology from vanilla bonds/loans/mortgages to tranches of bond portfolios to tranches of synthetic exposures to even complex algorithms and trading strategies – the crowning example of the last variety being the CPDO. Even when one of the agencies was the first-mover in rating a new product, it could not adopt too stringent a methodology for fear of killing the deal altogether.

Moreover, the very notion of restricting the choice of agencies that can rate a given structured bond is undermined by the iterative process – let us assume that a particular type of structured bond is leniently rated by only one of the three ratings agencies. If the platform assigns the bond at random to another agency, the bank can merely make a small modification to the underlying portfolio and try its luck again until it gets allocated to the lenient agency. The inherently iterative ratings process also explains why the furore over ratings agencies making their models public is a red herring, as I have explained in a previous post.

I am not claiming that competition between the agencies did not make things worse at the margin in the financial crisis. But a tangible difference to the outcome in the crisis would have been achieved only if the ratings agencies had adopted such a stringent methodology that would have caused subprime CDOs and many other leveraged/risky structures to not be issued at all. Given the demand for AAA assets with extra yield (driven by internal and external regulations), this could have been achieved only by an explicit ban on rating such structures. Else, even if there was just one monopolistic rating agency that was paid by the regulator, the agency would have been almost as aggressive in rating new structures just because of the indisputable fact that the agency got paid only when a deal got done, and lenient ratings standards got more deals done.


Written by Ashwin Parameswaran

April 27th, 2010 at 4:33 pm

Posted in Financial Crisis

Rating Agencies, Financial Regulation and Goodhart’s Law

without comments

It was only a matter of time given the focus on the Goldman-SEC case before someone decided to apportion some of the blame onto the ratings agencies. And sure enough, the New York Times has a story out on how the ratings agencies were an integral part of the problem because they gave banks free access to their models and ratings methodology. But this is true of all banking regulations – banking regulators too make their rules, models and methodology freely available to banks who then proceed to arbitrage these rules, primarily to minimise the capital that they are required to hold. This is not surprising given that ratings agencies are essentially an outsourced function of the banking regulatory apparatus. And the problem of arbitrage is also well-known – I have referred to it as the Goodhart’s Law of financial regulation.

The NYT article implicitly suggests that increasing the opacity and ambiguity around the ratings methodology would have resulted in a better outcome. This is similar to how Google tries to discourage people from trying to arbitrage its search algorithm by keeping it opaque. Just keeping the algorithm private is not enough as search-engine optimisers soon figure out the key features of the algorithm by experimenting with what works and what does not, which means that Google needs to continuously modify the algorithm to stay one step ahead of the arbitrageurs.

Maintaining a continuously updated, opaque algorithm is not a suitable strategy for ratings agencies. Even if a banker does not know the exact ratings methodology, he can easily figure out the key features just by running a large number of sample portfolios through the ratings system and analysing the results. Moreover, ratings methodologies that are unpredictable by design can create unnecessary ratings volatility and friction in financial markets. And last but not least, ratings agencies have no incentive to engage in such an arms race with the banks given that they get paid by the bank only when a deal gets done.
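As a toy illustration of that reverse-engineering, the sketch below treats the ratings methodology as a black box and sweeps sample portfolios through it; `opaque_rating_model` is a purely hypothetical stand-in and bears no relation to any agency's actual model.

```python
# Toy illustration of reverse-engineering an opaque ratings model by running
# sample portfolios through it. `opaque_rating_model` is a purely hypothetical
# stand-in for the agency's hidden model; the structurer only observes outputs.

def opaque_rating_model(avg_default_prob, correlation):
    """Hypothetical black box: returns the subordination required for a AAA rating."""
    return min(1.0, avg_default_prob * (1.0 + 4.0 * correlation))

# Sweep a grid of sample portfolios and study the outputs...
for pd in (0.02, 0.05, 0.10):
    for rho in (0.1, 0.3, 0.5):
        sub = opaque_rating_model(pd, rho)
        print(f"PD={pd:.2f}  corr={rho:.1f}  ->  required subordination {sub:.1%}")
# ...and the key sensitivities of the model emerge quickly, even though the
# model itself was never published.
```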

The role of ratings agencies in exacerbating the financial crisis has been exaggerated. As David Merkel puts it, “Don’t blame the rating agencies for the failure of the regulators, because they ceded their statutory role to the rating agencies.” The mad rush to buy AAA bonds in the boom wasn’t as much a function of the irrational faith in ratings agencies as it was a function of the rational desire to obtain extra yield whilst not falling foul of internal and external rules and regulations. Even internal control functions in firms often limit the scope of investments by specifying minimum required ratings and then assume that this requirement makes all further supervision of the manager redundant. Unsurprisingly, the manager prefers even an expensive AAA to a cheap BBB bond.


Written by Ashwin Parameswaran

April 24th, 2010 at 1:31 pm

Did Goldman Mislead ACA?

without comments

Having reviewed Goldman’s extended submission to the SEC, I have to agree with Erik Gerding at the Conglomerate who concludes that “if the SEC can show Goldman misrepresented Paulson’s role to ACA, it wins.”

I speculated in my previous post that “much of the case will depend upon whether using the term “Transaction Sponsor” to describe Paulson was an act of deception” and Goldman seems to agree, devoting an entire page (pg 33) in their submission to counter this allegation. The arguments that Goldman presents are unconvincing – as Goldman asserts, it is indisputable that the term “Transaction Sponsor” is not “uniformly defined in the context of a CDO transaction, and it need not refer to an equity investor.” However, the real question is whether it is reasonable to refer to a hedge fund seeking to short the tranches without any long exposure to the CDO as a “Transaction Sponsor”. Even allowing for the admittedly wide ambit of the term, this is a generous interpretation. As I mentioned earlier, it is relevant whether there is any industry precedent for such a definition of the term – my hunch is that there isn’t.

Also relevant are Gail Kreitman’s non-response to the request from Laura Schwartz of ACA for clarification on Paulson’s role, and Fabrice Tourre’s description of the equity tranche as “pre-committed”. The defences put forward for both are a little troubling. Essentially, Fabrice Tourre “testified that he had no recollection of its meaning” and Ms. Kreitman was just an intermediary who did not understand “the significance of Ms. Schwartz’s statements suggesting that she believed Paulson to be an equity investor”.


Written by Ashwin Parameswaran

April 19th, 2010 at 3:40 pm

Posted in Financial Crisis

The “Abacus Affair”: Goldman’s Defence

with one comment

Goldman’s rebuttal of the SEC lawsuit raises some specific points that deserve further analysis.

“Goldman Sachs Lost Money On The Transaction.”

Whether Goldman made money or not would have been relevant if they were just an investor like IKB. However Goldman is not just an investor, it is a market-maker. Whether a market-maker loses or makes money on a specific trade with a client is irrelevant. The market-maker’s role is to tailor the product desired by the client and hedge out the residual risk with other counterparties in the market. The ultimate loss or profit on one trade is irrelevant unless considered in the context of the exposures of the trading book as a whole.

In this particular case, the residual equity exposure could have been left unhedged because it was a natural hedge to other positions in the book. Else, it could have been dynamically delta-hedged in the market via CDS on the underlying. Or it could have been macro-hedged via a short position on an index. The point is that analysing the profit or loss on an isolated trade in the book of a market-maker is meaningless.
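As a deliberately trivial illustration with invented numbers, the same trade can show a standalone loss while the book that contains it is flat or profitable:

```python
# A toy illustration (invented numbers) of why the P&L on one client trade is
# meaningless for a market-maker outside the context of the whole book.

book = {
    "isolated client trade":         -90,  # the trade under scrutiny, on its own
    "offsetting CDS / index hedges": +75,  # dynamic and macro hedges against it
    "other positions in the book":   +20,  # natural offsets elsewhere
}

print("isolated trade P&L:", book["isolated client trade"])
print("book-level P&L    :", sum(book.values()))
# The isolated trade shows a loss while the desk as a whole is flat-to-up;
# neither number on its own says anything about what the market-maker intended.
```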

“Extensive Disclosure Was Provided…..These investors also understood that a synthetic CDO transaction necessarily included both a long and short side.”

This is Goldman’s strongest rebuttal as sufficient disclosure on the asset pool was likely provided. Moreover, the argument that Goldman does not have to disclose that Paulson is on the short side is even stronger than most commentators realise. From the perspective of Abacus as a legal entity, the short side is Goldman itself. Paulson is only Goldman’s hedge against their exposure arising from Abacus. This is clear from slide 50 in the Abacus pitchbook which represents Goldman as the “Protection Buyer”.

What is unusual however is that Paulson was short the tranches themselves rather than the underlying bonds. From Paulson’s perspective, this makes sense as it enables him to short only the mezzanine and senior tranches and avoid the equity which would have been the most expensive tranche to go short (Shorting all the underlying bonds is equivalent to shorting all the tranches of the CDO). But nevertheless this is not common in structured products in any asset class. Most structured products involve the market-maker constructing a bespoke product for the client and managing the residual exposure via dynamic hedging. So in this case, Goldman would buy CDS referencing the underlying bonds, sell as many tranches as it can or wants to sell, and manage the residual exposure. In this case, that was the equity exposure and in many other cases as I have analysed before, it was the super-senior tranche. If exact matching hedges had to be found in the market for each product sold to a client, then the business of structured products would not exist. In this respect, Goldman’s assertion that each transaction includes a long and short side is technically accurate but a little bit disingenuous – most synthetic CDOs like most other structured products do not have a market counterparty other than the market-maker who is short exactly the same product. I suspect however that from a legal perspective, this is not relevant.

“ACA, the Largest Investor, Selected The Portfolio.”

The fact that ACA Asset Management was the “Portfolio Selection Agent” is Goldman’s best defence. But the assertion that ACA was the largest investor is true only in the most trivial sense. ACA Asset Management, which selected the portfolio, had no investment in Abacus. It was its parent company, ACA Capital, which in the course of its normal business of insuring super-senior tranches had a $951 million exposure to the transaction. This may seem like an irrelevant distinction but it is not. The usual method of ensuring that the manager’s interests are aligned to those of the transaction would be to have the manager invest in the equity tranche of the transaction, as ACA had done in the past. As per slide 31 of the pitchbook, ACA had over $200 million invested in the equity of the CDOs it manages.

All this tells us is that ACA Capital trusted that its asset management subsidiary had done a competent job and was more likely to guarantee against losses on the super-senior given the role of ACA Asset Management. But did this influence the incentives of ACA Asset Management and its asset managers? That seems unlikely given that it was ACA Capital’s prerogative to do its own due diligence even if its asset management subsidiary was the manager of the CDO in question and the decision to invest by ACA Capital was likely separate from, albeit influenced by the decision to manage the CDO.

“Goldman Sachs Never Represented to ACA That Paulson Was Going To Be A Long Investor”

This is clearly the crux of the case. The SEC seems to assert that it was Goldman’s responsibility to disabuse ACA of the mistaken assumption it made that Paulson was an equity investor. The complaint also quotes an email from Fabrice Tourre to ACA where he explicitly refers to Paulson as the “Transaction Sponsor” (page 14 of the complaint). At best, this description is misleading. It is a stretch to describe a counterparty who is short the exact tranches being marketed to investors as a “Transaction Sponsor”. I suspect much of the case will depend upon whether using the term “Transaction Sponsor” to describe Paulson was an act of deception. In this respect, it is relevant whether there is any precedent of counterparties seeking to short the tranches being referred to in this manner. My suspicion is that Goldman’s definition of “Transaction Sponsor” does not have much precedent. None of this however absolves ACA of its share of blame – it should have obtained written clarification that Paulson was the equity investor failing which it should have refused to do the deal.


Written by Ashwin Parameswaran

April 18th, 2010 at 5:17 pm

Posted in Financial Crisis

The Magnetar Trade

with 7 comments

The Magnetar Trade according to ProPublica’s recent article is a long-short strategy that worked due to the perverse incentives operating in the CDO market during the boom. According to Jesse Eisinger and Jake Bernstein, Magnetar went long the equity tranche and short the senior tranches and used their position as the buyer of the equity tranche to ensure that the asset quality of the CDO was poorer than it would otherwise be. If ProPublica’s account is true, then this is a moral hazard trade i.e. Magnetar buys insurance against the burning down of a house and uses its influence as an equity buyer to significantly improve the odds of the house burning.

However, there are some hints in Magnetar’s response to the story that cast significant doubt on the accuracy of ProPublica’s narrative. To understand why this is the case, we need to understand what exactly the Magnetar trade as described in the story would look like. Magnetar’s portfolio was most likely a “close to carry neutral” portfolio consisting of long equity tranche positions and short senior/mezzanine tranche positions. In order to be carry-neutral, the notional value of senior tranches that are shorted needs to be an order of magnitude higher than the notional value of equity tranches purchased. In option parlance, this is equivalent to a zero-premium strategy consisting of short ATM options and long OTM options.
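Some back-of-the-envelope arithmetic, using purely illustrative spread levels rather than actual market quotes, shows why carry-neutrality forces the short notional to be a large multiple of the long notional:

```python
# Back-of-the-envelope arithmetic (illustrative spread levels, not actual
# market quotes) for why a carry-neutral long-equity / short-senior portfolio
# needs a short notional that is an order of magnitude larger than the long.

equity_coupon  = 0.18   # running coupon earned on the long equity tranche
senior_premium = 0.009  # running premium paid to buy protection on senior tranches

equity_notional = 10_000_000
# short notional needed so that premium paid equals coupon received
carry_neutral_short = equity_notional * equity_coupon / senior_premium
print(f"short senior notional for carry neutrality: {carry_neutral_short:,.0f}")
# -> 200,000,000 : roughly 20x the equity notional under these assumed spreads
```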

There are two reasons to execute such a strategy – one, simply to fund a “short options” strategy and the second, to execute a market-neutral “arbitrage” strategy. The significant advantage that such a long-short strategy has over a “naked short” strategy a la John Paulson is the absence of negative carry. As Taleb explains: “A butterfly position allows you to wait a lot longer for the wings to become profitable. In other words, a strategy that involves a butterfly allows you to be far more aggressive [when buying out-of-the-money options]. When you short near-the-money options, they bring in a lot of cash, so you can afford to spend more on out-of-the-money options. You can do a lot better as a spread trader.”

However, Magnetar describe their portfolio as market-neutral and “designed to have a positive return whether housing performed well or did poorly”. This implies that the portfolio was carry-positive, i.e. the coupons on the long-equity positions exceeded the running-premium cost of buying protection on the senior tranches. This ensures that the portfolio would be profitable in the event that there were no defaults in the portfolio.

If the Magnetar Trade was based upon moral hazard, then it would have to short the senior tranches of the same CDO that it bought equity in and the notional of this short position would have to be multiples of the notional value of the equity position. However, Magnetar in their response to ProPublica explicitly deny this and state: “focusing solely on the group of CDOs in which Magnetar was the initial purchaser of the equity, Magnetar had a net long notional position. To put this into perspective, Magnetar would earn materially more money if these CDOs in aggregate performed well than if these CDOs performed poorly.” The operative term here is “net long notional position” as opposed to “net long position”. A net long position measured in delta terms could easily imply a net short notional position in which case the portfolio would outperform if all the tranches in the CDO were wiped out. But Magnetar seem to make it clear in their response that in the deals where they were the initial purchaser of equity, the notional of the equity positions exceeded the notional of the senior positions that they were short. They also assert that “the majority of the notional value of Magnetar’s hedges referenced CDOs in which Magnetar had no long investment” i.e. of course the notional value of their short positions exceeded that of their long positions, but these short positions were in other CDOs in which they did not have a long position.

But what about the fact that Magnetar seemed to be influencing the portfolio composition of these CDOs to include riskier assets in them? Surely this proves conclusively that Magnetar would profit if the CDOs collapsed? To understand why this may not necessarily be true, we need to examine the payoff profile of the Magnetar trade.

As with most market-neutral “arbitrage” trades, it is unlikely that the trade would deliver a positive return in every conceivable scenario. Rather, it would deliver a positive return in every scenario that Magnetar deemed probable. The Magnetar trade would pay off in two scenarios – if there were no defaults in any of their CDOs, or if there were so many defaults that the tranches that they were short also defaulted along with the equity tranche. The trade would likely lose money if there were limited defaults in all the CDOs and the senior tranches did not default. Essentially, the trade was attractive if one believed that this intermediate scenario was improbable.
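A stylised one-period sketch of this payoff profile, again with invented numbers and ignoring default timing and mark-to-market effects, makes the shape of the trade clear:

```python
# Stylised one-period payoff of a carry-positive long-equity / short-senior
# trade under the three scenarios discussed above (all figures illustrative).

equity_notional, equity_coupon  = 10_000_000, 0.18
senior_notional, senior_premium = 150_000_000, 0.009

carry = equity_notional * equity_coupon - senior_notional * senior_premium  # +450,000

def payoff(equity_loss_frac, senior_loss_frac):
    """Net P&L = carry earned + payout on senior protection - equity writedowns."""
    return (carry
            - equity_notional * equity_loss_frac    # long equity loses principal
            + senior_notional * senior_loss_frac)   # short senior protection pays off

print(payoff(0.0, 0.0))    # no defaults: keep the positive carry (+450,000)
print(payoff(1.0, 0.0))    # intermediate: equity wiped out, senior untouched (-9,550,000)
print(payoff(1.0, 0.15))   # systemic collapse: senior tranches also hit (+12,950,000)
```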

A distribution where intermediate scenarios are improbable can arise from many underlying processes but there is one narrative that is particularly relevant to complex adaptive systems such as financial markets. Intermediate scenarios are unlikely when the system is characterised by multiple stable states and “catastrophic” transitions between these states. In adaptive systems such as ecosystems or macroeconomies, such transitions are most likely when the system is fragile and in a state of low resilience. The system tends to be dominated by positive feedback processes that amplify the impact of small perturbations, with no negative feedback processes present that can arrest this snowballing effect.

It turns out that such a framework was extremely well-suited to describing the housing market before the crash. Once house prices started falling and refinancing was no longer an option, the initial wave of defaults triggered a vicious cycle of house price declines and further defaults. Similarly, collateral requirements on leveraged investors, mark-to-market pressures and other positive feedback processes in the market created a vicious cycle of price declines in the market for mortgage-backed securities and CDOs.
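The sketch below is a minimal caricature of such a positive feedback loop, with arbitrary parameters: a small shock is absorbed, while a slightly larger one is amplified into a far deeper decline – exactly the kind of bimodal outcome distribution described above.

```python
# A minimal sketch of the positive feedback loop described above: falling
# prices trigger defaults, forced sales push prices down further. Parameters
# are arbitrary; the point is the bimodal outcome, not the numbers.

def simulate(initial_shock, sensitivity=0.9, threshold=0.02, periods=40):
    price_drop = initial_shock
    for _ in range(periods):
        defaults = max(0.0, price_drop - threshold)       # defaults kick in past the threshold
        price_drop = initial_shock + sensitivity * defaults  # forced sales deepen the drop
    return price_drop

print(f"small shock : cumulative price drop {simulate(0.01):.1%}")  # absorbed
print(f"larger shock: cumulative price drop {simulate(0.05):.1%}")  # amplified many-fold
```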

So what does all this have to do with Magnetar’s desire to include riskier assets in their long equity portfolios? If one believes that only a small perturbation is required to tip the market over into a state of collapse, then the long position should be weighted towards the riskiest possible asset portfolio. Essentially, the above framework implies that there is no benefit to having “safer” long positions in the long-short portfolio. The fragility of the system means that either there is no perturbation and all assets perform no matter how low-quality they are, or there is a perturbation and even “high quality” assets default.

The above framework of catastrophic shifts between multiple stable states is not uncommon, especially in fixed income markets. In fact, the Greek funding situation is a perfect example. If one had to sketch out a distribution of the yield on Greek debt, it is likely that intermediate levels would be the least likely scenarios. In other words, either Greece funds at low sustainable rates or it moves rapidly to a state of default – it is unlikely that Greece raises, say, 50 billion Euros at an interest rate of 10%. The situation is of course made even more stark by Greece’s inability to inflate away its debt via the printing press. The bifurcation exists in fiat currency-issuing countries as well, but there the transition occurs only at the point when hyperinflation kicks in.

Bank incentives are the real problem

Even if my arguments are valid, it is nevertheless obvious that although Magnetar may not have executed the moral hazard trade, someone else could quite easily have done so. But the moral hazard trade was only possible because there was sufficient investor demand for the rated tranches of the CDO and, even more crucially, because the originating bank was willing to hold onto the super-senior tranche. As I have discussed many times earlier in detail, bank demand for super-senior tranches is a logical consequence of the cheap leverage that banks are afforded via the moral hazard subsidy of the TBTF doctrine. If banks were less levered, many of these deals would not have been issued at all.

In fact, two of the hedging strategies that we know were implemented in banks – UBS’ “AMPS” strategy and Howie Hubler’s trade at Morgan Stanley – were mirror images of the Magnetar trade. It is not a coincidence that bank traders chose the negatively skewed payoff distribution and Magnetar chose the positively skewed one.


Disclaimer: The above note is just my analysis of the facts and assertions in ProPublica’s article. I have no additional knowledge of the facts of the case and it is entirely possible that Magnetar are being less than fully forthright in their responses to the story. The above analysis is more useful as an illustration of how the facts as described in the article can be reconciled to a narrative that does not imply moral hazard.


Written by Ashwin Parameswaran

April 11th, 2010 at 4:19 pm

Micro-Foundations of a Resilience Approach to Macro-Economic Analysis

with 4 comments

Before assessing whether a resilience approach is relevant to macro-economic analysis, we need to define resilience. Resilience is best defined as “the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks.”

The assertion that an ecosystem can lose resilience and become fragile is not controversial. To claim that the same can occur in social systems such as macro-economies is nowhere near as obvious, not least due to our ability to learn, forecast the future and adapt to changes in our environment. Any analysis of how social systems can lose resilience is open to the objection that loss of resilience implies systematic error on the part of economic actors in assessing the economic conditions accurately and an inability to adapt to the new reality. For example, one of the common objections to Minsky’s Financial Instability Hypothesis (FIH) is that it requires irrational behaviour on the part of economic actors. Rajiv Sethi’s post has a summary of this debate, with a notable objection coming from Bernanke’s paper on the subject which insists that “Hyman Minsky and Charles Kindleberger have in several places argued for the inherent instability of the financial system, but in doing so have had to depart from the assumption of rational behavior.”

One response to this objection is “So What?” and indeed the stability-resilience trade-off can be explained within the Kahneman-Tversky framework. Another response, which I’ve invoked on this blog and which Rajiv has also mentioned in a recent post, focuses on the pervasive principal-agent relationship in the financial economy. However, I am going to focus on a third and more broadly applicable rationale, which utilises a “rationality” that incorporates Knightian uncertainty as the basis for the FIH. The existence of irreducible uncertainty is sufficient to justify an evolutionary approach for any social system, whether it be an organization or a macro-economy.

Cognitive Rigidity as a Rational Response to Uncertainty

Rajiv touches on the crux of the issue when he notes: “Selection of strategies necessarily implies selection of people, since individuals are not infinitely flexible with respect to the range of behavior that they can exhibit.” But is achieving infinite flexibility a worthwhile aim? The evidence suggests that it is not. In the face of true uncertainty, infinite flexibility is not only unrealistic due to finite cognitive resources but it is also counterproductive and may deliver results that are significantly inferior to a partially “rigid” framework. V.S. Ramachandran explains this brilliantly: “At any given moment in our waking lives, our brains are flooded with a bewildering variety of sensory inputs, all of which have to be incorporated into a coherent perspective based on what stored memories already tell us is true about ourselves and the world. In order to act, the brain must have some way of selecting from this superabundance of detail and ordering it into a consistent ‘belief system’, a story that makes sense of the available evidence. When something doesn’t quite fit the script, however, you very rarely tear up the entire story and start from scratch. What you do, instead, is to deny or confabulate in order to make the information fit the big picture. Far from being maladaptive, such everyday defense mechanisms keep the brain from being hounded into directionless indecision by the ‘combinational explosion’ of possible stories that might be written from the material available to the senses.”

This rigidity is far from being maladaptive and appears to be irrational only when measured against a utopian definition of rational choice. Behavioural economics also frequently commits the same error – as Brian Loasby notes: “It is common to find apparently irrational behaviour attributed to ‘framing effects’, as if ‘framing’ were a remediable distortion. But any action must be taken within a framework.” This notion of true rationality being less than completely flexible is not a new one – Ramachandran’s work provides the neurological basis for the notion of ‘rigidity as a rational response to uncertainty’. I have already discussed Ronald Heiner’s framework in a previous post; it bears a striking resemblance to Ramachandran’s thesis:

“Think of an omniscient agent with literally no uncertainty in identifying the most preferred action under any conceivable condition, regardless of the complexity of the environment which he encounters. Intuitively, such an agent would benefit from maximum flexibility to use all potential information or to adjust to all environmental conditions, no matter how rare or subtle those conditions might be. But what if there is uncertainty because agents are unable to decipher all of the complexity of the environment? Will allowing complete flexibility still benefit the agents?

I believe the general answer to this question is negative: that when genuine uncertainty exists, allowing greater flexibility to react to more information or administer a more complex repertoire of actions will not necessarily enhance an agent’s performance.”

Brian Loasby gives an excellent account of ‘rationality under uncertainty’ and its evolutionary implications in this book, which traces hints of this idea running through the work of Adam Smith, Alfred Marshall, George Kelly’s ‘Personal Construct Theory’ and Hayek’s ‘Sensory Order’. But perhaps the clearest exposition of the idea was provided by Kenneth Boulding in his description of subjective human knowledge as an ‘Image’. Most external information either conforms so closely to the image that it is ignored or it adds to the image in a well-defined manner. But occasionally, we receive information that is at odds with our image. Boulding recognised that such change is usually abrupt and explained it in the following manner: “The sudden and dramatic nature of these reorganizations is perhaps a result of the fact that our image is in itself resistant to change. When it receives messages which conflict with it, its first impulse is to reject them as in some sense untrue….As we continue to receive messages which contradict our image, however, we begin to have doubts, and then one day we receive a message which overthrows our previous image and we revise it completely.” He also recognises that this resistance is not “irrational” but merely a logical response to uncertainty in an “imperfect” market. “The buyer or seller in an imperfect market drives on a mountain highway where he cannot see more than a few feet around each curve; he drives it, moreover, in a dense fog. There is little wonder, therefore, that he tends not to drive it at all but to stay where he is. The well-known stability or stickiness of prices in imperfect markets may have much more to do with the uncertain nature of the image involved than with any ideal of maximizing behavior.”

Loasby describes the key principles of this framework as follows: “The first principle is that all action is decided in the space of representations. These representations include, for example, neural networks formed in the brain by processes which are outside our conscious control…None are direct copies of reality; all truncate complexity and suppress uncertainty……The second principle of this inquiry is that viable processes must operate within viable boundaries; in human affairs these boundaries limit our attention and our procedures to what is manageable without, we hope, being disastrously misleading – though no guarantees are available……The third principle is that these frameworks are useless unless they persist, even when they do not fit very well. Hahn’s definition of equilibrium as a situation in which the messages received by agents do not cause them to change the theories that they hold or the policies that they pursue offers a useful framework for the analysis both of individual behaviour and of the co-ordination of economic activity across a variety of circumstances precisely because it is not to be expected that theories and policies will be readily changed just because some evidence does not appear readily compatible with them.” (For a more detailed account, read Chapter 3 ‘Cognition and Institutions’ of the aforementioned book or his papers here and here.)

The above principles are similar to Ronald Heiner’s assertion that actions chosen under true uncertainty must satisfy a ‘reliability condition’. It also accounts for the existence of the stability-resilience trade-off. In Loasby’s words: “If behaviour is a selected adaptation and not a specific application of a general logic of choice, then the introduction of substantial novelty – a change not of weather but of climate – is liable to be severely disruptive, as Schumpeter also insisted. In biological systems it can lead to the extinction of species, sometimes on a very large scale.” Extended periods of stability narrow the scope of events that fit the script and correspondingly broaden the scope of events that appear to be anomalous and novel. When the inevitable anomalous event comes along, we either adapt too slowly or in extreme cases, not at all.


Written by Ashwin Parameswaran

April 11th, 2010 at 7:51 am

Diversity and the Political Economy of Banking

without comments

From a system resilience viewpoint, there are many reasons why a reduction in diversity is harmful. But one of the lesser appreciated benefits of a diverse pool of firms in an industry is the impact it has in reducing the political clout that the industry wields. Diversity is one of the best defences against crony capitalism. As Luigi Zingales explains, commenting here on the political impact of Gramm-Leach-Bliley: “The real effect of Gramm-Leach-Bliley was political, not directly economic. Under the old regime, commercial banks, investment banks, and insurance companies had different agendas, and so their lobbying efforts tended to offset one another. But after the restrictions were lifted, the interests of all the major players in the financial industry became aligned, giving the industry disproportionate power in shaping the political agenda. The concentration of the banking industry only added to this power.”

There’s been a lot of discussion recently on the merits of breaking up the big banks, and one of the arguments in favour of this policy is the perceived reduction in the political clout that the banks would possess. Arnold Kling, for example, lays out the thesis in this recent article. Breaking up the banks may help, but I would argue that the impact of such a move on the political economy of banking will be limited unless the industry becomes less homogeneous.

The prime driver of this homogeneity is the combination of the moral hazard subsidy and regulatory capital guidelines which ensures that there is one optimal strategy that maximises this subsidy and outcompetes all other strategies. This strategy is of course to maintain a highly levered balance sheet invested in low capital-intensity, highly-rated assets.
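Some rough arithmetic, using stylised Basel-style risk weights and entirely illustrative yields and funding costs, shows why this one strategy outcompetes the others:

```python
# Rough arithmetic (stylised risk weights, illustrative yields and funding
# costs) for why the moral hazard subsidy plus risk-weighted capital rules
# select for one strategy: maximum leverage into highly-rated assets.

capital_ratio = 0.08   # required capital as a fraction of risk-weighted assets
funding_cost  = 0.03   # cheap debt funding, subsidised by the TBTF guarantee

def return_on_equity(asset_yield, risk_weight):
    equity   = capital_ratio * risk_weight   # capital held per unit of assets
    leverage = 1.0 / equity
    return (asset_yield - funding_cost) * leverage + funding_cost

print(f"AAA tranche, 20% risk weight    : RoE {return_on_equity(0.040, 0.20):.1%}")
print(f"corporate loan, 100% risk weight: RoE {return_on_equity(0.055, 1.00):.1%}")
# The thin spread on highly-rated paper, levered 60x+, beats the fatter spread
# on a conventional loan book levered only 12x.
```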


Written by Ashwin Parameswaran

April 6th, 2010 at 4:21 pm

Maturity Transformation and the Yield Curve

with 10 comments

Maturity Transformation (MT) enables all firms, not just banks, to borrow short-term money to invest in long-term projects. Of course, banks are the most effective maturity transformers, enabled by deposit insurance/TBTF protection, which discourages their creditors from demanding their money back all at the same time, and by a liquidity backstop from a fiat currency-issuing central bank if panic sets in despite the guarantee. Given the above definition, it is obvious that the presence of MT results in a flatter yield curve than would be the case otherwise (Mencius Moldbug explains it well and this insight is also implicit in Austrian Business Cycle Theory). This post tries to delineate the exact mechanisms via which the yield curve flattens and how the impact of MT has evolved over the last half-century, particularly due to changes in banks’ asset-liability management (ALM) practices.

Let’s take a simple example of a bank that funds via demand deposits and lends these funds out in the form of 30-year fixed-rate mortgages. This loan, if left unhedged, exposes the bank to three risks: Liquidity Risk, Interest Rate Risk and Credit Risk. The liquidity risk is of course essentially unhedgeable – it can be, and is, mitigated by, for example, converting the mortgage into a securitised form that can be sold on to other banks, but the gap inherent in borrowing short and lending long remains. The credit risk of the loan can be hedged but often is not, as compensation for taking on credit risk is one of the fundamental functions of a bank. However, the interest rate risk can be and often is hedged out in the interest rate swaps market.

Interest Rate Risk Management in Bank ALM

Prior to the advent of interest rate derivatives as hedging tools, banks had limited avenues to hedge out interest rate risk. As a result, most banks suffered significant losses whenever interest rates rose. For example, after World War II, US banks were predominantly invested in fixed rate government bonds they had bought during the war. Martin Mayer’s excellent book on ‘The Fed’ documents a Chase banker who said to him in reaction to a Fed rate hike in 1952 that “he never thought he would live to see the day when the government would deliberately make the banking system technically insolvent.” The situation had not changed much even by the 1980s – the initial trigger that set off the S&L crisis was the dramatic rise in interest rates in 1981 that rendered the industry insolvent.

By the 1990s, however, many banks had started hedging their duration gap with the aim of mitigating the damage that a sudden move in interest rates could do to their balance sheets. One of the earlier examples is the case of Banc One, and the HBS case study on the bank’s ALM strategy is a great introduction to the essence of interest rate hedging. More recently, the Net Interest Income (NII) sensitivity of Bank of America, according to slide 35 in this investor presentation, is exactly the opposite of that of the typical maturity-transforming unhedged bank – the bank makes money when rates go up or when the curve steepens. But more importantly, the sensitivity is negligible compared to the size of the bank, which suggests a largely duration-matched position.

In the above analysis, I am not suggesting that the banking system does not play the interest carry trade at all. The FDIC’s decision to release an interest rate risk advisory in January certainly suggests that some banks do. I am only suggesting that if a bank does play the carry trade, it is because it chooses to do so and not because it is forced to do so by the nature of its asset-liability profile. Moreover, the indications are that many of the larger banks are reasonably insensitive to changes in interest rates and currently choose not to play the carry game (see also Wells Fargo’s interest rate neutral stance).

What does this mean for the impact of MT on the yield curve? It means that the role of the interest rate carry trade inherent in MT in flattening the yield curve is an indeterminate one. At the very least, it has a much smaller role than one would suspect. Taking the earlier example of the bank invested in a 30-year fixed rate mortgage, the bank would simply enter into a 30-year interest rate swap where it pays a fixed rate and receives a floating rate to hedge away its interest rate risk. There are many possible counterparties who want to receive fixed rates at long tenors – two obvious examples are corporates who want to hedge their fixed rate issuance back into floating, and pension funds and life insurers who need to invest in long-tenor instruments to match their liabilities.
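A stylised sketch, with illustrative rates only, of how the pay-fixed swap neutralises the rate exposure of the deposit-funded fixed-rate mortgage:

```python
# A stylised sketch (illustrative rates only) of how a pay-fixed swap removes
# the interest rate risk of funding a 30-year fixed-rate mortgage with deposits.

mortgage_rate = 0.060   # fixed rate earned on the mortgage
swap_fixed    = 0.045   # fixed rate paid on the 30-year swap

def net_interest_margin(short_rate, hedged):
    deposit_cost = short_rate                 # deposits reprice with short rates
    nim = mortgage_rate - deposit_cost
    if hedged:
        # pay fixed, receive floating: the floating leg offsets the deposit cost
        nim += short_rate - swap_fixed
    return nim

for short_rate in (0.01, 0.03, 0.05, 0.07):
    print(f"short rate {short_rate:.0%}: "
          f"unhedged NIM {net_interest_margin(short_rate, False):+.2%}, "
          f"hedged NIM {net_interest_margin(short_rate, True):+.2%}")
# Unhedged, the margin swings from +5% to -1% as short rates rise; hedged, it
# is locked at the mortgage rate minus the swap rate (+1.50%) regardless of rates.
```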

So if interest rate carry is not the source of the curve flattening caused by MT, what is? The answer lies in the other unhedged risk – credit risk. Credit risk curves are also usually upward sloping (except when the credit is distressed) and banks take advantage by funding themselves at a very short tenor where credit spreads are low and lending at long tenors where spreads are much higher. This strategy of course exposes them to the risk of credit risk repricing on their liabilities and this was exactly the problem that banks and corporate maturity transformers such as GE faced during the crisis. Credit was still available but the spreads had widened so much that refinancing at those spreads alone would cause insolvency. This is not dissimilar to the problem that Greece faces at present.
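The arithmetic of the credit carry trade and its rollover risk can be sketched with similarly illustrative numbers:

```python
# Illustrative arithmetic for the credit carry trade: fund at short tenors
# where the borrower's own credit spread is low, lend at long tenors where
# spreads are higher, and watch the margin when the funding spread reprices.

asset_spread          = 0.020   # spread earned on long-tenor, locked-in assets
funding_spread_boom   = 0.003   # short-tenor funding spread in normal times
funding_spread_crisis = 0.035   # funding spread once the market reprices the borrower

print(f"normal times  : net spread {asset_spread - funding_spread_boom:+.2%}")
print(f"after rollover: net spread {asset_spread - funding_spread_crisis:+.2%}")
# Credit is still available and the assets still pay, but refinancing at the
# repriced spread alone is enough to turn the carry deeply negative.
```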

The real benefit of the central bank’s liquidity backstop is realised in this situation. When interbank repo markets and commercial paper markets lock up as they did during the crisis, banks and influential corporates like GE can always repo their assets with the central bank on terms not available to any other private player. The ECB’s 12-month repo program is probably the best example of such a quasi-fiscal liquidity backstop.

Conclusion

Given my view that the interest rate carry trade is a limited phenomenon, I do not believe that the sudden removal of MT will produce a “smoking heap of rubble” (Mencius Moldbug’s view). The yield curve will steepen to the extent that the credit carry trade vanishes but even this will be limited by increased demand from long-term investors, most notably pension funds. The conventional story that MT is the only way to fund long-term projects ignores the increasing importance of pension funds and life insurers who have natural long-tenor liabilities that need to be matched against long-tenor assets.


Written by Ashwin Parameswaran

April 4th, 2010 at 5:54 am