macroresilience

resilience, not stability

Archive for December, 2009

Efficient Markets and Pattern Predictions

with 4 comments

Markets can be “inefficient” and yet almost impossible to beat because of the existence of “Limits to Arbitrage”. It is essential not only to hold the correct view but also to know when that view will be realised.

Why is it so difficult to time the market? Because the market is a complex adaptive system, and complex adaptive systems are amenable only to what Hayek called “pattern predictions”. Hayek introduced this concept in his essay “The Theory of Complex Phenomena”, where he analysed economic and other social phenomena as “phenomena of organised complexity” (a term introduced by Warren Weaver in this essay).

In such phenomena, according to Hayek, only pattern predictions are possible about the social structure as a whole. As he explained in an interview with Leo Rosten:

“We can build up beautiful theories which would explain everything, if we could fit into the blanks of the formulae the specific information; but we never have all the specific information. Therefore, all we can explain is what I like to call “pattern prediction.” You can predict what sort of pattern will form itself, but the specific manifestation of it depends on the number of specific data, which you can never completely ascertain. Therefore, in that intermediate field — intermediate between the fields where you can ascertain all the data and the fields where you can substitute probabilities for the data–you are very limited in your predictive capacities.”

“Our capacity of prediction in a scientific sense is very seriously limited. We must put up with this. We can only understand the principle on which things operate, but these explanations of the principle, as I sometimes call them, do not enable us to make specific predictions on what will happen tomorrow.”

Hayek was adamant however that theories of pattern prediction were useful and scientific and had “empirical significance”. The example he drew upon was the Darwinian theory of evolution by natural selection, which provided only predictions as to the patterns one could observe over evolutionary time at levels of analysis above the individual entity.

Hayek’s intention with his theory was to debunk the utility of statistics and econometrics in forecasting macroeconomic outcomes (see his Nobel lecture). The current neoclassical defense against the charge of failing to predict the crisis takes the other extreme position, i.e. that our theories are right because no one could have predicted the crisis. This contention explicitly denies the possibility of “pattern predictions” and is not a valid defense. Any macroeconomic theory should be capable of explaining the patterns of our economic system – no more, no less.

One of the key reasons why timing and exact prediction are so difficult is the futility of conventional cause-and-effect thinking in complex adaptive systems. As Michael Mauboussin observed, “cause and effect thinking is futile, if not dangerous”. The underlying causes may be far removed from the effect, both in time and in space, and the proximate cause may be only the “straw that broke the camel’s back”.
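As a purely illustrative aside (a toy model, not a claim about any actual market): the Bak-Tang-Wiesenfeld sandpile is a convenient way to see this distinction in code. The distribution of avalanche sizes is a stable, reproducible pattern, but which particular grain sets off the next large avalanche is essentially unknowable in advance. All parameters below are arbitrary.

```python
# Toy Bak-Tang-Wiesenfeld sandpile: the *pattern* (a heavy-tailed distribution
# of avalanche sizes) is predictable, but the size of the avalanche set off by
# any individual grain is not.
import numpy as np

rng = np.random.default_rng(0)
N = 20                                  # grid size (kept small so this runs fast)
grid = np.zeros((N, N), dtype=int)
avalanche_sizes = []

def drop_grain(grid):
    """Drop one grain at a random site and topple until stable.
    Returns the number of topplings (the 'avalanche size')."""
    i, j = rng.integers(0, N, size=2)
    grid[i, j] += 1
    size = 0
    unstable = True
    while unstable:
        unstable = False
        for x, y in np.argwhere(grid >= 4):
            grid[x, y] -= 4
            size += 1
            unstable = True
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < N and 0 <= ny < N:
                    grid[nx, ny] += 1   # grains falling off the edge are lost
    return size

for _ in range(20000):
    avalanche_sizes.append(drop_grain(grid))

sizes = np.array(avalanche_sizes[5000:])        # discard the build-up phase
print("fraction of grains causing no avalanche:", np.mean(sizes == 0))
print("95th / 99.9th percentile avalanche size:",
      np.percentile(sizes, 95), np.percentile(sizes, 99.9))
# The percentiles (the pattern) are reproducible across runs; which grain
# triggers the next large avalanche is not.
```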

Many excellent examples of “pattern prediction” can be seen in ecology. For example, the proximate cause of the catastrophic degradation of Jamaica’s coral reefs since the 1980s was the mass mortality of the dominant species of urchin (reference). However, the real reason was the progressive loss of diversity due to overfishing since the 1950s.

As CS Holling observed in his analysis of a similar collapse in fisheries in the Great Lakes:

“Whatever the specific causes, it is clear that the precondition for the collapse was set by the harvesting of fish, even though during a long period there were no obvious signs of problems. The fishing activity, however, progressively reduced the resilience of the system so that when the inevitable unexpected event occurred, the populations collapsed. If it had not been the lamprey, it would have been something else: a change in climate as part of the normal pattern of fluctuation, a change in the chemical or physical environment, or a change in competitors or predators.”

The financial crisis of 2008-2009 can be analysed as the inevitable result of a progressive loss of system resilience. Whether the underlying cause was a buildup of debt, moral hazard or monetary policy errors is a different debate and can only be analysed by looking at the empirical evidence. However, just as is the case in ecology, the inability to predict the time of collapse or even the proximate cause of collapse does not equate to an inability to explain macroeconomic patterns.


Written by Ashwin Parameswaran

December 31st, 2009 at 10:52 am

John Hempton on Efficient Markets

without comments

John Hempton has a great post on the difficulty of “beating the market” even if one possesses superior insight or knowledge. I would just add the following:

It is common among commentators to conflate two very different assertions:

  1. It is extremely difficult to “beat the market”
  2. Markets are efficient.

The common error is to assume that 1 proves 2, which is most definitely not the case. Shleifer and Vishny discussed this in their paper “The Limits of Arbitrage”.

Timing is extremely important especially when taking a short position for many reasons:

  1. Short equity is a short volatility position, i.e. limited upside, unlimited downside. At the very least, margin requirements in the interim “inefficient” period may kill you before the market corrects (see the sketch after this list). Long equity positions can at least be left alone if liquidity permits and the position is not leveraged.
  2. Principal-agent problem: fund managers need to get the timing spot on when they take on contrarian positions. Otherwise, they will be fired by their investors long before the market corrects.
  3. Even if one is not an agent, simple uncertainty means that we are never completely sure of our own judgement. The longer the market refuses to come around to our viewpoint, the less certain we become and the more tempted we are to liquidate.
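A rough sketch of point 1 (illustrative parameters only, not any real instrument): a short position whose thesis is eventually vindicated can still be forced out by interim mark-to-market losses against a margin limit.

```python
# A minimal sketch: a short whose thesis is eventually right, but where interim
# mark-to-market losses breach a margin limit before the correction arrives.
import numpy as np

rng = np.random.default_rng(1)

def short_pnl_path(days=500, crash_day=400, drift=0.0005, vol=0.02,
                   crash=-0.30, margin_limit=-0.25):
    """Return (final_pnl, day_stopped_out_or_None) for a 1-unit short."""
    rets = rng.normal(drift, vol, size=days)      # grinding rally first...
    rets[crash_day] += crash                      # ...then the view is vindicated
    price = np.cumprod(1 + rets)
    pnl = 1 - price                               # short from price = 1
    breach = np.where(pnl < margin_limit)[0]
    if breach.size and breach[0] < crash_day:
        return pnl[breach[0]], int(breach[0])     # forced out before the crash
    return pnl[-1], None

stopped, survived = 0, 0
for _ in range(2000):
    _, stop_day = short_pnl_path()
    if stop_day is None:
        survived += 1
    else:
        stopped += 1
print(f"paths stopped out before the crash: {stopped}")
print(f"paths that survived to collect:     {survived}")
# Even with a 'correct' view, a meaningful fraction of paths hit the margin
# limit during the inefficient interim period.
```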

As John mentions, timing the market requires not only holding the contrarian view but also knowing when this view will dissipate through the market. “Wisdom of the crowds” explanations of market efficiency require that the uninformed majority hold sufficiently diverse opinions. Given that betting on Kodak’s demise is tantamount to betting on a technological paradigm shift, the “crowd” is by definition not diverse: it is wedded to the old paradigm.

This is essentially why most contrarian investors are long-only, long-term investors. It is the style of investing that Jack Treynor called betting on “slow travelling ideas”. The always excellent Michael Mauboussin discusses the wisdom of crowds and slow travelling ideas here and here.


Written by Ashwin Parameswaran

December 30th, 2009 at 11:28 am

Posted in Market Efficiency

The “Theory of the Second Best” and the Financial Crisis

without comments

Much of the debate regarding the causes of the financial crisis ignores the fact that we live in a “second best” world. The “Theory of the Second Best” states that in a world that is far from a textbook “free market”, any move towards the theoretical free market optimum does not necessarily increase welfare.

Our current financial system is clearly far from a free market. The implicit and explicit guarantee to bank creditors via deposit insurance and the TBTF doctrine is a fundamental deviation from free market principles. On the other hand, derivatives markets are among the least regulated markets in any sector.

This second-best, hybrid nature of our financial system means that any discussion of the crisis must be strongly empirical in nature. Deductive logic is essential but a logical argument with incomplete facts can be made to fit almost any conclusion. So the Keynesians blame the free market and deregulation, the libertarians blame government action and the behavioural economists blame irrationality. But no one stops to consider any facts that don’t fit their preferred thesis.

The key conclusion of my work is that it is the combination of the moral hazard problem driven by bank creditor guarantees and the deregulated nature of key components of the financial system that caused the crisis. This is not a new argument. The argument for regulation itself rests on the need to protect the taxpayer in the presence of this creditor guarantee. The Fed recognised this argument as early as 1983. As John Kareken noted, “Deregulation Is the Cart, Not the Horse”. The growth of the CDS and other derivatives markets was not a problem by itself. It caused damage by enabling the banks to maximise the value of the free lunch derived from the taxpayer. The same could be said for bank compensation practices.

If re-regulation could work, I would be in favour of it. But I don’t think it can. As I’ve discussed before (1,2,3), almost any regulation will be arbitraged away by the banks. The only regulations that may be difficult to arbitrage are blunt, draconian ones that would dramatically reduce the efficiency of the system. Even then, the odds of arbitrage are not low enough.


Written by Ashwin Parameswaran

December 28th, 2009 at 12:31 pm

Complete Markets and the Principal-Agent Problem in Banking

without comments

In an earlier note, I discussed how monitoring and incentive contracts can alleviate the asymmetric information problem in the principal-agent relationship. Perfect monitoring, apart from being impossible in many cases, is also too expensive. As a result, most principals will monitor only to the extent that the expense is justified by the reduction in incentive mismatch. In most industries, this approach is good enough: the menu of choices available to an agent is usually narrow, and the principal only needs to monitor for the most egregious instances of abuse.

In fact, this was the case in banking as well until the advent of derivatives. Goodhart’s Law by itself does not guarantee arbitrage by the agent – the agent also needs a sufficiently wide menu of choices that the principal cannot completely monitor or contract for.

As discussed in an earlier note, agents in banking have a strong incentive to enter into bets with negatively skewed payoffs. The limiting factor was always the supply of such financial instruments. For example, the supply of AAA corporate bonds has always been limited. Securitisation and tranching technology raised this limit substantially by using a diverse pool of lower-rated credits to produce a substantial senior AAA tranche. But supply was still limited by the number of mortgages or bonds available.
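A back-of-the-envelope sketch of how tranching manufactures that senior slice (a one-factor Gaussian copula with invented parameters; nothing here is calibrated to real collateral): pool many lower-rated credits and the senior tranche is almost never touched, provided the assumed default correlation is low.

```python
# A minimal Monte Carlo sketch of tranching (illustrative numbers only):
# pool 100 credits, each with a 5% default probability, and carve the loss
# waterfall into equity / mezzanine / senior tranches.
import numpy as np

rng = np.random.default_rng(2)

def pool_losses(n_credits=100, pd=0.05, lgd=0.6, rho=0.15, n_sims=50_000):
    """One-factor Gaussian copula: correlated defaults via a common factor."""
    z = rng.standard_normal((n_sims, 1))                  # systematic factor
    eps = rng.standard_normal((n_sims, n_credits))        # idiosyncratic noise
    assets = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
    threshold = np.quantile(rng.standard_normal(1_000_000), pd)  # approx. normal ppf
    defaults = assets < threshold
    return defaults.mean(axis=1) * lgd                    # portfolio loss rate

def tranche_loss(losses, attach, detach):
    """Loss allocated to a tranche, as a fraction of tranche size."""
    return np.clip(losses - attach, 0, detach - attach) / (detach - attach)

losses = pool_losses()
for name, attach, detach in [("equity", 0.00, 0.03),
                             ("mezzanine", 0.03, 0.10),
                             ("senior", 0.10, 1.00)]:
    tl = tranche_loss(losses, attach, detach)
    print(f"{name:9s}  hit in {np.mean(tl > 0):7.3%} of sims, "
          f"expected tranche loss {tl.mean():7.3%}")
# The senior tranche is rarely hit under these assumptions -- but the result is
# very sensitive to the correlation input, which is exactly where the
# pre-crisis models went wrong.
```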

The innovation that effectively removed any limit on the agent’s ability to arbitrage was the growth of the CDS market and the development of the synthetic CDO. As the UBS shareholder report notes:

“Key to the growth of the CDO structuring business was the development of the credit default swap (“CDS”) on ABS in June 2005 (when ISDA published its CDS on ABS credit definitions). This permitted simple referencing of ABS through a CDS. Prior to this, cash ABS had to be sourced for inclusion in the CDO Warehouse.”


Written by Ashwin Parameswaran

December 28th, 2009 at 9:18 am

Information Asymmetry and the Principal-Agent Problem

with 2 comments

Information Asymmetry is often held as the cause of many agency problems. The most famous such study is Akerlof’s “Market for Lemons”. Many recent studies have pinned the blame for aspects of the recent financial crisis on information asymmetry between various market participants. On the face of it, this view is hard to dispute. The principal-agent problem is pervasive in financial institutions and markets – between shareholders and CEOs, CEOs and traders, shareholders and bank creditors, and between banks and their clients.

Monitoring and Incentive Contracts

In most circumstances, market participants find ways to mitigate this principal-agent problem. In the case of simple tasks, monitoring by the principal may be enough. Unfortunately, many tasks are too complex to be monitored effectively by the principal. Comprehensive monitoring can also be too expensive.

Another option is to amend the contract between the principal and the agent so as to align their interests. Examples include second-hand car dealers offering warranties, bonds carrying covenants, bank bonuses being paid in deferred equity rather than cash, etc. This approach is not perfect either. There are limits to how well a contract can align interests, and agents will arbitrage imperfect contracts to maximise their own interests – again Goodhart’s Law in operation. In fact, firms frequently discover that contracts that seem to align principal and agent interests have exactly the opposite effect. As a seminal paper in management theory puts it, “the folly of rewarding A, while hoping for B”. In the absence of a “perfectly aligned” contract, rewarding a close proxy (A) of the real objective (B) may make things worse.
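A toy numerical version of that point (the payoff numbers are invented): once the contract weights the measurable proxy A more heavily than the true objective B, the agent’s best response is a corner solution and the principal ends up worse off than under a weaker, “less aligned” contract.

```python
# A minimal sketch of "rewarding A while hoping for B" (numbers are
# illustrative): the principal can only contract on a measurable proxy A,
# the agent allocates a fixed effort budget to maximise her own pay, and the
# harder the contract leans on A, the less of the true objective B is produced.
import numpy as np

def agent_allocation(weight_on_proxy, effort_budget=1.0, grid=1001):
    """Agent picks the effort split that maximises her own payoff."""
    effort_on_A = np.linspace(0, effort_budget, grid)
    effort_on_B = effort_budget - effort_on_A
    pay = weight_on_proxy * effort_on_A + (1 - weight_on_proxy) * effort_on_B
    best = np.argmax(pay)
    return effort_on_A[best], effort_on_B[best]

def principal_value(effort_on_A, effort_on_B):
    """The principal's true objective depends mostly on B."""
    return 0.2 * effort_on_A + 1.0 * effort_on_B

for w in (0.2, 0.5, 0.8, 1.0):
    a, b = agent_allocation(w)
    print(f"reward weight on proxy A = {w:.1f}  ->  "
          f"effort on B = {b:.2f}, principal value = {principal_value(a, b):.2f}")
# Once the contract rewards A more heavily than B, all effort flows to the
# proxy and the principal is worse off than with a weaker contract.
```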

At this point, it is worth noting that the imperfect nature of contracting and monitoring does not necessarily mean that the principal-agent relationship will break down. In many contracts, a small amount of moral hazard may not significantly reduce the economic benefit derived by the principal.

The Option To Walk Away

If the loss due to information asymmetry is too large despite all available contractual arrangements, the principal always retains the option to walk away. This is of course Akerlof’s conclusion as well. In his example of health insurance for people over the age of 65, he notes that “no insurance sales may take place at any price.”

In the context of a repeated game where the principal and agent transact regularly, the mere existence of the option to walk away can mitigate the principal-agent problem. For example, let’s replace the used-car seller in Akerlof’s analysis with a fruit seller. Even if a buyer of fruit has no knowledge of fruit quality, the seller will not sell him “lemons”, because the buyer can always walk away. The seller is incentivised to maximise profits over the series of sales rather than over any single sale.
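A minimal repeated-game sketch of the fruit-seller example (the payoffs are invented): cheating maximises the single sale, honesty maximises the series, and the only enforcement mechanism needed is the buyer’s freedom to leave.

```python
# A minimal repeated-game sketch (illustrative payoffs): a fruit seller can
# sell a lemon for a one-off gain, but a buyer who can walk away never comes
# back. Cheating maximises the single sale; honesty maximises the series.
def seller_profit(cheat: bool, periods: int = 50,
                  honest_margin: float = 1.0, lemon_margin: float = 3.0) -> float:
    """Total profit over `periods` potential sales to one repeat buyer."""
    profit = 0.0
    buyer_present = True
    for _ in range(periods):
        if not buyer_present:
            break                      # the buyer has walked away for good
        if cheat:
            profit += lemon_margin     # one fat margin on the lemon...
            buyer_present = False      # ...and the relationship is over
        else:
            profit += honest_margin
    return profit

print("profit from selling lemons :", seller_profit(cheat=True))
print("profit from honest selling :", seller_profit(cheat=False))
# Even with complete information asymmetry about fruit quality, the threat of
# exit makes honesty the profit-maximising strategy -- provided the game is
# actually repeated and the buyer is free to leave.
```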

It is puzzling that many recent studies of the crisis neglect this option to walk away. For example:

  • Richard Squire analyses the asymmetric risk incentive facing shareholders in AIG and concludes that it is not cost-effective for creditors to monitor shareholders and that the problem therefore persists. But if this is indeed the case, the optimal course of action for creditors is simply to walk away.
  • Arora, Barak et al. conclude that banks that structure and sell CDOs can cherry-pick portfolios to include lemons in a manner undetectable by the buyer of the CDO.

I must stress that I am not disputing the arguments presented in either paper. But the information asymmetry problem cannot be the basis of a persistent, repeated principal-agent problem. Market mechanisms do not guarantee that no mistakes are made. However, they do ensure that repeated mistakes are unlikely. As the saying goes, “Fool me once, shame on you; fool me twice, shame on me.”

Indeed, the breakdown in the CDO market may simply be a case of the buyer having walked away. In the case of AIG, creditors have not walked away primarily because of the near-explicit guarantee accorded to them by the United States government.

Clients, on the other hand, have in many cases walked away from complex products. But what about bank shareholders, who have suffered so much in the crisis? Why have they not walked away from the sector? The answer lies in the implicit/explicit guarantee provided to bank creditors, which is essentially a free lunch courtesy of the taxpayer. As I discussed in the conclusion to an earlier note:

“Principal-agent problems and conflicts between the interests of shareholders, managers and creditors are inherent in each organisation to some degree but usually, the stakeholders develop ways to mitigate such problems. If no such avenues for mitigation are feasible, they always retain the option to walk away from the relationship.

This dynamic changes significantly in the presence of a “free lunch” such as the one provided by creditor protection. In such a case, not walking away even after suffering losses is an entirely rational strategy. Each stakeholder has a positive probability of capturing part of the free lunch in the future even if he has not been able to do so in the past. In fact, shareholder optimism may well be proven correct if significant compensation restrictions are imposed on the entire industry and this increases the share of the “free lunch” flowing to them.”

The free lunch subsidy of “Too Big to Fail” and deposit insurance takes away the option to walk away, not only in the context of bank creditors but also in the context of other principal-agent problems within the industry. The problems of asymmetric information are thus allowed to persist at all levels in the industry for far longer periods than would be the case otherwise.


Written by Ashwin Parameswaran

December 28th, 2009 at 6:21 am

The Role of Discretion in Financial Regulation

with one comment

Steve Waldman’s recent post explains why giving financial regulators discretion in the choice of policy is almost always a bad idea. In his words:

“An enduring truth about financial regulation is this: Given the discretion to do so, financial regulators will always do the wrong thing.”

The reason, of course, is the time-consistency problem. The temptation for the regulator and the central bank to use their “discretion” to bail out the banks is overwhelming, and the market will correctly equate a discretionary regulatory environment with a bailout-prone one. As Lacker and Goodfriend observed in their paper on central bank lending policies in times of crisis:

“The problem with adding variability to central bank lending policy is that the central bank would have trouble sticking to it, for the same reason that central banks tend to overextend lending to begin with. An announced policy of constructive ambiguity does nothing to alter the ex post incentives that cause the central banks to lend in the first place.”

But what about the alternative? Would a regulatory environment that is written in stone perform any better? Most likely not: regulations that are written in stone suffer from Goodhart’s Law. The clearer and more detailed the regulation, the easier it is for market participants to arbitrage it.

Goodhart’s Law is the reason why algorithm-based technology services such as Google and Digg prefer to keep their algorithms private and opaque. However, as discussed above, discretion and opacity are not an option in financial regulation.

So how do we avoid arbitrage without resorting to discretion and ambiguity in the regulatory framework? Goodhart’s Law applies only when we focus on intermediate targets that we presume are good proxies for our objective. The answer is to shift focus from intermediate proxy indicators of excessive risk, such as executive compensation or capital requirements, to the ultimate objective itself.
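To make the arbitrage of a proxy-based rule concrete, here is a stylised sketch (the asset menu and every number in it are invented for illustration): a bank that maximises yield per unit of regulatory capital gravitates to whatever packs the most economic risk into the lowest risk weight.

```python
# A minimal sketch of arbitraging a proxy-based rule (the asset menu and
# numbers are invented): a bank with a fixed amount of capital and a
# risk-weight constraint maximises yield per unit of *regulatory* capital,
# which rewards the asset that hides the most tail risk in the lowest weight.
assets = [
    # name,             yield, regulatory risk weight, 'true' tail risk per unit
    ("corporate loan",   0.060, 1.00, 0.050),
    ("AAA CDO tranche",  0.055, 0.07, 0.040),   # low weight, tail risk intact
    ("government bond",  0.030, 0.00, 0.002),
]

capital = 10.0          # units of equity capital
capital_ratio = 0.08    # required capital = 8% of risk-weighted assets

print(f"{'asset':18s} {'notional':>10s} {'yield/capital':>14s} {'tail risk held':>15s}")
for name, y, rw, tail in assets:
    if rw == 0:
        # Zero risk weight: the constraint does not bind at all.
        print(f"{name:18s} {'unlimited':>10s} {'n/a':>14s} {'n/a':>15s}")
        continue
    # Maximum notional if the bank spends all its capital on this asset.
    notional = capital / (capital_ratio * rw)
    print(f"{name:18s} {notional:10.0f} {y * notional / capital:14.2f} "
          f"{tail * notional:15.1f}")
# Per unit of regulatory capital, the low-risk-weight tranche dominates --
# which is why a clear, detailed risk-weight rule gets gamed rather than
# obeyed, and why the post argues for targeting the objective itself.
```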

But is this even achievable? Google and Digg, for example, have no option but to focus on a reasonably accurate proxy. The same may be true for financial regulation.


Written by Ashwin Parameswaran

December 25th, 2009 at 1:34 pm

The Chicago Pit on Negatively Skewed Bets

without comments

The Chicago pit has a saying that captures exactly the perils of entering into a negatively skewed bet:

“Traders who sell volatility eat like chickens and shit like elephants.”

Taleb has shown that negatively skewed bets are tempting enough even when we are risking our own capital. The moral hazard problem and the resultant cheap leverage make the trade a no-brainer for a bank.
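A quick simulation of the chickens-and-elephants payoff profile (all parameters are arbitrary): most years are profitable, the median career looks fine, and the expected value is still negative.

```python
# A minimal sketch of a negatively skewed bet (parameters are illustrative):
# collect a small premium most periods, take a large loss with small
# probability. Most years 'eat like a chicken'; the occasional year
# 'shits like an elephant'.
import numpy as np

rng = np.random.default_rng(3)

def sell_vol_pnl(years=30, premium=1.0, loss=-25.0, loss_prob=0.05, n_paths=10_000):
    """Annual P&L for n_paths independent 30-year 'careers' selling volatility."""
    hits = rng.random((n_paths, years)) < loss_prob
    return np.where(hits, premium + loss, premium)

annual = sell_vol_pnl()
final = annual.sum(axis=1)
print("share of individual years that are profitable:",
      round(float(np.mean(annual > 0)), 3))
print("median 30-year P&L :", round(float(np.median(final)), 1))
print("mean   30-year P&L :", round(float(final.mean()), 1))
print("5th percentile P&L :", round(float(np.percentile(final, 5)), 1))
# The typical year (and even the median career) looks fine; the mean and the
# tail tell a different story -- and with cheap leverage and a creditor
# guarantee, the tail is someone else's problem.
```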


Written by Ashwin Parameswaran

December 16th, 2009 at 5:20 pm

Volcker on Financial Innovation

without comments

In a discussion in the WSJ, Paul Volcker had this to say about the impact of financial engineering:

“Now, I have no doubts that it moves around the rents in the financial system, but not only this, as it seems to have vastly increased them.”

This is an important and often overlooked point. Financial innovation has led to a significant increase in the rents that the financial sector is able to extract from the rest of the economy. Moreover, much of this increased rent is extracted from the taxpayer.

As I discussed earlier in more detail, incomplete markets kept the moral hazard genie inside the bottle. Financial innovations such as CDS and synthetic CDOs arose primarily to make markets more complete and to enable the financial sector to maximise the rents it could extract from the explicit/implicit guarantee of the state. The solution to the problem is not better regulation but the removal of the guarantee.


Written by Ashwin Parameswaran

December 15th, 2009 at 5:56 pm

On The Futility of Banning Proprietary Risk-Taking By Banks

with one comment

In his interview in Der Spiegel, Paul Volcker argues that banks must not be allowed to take on proprietary risk except for risk incidental to “client activities”. Quoting from the interview:

SPIEGEL: Banking should become boring again?

Volcker: Banking will never be boring. Banking is a risky business. They are going to have plenty of activity. They can do underwriting. They can do securitization. They can do a lot of lending. They can do merger and acquisition advice. They can do investment management. These are all client activities. What I don’t want them doing is piling on top of that risky capital market business. That also leads to conflicts of interest.

This is a more nuanced version of the argument that calls for the reinstatement of the Glass-Steagall Act. But it suffers from two fatal flaws:

  • Regulatory Arbitrage: Separating “client risk” from “proprietary risk” sounds good in theory but is almost impossible to enforce in practice. As I’ve discussed previously, a detailed and fine-tuned regulatory policy will be easy to arbitrage, while a blunt policy will result in a grossly inefficient financial system.
  • Losses on “Client Activities” were the major driver of the current crisis. My analysis of the UBS shareholder report highlighted how the accumulation of super-senior CDO tranches was justified primarily by their perceived importance in facilitating the sale of fee-generating junior tranches to clients. It was the losses on these tranches, issued in the name of facilitating client business, that were at the core of the crisis: they caused the majority of the losses on banks’ balance sheets, and losses on insuring them brought down AIG. Segregated proprietary risk is monitored closely by almost all banks; the real villain of the piece was proprietary risk taken on under the cover of facilitating client business.

Written by Ashwin Parameswaran

December 13th, 2009 at 1:36 pm

Minsky’s Financial Instability Hypothesis and Holling’s conception of Resilience and Stability

with 10 comments

Minsky’s Financial Instability Hypothesis

Minsky’s Financial Instability Hypothesis (FIH) is best summarised as the idea that “stability is destabilizing”. As Laurence Meyer put it:

“a period of stability induces behavioral responses that erode margins of safety, reduce liquidity, raise cash flow commitments relative to income and profits, and raise the price of risky relative to safe assets–all combining to weaken the ability of the economy to withstand even modest adverse shocks.”

Meyer’s interpretation highlights two important aspects of Minsky’s hypothesis (a toy sketch follows the list):

  • It is the “behavioral responses” of economic agents that induce fragility in the macroeconomic system.
  • After a prolonged period of stability, the economy cannot “withstand even modest adverse shocks”.
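A toy sketch of these two aspects (every number here is invented for illustration): if agents size their leverage off recently observed volatility, a long calm stretch steadily erodes the margin of safety until quite a modest shock is enough to exhaust the equity buffer.

```python
# A minimal sketch of 'stability is destabilising' (all parameters invented
# for illustration): agents lever up as realised volatility falls, so the
# same modest shock that is easily absorbed early on becomes fatal after a
# long period of calm.
import numpy as np

def leverage_after_calm(calm_years, base_vol=0.02, calm_vol=0.005,
                        target_loss=0.5):
    """Realised volatility drifts down over the calm period; agents lever up
    until a two-sigma move would cost `target_loss` of their equity."""
    realised_vol = calm_vol + (base_vol - calm_vol) * np.exp(-calm_years / 3)
    return target_loss / (2 * realised_vol)

shock = 0.04    # a modest 4% adverse move
for calm_years in (1, 3, 5, 10):
    lev = leverage_after_calm(calm_years)
    equity_loss = lev * shock
    print(f"{calm_years:2d} calm years -> leverage {lev:5.1f}x, "
          f"4% shock costs {equity_loss:4.0%} of equity"
          f"{'  (wiped out)' if equity_loss >= 1 else ''}")
# The same 4% shock is easily absorbed early on and fatal after years of calm:
# the fragility is produced by the behavioural response to stability, not by
# the size of the shock.
```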

Holling’s “Pathology of Natural Resource Management”

Minsky’s idea that stability breeds instability is also an important theme in ecology. Holling and Meffe describe the “Pathology of Natural Resource Management” as follows:

“when the range of natural variation in a system is reduced, the system loses resilience.”

Resilience and stability are dramatically different concepts. Holling explained the difference in his seminal paper on the topic as follows:

“Resilience determines the persistence of relationships within a system and is a measure of the ability of these systems to absorb changes of state variables, driving variables, and parameters, and still persist. In this definition resilience is the property of the system and persistence or probability of extinction is the result. Stability, on the other hand, is the ability of a system to return to an equilibrium state after a temporary disturbance. The more rapidly it returns, and with the least fluctuation, the more stable it is. In this definition stability is the property of the system and the degree of fluctuation around specific states the result.”

The relevant insight in Holling’s work is that resilience and stability as goals for an ecosystem are frequently at odds with each other. In many ecosystems, “the very fact of low stability seems to produce high resilience”. Conversely, “the goal of producing a maximum sustained yield may result in a more stable system of reduced resilience”.
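Holling’s distinction can be made concrete with a crude “ball in a basin” sketch (the dynamics and numbers below are invented purely for illustration): stability is how fast the state returns after a small kick; resilience is how large a kick the system can take and still return at all.

```python
# A minimal 'ball in a basin' sketch of Holling's distinction (the dynamics
# and numbers are invented): stability is how quickly the state returns to
# equilibrium after a small perturbation; resilience is how large a
# perturbation the system can absorb and still return at all.

def response(curvature, basin_width, kick, dt=0.01, tol=0.01, max_time=200.0):
    """Relaxation towards equilibrium inside a basin of half-width basin_width.
    Returns the time taken to get back within `tol`, or None if the kick
    lands outside the basin (a regime shift)."""
    x = kick
    if abs(x) > basin_width:
        return None                    # perturbation exceeds the basin
    t = 0.0
    while abs(x) > tol and t < max_time:
        x -= curvature * x * dt        # restoring pull towards equilibrium
        t += dt
    return t

systems = {
    "steep, narrow basin (stable, fragile)": dict(curvature=25.0, basin_width=1.0),
    "shallow, wide basin (sluggish, tough)": dict(curvature=2.0, basin_width=5.0),
}
for name, params in systems.items():
    for kick in (0.5, 2.0):
        t = response(kick=kick, **params)
        outcome = "regime shift" if t is None else f"back in {t:4.1f} time units"
        print(f"{name}  kick={kick:3.1f} -> {outcome}")
# The steep basin snaps back faster from the small kick (more stable) but is
# ejected by the larger one (less resilient); the shallow basin is the
# reverse. Managing for maximum stability is not the same thing as managing
# for persistence.
```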

Forest Fires: An Example of the Resilience-Stability Tradeoff

One of the most striking examples of the resilience-stability tradeoff in ecosystems is the impact of fire suppression over the last century on the dynamics of forest fires in the United States.

From Holling and Meffe:

“Suppression of fire in fire-prone ecosystems is remarkably successful in reducing the short-term probability of fire in the national parks of the United States and in fire-prone suburban regions. But the consequence is an accumulation of fuel over large areas that eventually produces fires of an intensity, extent, and human cost never before encountered (Kilgore 1976; Christensen et al. 1989). Fire suppression in systems that would frequently experience low-intensity fires results in the systems becoming severely affected by the huge fires that finally erupt; that is, the systems are not resilient to the major fires that occur with large fuel loads and may fundamentally change state after the fire.”

For example, fire suppression “selects” for tree species that are not adapted to frequent fires over species like the Ponderosa Pine that have adapted to survive frequent fires. Over time, the composition of the forest ecosystem tilts towards species that are less capable of withstanding even a minor disturbance that would have been absorbed easily in the absence of fire suppression.
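As a purely illustrative toy (a simplified cousin of the Drossel-Schwabl forest-fire model, with arbitrary parameters): suppressing small fires lets fuel connect up, and the fire that eventually escapes control burns a far larger share of the landscape than any of the fires that were prevented.

```python
# A toy fire-suppression model on a grid (parameters are illustrative, not
# calibrated): trees regrow each step and lightning strikes at random.
# Without suppression every ignition burns its (small) cluster; with
# suppression, small fires are put out, fuel connects up, and the fire that
# finally escapes control is far larger.
import numpy as np
from collections import deque

rng = np.random.default_rng(5)
N = 50  # the landscape is an N x N grid

def tree_cluster(forest, i, j):
    """Flood-fill the connected cluster of trees containing (i, j)."""
    cluster, queue, seen = [], deque([(i, j)]), {(i, j)}
    while queue:
        x, y = queue.popleft()
        cluster.append((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < N and 0 <= ny < N and forest[nx, ny] and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append((nx, ny))
    return cluster

def run(suppression_capacity, steps=4000, p_grow=0.01):
    """Return the sizes of the fires that actually burned."""
    forest = np.zeros((N, N), dtype=bool)
    burned_sizes = []
    for _ in range(steps):
        forest |= rng.random((N, N)) < p_grow          # regrowth
        i, j = rng.integers(0, N, size=2)              # lightning strike
        if forest[i, j]:
            cluster = tree_cluster(forest, i, j)
            if len(cluster) >= suppression_capacity:   # suppression fails
                for x, y in cluster:
                    forest[x, y] = False
                burned_sizes.append(len(cluster))
    return burned_sizes

for policy, capacity in [("no suppression", 0), ("suppression   ", 250)]:
    sizes = run(capacity)
    if sizes:
        print(f"{policy}  fires: {len(sizes):4d}  "
              f"median size: {int(np.median(sizes)):4d}  "
              f"largest: {max(sizes):4d} of {N*N} cells")
# Suppression buys long quiet spells, then delivers fires an order of
# magnitude larger than the ones it prevented: short-run stability purchased
# at the cost of resilience.
```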

The similarity to Meyer’s interpretation of the FIH is striking. In an ecosystem it is natural selection rather than adaptation that induces the fragility, but the result in both the economic and the ecological system is the same: an inability to absorb a modest shock, i.e. a loss of resilience.


Written by Ashwin Parameswaran

December 6th, 2009 at 5:09 pm