macroresilience

resilience, not stability

Archive for the ‘Complex Adaptive Systems’ Category

Notes on the Evolutionary Approach to the Moral Hazard Explanation of the Financial Crisis


In arguing the case for the moral hazard explanation of the financial crisis, I have frequently utilised evolutionary metaphors. This approach is not without controversy, and this post is both a partial justification of it and an explication of the conditions under which it is valid. In particular, the simple story of selective forces maximising the moral hazard subsidy that I have outlined depends upon the specific circumstances and facts of our current financial system.

The “Natural Selection” Analogy

One point of dispute is whether selective forces are relevant in economic systems. The argument against selection usually invokes the possibility of firms or investors surviving for long periods of time despite losses, i.e. bankruptcy is not strong enough as a selective force. My arguments rely not on firm survival as the selective force but on the principal-agent relationship between investors and asset managers, between shareholders and CEOs etc. Selection kicks in well before the point of bankruptcy in the modern economy. In this respect, it is relevant to note the increased prevalence of shareholder activism in the last 25 years, which has strengthened this selective mechanism. Moreover, the natural selection argument serves as a more robust justification for the moral hazard story: it does not depend upon explicit agent intentionality, but it is nevertheless strengthened by it.

The “Optimisation” Analogy

The argument that selective forces lead to optimisation is of course an old one, most famously put by Milton Friedman and Armen Alchian. However, evolutionary economic processes lead to optimisation only if some key assumptions are satisfied. A brief summary of the key conditions under which an evolutionary process yields neoclassical outcomes can be found on pages 26-27 of this paper by Nelson and Winter. Below is a partial analysis of these conditions with some examples relevant to the current crisis.

Diversity

Genetic diversity is the raw material upon which Darwinian natural selection operates. Similarly, to achieve anything close to an “optimal” outcome, the strategies available to be chosen by economic agents must be sufficiently diverse. The “natural selection” explanation of the moral hazard problem which I elaborated upon in my previous post therefore depends upon the toolset of banks’ strategies being sufficiently varied. The toolset available to banks to exploit the moral hazard subsidy is primarily determined by two factors: technology/innovation and regulation. The development of new financial products via securitisation, tranching and, most importantly, synthetic issuances with a CDS rather than a bond as the underlying, which I discussed here, has significantly expanded this toolset.

Stability

The story of one optimal strategy outcompeting all others is also dependent on environmental conditions being stable. Quoting from Nelson and Winter: “If the analysis concerns a hypothetical static economy, where the underlying economic problem is standing still, it is reasonable to ask whether the dynamics of an evolutionary selection process can solve it in the long run. But if the economy is undergoing continuing exogenous change, and particularly if it is changing in unanticipated ways, then there really is no “long run” in a substantive sense. Rather, the selection process is always in a transient phase, groping toward its temporary target. In that case, we should expect to find firm behavior always maladapted to its current environment and in characteristic ways—for example, out of date because of learning and adjustment lags, or “unstable” because of ongoing experimentation and trial-and-error learning.”

This follows logically from the ‘Law of Competitive Exclusion’. In an environment free of disturbances, the diversity of competing strategies must reduce dramatically as the optimal strategy outcompetes all others. In fact, disturbances are a key reason why competitive exclusion is rarely observed in ecosystems. When Evelyn Hutchinson examined the ‘Paradox of the Plankton’, one of the explanations he offered was the “permanent failure to achieve equilibrium”. Indeed, one of the most accepted explanations of the paradox is the ‘Intermediate Disturbance Hypothesis’, which concludes that ecosystem diversity may be low when the environment is free of disturbances.

Stability here is defined as “stability with respect to the criteria of selection”. In the principal-agent selective process, the criterion analogous to Darwinian “fitness” is profitability. Nelson and Winter’s objection is absolutely relevant when the strategy that maximises profitability is a moving target and there is significant uncertainty regarding the exact contours of this strategy. On the other hand, the kind of strategies that maximise profitability in a bank have not changed for a while, in no small part because of the size of the moral hazard free lunch available. A CEO who wants to maximise Return on Equity for his shareholders will maximise balance sheet leverage, as I explained in my first post. The stability of the parameters of the strategy that maximises the moral hazard subsidy, and accordingly profitability, ensures that this strategy outcompetes all others.
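To make the leverage logic concrete, here is a stylised calculation (my own illustration with hypothetical numbers, not a model from the post): when a creditor guarantee keeps the cost of debt low and roughly independent of leverage, return on equity rises almost linearly with the size of the balance sheet.

```python
# Stylised sketch with hypothetical numbers: ROE as a function of leverage
# when guaranteed creditors keep debt cheap at any balance sheet size.

def roe(roa, debt_rate, leverage):
    """ROE = ROA + (ROA - cost of debt) * debt/equity,
    where leverage = assets/equity, so debt/equity = leverage - 1."""
    return roa + (roa - debt_rate) * (leverage - 1)

# A 2% return on assets funded at 1%: every extra turn of leverage
# adds roughly a full percentage point of ROE.
for lev in (5, 10, 20, 30):
    print(f"leverage {lev:>2}x -> ROE {roe(0.02, 0.01, lev):.1%}")
```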


Written by Ashwin Parameswaran

March 13th, 2010 at 5:22 am

Natural Selection, Self-Deception and the Moral Hazard Explanation of the Financial Crisis


Moral Hazard and Agent Intentionality

A common objection to the moral hazard explanation of the financial crisis is the following: bankers did not explicitly factor in the possibility of being bailed out. In fact, they genuinely believed that their firms could not possibly collapse under any circumstances. For example, Megan McArdle says: “I went to business school with these people, and talked to them when they were at the banks, and the operating assumption was not that they could always get the government to bail them out if something went wrong. The operating assumption was that they had gotten a whole lot smarter, and would not require a bailout.” And Jeffrey Friedman has this to say about the actions of Ralph Cioffi and Matthew Tannin, the managers of the Bear Stearns fund whose collapse was the canary in the coal mine for the crisis: “These are not the words, nor were Tannin and Cioffi’s actions the behavior, of people who had deliberately taken what they knew to be excessive risks. If Tannin and Cioffi were guilty of anything, it was the mistake of believing the triple-A ratings.”

This objection errs in assuming that the moral hazard problem requires an explicit intention on the part of economic agents to take on more risk and maximise the free lunch available courtesy of the taxpayer. The essential idea which I outlined at the end of this post is as follows: The current regime of explicit and implicit bank creditor protection and regulatory capital requirements means that a highly levered balance sheet invested in “safe” assets with severely negatively skewed payoffs is the optimal strategy to maximise the moral hazard free lunch. Reaching this optimum does not require explicit intentionality on the part of economic actors. The same may be achieved via a Hayekian spontaneous order of agents reacting to local incentives or even more generally through “natural selection”-like mechanisms.

Let us analyse the “natural selection” argument a little further. If we assume that there is sufficient diversity in the balance-sheet strategies followed by various bank CEOs, those CEOs who follow the above-mentioned strategy of high leverage and assets with severely negatively skewed payoffs will be “selected” by their shareholders over other competing CEOs. As I have explained in more detail in this post, the cheap leverage afforded by the creditor guarantee means that this strategy can be levered up to achieve extremely high rates of return. Even better, the assets will most likely not suffer any loss in the extended stable period before a financial crisis. The principal, in this case the bank shareholder, will most likely mistake the returns for genuine alpha rather than the compensation for severe blowup risk that they truly represent. The same analysis applies at every level of the principal-agent relationship in banks where an asymmetric information problem exists.
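To see how little intentionality the selection mechanism requires, consider a minimal simulation sketch (my own illustration with made-up parameters, not a model from the post): shareholders simply rank CEOs on realised returns over a stable decade, and limited liability caps the shareholders' loss at -100%, with anything beyond that falling on the guaranteed creditors.

```python
# Minimal selection sketch, hypothetical parameters: shareholders "select"
# the CEO with the best realised returns over a quiet decade.

import random

random.seed(0)

def annual_roe(leverage, blowup_prob):
    """A severely negatively skewed bet: steady levered carry, rare blowup."""
    if random.random() < blowup_prob:
        return max(-1.0, -0.50 * leverage)  # equity wiped out, no worse
    return 0.015 * leverage                 # small carry, levered up

strategies = {
    "conservative CEO (3x, no tail risk)": (3, 0.0),
    "levered CEO (25x, hidden tail risk)": (25, 0.02),
}

for name, (lev, p) in strategies.items():
    decade = [annual_roe(lev, p) for _ in range(10)]
    print(f"{name}: mean ROE {sum(decade) / len(decade):.1%}")

# In a quiet decade the blowup typically never materialises in-sample, so
# the levered CEO posts roughly 37% ROE against 4.5% and is "selected";
# the principal cannot distinguish this from genuine alpha until the tail
# event finally arrives.
```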

Self-Deception and Natural Selection

But this argument still leaves one empirical question unanswered: given that such a free lunch is on offer, why don’t we see more examples of active and intentional exploitation of the moral hazard subsidy? In other words, why do most bankers seem to be true believers like Tannin and Cioffi? To answer this question, we need to take the natural selection analogy a little further. In the evolutionary race between true believers and knowing deceivers, who wins? The work of Robert Trivers on the evolutionary biology of self-deception tells us that the true believer has a significant advantage in this contest.

Trivers’ work is well summarised by Ramachandran: “According to Trivers, there are many occasions when a person needs to deceive someone else. Unfortunately, it is difficult to do this convincingly since one usually gives the lie away through subtle cues, such as facial expressions and tone of voice. Trivers proposed, therefore, that maybe the best way to lie to others is to first lie to yourself. Self-deception, according to Trivers, may have evolved specifically for this purpose, i.e. you lie to yourself in order to enable you to more effectively deceive others.” Or as Conor Oberst put it more succinctly here: “I am the first one I deceive. If I can make myself believe, the rest is easy.” Trivers’ work is not as relevant for the true believers as it is for the knowing deceivers. It shows that active deception is an extremely hard task to pull off especially when attempted in competition with a true believer who is operating with the same strategy as the deceiver.

Between a CEO who is consciously trying to maximise the free lunch and a CEO who genuinely believes that a highly levered balance sheet of “safe” assets is the best strategy, who is likely to be more convincing to his shareholders and regulator? Bob Trivers’ work shows that it is the latter. Bankers who drink their own Kool-Aid are more likely to convince their bosses, shareholders or regulators that there is nothing to worry about. Given a sufficiently strong selective mechanism such as the principal-agent relationship, it is inevitable that such bankers would end up being the norm rather than the exception. The real deviation from the moral hazard explanation would be if it were any other way!

There is another question which, although not necessary for the above analysis to hold, is still intriguing: how and why do people transform into true believers? Of course, we can assume a purely selective environment in which a small population of true believers merely outcompetes the rest. But we can do better. There is ample evidence from many fields of study that we tend to cling to our beliefs even in the face of contradictory information. Only after the anomalous information crosses a significant threshold do we revise our beliefs. For a neurological explanation of this phenomenon, the aforementioned paper by V.S. Ramachandran analyses how and why patients with right hemisphere strokes vehemently deny their paralysis with the aid of numerous self-deceiving defence mechanisms.

Jeffrey Friedman’s analysis of how Cioffi and Tannin clung to their beliefs in the face of mounting evidence to the contrary until the “threshold” was cleared and they finally threw in the towel is a perfect example of this phenomenon. In Ramachandran’s words, “At any given moment in our waking lives, our brains are flooded with a bewildering variety of sensory inputs, all of which have to be incorporated into a coherent perspective based on what stored memories already tell us is true about ourselves and the world. In order to act, the brain must have some way of selecting from this superabundance of detail and ordering it into a consistent ‘belief system’, a story that makes sense of the available evidence. When something doesn’t quite fit the script, however, you very rarely tear up the entire story and start from scratch. What you do, instead, is to deny or confabulate in order to make the information fit the big picture. Far from being maladaptive, such everyday defense mechanisms keep the brain from being hounded into directionless indecision by the ‘combinational explosion’ of possible stories that might be written from the material available to the senses.” However, once a threshold is passed, the brain finds a way to revise the model completely. Ramachandran’s analysis also provides a neurological explanation for Thomas Kuhn’s phases of science, where the “normal” period is overturned once anomalies accumulate beyond a threshold. It also provides further backing for the thesis, which I discussed here, that we follow simple rules and heuristics in the face of significant uncertainty.

Fix The System, Don’t Blame the Individuals

The “selection” argument provides the rationale for how the extraction of the moral hazard subsidy can be maximised despite the absence of any active deception on the part of economic agents. Therefore, as I have asserted before, we need to fix the system rather than blame the individuals. This does not mean that we should not pursue those guilty of fraud. But merely pursuing instances of fraud without fixing the incentive system in place will get us nowhere.


Written by Ashwin Parameswaran

February 17th, 2010 at 10:30 am

Knightian Uncertainty and the Resilience-Stability Trade-off


This note examines the implications of adaptation by economic agents under Knightian uncertainty for the resilience of the macroeconomic system. It expands on themes I touched upon here and here. To summarise the key conclusions:

  • Under Knightian uncertainty, homo economicus is an irrelevant construct. The “optimal” course of action is one that restricts the choice of actions available and depends on a small set of simple rules and heuristics.
  • The choice of actions is restricted to those that are applicable in reasonably likely or recurrent situations. Actions applicable to rare situations are ignored. Therefore, it is entirely rational to take on severely negatively skewed bets.
  • By the same logic, economic agents find it harder to adapt to severe macroeconomic shocks as compared to mild shocks. This is the rationale for Axel Leijonhufvud’s “Corridor Hypothesis”.
  • Minsky’s Financial Instability Hypothesis states that prolonged periods of stability reduce the width of the “corridor” until the point where a macroeconomic crisis is inevitable.
  • The only assumptions needed to draw the above conclusions are the existence of uncertainty and sufficient adaptive/selective forces operating upon economic agents.
  • Minsky believed that this loss of resilience in the macroeconomic system is endogenous and inevitable. Although such a loss of resilience can arise endogenously, the evidence suggests that a significant proportion of the blame for the current crisis can be attributed to the stabilising policies favoured during the Great Moderation.
  • Buzz Holling’s work on ecosystem resilience has highlighted the peril of stabilising complex adaptive systems and how increased stability reduces system resilience.

Uncertainty and Negatively Skewed Payoffs

In a previous note, I explained how the existence of Knightian uncertainty leads to a perceived preference for severely negatively skewed payoffs. Ronald Heiner explains exactly how this occurs in his seminal paper on decision making under uncertainty.

Heiner argues that in the presence of uncertainty, the “optimal” course of action is one that restricts the choice of actions available and depends on a small set of simple rules and heuristics. In his words,

“Think of an omniscient agent with literally no uncertainty in identifying the most preferred action under any conceivable condition, regardless of the complexity of the environment which he encounters. Intuitively, such an agent would benefit from maximum flexibility to use all potential information or to adjust to all environmental conditions, no matter how rare or subtle those conditions might be. But what if there is uncertainty because agents are unable to decipher all of the complexity of the environment? Will allowing complete flexibility still benefit the agents?

I believe the general answer to this question is negative: that when genuine uncertainty exists, allowing greater flexibility to react to more information or administer a more complex repertoire of actions will not necessarily enhance an agent’s performance.”

In Heiner’s framework, actions chosen must satisfy a “Reliability Condition”, which he summarises as: “do so if the actual reliability in selecting the action exceeds the minimum required reliability necessary to improve performance”. This required reliability cannot be achieved in the tails of the distribution, and economic agents therefore ignore actions that are appropriate only in such situations. This explains our reluctance to insure against rare disasters, which Heiner notes:

“Rare events are precisely those which are remote to a person’s normal experience, so that uncertainty in detecting which rare disasters to insure against increases as p (probability of disaster) approaches zero. Such greater uncertainty will reduce the reliability of insurance decisions as disasters become increasingly remote to a person’s normal experience.”

“At some point as p approaches zero, the Reliability Condition will be violated. This implies people will switch from typically buying to typically ignoring insurance conditions, which is just the pattern documented in Kunreuther’s 1978 study.”

Note the similarity between Heiner’s analysis of tail risks under uncertainty and Kahneman and Tversky’s distinction between “possible” and “impossible” events. The reliability problem is also connected to the difficulty of ascertaining the properties of tail events through a statistical analysis of historical data.
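Heiner's Reliability Condition can be rendered in a stylised form (the notation and numbers below are my own simplification, not taken verbatim from the paper): an action should be selected only if the agent's odds of performing it reliably exceed a tolerance threshold, and that threshold diverges as the event to which the action is suited becomes rarer.

```python
# Stylised rendering of Heiner's Reliability Condition (my notation and
# numbers): take an action only if the odds of performing it reliably
# exceed a threshold that blows up as the relevant event becomes rarer.

def required_reliability(gain, loss, p_event):
    """Minimum ratio r/w of right-response to wrong-response probabilities
    needed for an action (e.g. buying disaster insurance) to improve
    performance, given the probability p_event that it is appropriate."""
    return (loss / gain) * (1 - p_event) / p_event

actual_reliability = 4.0  # assume agents respond correctly 4x more often than not

for p in (0.1, 0.01, 0.001):
    threshold = required_reliability(gain=1.0, loss=0.1, p_event=p)
    verdict = "insure" if actual_reliability > threshold else "ignore"
    print(f"p = {p:>6}: required r/w = {threshold:7.1f} -> {verdict}")

# As p approaches zero the required reliability diverges, so at some point
# agents rationally stop insuring against the disaster: Heiner's explanation
# of the Kunreuther (1978) findings quoted above.
```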

In an uncertainty-driven framework, it may be more appropriate to refer to this pattern as a reluctance to insure against tail risks rather than as a preference for “blowup risks”. This distinction is also relevant in the moral hazard debate, where the actions are often better characterised as a neglect to insure against tail risks than as an explicit taking on of such risks.

Impossible Events and Axel Leijonhufvud’s “Corridor Hypothesis”

Heiner also extends this analysis of the reluctance to insure against “impossible” events to provide the rationale for Axel Leijonhufvud’s “Corridor Hypothesis” of macroeconomic shocks and recessions. In his words:

“Now suppose, analogous to the insurance case, that there are different types of shocks, some more severe than others; where larger shocks are possible but less and less likely to happen. In addition, the reliability of detecting when and how to prepare for large shocks decreases as their determinants and repercussions are more remote to agents’ normal experience.

In a similar manner to that discussed for the insurance case, we can derive that the economy’s structure will evolve so as to prepare for and react quickly to small shocks. However, outside of a certain zone or “corridor” around its long-run growth path, it will only very sluggishly react to sufficiently large, infrequent shocks.”

Minsky’s Financial Instability Hypothesis and Leijonhufvud’s Corridor

Minsky’s Financial Instability Hypothesis (FIH) asserts that stability breeds instability, i.e. stability reduces the width of the corridor to the point where even a small shock is enough to push the system outside it. Leijonhufvud acknowledged Minsky’s insight that the width of the corridor was variable and depended upon the recency of past disturbances. In his own words: “Our theory implies a variable width of the corridor. Transactors who have once suffered through a displacement of unanticipated magnitude (on the order of the Great Depression, say) will be encouraged to maintain larger buffers thereafter, until the memory dims…”

The assertion that stability breeds instability is well established in ecology, especially in Buzz Holling’s work, as I discussed here. Heiner’s framework explains Minsky’s assertion as the logical consequence of agent adaptation under uncertainty. But the same can also be explained via “natural selection”-like mechanisms, the most relevant of which is the principal-agent relationship: principals that “select” agents under asymmetric information can effectively mimic the effect of natural selection in ecosystems.

Minsky also argues that sooner or later, a capitalist economy will move outside this corridor for entirely endogenous reasons. This is a more controversial assertion and can only be evaluated through a careful analysis of the empirical evidence. The claim that an economy can move outside the corridor due to endogenous factors is difficult to reject: all it takes is a chance prolonged period of stability. However, this does not imply that the economy must move outside the corridor; establishing that would require us to show that prolonged periods of stability are the norm rather than the exception in a capitalist economy.
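A toy simulation can illustrate the variable-width corridor (this is my own construction with arbitrary parameters, not Minsky's or Leijonhufvud's formal model): if agents size their buffers on the worst shock within living memory, a chance run of quiet years narrows the corridor until an otherwise unexceptional shock breaches it.

```python
# Toy sketch, arbitrary parameters: agents hold buffers equal to the worst
# shock they remember; quiet stretches shrink the buffer, so "crises" are
# precisely the shocks that exceed anything in recent memory.

import random

random.seed(4)

MEMORY = 15   # years of shocks that remain in living memory
shocks = [random.expovariate(1.0) for _ in range(60)]

crisis_years = [
    t for t in range(MEMORY, len(shocks))
    if shocks[t] > max(shocks[t - MEMORY:t])  # shock breaches the buffer
]
print("crisis years (shock exceeds remembered worst):", crisis_years)

# Typically only a handful of years qualify, and each follows a stretch in
# which no comparable shock occurred: stability itself sets up the breach.
```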

Minsky’s Financial Instability Hypothesis and C.S. Holling’s conception of Resilience and Stability

Minsky’s idea that stability breeds instability is an important theme in the field of ecology. Buzz Holling, however, defined the problem as a loss of resilience rather than instability. Resilience and stability are dramatically different concepts, and Holling explained the difference in his seminal paper on the topic as follows:

“Resilience determines the persistence of relationships within a system and is a measure of the ability of these systems to absorb changes of state variables, driving variables, and parameters, and still persist. In this definition resilience is the property of the system and persistence or probability of extinction is the result. Stability, on the other hand, is the ability of a system to return to an equilibrium state after a temporary disturbance. The more rapidly it returns, and with the least fluctuation, the more stable it is. In this definition stability is the property of the system and the degree of fluctuation around specific states the result.”

The relevant insight in Holling’s work is that resilience and stability as goals for an ecosystem are frequently at odds with each other. In many ecosystems, “the very fact of low stability seems to produce high resilience”. Conversely, “the goal of producing a maximum sustained yield may result in a more stable system of reduced resilience”. Minsky’s hypothesis is thus better described as “stability breeds loss of resilience”, not “stability breeds instability”.

The Pathology of Macroeconomic Stabilisation

The “Pathology of Natural Resource Management” is described by Holling and Meffe as follows:

“when the range of natural variation in a system is reduced, the system loses resilience. That is, a system in which natural levels of variation have been reduced through command-and-control activities will be less resilient than an unaltered system when subsequently faced with external perturbations.”

Similarly, the dominant macroeconomic policy paradigm explicitly aims to stabilise the macroeconomy. In particular, monetary policy during the Great Moderation was used as a blunt instrument to put out all but the most minor macroeconomic fires. Stabilising policies of this nature can and do cause the same kind of loss of resilience that Minsky describes. Indeed, as I mentioned in my previous note, agent adaptation to stabilising monetary and fiscal policies can be viewed as a more profound kind of moral hazard. Economic agents may take on severely negatively skewed bets not even as an adaptation to uncertainty but merely as a rational response to stabilising macroeconomic policies.


Written by Ashwin Parameswaran

January 30th, 2010 at 2:08 pm

Efficient Markets and Pattern Predictions


Markets can be “inefficient” and yet almost impossible to beat because of the existence of “Limits to Arbitrage”. It is essential not only to have the correct view but also to know when the view will be realised.

Why is it so difficult to time the market? Because the market is a complex adaptive system and complex adaptive systems are amenable only to what Hayek called “pattern predictions”. Hayek introduced this concept in his essay “The Theory of Complex Phenomena” where he analysed economic and other social phenomena as “phenomena of organised complexity” (A term introduced by Warren Weaver in this essay).

In such phenomena, according to Hayek, only pattern predictions are possible about the social structure as a whole. As he explained in an interview with Leo Rosten:

“We can build up beautiful theories which would explain everything, if we could fit into the blanks of the formulae the specific information; but we never have all the specific information. Therefore, all we can explain is what I like to call “pattern prediction.” You can predict what sort of pattern will form itself, but the specific manifestation of it depends on the number of specific data, which you can never completely ascertain. Therefore, in that intermediate field — intermediate between the fields where you can ascertain all the data and the fields where you can substitute probabilities for the data–you are very limited in your predictive capacities.”

“Our capacity of prediction in a scientific sense is very seriously limited. We must put up with this. We can only understand the principle on which things operate, but these explanations of the principle, as I sometimes call them, do not enable us to make specific predictions on what will happen tomorrow.”

Hayek was adamant however that theories of pattern prediction were useful and scientific and had “empirical significance”. The example he drew upon was the Darwinian theory of evolution by natural selection, which provided only predictions as to the patterns one could observe over evolutionary time at levels of analysis above the individual entity.

Hayek’s intention was to debunk the utility of statistics and econometrics in forecasting macroeconomic outcomes (see his Nobel lecture). The current neoclassical defense against the failure to predict the crisis takes the opposite extreme position, i.e. that our theories are right because no one could have predicted the crisis. This contention explicitly denies the possibility of “pattern predictions” and is not a valid defense. Any macroeconomic theory should be capable of explaining the patterns of our economic system – no more, no less.

One of the key reasons why timing and exact prediction are so difficult is the futility of conventional cause-effect thinking in complex adaptive systems. As Michael Mauboussin observed, “cause and effect thinking is futile, if not dangerous”. The underlying causes may be far removed from the effect, both in time and in space, and the proximate cause may only be the “straw that broke the camel’s back”.

Many excellent examples of “pattern prediction” can be seen in ecology. For example, the proximate cause of the catastrophic degradation of Jamaica’s coral reefs since the 1980s was the mass mortality of the dominant species of urchin (reference). However, the real reason was the progressive loss of diversity due to overfishing since the 1950s.

As C.S. Holling observed in his analysis of a similar collapse in fisheries in the Great Lakes:

“Whatever the specific causes, it is clear that the precondition for the collapse was set by the harvesting of fish, even though during a long period there were no obvious signs of problems. The fishing activity, however, progressively reduced the resilience of the system so that when the inevitable unexpected event occurred, the populations collapsed. If it had not been the lamprey, it would have been something else: a change in climate as part of the normal pattern of fluctuation, a change in the chemical or physical environment, or a change in competitors or predators.”

The financial crisis of 2008-2009 can be analysed as the inevitable result of a progressive loss of system resilience. Whether the underlying cause was a buildup of debt, moral hazard or monetary policy errors is a different debate and can only be analysed by looking at the empirical evidence. However, just as is the case in ecology, the inability to predict the time of collapse or even the proximate cause of collapse does not equate to an inability to explain macroeconomic patterns.


Written by Ashwin Parameswaran

December 31st, 2009 at 10:52 am

Minsky’s Financial Instability Hypothesis and Holling’s conception of Resilience and Stability


Minsky’s Financial Instability Hypothesis

Minsky’s Financial Instability Hypothesis (FIH) is best summarised as the idea that “stability is destabilizing”. As Laurence Meyer put it:

“a period of stability induces behavioral responses that erode margins of safety, reduce liquidity, raise cash flow commitments relative to income and profits, and raise the price of risky relative to safe assets–all combining to weaken the ability of the economy to withstand even modest adverse shocks.”

Meyer’s interpretation highlights two important aspects of Minsky’s hypothesis:

  • It is the “behavioral responses” of economic agents that induce the fragility into the macroeconomic system.
  • After a prolonged period of stability, the economy cannot “withstand even modest adverse shocks”.

Holling’s “Pathology of Natural Resource Management”

Minsky’s idea that stability breeds instability is an important theme in the field of ecology. The “Pathology of Natural Resource Management” is described by Holling and Meffe as follows:

“when the range of natural variation in a system is reduced, the system loses resilience.”

Resilience and stability are dramatically different concepts. Holling explained the difference in his seminal paper on the topic as follows:

“Resilience determines the persistence of relationships within a system and is a measure of the ability of these systems to absorb changes of state variables, driving variables, and parameters, and still persist. In this definition resilience is the property of the system and persistence or probability of extinction is the result. Stability, on the other hand, is the ability of a system to return to an equilibrium state after a temporary disturbance. The more rapidly it returns, and with the least fluctuation, the more stable it is. In this definition stability is the property of the system and the degree of fluctuation around specific states the result.”

The relevant insight in Holling’s work is that resilience and stability as goals for an ecosystem are frequently at odds with each other. In many ecosystems, “the very fact of low stability seems to produce high resilience”. Conversely, “the goal of producing a maximum sustained yield may result in a more stable system of reduced resilience”.
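The distinction can be made concrete with a toy dynamical sketch (my own construction, with made-up parameters): stability is the speed of return to equilibrium after a small disturbance, while resilience is the size of the disturbance the system can absorb without leaving its basin of attraction, and the two can be varied independently.

```python
# Toy sketch of Holling's distinction: a "stable" system returns quickly to
# equilibrium but has a narrow basin of attraction; a "resilient" system
# returns slowly but absorbs much larger shocks. Parameters are made up.

def simulate(return_rate, basin_width, shock, steps=100, dt=0.1):
    """Relax the state x toward equilibrium at `return_rate`; beyond
    `basin_width` the system escapes its basin and changes regime."""
    x = shock
    for _ in range(steps):
        if abs(x) > basin_width:
            return "collapsed"
        x -= return_rate * x * dt
    return f"recovered (x = {x:.3f})"

systems = {
    "stable but fragile":     (2.0, 1.0),   # fast return, narrow basin
    "sluggish but resilient": (0.2, 5.0),   # slow return, wide basin
}

for name, (rate, basin) in systems.items():
    for shock in (0.5, 2.0):
        print(f"{name}, shock {shock}: {simulate(rate, basin, shock)}")
```

The first system looks excellent under small disturbances and fails outright under a larger one; the second fluctuates more but persists, which is exactly the trade-off Holling describes.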

Forest Fires: An Example of the Resilience-Stability Tradeoff

One of the most striking examples of the resilience-stability tradeoff in ecosystems is the impact of fire suppression over the last century on the dynamics of forest fires in the United States.

From Holling and Meffe:

“Suppression of fire in fire-prone ecosystems is remarkably successful in reducing the short-term probability of fire in the national parks of the United States and in fire-prone suburban regions. But the consequence is an accumulation of fuel over large areas that eventually produces fires of an intensity, extent, and human cost never before encountered (Kilgore 1976; Christensen et al. 1989). Fire suppression in systems that would frequently experience low-intensity fires results in the systems becoming severely affected by the huge fires that finally erupt; that is, the systems are not resilient to the major fires that occur with large fuel loads and may fundamentally change state after the fire.”

For example, fire suppression “selects” for tree species that are not adapted to frequent fires over species, like the Ponderosa Pine, that are adapted to survive them. Over time, the composition of the forest ecosystem tilts towards species that are less capable of withstanding even a minor disturbance that would have been absorbed easily in the absence of fire suppression.

The similarity to Meyer’s interpretation of the FIH is striking. In an ecosystem it is natural selection, rather than adaptation, that induces the fragility, but the result in both the economic and the ecological system is an inability to absorb a modest shock, i.e. a loss of resilience.
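The fire-suppression dynamic lends itself to a simple caricature in code (my own numbers and thresholds, not Holling and Meffe's data): suppressing every small fire lets fuel accumulate until the only fires that occur are the ones too large to suppress.

```python
# Caricature of the fire-suppression pathology, hypothetical parameters:
# fuel accumulates each year; ignitions either burn it off or, under a
# suppression policy, are extinguished until a fire exceeds the policy's
# capacity and burns the entire accumulated load.

import random

def run(suppress, years=200, threshold=30, seed=7):
    rng = random.Random(seed)   # same ignition sequence for both policies
    fuel, fires = 0.0, []
    for _ in range(years):
        fuel += 1.0                       # annual fuel accumulation
        if rng.random() < 0.1:            # an ignition occurs
            if suppress and fuel < threshold:
                continue                  # small fire put out, fuel remains
            fires.append(fuel)            # fire size ~ accumulated fuel
            fuel = 0.0
    return fires

for policy, label in ((False, "natural fire regime"), (True, "fire suppression")):
    fires = run(policy)
    print(f"{label}: {len(fires)} fires, largest = {max(fires):.1f} units of fuel")
```

The suppressed system experiences far fewer fires, but each one burns with the full accumulated fuel load: fewer, larger disturbances in place of many small ones, which is the loss of resilience described above.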


Written by Ashwin Parameswaran

December 6th, 2009 at 5:09 pm

Fix The System, Don’t Blame The Individuals


Quoting from John Sterman’s authoritative book on system dynamics,

“A fundamental principle of system dynamics states that the structure of the system gives rise to its behavior. However, people have a strong tendency to attribute the behavior of others to dispositional rather than situational factors, that is, to character and especially character flaws rather than the system in which these people are acting. The tendency to blame the person rather than the system is so strong psychologists call it the “fundamental attribution error” (Ross 1977). In complex systems, different people placed in the same structure tend to behave in similar ways. When we attribute behavior to personality we lose sight of how the structure of the system shaped our choices. The attribution of behavior to individuals and special circumstances rather than system structure diverts our attention from the high leverage points where redesigning the system or government policy can have significant, sustained, beneficial effects on performance (Forrester 1969, chap. 6; Meadows 1982). When we attribute behavior to people rather than system structure the focus of management becomes scapegoating and blame rather than the design of organizations in which ordinary people can achieve extraordinary results.” (pages 28-29)

Sterman’s comment is especially relevant to the current debate on reforming and regulating our financial system. It is misguided to focus on greedy bankers and incompetent or compromised regulators. Bankers and regulators are merely adapting to the incentives presented to them by our current economic and political system.

In fact, the real question is why so few economic actors indulge in fraud or milk taxpayer guarantees when they have every incentive to do so. After all, choosing not to play the game means accepting lower returns if one is a shareholder, and accepting lower bonuses and possibly even being fired for underperformance if one is a manager or a trader.

The answer is that our ethics prevent us from exploiting the situation. But our ethical standards do not remain constant. They can and will erode if a perverse system is left in place for too long. This gradual erosion of ethical standards is the real risk we face if we do not reform our system and fix the incentives. We may not realise this until it is already too late, and reversing the process, rebuilding ethical standards and trust in an economic system, will be no easy task.


Written by Ashwin Parameswaran

December 4th, 2009 at 3:19 pm