macroresilience

resilience, not stability


The Control Revolution And Its Discontents


One of the key narratives on this blog is how the Great Moderation and the neo-liberal era have signified the death of truly disruptive innovation in much of the economy. When macroeconomic policy stabilises the macroeconomic system, every economic actor is incentivised to take on more macroeconomic systemic risks and shed idiosyncratic, microeconomic risks. Those that figured out this reality early on and/or had privileged access to the programs used to implement this macroeconomic stability, such as banks and financialised corporates, were the big winners – a process that is largely responsible for the rise in inequality during this period. In such an environment the pace of disruptive product innovation slows but the pace of low-risk process innovation aimed at cost reduction and improved efficiency flourishes. Therefore we get the worst of all worlds – the Great Stagnation combined with widespread technological unemployment.

This narrative naturally raises the question: when was the last time we had a truly disruptive Schumpeterian era of creative destruction? In a previous post looking at the evolution of the post-WW2 developed economic world, I argued that the so-called Golden Age was anything but Schumpeterian – as Alexander Field has argued, much of the economic growth till the 70s was built on the basis of disruptive innovation that occurred in the 1930s. So we may not have been truly Schumpeterian for at least 70 years. But what about the period from at least the mid 19th century till the Great Depression? Even a cursory reading of economic history gives us pause for thought – after all, wasn’t a significant part of this period supposed to be the Gilded Age of cartels and monopolies, which sounds anything but disruptive?

I am now of the opinion that we have never really had any long periods of constant disruptive innovation – this is not a sign of failure but simply a reality of how complex adaptive systems across domains manage the tension between efficiency, robustness, evolvability and diversity. What we have had is a subverted control revolution in which repeated attempts to achieve and hold onto an efficient equilibrium fail. Creative destruction occurs despite our best efforts to stamp it out. In a sense, disruption is alien to the essence of the industrial and post-industrial period of the last two centuries, the overriding philosophy of which is automation and algorithmisation aimed at efficiency and control. And many of our current troubles are a function of the fact that we have almost perfected the control project.

The operative word and the source of our problems is “almost”. Too many people look at the transition from the Industrial Revolution to the Algorithmic Revolution as a sea-change in perspective. But in reality, the current wave of reducing everything to a combination of “data & algorithm” and tackling every problem with more data and better algorithms is the logical end-point of the control revolution that started in the 19th century. The difference between Ford and Zara is overrated – Ford was simply the first step in a long process that focused on systematising each element of the industrial process (production, distribution, consumption) while also, crucially, putting in place a feedback loop between each element. In some sense, Zara simply follows a much more complex and malleable algorithm than Ford did, but this algorithm is still one that is fundamentally equilibrating (not disruptive) and focused on introducing order and legibility into a fundamentally opaque environment via a process that reduces human involvement and discretion by replacing intuitive judgements with rules and algorithms. Exploratory/disruptive innovation, on the other hand, is a disequilibrating force that is created by entrepreneurs and functions outside this feedback/control loop. Both processes are important – the longer period of the gradual shedding of diversity and homogenisation in the name of efficiency as well as the periodic “collapse” that shakes up the system and puts it eventually on the path to a new equilibrium.

Of course, control has been an aim of western civilisation for a lot longer, but it was only in the 19th century that the tools of control were good enough for this desire to be implemented in any meaningful sense. And even more crucially, as James Beniger has argued, it was only in the last 150 years that the need for large-scale control arose. And now the tools and technologies in our hands to control and stabilise the economy are more powerful than they’ve ever been – likely too powerful.

If we had perfect information and everything could be algorithmised right now, i.e. if the control revolution had been perfected, then the problem would disappear. Indeed it is arguable that the need for disruption in the innovation process would no longer exist. If we get to a world where radical uncertainty has been eliminated, then the problem of systemic fragility is moot and irrelevant. It is easy to rebut the stabilisation and control project by claiming that we cannot achieve this perfect world.

But even if the techno-utopian project can achieve all that it claims it can, the path matters. We need to make it there in one piece. The current “algorithmic revolution” is best viewed as a continuation of the process through which human beings went from being tool-users to minders and managers of automated systems. The current transition is simply one where many of these algorithmic and automated systems can essentially run themselves, with human beings performing the role of supervisors who only need to intervene in extraordinary circumstances. Therefore, it would seem logical that the same increase in productivity that has occurred during the modern era of automation will continue during the creation of the “vast, automatic and invisible” ‘second economy’. However there are many signs that this may not be the case. What has made things better till now and has been genuine “progress” may make things worse in higher doses, and the process of deterioration can be quite dramatic.

The Uncanny Valley on the Path towards “Perfection”

In 1970, Masahiro Mori coined the term ‘uncanny valley’ to denote the phenomenon that “as robots appear more humanlike, our sense of their familiarity increases until we come to a valley”. When robots are almost but not quite human-like, they invoke a feeling of revulsion rather than empathy. As Karl MacDorman notes, “Mori cautioned robot designers not to make the second peak their goal — that is, total human likeness — but rather the first peak of humanoid appearance to avoid the risk of their robots falling into the uncanny valley.”

A similar valley exists in the path of increased automation and algorithmisation. Much of the discussion in this section of the post builds upon concepts I explored via a detailed case study in a previous post titled ‘People Make Poor Monitors for Computers’.

The 21st century version of the control project, i.e. the algorithmic project, consists of two components:
1. More data – ‘Big Data’.
2. Better and more comprehensive algorithms.

The process therefore goes hand in hand with increased complexity and, crucially, poorer and less intuitive feedback for the human operator. This results in increased fragility and a system prone to catastrophic breakdowns. The typical solution chosen is further algorithmisation – an improved algorithm and more data – and, if necessary, increased slack and redundancy. This solution exacerbates the problem of feedback and temporarily pushes the catastrophic scenario further out into the tail, but it does not eliminate it. Behavioural adaptation by human agents to the slack and the “better” algorithm can make a catastrophic event as likely as it was before but with a higher magnitude. But what is even more disturbing is that this cycle of increasing fragility can occur even without any such adaptation. This is the essence of the fallacy of the ‘defence in depth’ philosophy that lies at the core of most fault-tolerant algorithmic designs that I discussed in my earlier post: the increased “safety” of the automated system allows the build-up of human errors without any feedback from deteriorating system performance.
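
To make the mechanism concrete, here is a toy simulation (my own sketch, not from the original post; the disturbance probability, buffer size and loss function are all arbitrary assumptions): the automated system registers far fewer visible incidents than the manual one, but when its layered defences are finally overwhelmed the accumulated errors are realised all at once as a much larger loss.

```python
import random

def run(periods, automated, seed=0):
    """Toy model of the 'defence in depth' fallacy described above.

    Each period a small disturbance arrives with some probability. In the
    manual system the disturbance causes a small, visible loss and the
    operator resets the latent error stock (feedback works). In the
    automated system the layers of protection absorb disturbances silently,
    so latent errors accumulate until the buffer is overwhelmed and the
    whole accumulated stock is realised at once. All parameters are
    illustrative, not calibrated to any real system.
    """
    random.seed(seed)
    latent, total_loss, incidents = 0, 0.0, 0
    buffer_size = 8  # number of latent errors the automated defences can mask
    for _ in range(periods):
        if random.random() < 0.3:  # a disturbance arrives
            if automated:
                latent += 1  # silently absorbed: no feedback to the operator
                if latent > buffer_size:  # defences finally overwhelmed
                    total_loss += latent ** 2  # catastrophic, non-linear loss
                    incidents += 1
                    latent = 0
            else:
                total_loss += 1  # small but visible loss
                incidents += 1
                latent = 0  # the operator learns and corrects the error
    return incidents, total_loss

for label, automated in [("manual", False), ("automated", True)]:
    incidents, loss = run(periods=500, automated=automated)
    print(f"{label:>9}: visible incidents = {incidents:4d}, total loss = {loss:.0f}")
```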

A rule of thumb to get around this problem is to use slack only in those domains where failure is catastrophic and to prioritise feedback where failure is not critical and cannot kill you. But in an uncertain environment, this rule is very difficult to apply. How do you really know that a particular disturbance will not kill you? Moreover, the loop of automation -> complexity -> redundancy endogenously turns a non-catastrophic event into one with catastrophic consequences.

Once it has gone beyond a certain threshold, this trajectory is almost impossible to reverse without undergoing an interim collapse. The easy short-term fix is always to make a patch to the algorithm, get more data and build in some slack if needed. An orderly rollback is almost impossible due to the deskilling of the human workforce and the risk of collapse due to other components in the system having adapted to the new reality. Even simply reverting to the old, more tool-like system makes things a lot worse because the human operators are no longer experts at using those tools – the process of algorithmisation has deskilled the human operator. Moreover, the endogenous nature of this buildup of complexity eventually makes the system fundamentally illegible to the human operator – an irony, given that the fundamental aim of the control revolution is to increase legibility.

The Sweet Spot Before the Uncanny Valley: Near-Optimal Yet Resilient

Although it is easy to imagine an inefficient and dramatically sub-optimal system that is robust, complex adaptive systems manage to operate at near-optimal efficiency while remaining resilient. Efficiency is important not only due to the obvious reality that resources are scarce but also because slack at the individual and corporate level is a significant cause of unemployment. Such near-optimal robustness in both natural and economic systems is not achieved with simplistically diverse agent compositions or with significant redundancies or slack at the agent level.

Diversity and redundancy carry a cost in terms of reduced efficiency. Precisely for this reason, real-world economic systems appear to exhibit nowhere near the diversity that would seem to ensure system resilience. Rick Bookstaber noted recently that capitalist competition, if anything, seems to lead to a reduction in diversity. As Youngme Moon’s excellent book ‘Different’ lays out, competition in most markets seems to result in less diversity, not more. We may have a choice of 100 brands of toothpaste but most of us would struggle to meaningfully differentiate between them.

Similarly, almost all biological and ecological complex adaptive systems are a lot less diverse and contain less pure redundancy than conventional wisdom would expect. Resilient biological systems tend to preserve degeneracy rather than simple redundancy, and resilient ecological systems tend to contain weak links rather than naive ‘law of large numbers’ diversity. The key to achieving resilience with near-optimal configurations is to tackle disturbances and generate novelty/innovation with an emergent systemic response that reconfigures the system rather than simply a localised response. Degeneracy and weak links are key to such a configuration. The equivalent in economic systems is a constant threat of new firm entry.
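
As a rough illustration of why degeneracy can beat pure redundancy at equal cost, here is a toy Monte Carlo sketch (my own, not drawn from the Whitacre-Bender paper referenced in the notes; the two-function system, component counts and failure probability are all hypothetical): with four components each, a design that pairs one specialist per function with two multi-functional ‘generalists’ keeps both functions covered more often than a design with two identical specialists per function.

```python
import random

def covered(alive_specialists, alive_generalists, functions=("A", "B")):
    """Check whether the surviving components can cover every function.
    Specialists cover only their own function; each surviving generalist
    can stand in for any one uncovered function."""
    spare = alive_generalists
    for f in functions:
        if f in alive_specialists:
            continue
        if spare == 0:
            return False
        spare -= 1
    return True

def survival_rate(design, q, trials=200_000, seed=1):
    """Monte Carlo estimate of the probability that functions A and B both
    stay covered when each of four components fails independently with
    probability q."""
    random.seed(seed)
    ok = 0
    for _ in range(trials):
        if design == "redundant":
            # two identical specialists per function (pure redundancy)
            alive = {f for f in ("A", "B")
                     if random.random() > q or random.random() > q}
            ok += ("A" in alive) and ("B" in alive)
        else:
            # one specialist per function plus two generalists (degeneracy)
            specialists = {f for f in ("A", "B") if random.random() > q}
            generalists = sum(random.random() > q for _ in range(2))
            ok += covered(specialists, generalists)
    return ok / trials

q = 0.2  # illustrative per-component failure probability
for design in ("redundant", "degenerate"):
    print(design, round(survival_rate(design, q), 4))
```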

The viewpoint which emphasises weak links and degeneracy also implies that it is not the keystone species and the large firms that determine resilience but the presence of smaller players ready to reorganise and pick up the slack when an unexpected event occurs. Such an assessment is further complicated by the fact that in a stable environment, the system may become less and less resilient with no visible consequences – weak links may be eliminated, barriers to entry may progressively increase etc., with no damage done to system performance in the stable equilibrium phase. Yet this loss of resilience can prove fatal when the environment changes and can leave the system unable to generate novelty/disruptive innovation. This highlights the folly of statements such as ‘what’s good for GM is good for America’. We need to focus not just on the keystone species, but on the fringes of the ecosystem.

[Figure: The Uncanny Valley and the Sweet Spot]

The Business Cycle in the Uncanny Valley – Deterioration of the Median as well as the Tail

Many commentators have pointed out that the process of automation has coincided with a deskilling of the human workforce. For example, below is a simplified version of the relation between mechanisation and the skill required of the human operator that James Bright documented in 1958 (via Harry Braverman’s ‘Labor and Monopoly Capital’). But till now, it has been largely true that although human performance has suffered, the performance of the system has gotten vastly better. If the problem were just a drop in human performance while the system got better, our problem would be less acute.

[Figure: Automation and the Deskilling of the Human Operator]

But what is at stake is a deterioration in system performance – it is not only a matter of being exposed to more catastrophic setbacks. Eventually, mean/median system performance deteriorates as more and more pure slack and redundancy needs to be built in at all levels to make up for the irreversibly fragile nature of the system. The business cycle is an oscillation between efficient fragility and robust inefficiency. Over the course of successive cycles, both poles of this oscillation get worse: median system performance falls rapidly at the same time that the tails deteriorate, due to the increased illegibility of the automated system to the human operator.

[Figure: The Uncanny Valley Business Cycle]

The Visible Hand and the Invisible Foot, Not the Invisible Hand

The conventional economic view of the economy is one of a primarily market-based equilibrium punctuated by occasional shocks. Even the long arc of innovation is viewed as a sort of benign discovery of novelty without any disruptive consequences. The radical disequilibrium view (which I have been guilty of espousing in the past) is one of constant micro-fragility and creative destruction. However, the history of economic evolution in the modern era has been quite different – neither market-based equilibrium nor constant disequilibrium, but a series of off-market attempts to stabilise relations outside the sphere of the market combined with occasional phase transitions that bring about dramatic change. The presence of rents is a constant and the control revolution has for the most part succeeded in preserving the rents of incumbents, barring the occasional spectacular failure. It is these occasional “failures” that have given us results that in some respects resemble those that would have been created by a market over the long run.

As Bruce Wilder puts it (sourced from 1, 2, 3 and mashed up together by me):

The main problem with the standard analysis of the “market economy”, as well as many variants, is that we do not live in a “market economy”. Except for financial markets and a few related commodity markets, markets are rare beasts in the modern economy. The actual economy is dominated by formal, hierarchical, administrative organization and transactions are governed by incomplete contracts, explicit and implied. “Markets” are, at best, metaphors…..
Over half of the American labor force works for organizations employing over 100 people. Actual markets in the American economy are extremely rare and unusual beasts. An economics of markets ought to be regarded as generally useful as a biology of cephalopods, amid the living world of bones and shells. But, somehow the idealized, metaphoric market is substituted as an analytic mask, laid across a vast variety of economic relations and relationships, obscuring every important feature of what actually is…..
The elaborate theory of market price gives us an abstract ideal of allocative efficiency, in the absence of any firm or household behaving strategically (aka perfect competition). In real life, allocative efficiency is far less important than achieving technical efficiency, and, of course, everyone behaves strategically.
In a world of genuine uncertainty and limitations to knowledge, incentives in the distribution of income are tied directly to the distribution of risk. Economic rents are pervasive, but potentially beneficial, in that they provide a means of stable structure, around which investments can be made and production processes managed to achieve technical efficiency.
In the imaginary world of complete information of Econ 101, where markets are the dominant form of economic organizations, and allocative efficiency is the focus of attention, firms are able to maximize their profits, because they know what “maximum” means. They are unconstrained by anything.
In the actual, uncertain world, with limited information and knowledge, only constrained maximization is possible. All firms, instead of being profit-maximizers (not possible in a world of uncertainty), are rent-seekers, responding to instituted constraints: the institutional rules of the game, so to speak. Economic rents are what they have to lose in this game, and protecting those rents, orients their behavior within the institutional constraints…..
In most of our economic interactions, price is not a variable optimally digesting information and resolving conflict, it is a strategic instrument, held fixed as part of a scheme of administrative control and information discovery……The actual, decentralized “market” economy is not coordinated primarily by market prices—it is coordinated by rules. The dominant relationships among actors is not one of market exchange at price, but of contract: implicit or explicit, incomplete and contingent.

James Beniger’s work is the definitive document on how the essence of the ‘control revolution’ has been an attempt to take economic activity out of the sphere of direct influence of the market. But that is not all – the long process of algorithmisation over the last 150 years has also, wherever possible, replaced implicit rules/contracts and principal-agent relationships with explicit processes and rules. Beniger also notes that after a certain point, the increasing complexity of the system is an endogenous phenomenon i.e. further iterations are aimed at controlling the control process itself. As I have illustrated above, after a certain threshold, the increasing complexity, fragility and deterioration in performance becomes a self-fulfilling positive feedback process.

Although our current system bears very little resemblance to the market economy of the textbook, there was a brief period during the transition from the traditional economy to the control economy in the early part of the 19th century when this was the case. 26% of all imports into the United States in 1827 were sold at auction. But the displacement of traditional controls (familial ties) by the invisible hand of market controls was merely a transitional phase, soon to be displaced by the visible hand of the control revolution.

The Soviet Project, Western Capitalism and High Modernity

Communism and Capitalism are both pillars of the high-modernist control project. The signature of modernity is not markets, but technocratic control projects. Capitalism has simply pursued the project in a manner that is more easily and more regularly subverted. It is the occasional failure of the control revolution that is the source of the capitalist economy’s long-run success. Conversely, the failure of the Soviet Project was due to its all-too-successful adherence to and implementation of the high-modernist ideal. The threat from crony capitalism is so significant because, by forming a coalition and partnership of the corporate and state control projects, it makes the implementation of the control revolution that much more effective.

The Hayekian argument of dispersed knowledge and its importance in seeking equilibrium is not as important as it seems in explaining why the Soviet project failed. As Joseph Berliner has illustrated, the Soviet economy did not fail to reach local equilibria. Where it failed so spectacularly was in extracting itself out of these equilibria. The dispersed knowledge argument is open to the riposte that better implementation of the control revolution will eventually overcome these problems – indeed much of the current techno-utopian version of the control revolution is based on this assumption. It is a weak argument for free enterprise; a much stronger argument is the need to maintain a system that retains the ability to reinvent itself and find a new, hitherto unknown trajectory via the destruction of the incumbents combined with the emergence of the new. Where the Soviet experiment failed is that it eliminated the possibility of failure – the threat that Berliner called the ‘invisible foot’. The success of the free enterprise system has been built not upon the positive incentive of the invisible hand but upon the negative incentive of the invisible foot countering the visible hand of the control revolution. It is this threat and occasional realisation of failure and disorder that is the key to maintaining system resilience and evolvability.

 

 

Notes:

  • Borrowing from Beniger, control here simply means “purposive influence towards a predetermined goal”. Similarly, equilibrium in this context is best defined as a state in which economic agents are not forced to change their routines, theories and policies.
  • On the uncanny valley, I wrote a similar post on why perfect memory does not lead to perfect human intelligence. Even if a computer benefits from more data and better memory, we may not. And the evidence suggests that the deterioration in human performance is steepest in the zone close to “perfection”.
  • An argument similar to my assertion on the misconception of a free enterprise economy as a market economy can be made about the nature of democracy. Rather than as a vehicle that enables the regular expression of the political will of the electorate, democracy may be more accurately thought of as the ability to effect a dramatic change when the incumbent system of plutocratic/technocratic rule diverges too much from popular opinion. As always, stability and prevention of disturbances can cause the eventual collapse to be more catastrophic than it needs to be.
  • Although James Beniger’s ‘Control Revolution’ is the definitive reference, Antoine Bousquet’s book ‘The Scientific Way of Warfare’ on the similar revolution in military warfare is equally good. Bousquet’s book highlights the fact that the military is often the pioneer of the key projects of the control revolution and it also highlights just how similar the latest phase of this evolution is to early phases – the common desire for control combined with its constant subversion by reality. Most commentators assume that the threat to the project is external – by constantly evolving guerrilla warfare for example. But the analysis of the uncanny valley suggests that an equally great threat is endogenous – of increasing complexity and illegibility of the control project itself. Bousquet also explains how the control revolution is a child of the modern era and the culmination of the philosophy of the Enlightenment.
  • Much of the “innovation” of the control revolution was not technological but institutional – limited liability, macroeconomic stabilisation via central banks etc.
  • For more on the role of degeneracy in biological systems and how it enables near-optimal resilience, this paper by James Whitacre and Axel Bender is excellent.

Written by Ashwin Parameswaran

February 21st, 2012 at 5:38 pm

Innovation, Stagnation and Unemployment


All economists assert that wants are unlimited. From this follows the view that technological unemployment is impossible in the long run. Yet a growing number of commentators (such as Brian Arthur) insist that increased productivity from automation and improvements in artificial intelligence have a part to play in the current unemployment crisis. At the same time, a growing chorus laments the absence of innovation – Tyler Cowen’s thesis that the recent past has been a ‘Great Stagnation’ is compelling.

But don’t the two assertions contradict each other? Can we have an increase in technological unemployment as well as an innovation deficit? Is the concept of technological unemployment itself valid? Is there anything about the current phase of labour-displacing technological innovation that is different from the past 150 years? To answer these questions, we need a deeper understanding of the dynamics of innovation in a capitalist economy i.e. how exactly have innovation and productivity growth proceeded in a manner consistent with full employment in the past? In the process, I also hope to connect the long-run structural dynamic with the Minskyian business cycle dynamic. It is common to view the structural dynamic of technological change as a sort of ‘deus ex machina’ – if not independent of the business cycle, then certainly unconnected with it. I hope to convince some of you that our choices regarding business cycle stabilisation have a direct bearing on the structural dynamic of innovation. I have touched upon many of these topics in a scattered fashion in previous posts but this post is an attempt to present many of these thoughts in a coherent fashion with all my assumptions explicitly laid out in relation to established macroeconomic theory.

Micro-Foundations

Imperfectly competitive markets are the norm in most modern economies. In instances where economies of scale or network effects dominate, a market may even be oligopolistic or monopolistic (e.g. Google, Microsoft). This assumption is of course nothing new to conventional macroeconomic theory. Where my analysis differs is in viewing the imperfectly competitive process as one that is permanently in disequilibrium. Rents or “abnormal” profits are a persistent feature of the economy at the level of the firm and are not competed away even in the long run. The primary objective of incumbent rent-earners is to build a moat around their existing rents, whereas the primary objective of competition from new entrants is not to drive rents down to zero but to displace the incumbent rent-earner. It is not the absence of rents but the continuous threat to the survival of the incumbent rent-earner that defines a truly vibrant capitalist economy i.e. each niche must be continually contested by new entrants. Nor does this imply, even if the market for labour is perfectly competitive, that an abnormal share of GDP goes to “capital”. Most new entrants fail and suffer economic losses in their bid to capture economic rents, and even a dominant incumbent may lose a significant proportion of past earned rents in futile attempts to defend its competitive position before its eventual demise.

This emphasis on disequilibrium points to the fact that the “optimum” state for a dynamically competitive capitalist economy is one of constant competitive discomfort and disorder. This perspective leads to a dramatically different policy emphasis from conventional theory, which universally focuses on increasing positive incentives to economic players and relying on the invisible hand to guide the economy to a better equilibrium. Both Schumpeter and Marx understood the importance of this competitive discomfort for the constant innovative dynamism of a capitalist economy – my point is simply that a universal discomfort of capital is also important for maintaining distributive justice in a capitalist economy. In fact, it is the only way to do so without sacrificing the innovative dynamism of the economy.

Competition in monopolistically competitive markets manifests itself through two distinct forms of innovation: exploitation and exploration. Exploitation usually takes the form of what James Utterback identified as process innovation with an emphasis on “real or potential cost reduction, improved product quality, and wider availability, and movement towards more highly integrated and continuous production processes.” As Utterback noted, such innovation is almost always driven by the incumbent firms. Exploitation is an act of optimisation under a known distribution i.e. it falls under the domain of homo economicus. In the language of fitness landscapes, exploitative process innovation is best viewed as competition around a local peak. On the other hand, exploratory product innovation (analogous to what Utterback identified as product innovation) occurs under conditions of significant irreducible uncertainty. Exploration is aimed at finding a significantly higher peak on the fitness landscape and, as Utterback noted, is almost always driven by new entrants (for a more detailed explanation of incumbent preference for exploitation and organisational rigidity, see my earlier post).
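
The fitness-landscape metaphor can be made concrete with a small sketch (my own toy model; the landscape shape, step size and jump distribution are arbitrary assumptions): exploitation climbs towards the nearest local peak, while exploration makes improbable long jumps that occasionally land near a far higher peak which incremental search alone would never reach.

```python
import random

def fitness(x):
    """Toy fitness landscape: a modest local peak near x = 2 and a much
    higher peak near x = 8 (shape and heights purely illustrative)."""
    return 5 * max(0.0, 1 - abs(x - 2)) + 12 * max(0.0, 1 - abs(x - 8))

def exploit(x, steps=200, step_size=0.05):
    """Exploitative process innovation: accept only small local improvements."""
    for _ in range(steps):
        for candidate in (x - step_size, x + step_size):
            if fitness(candidate) > fitness(x):
                x = candidate
    return x

def explore(x, attempts=100, seed=42):
    """Exploratory product innovation: uncertain long jumps across the
    landscape; most fail, but a success relocates the firm to a new peak."""
    random.seed(seed)
    best = x
    for _ in range(attempts):
        jump = random.uniform(0, 10)
        if fitness(jump) > fitness(best):
            best = jump
    return best

start = 1.0
local_peak = exploit(start)
new_peak = exploit(explore(start))  # exploration followed by local refinement
print("exploitation only :", round(fitness(local_peak), 2))  # ~5, the local peak
print("explore + exploit :", round(fitness(new_peak), 2))    # ~12, the higher peak
```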

An Investment Theory of the Business Cycle

Soon after publishing the ‘General Theory’, Keynes summarised his thesis as follows: “given the psychology of the public, the level of output and employment as a whole depends on the amount of investment. I put it in this way, not because this is the only factor on which aggregate output depends, but because it is usual in a complex system to regard as the causa causans that factor which is most prone to sudden and wide fluctuation.” In Keynes’ view, the investment decision was undertaken in a condition of irreducible uncertainty, “influenced by our views of the future about which we know so little”. Just how critical the level of investment is in maintaining full employment is highlighted by GLS Shackle in his interpretation of Keynes’ theory: “In a money-using society which wishes to save some of the income it receives in payment for its productive efforts, it is not possible for the whole (daily or annual) product to be sold unless some of it is sold to investors and not to consumers. Investors are people who put their money on time-to-come. But they do not have to be investors. They can instead be liquidity-preferrers; they can sweep up their chips from the table and withdraw. If they do, they will give no employment to those who (in face of society’s propensity to save) can only be employed in making investment goods, things whose stream of usefulness will only come out over the years to come.”

If we accept this thesis, then it is no surprise that the post–2008 recovery has been quite so anaemic. Investment spending has remained low throughout the developed world, nowhere more so than in the United Kingdom. What makes this low level of investment even more surprising is the strength of the rebound in corporate profits and balance sheets – corporate leverage in the United States is as low as it has been for two decades and the proportion of cash in total assets as high as it has been for almost half a century. Moreover, the United States has also experienced an unusual increase in labour productivity during the recession, which has exacerbated the disconnect between the recovery in GDP and employment. Some of these unusual patterns have been with us for a much longer time than the 2008 financial crisis. For example, the disconnect between GDP and employment in the United States has been obvious since at least 1990, and the early-2000s recession too saw an unusual rise in labour productivity. The labour market has been slack for at least a decade. It is hard to differ from Paul Krugman’s intuition that the character of post–1980 business cycles has changed. Europe and Japan are not immune from these “structural” patterns either – the ‘corporate savings glut’ has been a problem in the United Kingdom since at least 2002, and Post-Keynesian economists have been pointing out the relationship between ‘capital accumulation’ and unemployment for a while, even attributing the persistently high unemployment in Europe to a lack of investment. Japan’s condition for the last decade is better described as a ‘corporate savings trap’ than a ‘liquidity trap’. Even in Greece, that poster child for fiscal profligacy, the recession is accompanied by a collapse in private sector investment.

A Theory of Business Investment

Business investments can typically either operate upon the scale of operations (e.g. capacity, product mix) or they can change the fundamental character of operations (e.g. changes in process or product). The degree of irreducible uncertainty in capacity and product-mix decisions has reduced dramatically in the last half-century. The ability of firms to react quickly and effectively to changes in market conditions has improved dramatically with improvements in production processes and information technology – Zara being a well-researched example. Investments that change the very nature of business operations are what we typically identify as innovations. However, not all innovation decisions are subject to irreducible uncertainty either. In a seminal article, James March distinguished between “the exploration of new possibilities and the exploitation of old certainties. Exploration includes things captured by terms such as search, variation, risk taking, experimentation, play, flexibility, discovery, innovation. Exploitation includes such things as refinement, choice, production, efficiency, selection, implementation, execution.” Exploratory innovation operates under conditions of irreducible uncertainty whereas exploitation is an act of optimisation under a known distribution.

Investments in scaling up operations are most easily influenced by monetary policy initiatives which reduce interest rates and raise asset prices, or by direct fiscal policy initiatives which operate via the multiplier effect. In recent times, especially in the United States and United Kingdom, the reduction in rates has also directly facilitated the levering up of the consumer balance sheet and a reduction in the interest-servicing burden of past consumer debt. The resulting boost to consumer spending and demand also stimulates businesses to invest in expanding capacity. Exploitative innovation requires the presence of price competition within the industry i.e. monopolies or oligopolies have little incentive to make their operations more efficient beyond the price point where demand for their product is essentially inelastic. This sounds like an exceptional case but is in fact very common in critical industries such as finance and healthcare. Exploratory innovation requires not only competition amongst incumbent firms but competition from a constant and robust stream of new entrants into the industry. I outlined the rationale for this in a previous post:

Let us assume a scenario where the entry of new firms has slowed to a trickle, the sector is dominated by a few dominant incumbents and the S-curve of growth is about to enter its maturity/decline phase. To trigger off a new S-curve of growth, the incumbents need to explore. However, almost by definition, the odds that any given act of exploration will be successful is small. Moreover, the positive payoff from any exploratory search almost certainly lies far in the future. For an improbable shot at moving from a position of comfort to one of dominance in the distant future, an incumbent firm needs to divert resources from optimising and efficiency-increasing initiatives that will deliver predictable profits in the near future. Of course if a significant proportion of its competitors adopt an exploratory strategy, even an incumbent firm will be forced to follow suit for fear of loss of market share. But this critical mass of exploratory incumbents never comes about. In essence, the state where almost all incumbents are content to focus their energies on exploitation is a Nash equilibrium.
On the other hand, the incentives of any new entrant are almost entirely skewed in favour of exploratory strategies. Even an improbable shot at glory is enough to outweigh the minor consequences of failure. It cannot be emphasised enough that this argument does not depend upon the irrationality of the entrant. The same incremental payoff that represents a minor improvement for the incumbent is a life-changing event for the entrepreneur. When there exists a critical mass of exploratory new entrants, the dominant incumbents are compelled to follow suit and the Nash equilibrium of the industry shifts towards the appropriate mix of exploitation and exploration.
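
A back-of-the-envelope sketch of the payoff asymmetry described in the quote above (all numbers hypothetical, my own illustration rather than anything in the original post): because a successful new product largely cannibalises the incumbent's existing rent, the same project that is unattractive to the incumbent is worthwhile for the entrant, and only a sufficient threat of entry makes standing still costlier for the incumbent than exploring.

```python
# Illustrative numbers only: the expected payoff of the same exploratory
# project for an incumbent versus a new entrant.
p = 0.1   # probability that the exploratory project succeeds
V = 30.0  # rent earned by the new product if it succeeds
R = 25.0  # incumbent's existing rent, largely cannibalised by the new product
c = 2.0   # cost of exploring = sure profit forgone from exploitation

# Without any threat of entry, the incumbent weighs only the *incremental*
# rent (the new product mostly displaces its own) against the forgone profit.
incumbent = p * (V - R) - c       # 0.1 * 5 - 2 = -1.5  -> prefers exploitation
# The entrant has no existing rent to cannibalise; success is the whole prize.
entrant = p * V - c               # 0.1 * 30 - 2 = +1.0 -> prefers exploration
print(f"incumbent: {incumbent:+.2f}, entrant: {entrant:+.2f}")

# With a critical mass of exploring entrants, standing still means losing R
# with probability q; exploring at least secures the new niche when it works.
q = 0.8
stand_still = -q * R                              # -20.0
explore = p * (V - R) + (1 - p) * (-q * R) - c    # -19.5: less bad than standing still
print(f"with entry threat: stand still {stand_still:+.2f} vs explore {explore:+.2f}")
```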

A Theory of Employment

My fundamental assertion is that a constant and high level of uncertain, exploratory investment is required to maintain a sustainable and resilient state of full employment. And as I mentioned earlier, exploratory investment driven by product innovation requires a constant threat from new entrants.

Long-run increases in aggregate demand require product innovation. As Rick Szostak notes:

While in the short run government spending and investment have a role to play, in the long run it is per capita consumption that must rise in order for increases in per capita output to be sustained…..the reason that we consume many times more than our great-grandparents is not to be found for the most part in our consumption of greater quantities of the same items which they purchased…The bulk of the increase in consumption expenditures, however, has gone towards goods and services those not-too-distant forebears had never heard of, or could not dream of affording….Would we as a society of consumers/workers have striven as hard to achieve our present incomes if our consumption bundle had only deepened rather than widened? Hardly. It should be clear to all that the tremendous increase in per capita consumption in the past century would not have been possible if not for the introduction of a wide range of different products. Consumers do not consume a composite good X. Rather, they consume a variety of goods, and at some point run into a steeply declining marginal utility from each. As writers as diverse as Galbraith and Marshall have noted, if declining marginal utility exists with respect to each good it holds over the whole basket of goods as well…..The simple fact is that, in the absence of the creation of new goods, aggregate demand can be highly inelastic, and thus falling prices will have little effect on output.

Therefore, when cost-cutting and process optimisation in an industry enable a product to be sold at a lower cost, the economy may not be able to reorganise back to full employment through increased demand for that particular product alone. In the early stages of a product, when demand is sufficiently elastic, process innovation can increase employment. But as the product ages, process improvements have a steadily negative effect on employment.
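
A back-of-the-envelope derivation of this point, under assumptions that are not in the original post (constant-elasticity demand and full pass-through of unit-cost reductions into price):

```latex
% Let a = output per worker, so industry employment is L = Q / a.
% Assume demand has constant price elasticity \varepsilon, Q = A p^{-\varepsilon},
% and that process innovation passes cost reductions into price, p \propto 1/a.
L \;=\; \frac{Q}{a} \;=\; \frac{A\,p^{-\varepsilon}}{a}
\;\propto\; \frac{a^{\varepsilon}}{a} \;=\; a^{\varepsilon - 1}
\qquad\Longrightarrow\qquad
\frac{d\ln L}{d\ln a} \;=\; \varepsilon - 1 .
```

On this sketch, employment in the industry rises with productivity only while demand is elastic (elasticity above one), as in a product's early stages; once demand saturates and the elasticity falls below one, further process innovation sheds labour.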

Eventually, a successful reorganisation back to full employment entails creating demand for new products. If such new products were simply an addition to the set of products that we consumed, disruption would be minimal. But almost any significant new product that arises from exploratory investment also destroys an old product. The tablet cannibalises the netbook, the smartphone cannibalises the camera, etc. This of course is the destruction in Schumpeter’s creative destruction. It is precisely because of this cannibalistic nature of exploratory innovation that established incumbents rarely engage in it, unless compelled to do so by the force of new entrants. Burton Klein put it well: “firms involved in such competition must compare two risks: the risk of being unsuccessful when promoting a discovery or bringing about an innovation versus the risk of having a market stolen away by a competitor: the greater the risk that a firm’s rivals take, the greater must be the risks to which it must subject itself for its own survival.” Even when new firms enter a market at a healthy pace, it is rare that incumbent firms are successful at bringing about disruptive exploratory changes. When the pace of dynamic competition is slow, incumbents can choose to simply maintain slack and wait for any promising new technology to emerge which they can buy up rather than risking investment in some uncertain new technology.

We need exploratory investment because this expansion of the economy into its ‘adjacent possible’ does not derive its thrust from the consumer but from the entrepreneur. In other words, new wants are not demanded by the consumers but are instead created by entrepreneurs such as Steve Jobs. In the absence of dynamic competition from new entrants, wants remain limited.

In essence, this framework incorporates technological innovation into a distinctly “Chapter 12” Keynesian view of the business cycle. Although my views are far removed from macroeconomic orthodoxy, they are not quite so radical that they have no precedents whatsoever. My views can be seen as a simple extension of Burton Klein’s seminal work outlined in his books ‘Dynamic Economics’ and ‘Prices, wages, and business cycles: a dynamic theory’. But the closest parallels to this explanation can be found in Rick Szostak’s book ‘Technological innovation and the Great Depression’. Szostak uses an almost identical rationale to explain unemployment during the Great Depression, “how an abundance of labor-saving production technology coupled with a virtual absence of new product innovation could affect consumption, investment and the functioning of the labor market in such a way that a large and sustained contraction in employment would result.”

As I have hinted at in a previous post, this is not a conventional “structural” explanation of unemployment. Szostak explains the difference: “An alternative technological argument would be that the skills required of the workforce changed more rapidly in the interwar period than did the skills possessed by the workforce. Thus, there were enough jobs to go around; workers simply were not suited to them, and a painful decade of adjustment was required…I argue that in fact there simply were not enough jobs of any kind available.” In other words, this is a partly technological explanation for the shortfall in aggregate demand.

The Invisible Foot and New Firm Entry

The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy:

Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels….But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.” 

For disruptive innovation to persist, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition”. Burton Klein’s great contribution, along with Gunnar Eliasson’s, was to highlight the critical importance of the entry of new firms in maintaining the efficacy of the invisible foot. Klein believed that

the degree of risk taking is determined by the robustness of dynamic competition, which mainly depends on the rate of entry of new firms. If entry into an industry is fairly steady, the game is likely to have the flavour of a highly competitive sport. When some firms in an industry concentrate on making significant advances that will bear fruit within several years, others must be concerned with making their long-run profits as large as possible, if they hope to survive. But after entry has been closed for a number of years, a tightly organised oligopoly will probably emerge in which firms will endeavour to make their environments highly predictable in order to make their short-run profits as large as possible….Because of new entries, a relatively concentrated industry can remain highly dynamic. But, when entry is absent for some years, and expectations are premised on the future absence of entry, a relatively concentrated industry is likely to evolve into a tight oligopoly. In particular, when entry is long absent, managers are likely to be more and more narrowly selected; and they will probably engage in such parallel behaviour with respect to products and prices that it might seem that the entire industry is commanded by a single general!

This argument does not depend on incumbent firms leaving money on the table – on the contrary, they may redouble their attempts at cost reduction via process innovation in times of deficient demand. Rick Szostak documents how “despite the availability of a massive amount of inexpensive labour, process innovation would continue in the 1930s. Output per man-hour in manufacturing rose by 25% in the 1930s…..national output was higher in 1939 than in 1929, while employment was over two million less.”

Macroeconomic Policy and Exploratory Product Innovation

Monetary policy has been the preferred cure for insufficient aggregate demand throughout and since the Great Moderation. The argument goes that lower real rates, inflation and higher asset prices will increase investment via Tobin’s Q and increase consumption via the wealth effect and reduction in rewards to savings, all bound together in the virtuous cycle of the multiplier. If monetary policy is insufficient, fiscal policy may be deployed with a focus on either directly increasing aggregate demand or providing businesses with supply-side incentives such as tax cuts.

There is a common underlying theme to all of the above policy options – they focus on the question “how do we make businesses want to invest?” i.e. on positively incentivising incumbent business and startups and trusting that the invisible hand will do the rest. In the context of exploratory investments, the appropriate question is instead “how do we make businesses have to invest?” i.e. on compelling incumbent firms to invest in speculative projects in order to defend their rents or lose out to new entrants if they fail to do so. But the problem isn’t just that these policies are ineffectual. Many of the policies that focus on positive incentives weaken the competitive discomfort from the invisible foot by helping to entrench the competitive position of incumbent corporates and reducing their incentive to engage in exploratory investment. It is in this context that interventions such as central bank purchase of assets and fiscal stimulus measures that dole out contracts to the favoured do permanent harm to the economy.

The division that matters from the perspective of maintaining the appropriate level of exploratory investment and product innovation is not monetary vs fiscal but the division between existing assets and economic interests on the one hand and new firms/entrepreneurs on the other. Almost all monetary policy initiatives focus on purchasing existing assets from incumbent firms or reducing real rates for incumbent banks and their clients. A significant proportion of fiscal policy does the same. The implicit assumption is, as Nick Rowe notes, that there is “high substitutability between old and new investment projects, so the previous owners of the old investment projects will go looking for new ones with their new cash”. This assumption does not hold in the case of exploratory investments – asset-holders will likely chase after a replacement asset, but that asset will probably be an existing investment project, not a new one. The result of the intervention will be an increase in the prices of such assets but it will not feed into any “real” new investment activity. In other words, the Tobin’s Q effect is negligible for exploratory investments in the short run and in fact negative in the long run, as the accumulated effect of rents derived from monetary and fiscal intervention reduces the need for incumbent firms to engage in such speculative investment.

A Brief History of the Post-WW2 United States Macroeconomy

In this section, I’m going to use the above framework to make sense of the evolution of the macroeconomy in the United States after WW2. The framework is relevant for post–70s Europe and Japan as well, which is why the ‘investment deficit problem’ afflicts almost the entire developed world today. But the details differ quite significantly, especially with regard to the distributional choices made in different countries.

The Golden Age

The 50s and the 60s are best characterised as a period of “order for all”, marked by, as Bill Lazonick put it, “oligopolistic competition, career employment with one company, and regulated financial markets”. The ‘Golden Age’ delivered prosperity for a few reasons:

  • As Minsky noted, the financial sector had only just begun the process of adapting to and circumventing regulations designed to constrain and control it. As a result, the Fed had as much control over credit creation and bank policies as it would ever have.
  • The pace of both product and process innovation had slowed down significantly in the real economy, especially in manufacturing. Much of the productivity growth came from product innovations that had already been made prior to WW2. As Alexander Field explains (on the slowdown in manufacturing TFP): “Through marketing and planned obsolescence, the disruptive force of technological change – what Joseph Schumpeter called creative destruction – had largely been domesticated, at least for a time. Whereas large corporations had funded research leading to a large number of important innovations during the 1930s, many critics now argued that these behemoths had become obstacles to transformative innovation, too concerned about the prospect of devaluing rent-yielding income streams from existing technologies. Disruptions to the rank order of the largest U.S. industrial corporations during this quarter century were remarkably few. And the overall rate of TFP growth within manufacturing fell by more than a percentage point compared with the 1930s and more than 3.5 percentage points compared with the 1920s.”
  • Apart from the fact that the economy had to catch up to earlier product innovation, the dominant position of the US in the global economy post WW2 limited the impact from foreign competition.

It was this peculiar confluence of factors that enabled a system of “order and stability for all” without triggering a complete collapse in productivity or financial instability – a system where both labour and capital were equally strong and protected and shared in the rents available to all.

Stagflation

The 70s are best described as the time when this ordered, stabilised system could not be sustained any longer.

  • By the late 60s, the financial sector had adapted to the regulatory environment. Innovations such as the Fed Funds market and the Eurodollar market had gradually come into being, so that credit creation and bank lending were increasingly difficult for the Fed to control. Reserves were no longer a binding constraint on bank operations.
  • The absence of real competition, either on the basis of price or from new entrants, meant that both process and product innovation were low just as during the Golden Age, but the difference was that there was no more low-hanging fruit to pick from past product innovations. Therefore, a secular slowdown in productivity took hold.
  • The rest of the world had caught up and foreign competition began to intensify.

As Burton Klein noted, “competition provides a deterrent to wage and price increases because firms that allow wages to increase more rapidly than productivity face penalties in the form of reduced profits and reduced employment”. In the absence of adequate competition, demand is inelastic and there is little pressure to reduce costs. As the level of price/cost competition reduces, more and more unemployment is required to keep inflation under control. Even worse, as Klein noted, it only takes the absence of competition in a few key sectors for the disease to afflict the entire economy. Controlling overall inflation in the macroeconomy when a few key sectors are sheltered from competitive discomfort requires monetary action that will extract a disproportionate amount of pain from the remainder of the economy. Stagflation is the inevitable consequence in a stabilised economy suffering from progressive competitive sclerosis.

The “Solution”

By the late 70s, the pressures and conflicts of the system of “order for all” meant that change was inevitable. The result was what is commonly known as the neoliberal revolution. There are many different interpretations of this transition. To right-wing commentators, neoliberalism signified a much-needed transition towards a free-market economy. Most left-wing commentators lament the resultant supremacy of capital over labour and rising inequality. For some, the neoliberal era started with Paul Volcker having the courage to inflict the required pain to break the back of inflationary forces and continued with central banks learning the lessons of the past which gave us the Great Moderation.

All these explanations are relevant but in my opinion, they are simply a subset of a larger and simpler explanation. The prior economic regime was a system where both the invisible hand and the invisible foot were shackled – firms were protected but their profit motive was also shackled by the protection provided to labour. The neoliberal transition unshackled the invisible hand (the carrot of the profit motive) without ensuring that all key sectors of the economy were equally subject to the invisible foot (the stick of failure and losses and new firm entry). Instead of tackling the root problem of progressive competitive and democratic sclerosis and cronyism, the neoliberal era provided a stop-gap solution. “Order for all” became “order for the classes and disorder for the masses”. As many commentators have noted, the reality of neoliberalism is not consistent with the theory of classical liberalism. Minsky captured the hypocrisy well: “Conservatives call for the freeing of markets even as their corporate clients lobby for legislation that would institutionalize and legitimize their market power; businessmen and bankers recoil in horror at the prospect of easing entry into their various domains even as technological changes and institutional evolution make the traditional demarcations of types of business obsolete. In truth, corporate America pays lip service to free enterprise and extols the tenets of Adam Smith, while striving to sustain and legitimize the very thing that Smith abhorred – state-mandated market power.”

The critical component of this doctrine is the emphasis on macroeconomic and financial sector stabilisation, implemented primarily through monetary policy focused on the banking and asset price channels of policy transmission:

  • Any significant fall in asset prices (especially equity prices) has been met with a strong stimulus from the Fed i.e. the ‘Greenspan Put’. In his plea for increased quantitative easing via purchase of agency MBS, Joe Gagnon captured the logic of this policy: “This avalanche of money would surely push up stock prices, push down bond yields, support real estate prices, and push up the value of foreign currencies. All of these financial developments would stimulate US economic activity.” In other words, prop up asset prices and the real economy will mend itself.
  • Similarly, Fed and Treasury policy has ensured that none of the large banks can fail. In particular, bank creditors have been shielded from any losses. The argument is that allowing banks to fail would cripple the flow of credit to the real economy and result in a deflationary collapse that cannot be offset by conventional monetary policy alone. This is the logic for why banks were allowed access to a panoply of Federal Reserve liquidity facilities at the height of the crisis. In other words, prop up the banks and the real economy will mend itself.

In this increasingly financialised economy, “the increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation.” This tilt towards exploitation/cost-reduction without exploration kept inflation in check but it also implied a prolonged period of sub-par wage growth and a constant inability to maintain full employment unless the consumer or the government levered up. For the neo-liberal revolution to sustain a ‘corporate welfare state’ in a democratic system, the absence of wage growth necessitated an increase in household leverage for consumption growth to be maintained. The monetary policy doctrine of the Great Moderation exacerbated the problem of competitive sclerosis and the investment deficit but it also provided the palliative medicine that postponed the day of reckoning. The unshackling of the financial sector was a necessary condition for this cure to work its way through the economy for as long as it did.

It is this focus on the carrot of higher profits that also triggered the widespread adoption of high-powered incentives such as stock options and bonuses to align manager and stockholder incentives. When the risk of being displaced by innovative new entrants is low, high-powered managerial incentives tilt the firm towards process innovation, cost reduction, optimisation of leverage etc. From the stockholders’ and the managers’ perspective, the focus on short-term profits is a feature, not a bug.

The Dénouement

So long as unemployment and consumption could be propped up by increasing leverage from the consumer and/or the state, the long-run shortage in exploratory product innovation and the stagnation in wages could be swept under the rug and economic growth could be maintained. But there is every sign that the household sector has reached a state of peak debt and the financial system has reached its point of peak elasticity. The policy that worked so well during the Great Moderation is now simply focused on preventing the collapse of the cronyist and financialised economy. The system has become so fragile that Minsky’s vision is more correct than ever – an economy at full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation. Instead, the goal of full employment seems to have been abandoned in order to postpone the inevitable collapse. This only trades economic fragility for a deeper social fragility.

The aim of full employment is made even harder with the acceleration of process innovation due to advances in artificial intelligence and computerisation. Process innovation gives us technological unemployment while at the same time the absence of exploratory product innovation leaves us stuck in the Great Stagnation.


The solution preferred by the left is to somehow recreate the golden age of the 50s and the 60s i.e. order for all. Apart from the impossibility of retrieving the docile financial system of that age (which Minsky understood), micro-stability for all produces an environment of permanent innovative stagnation. The Schumpeterian solution is to transform the system into one of disorder for all, masses and classes alike. Micro-fragility is the key to macro-resilience but this fragility must be felt by all economic agents, labour and capital alike. In order to end the stagnation and achieve sustainable full employment, we need to allow incumbent banks and financialised corporations to collapse and dismantle the barriers to entry of new firms that pervade the economy (e.g. occupational licensing, the patent system). But this does not imply that the macroeconomy should suffer a deflationary contraction. Deflation can be prevented in a simple and effective manner with a system of direct transfers to individuals, as Steve Waldman has outlined. This solution reverses the flow of rents that have exacerbated inequality over the past few decades, while also tackling the cronyism and demosclerosis that are crippling innovation and preventing full employment.


Written by Ashwin Parameswaran

November 2nd, 2011 at 7:29 pm

Macroeconomic Stabilisation and Financialisation in The Real Economy

with 35 comments

The argument against stabilisation is a broader, more profound form of the moral hazard argument. But the ecological ‘systems’ approach is much more widely applicable than the conventional moral hazard argument for a couple of reasons:

  • The essence of the Minskyian explanation is not that economic agents get fooled by the period of stability or that they are irrational. It is that there are sufficient selective forces (especially amongst principal-agent relationships) in the modern economy that the moral hazard outcome can be achieved even without any active intentionality on the part of economic agents to game the system.
  • The micro-prudential consequences of stabilisation and moral hazard are dwarfed by their macro-prudential systemic consequences. The composition of agents changes and becomes less diverse as those firms and agents that try to follow more resilient or less leveraged strategies will be outcompeted and weeded out – this loss of diversity is exacerbated by banks’ adaptation to the intervention strategies preferred by central banks in order to minimise their losses. And most critically, the suppression of disturbances increases the connectivity and reduces the ‘patchiness’ and modularity of the macroeconomic system. In the absence of disturbances, connectivity builds up within the network, both within and between scales. Increased within-scale connectivity increases the severity of disturbances and increased between-scale connectivity increases the probability that a disturbance at a lower level will propagate up to higher levels and cause systemic collapse.

Macro-stabilisation therefore breeds fragility in the financial sector. But what about the real economy? One could argue that in the long run, it is creative destruction in the real economy that drives economic growth and surely macro-stabilisation does not impede the pace of long-run innovation? Moreover, even if non-financial economic agents were ‘Ponzi borrowers’, wouldn’t real economic shocks be sufficient to deliver the “disturbances” consistent with macroeconomic resilience? Unfortunately, the assumption that nominal income stabilisation has no real impact is too simplistic. Macroeconomic stabilisation is one of the key drivers of the process of financialisation through which it transmits financial fragility throughout the real economy and hampers the process of exploratory innovation and creative destruction.

Financialisation is a term with many definitions. Since my focus is on financialisation in the corporate domain (rather than in the household sector), Greta Krippner’s definition of financialisation as a “pattern of accumulation in which profit making occurs increasingly through financial channels rather than through trade and commodity production” is closest to the mark. But from a resilience perspective, it is more accurate to define financialisation as a “pattern of accumulation in which risk-taking occurs increasingly through financial channels rather than through trade and commodity production”.

In the long run, creating any source of stability in a capitalist economy incentivises economic agents to realign themselves to exploit that source of security and thereby reduce risk. Just as banks adapt to the intervention strategies preferred by central banks by taking on more “macro” risks, macro-stabilisation incentivises real economy firms to shed idiosyncratic micro-risks and take on financial risks instead. Suppressing nominal volatility encourages economic agents to shed real risks and take on nominal risks. In the presence of the Greenspan/Bernanke put, a strategy focused on “macro” asset price risks and leverage outcompetes strategies focused on “risky” innovation. Just as banks that exploit the guarantees offered by central banks outcompete those that don’t, real economy firms that realign themselves to become more bank-like outcompete those that choose not to.

The poster child for this dynamic is the transformation of General Electric during the Jack Welch Era, when “GE’s no-growth, blue-chip industrial businesses were run for profits and to maintain the AAA credit rating which was then used to expand GE Capital.” Again, the financialised strategy outcompetes all others and drives out “real economy” firms. As Doug Rushkoff observed, “the closer to the creation of value you get under this scheme, the farther you are from the money”. General Electric’s strategy is an excellent example of how financialisation is not just a matter of levering up the balance sheet. It could just as easily be focused on aggressively extending leverage to one’s clients, a strategy that is just as adept at delivering low-risk profits in an environment where the central bank is focused on avoiding even the smallest snap-back in an elastic, over-extended monetary system. When central bankers are focused on preventing significant pullbacks in equity prices (the Greenspan/Bernanke put), then real-economy firms are incentivised to take on more systematic risk and reduce their idiosyncratic risk exposure.

Some Post-Keynesian and Marxian economists also claim that this process of financialisation is responsible for the reluctance of corporates to invest in innovation. As Bill Lazonick puts it, “the financialization of corporate resource allocation undermines investment in innovation”. This ‘investment deficit’ has in turn led to the secular downturn in productivity growth across the Western world since the 1970s, a phenomenon that Tyler Cowen has dubbed ‘The Great Stagnation’. This thesis, appealing though it is, is too simplistic. The increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation. The collapse in high-risk, exploratory innovation is exacerbated by the rise in the influence of special interests that accompanies any extended period of stability, a dynamic that I discussed in an earlier post.

The easiest way to explain the above dynamic is to take a slightly provocative example. Let us assume that the Fed decides to make the ‘Bernanke Put’ more explicit by either managing a floor on equity prices or buying a significant amount of equities outright. The initial result may be positive but in the long run, firms will simply align their risk profile to that of the broader market. The end result will be a homogenous corporate sector free of any disruptive innovation – a state of perfect equilibrium but also a state of rigor mortis.
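
To make the mechanism concrete, here is a minimal Monte Carlo sketch in Python. Every number in it – the return distribution, the level of the floor, the exploratory project’s odds and payoffs – is an assumption chosen purely for illustration, not an estimate of any real policy or firm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo draws

# Strategy A: hold the market. Raw annual return assumed ~ N(5%, 20%).
market = rng.normal(0.05, 0.20, n)

# The central bank's put: assume drawdowns below -10% are not allowed to stand.
floor = -0.10
market_with_put = np.maximum(market, floor)

# Strategy B: an idiosyncratic exploratory project. Assume a 15% chance of a
# large payoff (+200%) and an 85% chance of losing 30% of the stake.
success = rng.random(n) < 0.15
exploration = np.where(success, 2.00, -0.30)

for name, r in [("market, no put  ", market),
                ("market, with put", market_with_put),
                ("exploration     ", exploration)]:
    print(f"{name}  mean={r.mean():+.3f}  worst 5%={np.percentile(r, 5):+.3f}")
```

Under these assumed numbers, the floor truncates the downside of the market strategy and lifts its expected return above that of the exploratory bet, so a firm that simply loads up on systematic risk outcompetes one that takes idiosyncratic risks – which is the homogenisation described above.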


Written by Ashwin Parameswaran

October 3rd, 2011 at 4:16 pm

Forest Fire Suppression and Macroeconomic Stabilisation

with 24 comments

In an earlier post, I compared Minsky’s Financial Instability Hypothesis with Buzz Holling’s work on ecological resilience and briefly touched upon the consequences of wildfire suppression as an example of the resilience-stability tradeoff. This post expands upon the lessons we can learn from the history of fire suppression and its impact on the forest ecosystem in the United States and draws some parallels between the theory and history of forest fire management and macroeconomic management.

Origins of Stabilisation as the Primary Policy Objective and Initial Ease of Implementation

The impetus for both fire suppression and macroeconomic stabilisation came from a crisis. In economics, this crisis was the Great Depression which highlighted the need for stabilising fiscal and monetary policy during a crisis. Out of all the initiatives, the most crucial from a systems viewpoint was the expansion of lender-of-last-resort operations and bank bailouts which tried to eliminate all disturbances at their source. In Minsky’s words: “The need for lender-of-last-resort operations will often occur before income falls steeply and before the well nigh automatic income and financial stabilizing effects of Big Government come into play.” (Stabilizing an Unstable Economy pg 46)

Similarly, the battle for complete fire suppression was won after the Great Idaho Fires of 1910. “The Great Idaho Fires of August 1910 were a defining event for fire policy and management, indeed for the policy and management of all natural resources in the United States. Often called the Big Blowup, the complex of fires consumed 3 million acres of valuable timber in northern Idaho and western Montana…..The battle cry of foresters and philosophers that year was simple and compelling: fires are evil, and they must be banished from the earth. The federal Weeks Act, which had been stalled in Congress for years, passed in February 1911. This law drastically expanded the Forest Service and established cooperative federal-state programs in fire control. It marked the beginning of federal fire-suppression efforts and effectively brought an end to light burning practices across most of the country. The prompt suppression of wildland fires by government agencies became a national paradigm and a national policy” (Sara Jensen and Guy McPherson). In 1935, the Forest Service implemented the ‘10 AM policy’, a goal to extinguish every new fire by 10 AM the day after it was reported.

In both cases, the trauma of a catastrophic disaster triggered a new policy that would try to stamp out all disturbances at the source, no matter how small. This policy also had the benefit of initially being easy to implement and cheap. In the case of wildfires, “the 10 am policy, which guided Forest Service wildfire suppression until the mid 1970s, made sense in the short term, as wildfires are much easier and cheaper to suppress when they are small. Consider that, on average, 98.9% of wildfires on public land in the US are suppressed before they exceed 120 ha, but fires larger than that account for 97.5% of all suppression costs” (Donovan and Brown). As Minsky notes, macroeconomic stability was helped significantly by the deleveraged nature of the American economy from the end of WW2 till the 1960s. Even in interventions by the Federal Reserve in the late 60s and 70s, the amount of resources needed to shore up the system was limited.

Consequences of Stabilisation

Wildfire suppression in forests that are otherwise adapted to regular, low-intensity fires (e.g. understory fire regimes) causes the forest to become more fragile and susceptible to a catastrophic fire. As Holling and Meffe note, “fire suppression in systems that would frequently experience low-intensity fires results in the systems becoming severely affected by the huge fires that finally erupt; that is, the systems are not resilient to the major fires that occur with large fuel loads and may fundamentally change state after the fire”. This increased fragility arises from a few distinct patterns and mechanisms:

Increased Fuel Load: Just as channelisation of a river results in an increased silt load within the river banks, the absence of fires leads to a buildup of fuel, making the eventual fire that much more severe. In Minskyian terms, this is analogous to the buildup of leverage and ‘Ponzi finance’ within the economic system.

Change in Species Composition: Species compositions inevitably shift towards less fire-resistant trees when fires are suppressed (Allen et al 2002). In an economic system, it is not simply that ‘Ponzi finance’ players thrive but that more prudently financed actors get outcompeted in the cycle. This has critical implications for the ability of the system to recover after the fire. This is an important problem in the financial sector where, as Richard Fisher observed, “more prudent and better-managed banks have been denied the market share that would have been theirs if mismanaged big banks had been allowed to go out of business”.

Reduction in Diversity: As I mentioned here, “In an environment free of disturbances, diversity of competing strategies must reduce dramatically as the optimal strategy will outcompete all others. In fact, disturbances are a key reason why competitive exclusion is rarely observed in ecosystems”. Contrary to popular opinion, the post-disturbance environment is incredibly productive and diverse. Even after a fire as severe as the Yellowstone fires of 1988, the regeneration of the system was swift and effective as the ecosystem was historically adapted to such severe fires.

Increased Connectivity: This is the least appreciated impact of eliminating all disturbances in a complex adaptive system. Disturbances perform a critical role by breaking connections within a network. Frequent forest fires result in a “patchy” modularised forest where no one fire can cause catastrophic damage. As Thomas Bonnicksen notes: “Fire seldom spread over vast areas in historic forests because meadows, and patches of young trees and open patches of old trees were difficult to burn and forced fires to drop to the ground…..Unlike the popular idealized image of historic forests, which depicts old trees spread like a blanket over the landscape, a real historic forest was patchy. It looked more like a quilt than a blanket. It was a mosaic of patches. Each patch consisted of a group of trees of about the same age, some young patches, some old patches, or meadows depending on how many years passed since fire created a new opening where they could grow. The variety of patches in historic forests helped to contain hot fires. Most patches of young trees, and old trees with little underneath did not burn well and served as firebreaks. Still, chance led to fires skipping some patches. So, fuel built up and the next fire burned a few of them while doing little harm to the rest of the forest”. Suppressing forest fires converts the forest into one connected whole, at risk of complete destruction from the eventual fire that cannot be suppressed.
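
The role of patchiness in containing fires can be illustrated with a toy site-percolation sketch in Python. The grid size, the fuel densities and the ignition rule are arbitrary assumptions chosen for illustration only, not a calibrated fire model:

```python
import numpy as np
from collections import deque

def burned_fraction(fuel_density, size=100, seed=1):
    """Ignite one random fuel cell and spread the fire to adjacent fuel cells (BFS)."""
    rng = np.random.default_rng(seed)
    fuel = rng.random((size, size)) < fuel_density   # True = burnable patch
    if not fuel.any():
        return 0.0
    cells = np.argwhere(fuel)
    start = tuple(cells[rng.integers(len(cells))])   # random ignition point
    burned = np.zeros_like(fuel)
    burned[start] = True
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < size and 0 <= ny < size and fuel[nx, ny] and not burned[nx, ny]:
                burned[nx, ny] = True
                queue.append((nx, ny))
    return burned.sum() / fuel.sum()

for density in (0.45, 0.55, 0.65, 0.75):
    print(f"fuel density {density:.2f}: {burned_fraction(density):.1%} of the fuel burns")
```

At low fuel densities the fire stays confined to the patch where it starts; once the fuel passes the lattice’s percolation threshold (roughly 59% site occupancy on a square grid), a single ignition burns nearly everything. The quilt-like mosaic Bonnicksen describes keeps the forest on the safe side of that threshold.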

In the absence of disturbances, connectivity builds up within the network, both within and between scales. Increased within-scale connectivity increases the severity of disturbances, while increased between-scale connectivity increases the probability of a disturbance at a lower level propagating up to higher levels and causing systemic collapse. Fire suppression in forests adapted to frequent undergrowth fires can cause an accumulation of ladder fuels which connect the undergrowth to the crown of the forest. The eventual undergrowth ignition then risks a crown fire by a process known as “torching”. Unlike understory fires, crown fires can spread across firebreaks such as rivers by a process known as “spotting” where the wind carries burning embers through the air – the fire can spread in this manner even without direct connectivity. Such fires can easily cause systemic collapse and a state from which natural forces cannot regenerate the forest. In this manner, stabilisation can cause changes that fundamentally alter the nature of the system rather than simply increase the severity of disturbances. For example, “extensive stand-replacing fires are in many cases resulting in “type conversions” from ponderosa pine forest to other physiognomic types (for example, grassland or shrubland) that may be persistent for centuries or perhaps even millennia” (Allen 2007).

Long-Run Increase in Cost of Stabilisation and Area Burned: The initial low cost of suppression is short-lived and the cumulative effect of the fragilisation of the system has led to rapidly increasing costs of wildfire suppression and levels of area burned in the last three decades (Donovan and Brown 2007).

Dilemmas in the Management of a Stabilised System

In my post on river flood management, I claimed that managing a stabilised and fragile system is “akin to choosing between the frying pan and the fire”. This has been the case in many forests around the United States for the last few decades and is the condition towards which the economies of the developed world are heading. Once the forest ecosystem has become fragile, the resultant large fire exacerbates the problem, thus triggering a vicious cycle. As Thomas Bonnicksen observed, “monster fires create even bigger monsters. Huge blocks of seedlings that grow on burned areas become older and thicker at the same time. When it burns again, fire spreads farther and creates an even bigger block of fuel for the next fire. This cycle of monster fires has begun”. The system enters an “unending cycle of monster fires and blackened landscapes”.

Minsky of course understood this end-state very well: “The success of a high-private-investment strategy depends upon the continued growth of relative needs to validate private investment. It also requires that policy be directed to maintain and increase the quasi-rents earned by capital – i.e., rentier and entrepreneurial income. But such high and increasing quasi-rents are particularly conducive to speculation, especially as these profits are presumably guaranteed by policy. The result is experimentation with liability structures that not only hypothecate increasing proportions of cash receipts but that also depend upon continuous refinancing of asset positions. A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch….As high investment and high profits depend upon and induce speculation with respect to liability structures, the expansions become increasingly difficult to control; the choice seems to become whether to accommodate to an increasing inflation or to induce a debt-deflation process that can lead to a serious depression”. (John Maynard Keynes pg 163–164)

The evolution of the system means that turning back the clock to a previous era of stability is not an option. As Minsky observed in the context of our financial system, “the apparent stability and robustness of the financial system of the 1950s and early 1960s can now be viewed as an accident of history, which was due to the financial residue of World War 2 following fast upon a great depression”. Re-regulation is not enough because it cannot undo the damage done by decades of financial “innovation” in a manner that does not risk systemic collapse.

At the same time, simply allowing an excessively stabilised system to burn itself out is a recipe for disaster. For example, on the role that controlled burns could play in restoring America’s forests to a resilient state, Thomas Bonnicksen observed: “Prescribed fire would come closer than any tool toward mimicking the effects of the historic Indian and lightning fires that shaped most of America’s native forests. However, there are good reasons why it is declining in use rather than expanding. Most importantly, the fuel problem is so severe that we can no longer depend on prescribed fire to repair the damage caused by over a century of fire exclusion. Prescribed fire is ineffective and unsafe in such forests. It is ineffective because any fire that is hot enough to kill trees over three inches in diameter, which is too small to eliminate most fire hazards, has a high probability of becoming uncontrollable”. The same logic applies to a fragile economic system.

Update: corrected date of Idaho fires from 2010 to 1910 in para 3 thanks to Dean.


Written by Ashwin Parameswaran

June 8th, 2011 at 11:35 am

The Cause and Impact of Crony Capitalism: the Great Stagnation and the Great Recession

with 23 comments

STABILITY AS THE PRIMARY CAUSE OF CRONY CAPITALISM

The core insight of the Minsky-Holling resilience framework is that stability and stabilisation breed fragility and a loss of system resilience. TBTF protection and the moral hazard problem are best seen as a subset of the broader policy of stabilisation, other manifestations of which, such as the Greenspan Put, are far more pervasive and dangerous.

By itself, stabilisation is not sufficient to cause cronyism and rent-seeking. Once a system has undergone a period of stabilisation, the system manager is always tempted to prolong the stabilisation for fear of the short-term disruption, or even collapse, that its withdrawal would trigger. However, not all crisis-mitigation strategies involve bailouts and transfers of wealth to incumbent corporates. As Mancur Olson pointed out, society can confine its “distributional transfers to poor and unfortunate individuals” rather than bailing out incumbent firms and still hope to achieve the same results.

To fully explain the rise of crony capitalism, we need to combine the Minsky-Holling framework with Mancur Olson’s insight that extended periods of stability trigger a progressive increase in the power of special interests and rent-seeking activity. Olson also noted the self-preserving nature of this phenomenon. Once rent-seeking has achieved sufficient scale, “distributional coalitions have the incentive and…the power to prevent changes that would deprive them of their enlarged share of the social output”.

SYSTEMIC IMPACT OF CRONY CAPITALISM

Crony capitalism results in a homogeneous, tightly coupled and fragile macroeconomy. The key question is: via which channels does this systemic malformation occur? As I have touched upon in some earlier posts [1,2], the systemic implications of crony capitalism arise from its negative impact on new firm entry. In the context of the exploration vs exploitation framework, the absence of new firm entry tilts the system towards over-exploitation1.

Exploration vs Exploitation: The Importance of New Firm Entry in Sustaining Exploration

In a seminal article, James March distinguished between “the exploration of new possibilities and the exploitation of old certainties. Exploration includes things captured by terms such as search, variation, risk taking, experimentation, play, flexibility, discovery, innovation. Exploitation includes such things as refinement, choice, production, efficiency, selection, implementation, execution.” True innovation is an act of exploration under conditions of irreducible uncertainty whereas exploitation is an act of optimisation under a known distribution.

The assertion that dominant incumbent firms find it hard to sustain exploratory innovation is not a controversial one. I do not intend to reiterate the popular arguments in the management literature, many of which I explored in a previous post. Moreover, the argument presented here is more subtle: I do not claim that incumbents cannot explore effectively but simply that they can explore effectively only when pushed to do so by a constant stream of new entrants. This is of course the “invisible foot” argument of Joseph Berliner and Burton Klein for which the exploration-exploitation framework provides an intuitive and rigorous rationale.

Let us assume a scenario where the entry of new firms has slowed to a trickle, the sector is dominated by a few large incumbents and the S-curve of growth is about to enter its maturity/decline phase. To trigger a new S-curve of growth, the incumbents need to explore. However, almost by definition, the odds that any given act of exploration will be successful are small. Moreover, the positive payoff from any exploratory search almost certainly lies far in the future. For an improbable shot at moving from a position of comfort to one of dominance in the distant future, an incumbent firm needs to divert resources from optimising and efficiency-increasing initiatives that will deliver predictable profits in the near future. Of course, if a significant proportion of its competitors adopt an exploratory strategy, even an incumbent firm will be forced to follow suit for fear of loss of market share. But this critical mass of exploratory incumbents never comes about. In essence, the state where almost all incumbents are content to focus their energies on exploitation is a Nash equilibrium.

On the other hand, the incentives of any new entrant are almost entirely skewed in favour of exploratory strategies. Even an improbable shot at glory is enough to outweigh the minor consequences of failure2. It cannot be emphasised enough that this argument does not depend upon the irrationality of the entrant. The same incremental payoff that represents a minor improvement for the incumbent is a life-changing event for the entrepreneur. When there exists a critical mass of exploratory new entrants, the dominant incumbents are compelled to follow suit and the Nash equilibrium of the industry shifts towards the appropriate mix of exploitation and exploration.
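
The asymmetry can be made concrete with a small sketch in Python. The probabilities and payoffs below are stylised assumptions chosen only to illustrate the argument, not estimates for any real industry:

```python
def expected_gain(p_success, payoff_success, payoff_failure):
    return p_success * payoff_success + (1 - p_success) * payoff_failure

P_SUCCESS = 0.10   # assumed odds that any single exploratory project pays off
PRIZE = 50         # assumed payoff of a successful exploration (normalised units)

# Incumbent: exploitation yields a near-certain profit of 10; exploring means
# diverting those resources, so failure costs the forgone exploitation profit.
incumbent_exploit = 10
incumbent_explore = expected_gain(P_SUCCESS, PRIZE, -10)

# Entrant: the outside option is close to zero, and the cost of a failed
# venture is assumed to be small for the founder.
entrant_outside = 0.5
entrant_explore = expected_gain(P_SUCCESS, PRIZE, -1)

print(f"incumbent: exploit {incumbent_exploit} vs explore {incumbent_explore:.1f}")
print(f"entrant:   outside {entrant_outside} vs explore {entrant_explore:.1f}")
```

With these assumed numbers, exploration is a losing proposition for the incumbent relative to its near-certain exploitation profits, yet comfortably dominates the entrant’s outside option – the same project looks entirely different depending on the starting point of the firm evaluating it.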

The Crony Capitalist Boom-Bust Cycle: A Tradeoff between System Resilience and Full Employment

Due to insufficient exploratory innovation, a crony capitalist economy is not diverse enough. But this does not imply that the system is fragile either at the firm/micro level or at the level of the macroeconomy. In the absence of any risk of being displaced by new entrants, incumbent firms can simply maintain significant financial slack3. If incumbents do maintain significant financial slack, sustainable full employment is impossible almost by definition. However, full employment can be achieved temporarily in two ways: either incumbent corporates can gradually give up their financial slack and lever up as the period of stability extends, as Minsky’s Financial Instability Hypothesis (FIH) would predict, or the household or government sector can lever up to compensate for the slack held by the corporate sector.

Most developed economies went down the route of increased household and corporate leverage with the process aided and abetted by monetary and regulatory policy. But it is instructive that developing economies such as India faced exactly the same problem in their “crony socialist” days. In keeping with its ideological leanings pre-1990, India tackled the unemployment problem via increased government spending. Whatever the chosen solution, full employment is unsustainable in the long run unless the core problem of cronyism is tackled. The current over-leveraged state of the consumer in the developed world can be papered over by increased government spending but in the face of increased cronyism, it only kicks the can further down the road. Restoring corporate animal spirits depends upon corporate slack being utilised in exploratory investment, which as discussed above is inconsistent with a cronyist economy.

Micro-Fragility as the Key to a Resilient Macroeconomy and Sustainable Full Employment

At the appropriate mix of exploration and exploitation, individual incumbent and new entrant firms are both incredibly vulnerable. Most exploratory investments are destined to fail as are most firms, sooner or later. Yet due to the diversity of firm-level strategies, the macroeconomy of vulnerable firms is incredibly resilient. At the same time, the transfer of wealth from incumbent corporates to the household sector via reduced corporate slack and increased investment means that sustainable full employment can be achieved without undue leverage. The only question is whether we can break out of the Olsonian special interest trap without having to suffer a systemic collapse in the process.

  1. It cannot be emphasised enough that the absence of new firm entry is simply the channel through which crony capitalism malforms the macroeconomy. Therefore, attempts to artificially boost new firm entry are likely to fail unless they tackle the ultimate cause of the problem, which is stabilisation.
  2. It is critical that the personal consequences of firm failure are minor for the entrepreneur – this is not the case for cultural and legal reasons in many countries around the world but is largely still true in the United States.
  3. It could be argued that incumbents could follow this strategy even when new entrants threaten them. This strategy however has its limits – an extended period of standing on the sidelines of exploratory activity can degrade the ability of the incumbent to rejoin the fray. As Brian Loasby remarked: “For many years, Arnold Weinberg chose to build up GEC’s reserves against an uncertain technological future in the form of cash rather than by investing in the creation of technological capabilities of unknown value. This policy, one might suggest, appears much more attractive in a financial environment where technology can often be bought by buying companies than in one where the market for corporate control is more tightly constrained; but it must be remembered that some, perhaps substantial, technological capability is likely to be needed in order to judge what companies are worth acquiring, and to make effective use of the acquisitions. As so often, substitutes are also in part complements.”

Written by Ashwin Parameswaran

November 24th, 2010 at 6:01 pm

Uncertainty and the Cyclical vs Structural Unemployment Debate

with 6 comments

There are two schools of thought on the primary cause of our current unemployment problem: Some claim that the unemployment is cyclical (low aggregate demand) whereas others think it’s structural (mismatch in the labour market). The “Structuralists” point to the apparent shift in the Beveridge curve and the increased demand in healthcare and technology whereas the “Cyclicalists” point to the fall in employment across all other sectors. So who’s right? In my opinion, neither explanation is entirely satisfactory. This post is an expansion of some thoughts I touched upon in my last post that describe the “persistent unemployment” problem as a logical consequence of a dynamically uncompetitive “Post Minsky Moment” economy.

Narayana Kocherlakota explains the mismatch thesis as follows: “Firms have jobs, but can’t find appropriate workers. The workers want to work, but can’t find appropriate jobs. There are many possible sources of mismatch—geography, skills, demography—and they are probably all at work….the Fed does not have a means to transform construction workers into manufacturing workers.” Undoubtedly this argument has some merit – the real question is how much of our current unemployment can be attributed to the mismatch problem? Kocherlakota draws on work done by Robert Shimer and extrapolates from the Beveridge curve relationship since 2000 to arrive at an implied unemployment rate of 6.3% if mismatch were no bigger a problem than in the past and the Beveridge curve relationship had not broken down. Jan Hatzius of Goldman Sachs on the other hand attributes as little as 0.75% of the current unemployment problem to structural reasons. Murat Tasci and Dave Lindner however conclude that the recent behaviour of the Beveridge curve is not anomalous when viewed in the context of previous post-war recessions. Shimer himself was wary of extrapolating too much from the limited data set from 2000 (see pg 12-13 here). This would imply that Kocherlakota’s estimate is an overestimate even if Jan Hatzius’ may be an underestimate.

Incorporating Uncertainty into the Mismatch Argument

It is likely therefore that there is a significant pool of unemployment that cannot be explained by the simple mismatch argument. But this does not mean that the “recalculation” thesis is invalid. The simple mismatch argument ignores the uncertainty involved in the “Post Minsky Moment” economy – it assumes that firms have known jobs that remain unfilled whereas in reality, firms need to engage in a process of exploration that will determine the nature of jobs consistent with the new economic reality before they search for suitable workers. The problem we face right now is of firms unwilling to take on the risk inherent in such an exploration. The central message in my previous posts on evolvability and organisational rigidity is that this process of exploration is dependent upon the maintenance of a dynamically competitive economy rather than a statically competitive economy. Continuous entry of new firms is of critical importance in maintaining a dynamically competitive economy that retains the ability to evolve and reconfigure itself when faced with a dramatic change in circumstances.

The “Post Minsky Moment” Economy

In Minsky’s Financial Instability Hypothesis, the long period of stability before the crash creates a homogeneous and fragile ecosystem – the fragility arises from the fragility of the individual firms as well as from the absence of diversity. After the inevitable crash, the system regains some of its robustness via the slack built up by the incumbent firms, usually in the form of financial liquidity. However, so long as this slack at firm level is maintained, the macro-system cannot possibly revert to a state where it attains conventional welfare optima such as full employment. The conventional Keynesian solution suggests that the state pick up the slack in economic activity whereas some assume that sooner or later, market forces will reorganise to utilise this firm-level slack. This post is an attempt to partially refute both explanations – as Burton Klein often noted, there is no hidden hand that can miraculously restore the “animal spirits” of an economy or an industry once it has lost its evolvability. Similarly, Keynesian policies that shore up the position of the incumbent firms can cause fatal damage to the evolvability of the macro-economy.

Corporate Profits and Unemployment

This thesis does not imply that incumbent firms leave money on the table. In fact, incumbents typically redouble their efforts at static optimisation – hence the rise in corporate profits. Some may argue that this rise in profitability is illusory and represents capital consumption i.e. short-term gain at the expense of long-term loss of competence and capabilities at firm level. But in the absence of new firm entry, it is unlikely that there is even a long-term threat to incumbents’ survival i.e. firms are making a calculated bet that loss of evolvability represents a minor risk. It is only the invisible foot of the threat of new firms that prevents incumbents from going down this route.

Small Business Financing Constraints as a Driver of Unemployment

The role of new firms in generating employment is well-established and my argument implies that incumbent firms will effectively contribute to solving the unemployment problem only when prodded to do so by the hidden foot of new firm entry. The credit conditions faced by small businesses remain extremely tight despite funding costs for big incumbent firms having eased considerably since the peak of the crisis. Of course this may be due to insufficient investment opportunities – some of which may be due to dominant large incumbents in specific sectors. But a more plausible explanation lies in the unevolvable and incumbent-dominated state of our banking sector. Expanding lending to new firms is an act of exploration and incumbent banks are almost certainly content with exploiting their known and low-risk sources of income instead. One of Burton Klein’s key insights was how only a few key dynamically uncompetitive sectors can act as a deadweight drag on the entire economy and banking certainly fits the bill.


Written by Ashwin Parameswaran

September 8th, 2010 at 9:21 am

Evolvability, Robustness and Resilience in Complex Adaptive Systems

with 14 comments

In a previous post, I asserted that “the existence of irreducible uncertainty is sufficient to justify an evolutionary approach for any social system, whether it be an organization or a macro-economy.” This is not a controversial statement – Nelson and Winter introduced their seminal work on evolutionary economics as follows: “Our evolutionary theory of economic change…is not an interpretation of economic reality as a reflection of supposedly constant “given data” but a scheme that may help an observer who is sufficiently knowledgeable regarding the facts of the present to see a little further through the mist that obscures the future.”

In microeconomics, irreducible uncertainty implies a world of bounded rationality where many heuristics become not signs of irrationality but rational and effective tools of decision-making. But it is the implications of human action under uncertainty for macro-economic outcomes that is the focus of this blog – in previous posts (1,2) I have elaborated upon the resilience-stability tradeoff and its parallels in economics and ecology. This post focuses on another issue critical to the functioning of all complex adaptive systems: the relationship between evolvability and robustness.

Evolvability and Robustness Defined

Hiroaki Kitano defines robustness as follows: “Robustness is a property that allows a system to maintain its functions despite external and internal perturbations….A system must be robust to function in unpredictable environments using unreliable components.” Kitano makes it explicit that robustness is concerned with the maintenance of functionality rather than specific components: “Robustness is often misunderstood to mean staying unchanged regardless of stimuli or mutations, so that the structure and components of the system, and therefore the mode of operation, is unaffected. In fact, robustness is the maintenance of specific functionalities of the system against perturbations, and it often requires the system to change its mode of operation in a flexible way. In other words, robustness allows changes in the structure and components of the system owing to perturbations, but specific functions are maintained.”

Evolvability is defined as the ability of the system to generate novelty and innovate thus enabling the system to “adapt in ways that exploit new resources or allow them to persist under unprecedented environmental regime shifts” (Whitacre 2010). At first glance, evolvability and robustness appear to be incompatible: Generation of novelty involves a leap into the dark, an exploration rather than an act of “rational choice” and the search for a beneficial innovation carries with it a significant risk of failure. It’s worth noting that in social systems, this dilemma vanishes in the absence of irreducible uncertainty. If all adaptations are merely a realignment to a known systemic configuration (“known” in either a deterministic or a probabilistic sense), then an inability to adapt needs other explanations such as organisational rigidity.

Evolvability, Robustness and Resilience

Although it is typical to equate resilience with robustness, resilient complex adaptive systems also need to possess the ability to innovate and generate novelty. As Allen and Holling put it: “Novelty and innovation are required to keep existing complex systems resilient and to create new structures and dynamics following system crashes”. Evolvability also enables the system to undergo fundamental transformational change – it could be argued that such innovations are even more important in a modern capitalist economic system than they are in the biological or ecological arena. The rest of this post will focus on elaborating upon how macro-economic systems can be both robust and evolvable at the same time – the apparent conflict between evolvability and robustness arises from a fallacy of composition where macro-resilience is assumed to arise from micro-resilience, when in fact it arises from the very absence of micro-resilience.

EVOLVABILITY, ROBUSTNESS AND RESILIENCE IN MACRO-ECONOMIC SYSTEMS

The pre-eminent reference on how a macro-economic system can be both robust and evolvable at the same time is the work of Burton Klein in his books “Dynamic Economics” and “Prices, Wages and Business Cycles: A Dynamic Theory”. But as with so many other topics in evolutionary economics, no one has summarised it better than Brian Loasby: “Any economic system which is to remain viable over a long period must be able to cope with unexpected change. It must be able to revise or replace policies which have worked well. Yet this ability is problematic. Two kinds of remedy may be tried, at two different system levels. One is to try to sensitize those working within a particular research programme to its limitations and to possible alternatives, thus following Menger’s principle of creating private reserves against unknown but imaginable dangers, and thereby enhancing the capacity for internal adaptation….But reserves have costs; and it may be better, from a system-wide perspective, to accept the vulnerability of a sub-system in order to exploit its efficiency, while relying on the reserves which are the natural product of a variety of sub-systems….
Research programmes, we should recall, are imperfectly specified, and two groups starting with the same research programme are likely to become progressively differentiated by their experience, if there are no strong pressures to keep them closely aligned. The long-run equilibrium of the larger system might therefore be preserved by substitution between sub-systems as circumstances change. External selection may achieve the same overall purpose as internal adaptation – but only if the system has generated adequate variety from which the selection may be made. An obvious corollary which has been emphasised by Klein (1977) is that attempts to preserve sub-system stability may wreck the larger system. That should not be a threatening notion to economists; it also happens to be exemplified by Marshall’s conception of the long-period equilibrium of the industry as a population equilibrium, which is sustained by continued change in the membership of that population. The tendency of variation is not only a chief cause of progress; it is also an aid to stability in a changing environment (Eliasson, 1991). The homogeneity which is conducive to the attainment of conventional welfare optima is a threat to the resilience which an economy needs.”

Uncertainty can be tackled at the micro-level by maintaining reserves and slack (liquidity, retained profits) but this comes at the price of slack at the macro-level in the form of lost output and employment. Note that this is an essentially Keynesian conclusion, similar to how individually rational saving decisions can lead to collectively sub-optimal outcomes. From a systemic perspective, it is preferable to replace this micro-resilience with a diverse set of micro-fragilities. But how do we induce the loss of slack at firm level? And how do we ensure that this loss of micro-resilience occurs in a sufficiently diverse manner?

The “Invisible Foot”

The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy: “Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels….But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.” To maintain an evolvable macro-economy, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition.”

Entry of New Firms and the Invisible Foot

Burton Klein’s great contribution along with other dynamic economists of the time (notably Gunnar Eliasson) was to highlight the critical importance of entry of new firms in maintaining the efficacy of the invisible foot. Klein believed that “the degree of risk taking is determined by the robustness of dynamic competition, which mainly depends on the rate of entry of new firms. If entry into an industry is fairly steady, the game is likely to have the flavour of a highly competitive sport. When some firms in an industry concentrate on making significant advances that will bear fruit within several years, others must be concerned with making their long-run profits as large as possible, if they hope to survive. But after entry has been closed for a number of years, a tightly organised oligopoly will probably emerge in which firms will endeavour to make their environments highly predictable in order to make their short-run profits as large as possible….Because of new entries, a relatively concentrated industry can remain highly dynamic. But, when entry is absent for some years, and expectations are premised on the future absence of entry, a relatively concentrated industry is likely to evolve into a tight oligopoly. In particular, when entry is long absent, managers are likely to be more and more narrowly selected; and they will probably engage in such parallel behaviour with respect to products and prices that it might seem that the entire industry is commanded by a single general!”

Again, it can’t be emphasised enough that this argument does not depend on incumbent firms leaving money on the table – on the contrary, they may redouble their attempts at static optimisation. From the perspective of each individual firm, innovation is an incredibly risky process even though the result of such dynamic competition from the perspective of the industry or macro-economy may be reasonably predictable. Of course, firms can and do mitigate this risk by various methods but this argument only claims that any single firm, however dominant, cannot replicate the “risk-free” innovation dynamics of a vibrant industry in-house.

Micro-Fragility as the Hidden Hand of Macro-Resilience

In an environment free of irreducible uncertainty, evolvability suffers, leading to reduced macro-resilience. “If firms could predict each others’ advances they would not have to insure themselves against uncertainty by taking risks. And no smooth progress would occur” (Klein 1977). Conversely, “because firms cannot predict each other’s discoveries, they undertake different approaches towards achieving the same goal. And because not all of the approaches will turn out to be equally successful, the pursuit of parallel paths provides the options required for smooth progress.”
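
Klein’s point about parallel paths has a simple arithmetic core, sketched below in Python; the per-approach success probability is an assumption chosen purely for illustration:

```python
P_SINGLE = 0.10   # assumed success probability of any one exploratory approach

for n in (1, 3, 5, 10, 20):
    # with n independent, diverse approaches, progress only fails if all of them fail
    p_any = 1 - (1 - P_SINGLE) ** n
    print(f"{n:2d} parallel approaches -> P(at least one succeeds) = {p_any:.2f}")
```

Each individual approach is likely to fail, but the probability that at least one of a diverse set of approaches succeeds rises quickly with their number – micro-fragility delivering dependable macro-level progress, which is precisely what no single firm can replicate in-house without bearing the cost of every parallel path itself.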

The Aftermath of the Minsky Moment: A Problem of Micro-Resilience

Within the context of the current crisis, the pre-Minsky-moment system was a homogeneous system with no slack, which enabled the attainment of “conventional welfare optima” but at the cost of an incredibly fragile and unevolvable condition. The logical evolution of such a system after the Minsky moment is of course still a homogeneous system, but one with significant firm-level slack built in, which is equally unsatisfactory. In such a situation, the kind of macro-economic intervention matters as much as the force of intervention. For example, in an ideal world, monetary policy aimed at reducing the borrowing rates of incumbent banks and corporates will flow through into reduced borrowing rates for new firms. In a dynamically uncompetitive world, such a policy will only serve the interests of the incumbents.

The “Invisible Foot” and Employment

Vivek Wadhwa argues that startups are the main source of net job growth in the US economy and Mark Thoma links to research that confirms this thesis. But even if one disagrees, the “invisible foot” argument implies that if the old guard is to contribute to employment, it must be forced to give up its “slack” by the strength of dynamic competition – and dynamic competition is maintained by preserving conditions that encourage the entry of new firms.

MICRO-EVOLVABILITY AND MACRO-RESILIENCE IN BIOLOGY AND ECOLOGY

Note: The aim of this section is not to draw any falsely precise equivalences between economic resilience and ecological or biological resilience but simply to highlight the commonality of the micro-macro fallacy of composition across complex adaptive systems – a detailed comparison will hopefully be the subject of a future post. I have tried to keep the section on biological resilience as brief and simple as possible but an understanding of the genotype-phenotype distinction and neutral networks is essential to make sense of it.

Biology: Genotypic Variation and Phenotypic Robustness

In the specific context of biology, evolvability can be defined as “the capacity to generate heritable, selectable phenotypic variation. This capacity may have two components: (i) to reduce the potential lethality of mutations and (ii) to reduce the number of mutations needed to produce phenotypically novel traits” (Kirschner and Gerhart 1998). The apparent conflict between evolvability and robustness can be reconciled by distinguishing between genotypic and phenotypic robustness and evolvability. James Whitacre summarises Andrew Wagner’s work on RNA genotypes and their structure phenotypes as follows: “this conflict is unresolvable only when robustness is conferred in both the genotype and the phenotype. On the other hand, if the phenotype is robustly maintained in the presence of genetic mutations, then a number of cryptic genetic changes may be possible and their accumulation over time might expose a broad range of distinct phenotypes, e.g. by movement across a neutral network. In this way, robustness of the phenotype might actually enhance access to heritable phenotypic variation and thereby improve long-term evolvability.”
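
The genotype-phenotype argument can be caricatured with a toy sketch in Python; the bitstring genotype, the block-wise readout and the walk length are all arbitrary illustrative assumptions, not a model of RNA folding:

```python
import random

L_BITS, BLOCK = 20, 5   # toy genotype: 20 bits, read out in blocks of 5

def phenotype(g):
    # coarse, many-to-one readout: each block reports whether it holds >= 3 ones
    return tuple(sum(g[i:i + BLOCK]) >= 3 for i in range(0, L_BITS, BLOCK))

def mutants(g):
    # all single-bit mutants of genotype g
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L_BITS)]

random.seed(0)
g = tuple(random.randint(0, 1) for _ in range(L_BITS))
p0 = phenotype(g)

neutral = sum(phenotype(m) == p0 for m in mutants(g))
print(f"robustness: {neutral} of {L_BITS} single mutations leave the phenotype unchanged")

# neutral walk: accept only mutations that preserve the phenotype, and record
# every distinct novel phenotype that appears one mutation away from the path
start_reachable = {phenotype(m) for m in mutants(g)} - {p0}
reachable = set(start_reachable)
visited = {g}
for _ in range(500):
    m = random.choice(mutants(g))
    if phenotype(m) == p0:
        g = m
        visited.add(g)
    reachable |= {phenotype(m2) for m2 in mutants(g)} - {p0}

print(f"evolvability: {len(start_reachable)} novel phenotypes adjacent at the start,")
print(f"{len(reachable)} adjacent somewhere on a neutral network of {len(visited)} genotypes")
```

In this toy, a majority of single mutations typically leave the phenotype unchanged (robustness), while drifting along those neutral mutations accumulates cryptic genetic change and typically enlarges the set of novel phenotypes reachable by a single further mutation (evolvability) – the reconciliation Whitacre describes.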

Ecology: Species-Level Variability and Functional Stability

The notion of micro-variability being consistent with and even being responsible for macro-resilience is an old one in ecology as Simon Levin and Jane Lubchenco summarise here: “That the robustness of an ensemble may rest upon the high turnover of the units that make it up is a familiar notion in community ecology. MacArthur and Wilson (1967), in their foundational work on island biogeography, contrasted the constancy and robustness of the number of species on an island with the ephemeral nature of species composition. Similarly, Tilman and colleagues (1996) found that the robustness of total yield in high-diversity assemblages arises not in spite of, but primarily because of, the high variability of individual population densities.”

The concept is also entirely consistent with the “Panarchy” thesis which views an ecosystem as a nested hierarchy of adaptive cycles: “Adaptive cycles are nested in a hierarchy across time and space which helps explain how adaptive systems can, for brief moments, generate novel recombinations that are tested during longer periods of capital accumulation and storage. These windows of experimentation open briefly, but the results do not trigger cascading instabilities of the whole because of the stabilizing nature of nested hierarchies. In essence, larger and slower components of the hierarchy provide the memory of the past and of the distant to allow recovery of smaller and faster adaptive cycles.”

Misc. Notes

1. It must be emphasised that micro-fragility is a necessary, but not a sufficient, condition for an evolvable and robust macro-system. The role of not just redundancy but degeneracy is critical, as is the size of the population.

2. Many commentators use resilience and robustness interchangeably. I draw a distinction primarily because my definitions of robustness and evolvability are borrowed from biology, while my definition of resilience is borrowed from ecology, which in my view treats a system that is both robust and evolvable as resilient.

Written by Ashwin Parameswaran

August 30th, 2010 at 8:38 am

Amar Bhide on “Robotic Finance”: An Adaptive Explanation

with 6 comments

In the HBR, Amar Bhide notes that models have replaced discretion in many areas of finance, particularly in banks’ mortgage lending decisions: “Over the past several decades, centralized, mechanistic finance elbowed aside the traditional model….Mortgages are granted or denied (and new mortgage products like option ARMs are designed) using complex models that are conjured up by a small number of faraway rocket scientists and take little heed of the specific facts on the ground.” For the most part, the description of the damage done by “robotic finance” is accurate, but the article ignores why this mechanisation came about. It is easy to assume that the dominance of models over discretion was simply a grand error by the banking industry. But in reality, the “excessive” dependence on models was an entirely rational and logical evolution of the banking industry given the incentives and the environment that bankers faced.

An over-reliance on models over discretion cripples the adaptive capabilities of the firm: “No contract can anticipate all contingencies. But securitized financing makes ongoing adaptations infeasible; because of the great difficulty of renegotiating terms, borrowers and lenders must adhere to the deal that was struck at the outset. Securitized mortgages are more likely than mortgages retained by banks to be foreclosed if borrowers fall behind on their payments, as recent research shows.” But why would firms choose such rigid and inflexible solutions? There are many answers to this question but all of them depend on the obvious fact that adaptable solutions entail a higher cost than rigid solutions. It is far less expensive to analyse the creditworthiness of mortgages with standardised models than with people on the ground.

This increased efficiency comes at the cost of catastrophic losses in a crisis, but long periods of stability inevitably select for efficient and rigid solutions rather than adaptable and flexible ones. This may be a consequence of moral hazard or principal-agent problems, as I have analysed many times on this blog, but it does not depend on either. A preference for rigid routines may be an entirely rational response to a long period of stability under uncertainty – both from an individual’s perspective and an organisation’s perspective. Probably the best exposition of this problem was given by Brian Loasby in his book “Equilibrium and Evolution” (pages 56-7): “Success has its opportunity costs. People who know how to solve their problems can get to work at once, without considering whether some other method might be more effective; they thereby become increasingly efficient, but also increasingly likely to encounter problems which are totally unexpected and which are not amenable to their efficient routines…The patterns which people impose on phenomena have necessarily a limited range of application, and the very success with which they exploit that range tends to make them increasingly careless about its limits. This danger is likely to be exacerbated by formal information systems, which are typically designed to cope with past problems, and which therefore may be worse than useless in signalling new problems. If any warning messages do arrive, they are likely to be ignored, or force-fitted into familiar categories; and if a crisis breaks, the information needed to deal with it may be impossible to obtain.”
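
Loasby’s point can be caricatured with a toy calculation – the payoffs below are invented and not calibrated to anything. A cheap rigid routine out-earns a costlier adaptive strategy for as long as the environment stays familiar, and the longer the stable period lasts, the larger the routine’s lead when the unexpected finally arrives.

    def cumulative_payoffs(stable_years, shock_years=5, adapt_lag=1):
        rigid = adaptive = 0.0
        pre_shock_lead = None
        for t in range(stable_years + shock_years):
            if t == stable_years:
                pre_shock_lead = rigid - adaptive      # standings when the shock hits
            if t < stable_years:                       # familiar environment
                rigid += 1.00                          # the routine fits perfectly
                adaptive += 0.90                       # flexibility carries an ongoing cost
            else:                                      # novel environment
                rigid += -2.00                         # the routine misfires every period
                adaptive += -0.50 if t - stable_years < adapt_lag else 0.90
        return pre_shock_lead, rigid - adaptive

    for stable_years in (5, 15, 40):
        before, after = cumulative_payoffs(stable_years)
        print(f"{stable_years:2d} stable years: rigid lead at the shock = {before:4.1f}, "
              f"rigid lead at the end = {after:6.1f}")

In this caricature the adaptive strategy always wins eventually, but anyone selecting on the record compiled before the shock would pick the rigid routine every time – and more emphatically the longer the calm lasts.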

Now it is obvious why banks stuck with such rigid models during the “Great Moderation”, but it is less obvious why they have not discarded them voluntarily after the “Minsky Moment”. The answer lies in the difficulty that organisations and other social systems face in making dramatic systemic U-turns even when the logic for doing so is clear – hence the importance of mitigating the TBTF problem and enabling the entry of new firms. As I have asserted before: “A crony capitalist economic system that protects the incumbent firms hampers the ability of the system to innovate and adapt to novelty. It is obvious how the implicit subsidy granted to our largest financial institutions via the Too-Big-To-Fail doctrine represents a transfer of wealth from the taxpayer to the financial sector. It is also obvious how the subsidy encourages a levered, homogenous and therefore fragile financial sector that is susceptible to collapse. What is less obvious is the paralysis that it induces in the financial sector and by extension the macroeconomy long after the bailouts and the Minsky moment have passed.”

Written by Ashwin Parameswaran

August 23rd, 2010 at 4:34 am

Raghuram Rajan on Monetary Policy and Macroeconomic Resilience

with 16 comments

Amongst economic commentators, Raghuram Rajan has stood out recently for his consistent calls to raise interest rates from “ultra-low to the merely low”. Predictably, this suggestion has been met with outright condemnation by many economists of both Keynesian and monetarist persuasions. Rajan’s case against ultra-low rates draws on many arguments, but this post will focus on just one of them – one straight out of the “resilience” playbook. In 2008, Raghu Rajan and Doug Diamond co-authored a paper, the conclusion of which Rajan summarises in his FT article: “the pattern of Fed policy over time builds expectations. The market now thinks that whenever the financial sector’s actions result in unemployment, the Fed will respond with ultra-low rates and easy liquidity. So even as the Fed has maintained credibility as an inflation fighter, it has lost credibility in fighting financial adventurism. This cannot augur well for the future.”

Much as he has accused the Austrians, Paul Krugman accuses Rajan of being a “liquidationist”. This is not a coincidence – Rajan and Diamond’s thesis is quite explicit about its connections to Austrian Business Cycle Theory: “a central bank that promises to cut interest rates conditional on stress, or that is biased towards low interest rates favouring entrepreneurs, will induce banks to promise higher payouts or take more illiquid projects. This in turn can make the illiquidity crisis more severe and require a greater degree of intervention, a view reminiscent of the Austrian theory of cycles.” But as the summary hints, Rajan and Diamond’s thesis is fundamentally different from ABCT. The conventional Austrian story identifies excessive credit inflation and interest rates below the “natural” rate of interest as the driver of the boom/bust cycle, whereas Rajan and Diamond identify the anticipation by economic agents of low rates and “liquidity” facilities every time there is an economic downturn as the driver of systemic fragility. The adaptation of banks and other market players to this regime makes the eventual bust all the more likely. As Rajan and Diamond note: “If the authorities are expected to reduce interest rates when liquidity is at a premium, banks will take on more short-term leverage or illiquid loans, thus bringing about the very states where intervention is needed.”

Rajan and Diamond’s thesis is limited to the impact of such policies on banks but, as I noted in a previous post, other market players also adapt to this implicit commitment from the central bank to follow easy money policies at the first hint of economic trouble. This thesis is essentially a story of the Greenspan-Bernanke era and the damage that the Greenspan Put has caused. It also explains the dramatically diminishing returns inherent in the Greenspan Put strategy as the stabilising policies of the central bank become entrenched in the expectations of market players and, crucially, banks – in each subsequent cycle, the central bank has to do more and more (lower rates, larger liquidity facilities) to achieve less and less.
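
A stylised sketch of this ratchet – my own toy dynamics, not the Diamond-Rajan model itself: if the expected size of the next rescue feeds into leverage, and the required rescue scales with the fragility that leverage creates, then each cycle demands a larger rate cut while delivering a smaller boost.

    expected_support = 1.0                             # priced-in size of the next rescue
    for cycle in range(1, 6):
        leverage = 1.0 + 0.8 * expected_support        # more expected support -> more leverage taken on
        required_cut = leverage                        # the more fragile the system, the bigger the rescue
        output_gain = required_cut / leverage ** 2     # traction of each unit of stimulus falls with leverage
        expected_support = required_cut                # the rescue is observed and priced into the next cycle
        print(f"cycle {cycle}: leverage = {leverage:.2f}, rate cut needed = {required_cut:.2f}, "
              f"output gain = {output_gain:.2f}")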

Written by Ashwin Parameswaran

August 3rd, 2010 at 6:30 am

Agent Irrationality and Macroeconomics

with 2 comments

In a recent post, Rajiv Sethi questions the tendency to find behavioural explanations for financial crises and argues for an ecological approach instead – a sentiment that I agree with and have touched upon in previous posts on this blog. This post expands upon some of these themes.

A More Realistic View of Rationality and Human Cognition, Not Irrationality

Much of the debate on rationality in economics focuses on whether we as human beings are rational in the “homo economicus” sense. The “heuristics and biases” program pioneered by Daniel Kahneman and Amos Tversky argues that we are not “rational” – however, it does not question whether the definition of rationality implicit in “rational choice theory” is itself valid. Many researchers in the neural and cognitive sciences now believe that the conventional definition of rationality needs to be radically overhauled.

Most heuristics/biases are not a sign of irrationality but an entirely rational form of decision-making when faced with uncertainty. In an earlier post, I explained how Ronald Heiner’s framework accounts for our neglect of tail events as a logical response to an uncertain environment, but the best exposition of this viewpoint can be found in Gerd Gigerenzer’s work, which is itself inspired by Herbert Simon’s ideas on “bounded rationality”. In his aptly named book “Rationality for Mortals: How People Cope with Uncertainty”, Gigerenzer explains the two key building blocks of “the science of heuristics” (a sketch of one heuristic from this toolbox follows the list below):

  • The Adaptive Toolbox: “the building blocks for fast and frugal heuristics that work in real-world environments of natural complexity, where an optimal strategy is often unknown or computationally intractable”
  • Ecological Rationality: “the environmental structures in which a given heuristic is successful” and the “coevolution between heuristics and environments”
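
As an illustration of what lives in the adaptive toolbox, here is a minimal sketch of one of Gigerenzer’s fast and frugal heuristics, “take-the-best”: compare two options cue by cue in order of cue validity and decide on the first cue that discriminates, with no weighting and no optimisation. The cue values and validities below are invented for the example.

    def take_the_best(option_a, option_b, cues_by_validity):
        """Return the option favoured by the first discriminating cue, else None."""
        for cue in cues_by_validity:               # most valid cue first
            a, b = option_a.get(cue), option_b.get(cue)
            if a != b:                             # this cue discriminates: stop searching
                return option_a if a > b else option_b
        return None                                # no cue discriminates: guess

    # Hypothetical example: which of two cities is larger?
    cues = ["has_international_airport", "is_national_capital", "has_university"]
    city_a = {"name": "A", "has_international_airport": 1, "is_national_capital": 0, "has_university": 1}
    city_b = {"name": "B", "has_international_airport": 1, "is_national_capital": 1, "has_university": 1}
    winner = take_the_best(city_a, city_b, cues)
    print("take-the-best picks:", winner["name"])  # the capital cue is the first to discriminate -> B

Whether such a heuristic performs well depends, as the second bullet stresses, on the structure of the environment in which it is used.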

The irony, of course, is that many classical economists had a more accurate definition of rationality than the one implicit in “rational choice theory” (See Brian Loasby’s book which I discussed here). Much of the work done in the neural sciences confirms the more nuanced view of human cognition espoused in Hayek’s “The Sensory Order” or Ken Boulding’s “The Image” (See Joaquin Fuster on Hayek or the similarities between Ken Boulding’s views and V.S. Ramachandran’s work discussed here).

Macro-Rationality Is Consistent with Micro-Irrationality

Even a more realistic definition of rationality doesn’t preclude individual irrationality. However, as Michael Mauboussin pointed out: “markets can still be rational when investors are individually irrational. Sufficient investor diversity is the essential feature in efficient price formation. Provided the decision rules of investors are diverse—even if they are suboptimal—errors tend to cancel out and markets arrive at appropriate prices. Similarly, if these decision rules lose diversity, markets become fragile and susceptible to inefficiency. So the issue is not whether individuals are irrational (they are) but whether they are irrational in the same way at the same time. So while understanding individual behavioral pitfalls may improve your own decision making, appreciation of the dynamics of the collective is key to outperforming the market.”
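
Mauboussin’s point is easy to demonstrate with a toy simulation (invented numbers): a crowd of individually biased guessers aggregates to a good estimate as long as the biases are diverse, and fails precisely when everyone errs in the same direction at the same time.

    import numpy as np

    rng = np.random.default_rng(42)
    true_value = 100.0
    n_agents = 1000

    # Diverse crowd: large but independent, randomly-signed individual errors.
    diverse = true_value + rng.normal(0, 20, n_agents)
    # Homogeneous crowd: smaller individual noise, but a shared +15 bias.
    homogeneous = true_value + 15 + rng.normal(0, 5, n_agents)

    for label, guesses in [("diverse", diverse), ("homogeneous", homogeneous)]:
        print(f"{label:12s}: mean individual error = {np.abs(guesses - true_value).mean():5.1f}, "
              f"error of the average guess = {abs(guesses.mean() - true_value):5.1f}")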

Economies as Complex Adaptive Systems: Behavioural Heterogeneity, Selection Pressures and Emphasis on System Dynamics

In my view, the ecological approach to macroeconomics is essentially a systems approach with the emphasis on the “adaptive” nature of the system, i.e. incentives matter, and the actors in a system tend to find ways to work around imposed rules that try to fight the impact of misaligned incentives. David Merkel explained it well when he noted: “People hate having their freedom restrained, and so when arbitrary rules are imposed, even smart rules, they look for means of escape.” And many of the posts on this blog have focused on how rules can be subverted even when economic agents don’t actively intend to do so.

The ecological approach emphasises the diversity of behavioural preferences and the role of incentives/institutions/rules in “selecting” from this pool of possible agent behaviours or causing agent behaviour to adapt in reaction to these incentives. When a behaviourally homogeneous pool of agents is observed, the ecological approach focuses on the selection pressures and incentives that could have caused this loss of diversity rather than attempting to lay the blame on some immutable behavioural trait. Again, as Rajiv Sethi puts it here: “human behavior differs substantially across career paths because of selection both into and within occupations….[Regularities] identified in controlled laboratory experiments with standard subject pools have limited application to environments in which the distribution of behavioral propensities is both endogenous and psychologically rare. This is the case in financial markets, which are subject to selection at a number of levels. Those who enter the profession are unlikely to be psychologically typical, and market conditions determine which behavioral propensities survive and thrive at any point in historical time.”
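
A toy simulation of this selection effect (all parameters invented): start with traders whose risk appetites are spread uniformly, let a calm market reward risk-taking year after year, and the capital-weighted distribution of “behavioural propensities” drifts towards aggression – until the crash reweights it again.

    import numpy as np

    rng = np.random.default_rng(7)
    N = 5000
    risk_appetite = rng.uniform(0.0, 1.0, N)       # heterogeneous behavioural propensities
    wealth = np.ones(N)

    def weighted_appetite():
        # risk appetite of the "market", weighted by the capital behind each trader
        return (risk_appetite * wealth).sum() / wealth.sum()

    print(f"start of calm period: capital-weighted risk appetite = {weighted_appetite():.2f}")

    for year in range(15):                         # calm years: risk-taking is rewarded
        wealth *= 1.0 + 0.20 * risk_appetite + rng.normal(0, 0.01, N)
    print(f"end of calm period:   capital-weighted risk appetite = {weighted_appetite():.2f}")

    wealth *= 1.0 - 0.60 * risk_appetite           # the crash: losses proportional to risk taken
    print(f"after the crash:      capital-weighted risk appetite = {weighted_appetite():.2f}")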


Written by Ashwin Parameswaran

June 24th, 2010 at 8:50 am