macroresilience

resilience, not stability

Archive for the ‘Financial Crisis’ Category

Innovation, Stagnation and Unemployment

with 18 comments

All economists assert that wants are unlimited. From this follows the view that technological unemployment is impossible in the long run. Yet a growing number of commentators (such as Brian Arthur) insist that increased productivity from automation and improvements in artificial intelligence have a part to play in the current unemployment crisis. At the same time, a growing chorus laments the absence of innovation – Tyler Cowen’s thesis that the recent past has been a ‘Great Stagnation’ is compelling.

But don’t the two assertions contradict each other? Can we have an increase in technological unemployment as well as an innovation deficit? Is the concept of technological unemployment itself valid? Is there anything about the current phase of labour-displacing technological innovation that is different from the past 150 years? To answer these questions, we need a deeper understanding of the dynamics of innovation in a capitalist economy i.e. how exactly have innovation and productivity growth proceeded in a manner consistent with full employment in the past? In the process, I also hope to connect the long-run structural dynamic with the Minskyian business cycle dynamic. It is common to view the structural dynamic of technological change as a sort of ‘deus ex machina’ – if not independent of the business cycle, then certainly as a phenomenon unconnected with it. I hope to convince some of you that our choices regarding business cycle stabilisation have a direct bearing on the structural dynamic of innovation. I have touched upon many of these topics in a scattered fashion in previous posts but this post is an attempt to present these thoughts in a coherent fashion with all my assumptions explicitly laid out in relation to established macroeconomic theory.

Micro-Foundations

Imperfectly competitive markets are the norm in most modern economies. In instances where economies of scale or network effects dominate, a market may even be oligopolistic or monopolistic (e.g. Google, Microsoft). This assumption is of course nothing new to conventional macroeconomic theory. Where my analysis differs is in viewing the imperfectly competitive process as one that is permanently in disequilibrium. Rents or “abnormal” profits are a persistent feature of the economy at the level of the firm and are not competed away even in the long run. The primary objective of incumbent rent-earners is to build a moat around their existing rents whereas the primary objective of competition from new entrants is not to drive rents down to zero, but to displace the incumbent rent-earner. It is not the absence of rents but the continuous threat to the survival of the incumbent rent-earner that defines a truly vibrant capitalist economy i.e. each niche must be continually contested by new entrants. This does not imply, even if the market for labour is perfectly competitive, that an abnormally large share of GDP goes to “capital”. Most new entrants fail and suffer economic losses in their bid to capture economic rents and even a dominant incumbent may lose a significant proportion of past earned rents in futile attempts to defend its competitive position before its eventual demise.

This emphasis on disequilibrium points to the fact that the “optimum” state for a dynamically competitive capitalist economy is one of constant competitive discomfort and disorder. This perspective leads to a dramatically different policy emphasis from conventional theory, which universally focuses on increasing positive incentives to economic players and relying on the invisible hand to guide the economy to a better equilibrium. Both Schumpeter and Marx understood the importance of this competitive discomfort for the constant innovative dynamism of a capitalist economy – my point is simply that a universal discomfort of capital is also important to maintain distributive justice in a capitalist economy. In fact, it is the only way to do so without sacrificing the innovative dynamism of the economy.

Competition in monopolistically competitive markets manifests itself through two distinct forms of innovation: exploitation and exploration. Exploitation usually takes the form of what James Utterback identified as process innovation with an emphasis on “real or potential cost reduction, improved product quality, and wider availability, and movement towards more highly integrated and continuous production processes.” As Utterback noted, such innovation is almost always driven by the incumbent firms. Exploitation is an act of optimisation under a known distribution i.e. it falls under the domain of homo economicus. In the language of fitness landscapes, exploitative process innovation is best viewed as competition around a local peak. On the other hand, exploratory product innovation (analogous to what Utterback identified as product innovation) occurs under conditions of significant irreducible uncertainty. Exploration is aimed at finding a significantly higher peak on the fitness landscape and as Utterback noted, is almost always driven by new entrants (For a more detailed explanation of incumbent preference for exploitation and organisational rigidity, see my earlier post).
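To make the fitness-landscape analogy concrete, here is a purely illustrative sketch (the landscape function and all parameters are arbitrary choices, not drawn from any empirical work): exploitation behaves like incremental hill-climbing around the current peak, while exploration makes long jumps across the landscape, most of which fail.

```python
import math
import random

def fitness(x):
    # An arbitrary rugged, multi-peaked "fitness landscape" chosen purely for illustration
    return math.sin(3 * x) + 0.5 * math.sin(17 * x) + 0.1 * x

def exploit(x, steps=200, step_size=0.01):
    """Exploitation: small local tweaks, keeping only improvements (hill-climbing)."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

def explore(start, steps=200, domain=(0.0, 10.0)):
    """Exploration: long jumps anywhere on the landscape; most individual jumps fail."""
    best = start
    for _ in range(steps):
        candidate = random.uniform(*domain)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

random.seed(42)
start = 2.0
local_peak = exploit(start)
distant_peak = explore(start)
print(f"exploitation ends near x={local_peak:.2f}, fitness={fitness(local_peak):.2f}")
print(f"exploration finds   x={distant_peak:.2f}, fitness={fitness(distant_peak):.2f}")
```

In this toy picture, exploitation reliably squeezes out small gains around the nearby peak, while any single exploratory jump usually lands on lower ground even though exploration is the only way to find a significantly higher peak.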

An Investment Theory of the Business Cycle

Soon after publishing the ‘General Theory’, Keynes summarised his thesis as follows: “given the psychology of the public, the level of output and employment as a whole depends on the amount of investment. I put it in this way, not because this is the only factor on which aggregate output depends, but because it is usual in a complex system to regard as the causa causans that factor which is most prone to sudden and wide fluctuation.” In Keynes‘ view, the investment decision was undertaken in a condition of irreducible uncertainty, “influenced by our views of the future about which we know so little”. Just how critical the level of investment is in maintaining full employment is highlighted by GLS Shackle in his interpretation of Keynes’ theory: “In a money-using society which wishes to save some of the income it receives in payment for its productive efforts, it is not possible for the whole (daily or annual) product to be sold unless some of it is sold to investors and not to consumers. Investors are people who put their money on time-to-come. But they do not have to be investors. They can instead be liquidity-preferrers; they can sweep up their chips from the table and withdraw. If they do, they will give no employment to those who (in face of society’s propensity to save) can only be employed in making investment goods, things whose stream of usefulness will only come out over the years to come.”

If we accept this thesis, then it is no surprise that the post–2008 recovery has been quite so anaemic. Investment spending has remained low throughout the developed world, nowhere more so than in the United Kingdom. What makes this low level of investment even more surprising is the strength of the rebound in corporate profits and balance sheets – corporate leverage in the United States is as low as it has been for two decades and the proportion of cash in total assets as high as it has been for almost half a century. Moreover, the United States has also experienced an unusual increase in labour productivity during the recession which has exacerbated the disconnect between the recovery in GDP and employment. Some of these unusual patterns have been with us for a much longer time than the 2008 financial crisis. For example, the disconnect between GDP and employment in the United States has been obvious since at least 1990, and the 2003 recession too saw an unusual rise in labour productivity. The labour market has been slack for at least a decade. It is hard to differ from Paul Krugman’s intuition that the character of post–1980 business cycles has changed. Europe and Japan are not immune from these “structural” patterns either – the ‘corporate savings glut’ has been a problem in the United Kingdom since at least 2002, and Post-Keynesian economists have been pointing out the relationship between ‘capital accumulation’ and unemployment for a while, even attributing the persistently high unemployment in Europe to a lack of investment. Japan’s condition for the last decade is better described as a ‘corporate savings trap’ rather than a ‘liquidity trap’. Even in Greece, that poster child for fiscal profligacy, the recession is accompanied by a collapse in private sector investment.

A Theory of Business Investment

Business investments typically either operate upon the scale of operations (e.g. capacity, product mix) or they change the fundamental character of operations (e.g. changes in process, product). The degree of irreducible uncertainty in capacity and product mix decisions has reduced dramatically in the last half-century. The ability of firms to react quickly and effectively to changes in market conditions has improved dramatically with improvements in production processes and information technology – Zara being a well-researched example. Investments that change the very nature of business operations are what we typically identify as innovations. However, not all innovation decisions are subject to irreducible uncertainty either. In a seminal article, James March distinguished between “the exploration of new possibilities and the exploitation of old certainties. Exploration includes things captured by terms such as search, variation, risk taking, experimentation, play, flexibility, discovery, innovation. Exploitation includes such things as refinement, choice, production, efficiency, selection, implementation, execution.” Exploratory innovation operates under conditions of irreducible uncertainty whereas exploitation is an act of optimisation under a known distribution.

Investments in scaling up operations are most easily influenced by monetary policy initiatives which reduce interest rates and raise asset prices or direct fiscal policy initiatives which operate via the multiplier effect. In recent times, especially in the United States and United Kingdom, the reduction in rates has also directly facilitated the levering up of the consumer balance sheet and a reduction in the interest servicing burden of past consumer debt taken on. The resulting boost to consumer spending and demand also stimulates businesses to invest in expanding capacity. Exploitative innovation requires the presence of price competition within the industry i.e. monopolies or oligopolies have little incentive to make their operations more efficient beyond the price point where demand for their product is essentially inelastic. This sounds like an exceptional case but is in fact very common in critical industries such as finance and healthcare. Exploratory innovation requires not only competition amongst incumbent firms but competition from a constant and robust stream of new entrants into the industry. I outlined the rationale for this in a previous post:

Let us assume a scenario where the entry of new firms has slowed to a trickle, the sector is dominated by a few dominant incumbents and the S-curve of growth is about to enter its maturity/decline phase. To trigger off a new S-curve of growth, the incumbents need to explore. However, almost by definition, the odds that any given act of exploration will be successful are small. Moreover, the positive payoff from any exploratory search almost certainly lies far in the future. For an improbable shot at moving from a position of comfort to one of dominance in the distant future, an incumbent firm needs to divert resources from optimising and efficiency-increasing initiatives that will deliver predictable profits in the near future. Of course if a significant proportion of its competitors adopt an exploratory strategy, even an incumbent firm will be forced to follow suit for fear of loss of market share. But this critical mass of exploratory incumbents never comes about. In essence, the state where almost all incumbents are content to focus their energies on exploitation is a Nash equilibrium.
On the other hand, the incentives of any new entrant are almost entirely skewed in favour of exploratory strategies. Even an improbable shot at glory is enough to outweigh the minor consequences of failure. It cannot be emphasised enough that this argument does not depend upon the irrationality of the entrant. The same incremental payoff that represents a minor improvement for the incumbent is a life-changing event for the entrepreneur. When there exists a critical mass of exploratory new entrants, the dominant incumbents are compelled to follow suit and the Nash equilibrium of the industry shifts towards the appropriate mix of exploitation and exploration.
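A stylised expected-payoff comparison makes the asymmetry explicit. All the numbers below are illustrative assumptions of my own rather than anything from the argument above; the point is only that the same gamble can be unattractive relative to a comfortable incumbent’s sure profits yet compelling for an entrant with little to lose.

```python
import math

# All figures are arbitrary illustrative assumptions.
p_success = 0.05          # exploratory bets rarely pay off
prize = 100.0             # payoff to a successful exploratory innovation
exploit_profit = 10.0     # sure profit from sticking to exploitation
incumbent_wealth = 200.0  # the incumbent's existing rents
entrant_wealth = 1.0      # the entrant has almost nothing to lose

def utility(w):
    return math.log(w)    # concave utility: gains matter less the richer you are

# Incumbent: exploring means diverting resources away from sure exploitation profits
incumbent_explore = (p_success * utility(incumbent_wealth + prize)
                     + (1 - p_success) * utility(incumbent_wealth))
incumbent_exploit = utility(incumbent_wealth + exploit_profit)

# Entrant: exploration is the only route to a meaningful payoff
entrant_explore = (p_success * utility(entrant_wealth + prize)
                   + (1 - p_success) * utility(entrant_wealth))
entrant_stay_out = utility(entrant_wealth)

print("incumbent prefers exploitation:", incumbent_exploit > incumbent_explore)
print("entrant prefers exploration:   ", entrant_explore > entrant_stay_out)
```

With these assumed numbers the incumbent turns the gamble down while the entrant takes it: the improbable prize that is a minor improvement for the incumbent is a life-changing event for the entrepreneur.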

A Theory of Employment

My fundamental assertion is that a constant and high level of uncertain, exploratory investment is required to maintain a sustainable and resilient state of full employment. And as I mentioned earlier, exploratory investment driven by product innovation requires a constant threat from new entrants.

Long-run increases in aggregate demand require product innovation. As Rick Szostak notes:

While in the short run government spending and investment have a role to play, in the long run it is per capita consumption that must rise in order for increases in per capita output to be sustained…..the reason that we consume many times more than our great-grandparents is not to be found for the most part in our consumption of greater quantities of the same items which they purchased…The bulk of the increase in consumption expenditures, however, has gone towards goods and services those not-too-distant forebears had never heard of, or could not dream of affording….Would we as a society of consumers/workers have striven as hard to achieve our present incomes if our consumption bundle had only deepened rather than widened? Hardly. It should be clear to all that the tremendous increase in per capita consumption in the past century would not have been possible if not for the introduction of a wide range of different products. Consumers do not consume a composite good X. Rather, they consume a variety of goods, and at some point run into a steeply declining marginal utility from each. As writers as diverse as Galbraith and Marshall have noted, if declining marginal utility exists with respect to each good it holds over the whole basket of goods as well…..The simple fact is that, in the absence of the creation of new goods, aggregate demand can be highly inelastic, and thus falling prices will have little effect on output.

Therefore, when cost-cutting and process optimisation in an industry enable a product to be sold at a lower cost, the economy may not be able to reorganise back to full employment simply through an increased demand for that particular product. In the early stages of a product, when demand is sufficiently elastic, process innovation can increase employment. But as the product ages, process improvements have a steadily negative effect on employment.
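A back-of-the-envelope calculation (with numbers assumed purely for illustration) shows how the sign of the employment effect hinges on the price elasticity of demand for the product:

```python
# Employment effect of a labour-saving process innovation, under illustrative assumptions:
# unit labour requirements fall by 20%, the saving is fully passed through to price,
# and demand responds with a constant price elasticity.

def employment_change(elasticity, productivity_gain=0.20):
    price_factor = 1 - productivity_gain             # price falls in line with unit cost
    quantity_factor = price_factor ** (-elasticity)  # constant-elasticity demand response
    labour_per_unit_factor = 1 - productivity_gain   # each unit now needs less labour
    return quantity_factor * labour_per_unit_factor - 1

for e in (2.0, 1.0, 0.3):
    print(f"price elasticity {e}: employment change {employment_change(e):+.1%}")
```

With elastic demand (a young product) the cost reduction expands output enough to add jobs; at unit elasticity employment is roughly unchanged; once demand has become inelastic (a mature product) the same process improvement sheds labour.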

Eventually, a successful reorganisation back to full employment entails creating demand for new products. If such new products were simply an addition to the set of products that we consumed, disruption would be minimal. But almost any significant new product that arises from exploratory investment also destroys an old product. The tablet cannibalises the netbook, the smartphone cannibalises the camera etc. This of course is the destruction in Schumpeter’s creative destruction. It is precisely because of this cannibalistic nature of exploratory innovation that established incumbents rarely engage in it, unless compelled to do so by the force of new entrants. Burton Klein put it well: “firms involved in such competition must compare two risks: the risk of being unsuccessful when promoting a discovery or bringing about an innovation versus the risk of having a market stolen away by a competitor: the greater the risk that a firm’s rivals take, the greater must be the risks to which it must subject itself for its own survival.” Even when new firms enter a market at a healthy pace, it is rare that incumbent firms are successful at bringing about disruptive exploratory changes. When the pace of dynamic competition is slow, incumbents can choose to simply maintain slack and wait for any promising new technology to emerge which they can buy up rather than risking investment in some uncertain new technology.

We need exploratory investment because this expansion of the economy into its ‘adjacent possible’ does not derive its thrust from the consumer but from the entrepreneur. In other words, new wants are not demanded by the consumers but are instead created by entrepreneurs such as Steve Jobs. In the absence of dynamic competition from new entrants, wants remain limited.

In essence, this framework incorporates technological innovation into a distinctly “Chapter 12” Keynesian view of the business cycle. Although my views are far removed from macroeconomic orthodoxy, they are not quite so radical that they have no precedents whatsoever. My views can be seen as a simple extension of Burton Klein’s seminal work outlined in his books ‘Dynamic Economics’ and ‘Prices, wages, and business cycles: a dynamic theory’. But the closest parallels to this explanation can be found in Rick Szostak’s book ‘Technological innovation and the Great Depression’. Szostak uses an almost identical rationale to explain unemployment during the Great Depression, “how an abundance of labor-saving production technology coupled with a virtual absence of new product innovation could affect consumption, investment and the functioning of the labor market in such a way that a large and sustained contraction in employment would result.”

As I have hinted at in a previous post, this is not a conventional “structural” explanation of unemployment. Szostak explains the difference: “An alternative technological argument would be that the skills required of the workforce changed more rapidly in the interwar period than did the skills possessed by the workforce. Thus, there were enough jobs to go around; workers simply were not suited to them, and a painful decade of adjustment was required…I argue that in fact there simply were not enough jobs of any kind available.” In other words, this is a partly technological explanation for the shortfall in aggregate demand.

The Invisible Foot and New Firm Entry

The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy:

Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels….But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.

For disruptive innovation to persist, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition”. Burton Klein’s great contribution, along with Gunnar Eliasson’s, was to highlight the critical importance of the entry of new firms in maintaining the efficacy of the invisible foot. Klein believed that

the degree of risk taking is determined by the robustness of dynamic competition, which mainly depends on the rate of entry of new firms. If entry into an industry is fairly steady, the game is likely to have the flavour of a highly competitive sport. When some firms in an industry concentrate on making significant advances that will bear fruit within several years, others must be concerned with making their long-run profits as large as possible, if they hope to survive. But after entry has been closed for a number of years, a tightly organised oligopoly will probably emerge in which firms will endeavour to make their environments highly predictable in order to make their short-run profits as large as possible….Because of new entries, a relatively concentrated industry can remain highly dynamic. But, when entry is absent for some years, and expectations are premised on the future absence of entry, a relatively concentrated industry is likely to evolve into a tight oligopoly. In particular, when entry is long absent, managers are likely to be more and more narrowly selected; and they will probably engage in such parallel behaviour with respect to products and prices that it might seem that the entire industry is commanded by a single general!

This argument does not depend on incumbent firms leaving money on the table – on the contrary, they may redouble their attempts at cost reduction via process innovation in times of deficient demand. Rick Szostak documents how “despite the availability of a massive amount of inexpensive labour, process innovation would continue in the 1930s. Output per man-hour in manufacturing rose by 25% in the 1930s…..national output was higher in 1939 than in 1929, while employment was over two million less.”

Macroeconomic Policy and Exploratory Product Innovation

Monetary policy has been the preferred cure for insufficient aggregate demand throughout and since the Great Moderation. The argument goes that lower real rates, inflation and higher asset prices will increase investment via Tobin’s Q and increase consumption via the wealth effect and reduction in rewards to savings, all bound together in the virtuous cycle of the multiplier. If monetary policy is insufficient, fiscal policy may be deployed with a focus on either directly increasing aggregate demand or providing businesses with supply-side incentives such as tax cuts.

There is a common underlying theme to all of the above policy options – they focus on the question “how do we make businesses want to invest?” i.e. on positively incentivising incumbent business and startups and trusting that the invisible hand will do the rest. In the context of exploratory investments, the appropriate question is instead “how do we make businesses have to invest?” i.e. on compelling incumbent firms to invest in speculative projects in order to defend their rents or lose out to new entrants if they fail to do so. But the problem isn’t just that these policies are ineffectual. Many of the policies that focus on positive incentives weaken the competitive discomfort from the invisible foot by helping to entrench the competitive position of incumbent corporates and reducing their incentive to engage in exploratory investment. It is in this context that interventions such as central bank purchase of assets and fiscal stimulus measures that dole out contracts to the favoured do permanent harm to the economy.

The division that matters from the perspective of maintaining the appropriate level of exploratory investment and product innovation is not monetary vs fiscal but the division between existing assets and economic interests and new firms/entrepreneurs. Almost all monetary policy initiatives focus on purchasing existing assets from incumbent firms or reducing real rates for incumbent banks and their clients. A significant proportion of fiscal policy does the same. The implicit assumption is, as Nick Rowe notes, that there is “high substitutability between old and new investment projects, so the previous owners of the old investment projects will go looking for new ones with their new cash”. This assumption does not hold in the case of exploratory investments – asset-holders will likely chase after a replacement asset but this asset will likely be an existing investment project, not a new one. The result of the intervention will be an increase in prices of such assets but it will not feed into any “real” new investment activity. In other words, the Tobin’s q effect is negligible for exploratory investments in the short run and in fact negative in the long run as the accumulated effect of rents derived from monetary and fiscal intervention reduces the need for incumbent firms to engage in such speculative investment.

A Brief History of the Post-WW2 United States Macroeconomy

In this section, I’m going to use the above framework to make sense of the evolution of the macroeconomy in the United States after WW2. The framework is relevant for post–70s Europe and Japan as well which is why the ‘investment deficit problem’ afflicts almost the entire developed world today. But the details differ quite significantly especially with regards to the distributional choices made in different countries.

The Golden Age

The 50s and the 60s are best described as a period of “order for all”, characterised, as Bill Lazonick put it, by “oligopolistic competition, career employment with one company, and regulated financial markets”. The ‘Golden Age’ delivered prosperity for a few reasons:

  • As Minsky noted, the financial sector had only just begun the process of adapting to and circumventing regulations designed to constrain and control it. As a result, the Fed had as much control over credit creation and bank policies as it would ever have.
  • The pace of both product and process innovation had slowed down significantly in the real economy, especially in manufacturing. Much of the productivity growth came from product innovations that had already been made prior to WW2. As Alexander Field explains (on the slowdown in manufacturing TFP): “Through marketing and planned obsolescence, the disruptive force of technological change – what Joseph Schumpeter called creative destruction – had largely been domesticated, at least for a time. Whereas large corporations had funded research leading to a large number of important innovations during the 1930s, many critics now argued that these behemoths had become obstacles to transformative innovation, too concerned about the prospect of devaluing rent-yielding income streams from existing technologies. Disruptions to the rank order of the largest U.S. industrial corporations during this quarter century were remarkably few. And the overall rate of TFP growth within manufacturing fell by more than a percentage point compared with the 1930s and more than 3.5 percentage points compared with the 1920s.”
  • Apart from the fact that the economy had to catch up to earlier product innovation, the dominant position of the US in the global economy post WW2 limited the impact from foreign competition.

It was this peculiar confluence of factors that enabled a system of “order and stability for all” without triggering a complete collapse in productivity or financial instability – a system where both labour and capital were equally strong and protected and shared in the rents available to all.

Stagflation

The 70s are best described as the time when this ordered, stabilised system could not be sustained any longer.

  • By the late 60s, the financial sector had adapted to the regulatory environment. Innovations such as the Fed Funds market and the Eurodollar market gradually came into being, and credit creation and bank lending became increasingly difficult for the Fed to control. Reserves were no longer a binding constraint on bank operations.
  • The absence of real competition either on the basis of price or from new entrants meant that both process and product innovation were low just like during the Golden Age but the difference was that there were no more low-hanging fruit to pick from past product innovations. Therefore, a secular slowdown in productivity took hold.
  • The rest of world had caught up and foreign competition began to intensify.

As Burton Klein noted, “competition provides a deterrent to wage and price increases because firms that allow wages to increase more rapidly than productivity face penalties in the form of reduced profits and reduced employment”. In the absence of adequate competition, demand is inelastic and there is little pressure to reduce costs. As the level of price/cost competition reduces, more and more unemployment is required to keep inflation under control. Even worse, as Klein noted, it only takes the absence of competition in a few key sectors for the disease to afflict the entire economy. Controlling overall inflation in the macroeconomy when a few key sectors are sheltered from competitive discomfort requires monetary action that will extract a disproportionate amount of pain from the remainder of the economy. Stagflation is the inevitable consequence in a stabilised economy suffering from progressive competitive sclerosis.

The “Solution”

By the late 70s, the pressures and conflicts of the system of “order for all” meant that change was inevitable. The result was what is commonly known as the neoliberal revolution. There are many different interpretations of this transition. To right-wing commentators, neoliberalism signified a much-needed transition towards a free-market economy. Most left-wing commentators lament the resultant supremacy of capital over labour and rising inequality. For some, the neoliberal era started with Paul Volcker having the courage to inflict the required pain to break the back of inflationary forces and continued with central banks learning the lessons of the past which gave us the Great Moderation.

All these explanations are relevant but in my opinion, they are simply a subset of a larger and simpler explanation. The prior economic regime was a system where both the invisible hand and the invisible foot were shackled – firms were protected but their profit motive was also shackled by the protection provided to labour. The neoliberal transition unshackled the invisible hand (the carrot of the profit motive) without ensuring that all key sectors of the economy were equally subject to the invisible foot (the stick of failure and losses and new firm entry). Instead of tackling the root problem of progressive competitive and democratic sclerosis and cronyism, the neoliberal era provided a stop-gap solution. “Order for all” became “order for the classes and disorder for the masses”. As many commentators have noted, the reality of neoliberalism is not consistent with the theory of classical liberalism. Minsky captured the hypocrisy well: “Conservatives call for the freeing of markets even as their corporate clients lobby for legislation that would institutionalize and legitimize their market power; businessmen and bankers recoil in horror at the prospect of easing entry into their various domains even as technological changes and institutional evolution make the traditional demarcations of types of business obsolete. In truth, corporate America pays lip service to free enterprise and extols the tenets of Adam Smith, while striving to sustain and legitimize the very thing that Smith abhorred – state-mandated market power.”

The critical component of this doctrine is the emphasis on macroeconomic and financial sector stabilisation implemented primarily through monetary policy focused on the banking and asset price channels of policy transmission:
  • Any significant fall in asset prices (especially equity prices) has been met with a strong stimulus from the Fed i.e. the ‘Greenspan Put’. In his plea for increased quantitative easing via purchase of agency MBS, Joe Gagnon captured the logic of this policy: “This avalanche of money would surely push up stock prices, push down bond yields, support real estate prices, and push up the value of foreign currencies. All of these financial developments would stimulate US economic activity.” In other words, prop up asset prices and the real economy will mend itself.
  • Similarly, Fed and Treasury policy has ensured that none of the large banks can fail. In particular, bank creditors have been shielded from any losses. The argument is that allowing banks to fail will cripple the flow of credit to the real economy and result in a deflationary collapse that cannot be offset by conventional monetary policy alone. This is the logic for why banks were allowed access to a panoply of Federal Reserve liquidity facilities at the height of the crisis. In other words, prop up the banks and the real economy will mend itself.

In this increasingly financialised economy, “the increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation.” This tilt towards exploitation/cost-reduction without exploration kept inflation in check but it also implied a prolonged period of sub-par wage growth and a constant inability to maintain full employment unless the consumer or the government levered up. For the neo-liberal revolution to sustain a ‘corporate welfare state’ in a democratic system, the absence of wage growth necessitated an increase in household leverage for consumption growth to be maintained. The monetary policy doctrine of the Great Moderation exacerbated the problem of competitive sclerosis and the investment deficit but it also provided the palliative medicine that postponed the day of reckoning. The unshackling of the financial sector was a necessary condition for this cure to work its way through the economy for as long as it did.

It is this focus on the carrot of higher profits that also triggered the widespread adoption of high-powered incentives such as stock options and bonuses to align manager and stockholder incentives. When the risk of being displaced by innovative new entrants is low, high-powered managerial incentives tilt the firm towards process innovation, cost reduction, optimisation of leverage etc. From the stockholders’ and the managers’ perspective, the focus on short-term profits is a feature, not a bug.

The Dénouement

So long as unemployment and consumption could be propped up by increasing leverage from the consumer and/or the state, the long-run shortage in exploratory product innovation and the stagnation in wages could be swept under the rug and economic growth could be maintained. But there is every sign that the household sector has reached a state of peak debt and the financial system has reached its point of peak elasticity. The policy that worked so well during the Great Moderation is now simply focused on preventing the collapse of the cronyist and financialised economy. The system has become so fragile that Minsky’s vision is more correct than ever – an economy at full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation. Instead, the goal of full employment seems to have been abandoned in order to postpone the inevitable collapse. This only replaces economic fragility with a deeper social fragility.

The aim of full employment is made even harder with the acceleration of process innovation due to advances in artificial intelligence and computerisation. Process innovation gives us technological unemployment while at the same time the absence of exploratory product innovation leaves us stuck in the Great Stagnation.

 

The solution preferred by the left is to somehow recreate the golden age of the 50s and the 60s i.e. order for all. Apart from the impossibility of retrieving the docile financial system of that age (which Minsky understood), the solution of micro-stability for all is a recipe for permanent innovative stagnation. The Schumpeterian solution is to transform the system into one of disorder for all, masses and classes alike. Micro-fragility is the key to macro-resilience but this fragility must be felt by all economic agents, labour and capital alike. In order to end the stagnation and achieve sustainable full employment, we need to allow incumbent banks and financialised corporations to collapse and dismantle the barriers to entry of new firms that pervade the economy (e.g. occupational licensing, the patent system). But this does not imply that the macroeconomy should suffer from a deflationary contraction. Deflation can be prevented in a simple and effective manner with a system of direct transfers to individuals as Steve Waldman has outlined. This solution also reverses the flow of rents that have exacerbated inequality over the past few decades and tackles the cronyism and demosclerosis that is crippling innovation and preventing full employment.


Written by Ashwin Parameswaran

November 2nd, 2011 at 7:29 pm

Operation Twist and the Limits of Monetary Policy in a Credit Economy

with 53 comments

The conventional cure for insufficient aggregate demand and the one that has been preferred throughout the Great Moderation is monetary easing. The argument goes that lower real rates, higher inflation and higher asset prices will increase investment via Tobin’s Q and increase consumption via the wealth effect and reduction in rewards to savings, all bound together in the virtuous cycle of the multiplier. As I discussed in a previous post, QE2 and now Operation Twist are not as unconventional as they seem. They simply apply the logic of interest rate cuts to the entire yield curve rather than restricting central bank interventions to the short-end of the curve as was the norm during the Great Moderation.

But despite asset prices and corporate profits having rebounded significantly from their crisis lows and real rates now negative till the 10y tenor in the United States, a rebound in investment or consumption has not been forthcoming in the current recovery. This lack of responsiveness of aggregate demand to monetary policy is not as surprising as it first seems:

  • The responsiveness of consumption to monetary policy is diminished when the consumer is as over-levered as he currently is. The “success” of monetary policy during the Great Moderation was primarily due to consumers’ ability to lever up to maintain consumption growth in the absence of any tangible real wage growth.
  • The empirical support for the impact of real rates and asset prices on investment is inconclusive. Drawing on Keynes’ emphasis on the uncertain nature of investment decisions, Shackle was skeptical about the impact of lower interest rates in stimulating business investment. He observed that businessmen, when asked, rarely cited the level of interest rates as a critical determinant. In an uncertain environment, estimated profits “must greatly exceed the cost of borrowing if the investment in question is to be made”.

If the problem with reduced real rates was simply that they were likely to be ineffective, there could still be a case for pursuing monetary policy initiatives aimed at reducing real rates. One could argue that even a small positive effect is better than not trying anything. But this unfortunately is not the case. There is ample reason to believe that reduced real rates across the curve have perverse and counterproductive effects, especially when real rates are pushed to negative levels:

  • Prolonged periods of negative real rates may trigger increased savings and reduced consumption in an attempt to reach fixed real savings goals in the future, a tendency that may be exacerbated in an ageing population saving for retirement in an era where defined-benefit pensions have disappeared. An investor in a defined-contribution pension plan is unlikely to react to the absence of a truly risk-free investment alternative by taking on more risk or consuming more (see the stylised calculation after this list).
  • One of the arguments for how a program such as Operation Twist can provide economic stimulus is summarised here by Brad DeLong: “such policies work, to the extent that they work, by taking duration and other forms of risk onto the government’s balance sheet, leaving the private sector with extra risk-bearing capacity that it can then use to extend loans to risky private borrowers.” But duration is not a risk to a pension fund or life insurer, it is a hedge – one that it cannot shift out of in any meaningful manner without taking on other risks (equity, credit) in the process.
  • The ability of incumbent firms to hold their powder dry and hold cash as a defence against disruptively innovative threats is in fact enhanced by policies like ‘Operation Twist’ that flatten the yield curve. Firms find it worthwhile to issue bonds and hold cash due to the low negative carry of doing so when the yield curve is flat, a phenomenon that is responsible for the paradox of high corporate cash balances combined with simultaneous debt issuance.
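The first point above can be illustrated with a stylised savings calculation (the target, horizon and rates below are assumptions chosen purely for illustration): the annual saving needed to hit a fixed real goal rises sharply as the real rate falls towards and below zero, so lower rates can induce more saving rather than more consumption.

```python
# Annual saving needed to accumulate a fixed real target after n years at real rate r.
# Target, horizon and rates are illustrative assumptions.

def required_annual_saving(target, years, real_rate):
    if abs(real_rate) < 1e-12:
        return target / years
    # future value of an ordinary annuity: saving * ((1 + r)**n - 1) / r = target
    return target * real_rate / ((1 + real_rate) ** years - 1)

target = 500_000   # fixed real retirement goal
years = 30
for r in (0.04, 0.02, 0.0, -0.01):
    s = required_annual_saving(target, years, r)
    print(f"real rate {r:+.0%}: save {s:,.0f} per year")
```

For a saver with a fixed real goal and no appetite for extra risk, the rational response to negative real rates is to save more, which works directly against the intended stimulus.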

There is an obvious monetarist objection to this post and to my previous post. Despite the fact that the Fed also views its actions as providing stimulus via “downward pressure on longer-term interest rates”, monetarists view this interest-rate view of monetary policy as fundamentally flawed. So why this interest rate approach rather than the monetarist money supply approach? In my opinion, the modern economy resembles a Wicksellian pure credit economy, a point that Claudio Borio and Piti Disyatat have made in a recent paper, in which they point out that

The amount of cash holdings by the public, one form of outside money, is purely demand-determined; as such, it provides no external anchor. And banks’ reserves with the central bank – the other component of outside money – cannot provide an anchor either: Contrary to what is often believed, they do not constrain the amount of inside credit creation. Indeed, in a number of banking systems under normal conditions they are effectively zero, regardless of the level of the interest rate. Critically, the existence of a demand for banks’ reserves, arising from the need to settle transactions, is essential for the central bank to be able to set interest rates, by exploiting its monopoly over their supply. But that is where their role ends. The ultimate constraint on credit creation is the short-term rate set by the central bank and the reaction function that describes how this institution decides to set policy rates in response to economic developments.

In a typically perceptive note written more than a decade ago, Axel Leijonhufvud mapped out and anticipated the evolution of the US monetary system into a pure credit economy during the 20th century:

The situation that Wicksell saw himself as confronting, therefore, was the following. The Quantity Theory was the only monetary theory with any claim to scientific status. But it left out the influence on the price level of credit-financed demand. This omission had become a steadily more serious deficiency with time as the evolution of both “simple” (trade) and “organized” (bank-intermediated) credit practices reduced the role of metallic money in the economy. The issue of small denomination notes had displaced gold coin from circulation and almost all business transactions were settled by check or by giro; the resulting transfers on the books of banks did not involve “money” at all. The famous model of the pure credit economy, which everyone remembers as the original theoretical contribution of Geldzins und Güterpreise, dealt with the hypothetical limiting case to this historical-evolutionary process……Wicksell’s “Day of Judgment” (if we may call it that) when the real demand for the reserve medium would shrink to epsilon was greatly postponed by regime changes already introduced before or shortly after his death. In particular, governments moved to monopolize the note issue and to impose reserve requirements on banks. The control over the banking system’s total liabilities that the monetary authorities gained in this way greatly reduced the potential for the kind of instability that preoccupied Wicksell. It also gave the Quantity Theory a new lease of life, particularly in the United States.
But although Judgment Day was postponed it was not cancelled….The monetary anchors on which 20th century central bank operating doctrines have relied are giving way. Technical developments are driving the process on two fronts. First, “smart cards” are circumventing the governmental note monopoly; the private sector is reentering the business of supplying currency. Second, banks are under increasing competitive pressure from nonbank financial institutions providing innovative payment or liquidity services; reserve requirements have become a discriminatory tax on banks that handicap them in this competition. The pressure to eliminate reserve requirements is consequently mounting.

Leijonhufvud’s account touches on a topic that is almost always left out in debates on the matter – the assertion that we are in a credit economy is not theoretical, it is empirical. In the environment immediately after WW2, reserves were most certainly a limitation on bank credit. But banks gradually “innovated” their way out of almost all restrictions that central banks and regulators could throw at them. The dominance of shadow-money in our current economic system is a culmination of a long series of bank “innovations” such as the Fed Funds market and the Eurodollar bond market.

As Borio and Disyatat note, in such a credit economy, “through the creation of deposits associated with credit expansion, banks can grant nominal purchasing power without reducing it for other agents in the economy. The banking system can both expand total nominal purchasing power and allocate it at terms different from those associated with full-employment saving-investment equilibrium. In the process, the system is able to stabilise interest rates at an arbitrary level. The quantity of credit adjusts to accommodate the demand at the prevailing interest rate.” In such an economy, the conventional savings-investment framework has very little to say about either market interest rates or the abrupt breakdown in financing that characterises the Minsky Moment. The notion that our economic malaise can be cured by solving the problem of “excess savings” is therefore invalid. In Borio and Disyatat’s words, “Investment, and expenditures more generally, require financing, not saving.” A flatter yield curve therefore encourages incumbent firms to monopolise the limited financing/risk-taking capacity of the system (limited typically by bank capital) simply to increase cash holdings, in effect crowding out small firms and new entrants.
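A minimal balance-sheet sketch (a toy model of my own, not Borio and Disyatat’s) shows the mechanics: when a bank grants a loan it simultaneously creates the matching deposit, so nominal purchasing power expands without any prior saver having to give anything up.

```python
# Toy balance sheet of a single bank in a pure credit economy.
# Granting a loan creates the borrower's deposit; nothing is "lent out" of prior savings.

class Bank:
    def __init__(self):
        self.assets = {"loans": 0.0, "reserves": 0.0}
        self.liabilities = {"deposits": 0.0}

    def grant_loan(self, amount):
        # Both sides of the balance sheet expand together.
        self.assets["loans"] += amount
        self.liabilities["deposits"] += amount

    def totals(self):
        return sum(self.assets.values()), sum(self.liabilities.values())

bank = Bank()
bank.grant_loan(100.0)
print(bank.assets, bank.liabilities)
print("balance sheet balances:", bank.totals()[0] == bank.totals()[1])
```

In this toy world the binding constraint is not a pre-existing stock of reserves or savings but the bank’s willingness, and capital, to expand its balance sheet at the prevailing policy rate.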

The problem in a credit economy is not so much excess savings but, as Borio and Disyatat put it, excess elasticity. Elasticity is defined as

the degree to which the monetary and financial regimes constrain the credit creation process, and the availability of external funding more generally. Weak constraints imply a high elasticity. A high elasticity can facilitate expenditures and production, much like a rubber band that stretches easily. But by the same token it can also accommodate the build-up of financial imbalances, whenever economic agents are not perfectly informed and their incentives are not aligned with the public good (“externalities”). The band stretches too far, and at some point inevitably snaps….In other words, to reduce the likelihood and severity of financial crises, the main policy issue is how to address the “excess elasticity” of the overall system, not “excess saving” in some jurisdictions.

If our financial system is a rubber band, the long arc of monetary system evolution from a metallic standard to a credit economy via the Bretton Woods regime has been largely a process of increasing the elasticity of this rubber band (excepting the period of financial repression post-WW2 when the trend reversed temporarily). Snap-backs are inevitable – the question is simply whether the snap-backs are “normal” or catastrophic. What is commonly referred to as the ‘Minsky Moment’ is the almost instantaneous process of the elastic snapping back. As Minsky has documented, the history of macroeconomic interventions post-WW2 has been the history of prevention of even the smallest snap-backs that are inherent to the process of creative destruction. The result is our current financial system which is as taut as it can be, in a state of fragility where any snap-back will be catastrophic.
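The rubber-band metaphor can be caricatured in a few lines of code. The toy model below is entirely my own stylisation with arbitrary parameters: each period a small random disturbance arrives; without suppression it is released immediately, while with suppression it is absorbed and stored until the accumulated stress exceeds the system’s capacity and is released all at once.

```python
import random

def simulate(suppress, periods=500, capacity=25.0, seed=1):
    """Return the largest single release of stress over the simulation.
    Without suppression, each disturbance plays out as it arrives.
    With suppression, disturbances accumulate until capacity is breached."""
    random.seed(seed)
    stored, largest = 0.0, 0.0
    for _ in range(periods):
        shock = random.expovariate(1.0)      # small random disturbance
        if suppress:
            stored += shock
            if stored > capacity:            # the band snaps
                largest = max(largest, stored)
                stored = 0.0
        else:
            largest = max(largest, shock)    # disturbances play out as they arrive
    return largest

print("largest release without suppression:", round(simulate(False), 2))
print("largest release with suppression:   ", round(simulate(True), 2))
```

Suppression makes most periods quiet but concentrates the adjustment into rare, much larger releases – a Minsky Moment in miniature.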

The natural fix for the system as I have outlined is to allow small pull-backs and disturbances to play themselves out. But we have evolved far past the point where the system can be allowed to fail without any compensating actions. Just like in a forest where fire has been suppressed for too long or a river where floods have been avoided, it is not an option to let nature take its course.

So is there no way out that does not involve a deflationary collapse of the economy? I argue that there is but that this requires a radical change in focus. The deflationary collapse of the current shadow money and credit superstructure and correspondingly much of the incumbent corporate structure adapted to this “taut rubber-band” is inevitable and if anything needs to be encouraged and accelerated. But this does not imply that the macroeconomy should suffer from a deflationary contraction. The effects of this snap-back can be mitigated in a simple and effective manner with a system of direct transfers to individuals as Steve Waldman has outlined. In fact, it is the deflationary collapse of the incumbent system that provides the leeway for significant fiscal intervention to be undertaken without sacrificing the central bank’s inflation targets. This solution also has the benefit of reversing the flow of rents that have exacerbated inequality over the past few decades, as well as tackling the cronyism and demosclerosis that is crippling our system today. Of course, the collapse of incumbent crony interests inherent to this policy approach means that it will not be implemented anytime soon.

Note: hat tip to Yves Smith and Andrew Dittmer for directing me to the Borio-Disyatat paper.


Written by Ashwin Parameswaran

September 22nd, 2011 at 5:24 pm

Bagehot’s Rule, Central Bank Incentives and Macroeconomic Resilience

with 11 comments

It is widely accepted that in times of financial crisis, central banks should follow Bagehot’s rule which can be summarised as: “Lend without limit, to solvent firms, against good collateral, at ‘high rates’.” However, as I noted a few months ago, the Fed and the ECB seem to be following quite a different rule which is best summarised as: “Lend freely even on junk collateral at ‘low rates’.”

The Fed’s response to allegations that they went beyond their mandate for liquidity provision is instructive. In the Fed’s eyes, the absence of credit losses signifies that the collateral was sound and the fact that nearly all the programs have now closed illustrates that the rate charged was clearly at a premium to ‘normal rates’. This argument gives the Fed a significant amount of flexibility as a rate that is at a premium to ‘normal rates’ can still quite easily be a bargain when offered in times of crisis. Nevertheless, the Fed can point to the absence of losses and claim that it only provided liquidity support. The absence of losses is also used to refute the claim that these programs create moral hazard. However, both these arguments ignore the fact that the creditworthiness of assets and the solvency of the banking system cannot be separated from the central banks’ actions during a crisis. As the Fed’s Brian Madigan notes: “In a crisis, the solvency of firms may be uncertain and even dependent on central bank actions.”

However, the Fed’s response does highlight just how important it is to any central bank that it avoid losses on its liquidity programs – not so much to avoid moral hazard but out of simple self-interest. If a central bank exposes itself to significant losses, it runs a serious reputational and political risk. Given the criticism that central banks receive even for programs which do not lose any money, it is quite conceivable that significant losses may even lead to a reduction in their independent powers. Whether or not these losses have any ‘real’ relevance in a fiat-currency economic system, they are undoubtedly relevant in a political context. The interaction of the central bank’s desire to avoid losses and its ability to influence asset prices and bank solvency has some important implications for its liquidity policy choices – in a nutshell, the central bank strongly prefers to backstop assets whose valuation is largely dependent on “macro” systemic risks. Also, when it embarks upon a program of liquidity provision it will either limit itself to extremely high-quality assets or it will backstop the entire spectrum of risky assets from high-grade to junk. It will not choose an intermediate threshold for its intervention.

The first point is easily explained – by choosing to backstop ‘macro’ assets whose prices and performance are strongly reflexive with respect to credit availability, the program minimises the probability of loss. For example, a decision to backstop housing loans has a significant impact on loan-flow and the ‘real’ housing market. A decision to backstop small-business loans on the other hand can only have a limited impact on the realised business outcomes experienced by small businesses given the idiosyncratic risk inherent in them. The negatively skewed payoff profile of such loans combined with their largely ‘macro’ risk profile makes them the ideal candidates for such programs – such assets are exposed to a tail risk of significant losses in the event of macroeconomic distress, which is the exact scenario that central banks are mandated to mitigate against. The coincidence of such distress with deflationary forces enables central banks to eliminate losses on these assets without risking any overshooting of its inflation mandate. This also explains why central banks are reluctant to explicitly backstop equities even at the index level – the less skewed risk profile of equities means that the risk of losses is impossible to reduce to an acceptable level.
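A stylised expected-loss comparison (all probabilities and severities below are invented for illustration) captures this preference: the central bank’s own backstop removes most of the default risk on reflexive ‘macro’ assets, whereas the idiosyncratic risk in small-business loans is untouched by the intervention.

```python
# Expected loss on a loan, split into macro-driven and idiosyncratic default risk.
# All probabilities and the loss severity are illustrative assumptions.

def expected_loss(p_bad_macro, p_macro_default, p_idio_default, severity=0.4):
    # Default occurs via the bad macro state or via an independent idiosyncratic failure.
    p_macro_driven = p_bad_macro * p_macro_default
    p_default = p_macro_driven + (1 - p_macro_driven) * p_idio_default
    return p_default * severity

assets = [("'macro' asset (e.g. housing loan)", 0.50, 0.01),
          ("small-business loan", 0.30, 0.10)]

for name, p_macro_def, p_idio_def in assets:
    no_backstop = expected_loss(0.30, p_macro_def, p_idio_def)   # bad macro state quite likely
    backstopped = expected_loss(0.02, p_macro_def, p_idio_def)   # backstop makes it very unlikely
    print(f"{name}: expected loss {no_backstop:.1%} -> {backstopped:.1%} with the macro tail backstopped")
```

Under these assumed numbers the backstop nearly eliminates expected losses on the ‘macro’ asset but leaves much of the small-business loan’s risk intact, which is exactly the asymmetry that biases a loss-averse central bank’s liquidity support.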

The second point is less obvious – if the central bank can restrict itself to backstopping just extremely low-risk bonds and loans, it will do so. But in most crises, this is rarely enough. At the very least, the central bank is required to backstop average-quality assets, which is where the impact of uncertainty is greatest and the line between solvency and liquidity risk is blurriest. But this is not the strategy that minimises the risk of losses to the central bank. The contagious ripple effects of the losses incurred on junk assets can cause moderate losses on higher-quality assets. This incentivises the Fed to go far beyond the level of commitment that may be optimal for the economy and backstop almost the entire sphere of “macro” assets even if many of them are junk. In other words, it is precisely the desire of the Fed to avoid any losses that incentivised it to expand its liquidity programs to the scale and scope that it did during the crisis.

These preferences of the central bank have implications for the portfolios that banks will choose to hold – banks will prefer ‘macro’ assets without excessive micro risk as these assets are more likely to be backstopped by the central bank during the crisis. This biases bank portfolios and lending towards large corporations, housing etc. and against small business loans and other idiosyncratic risks. The system also becomes less diverse and more highly correlated. The problem of homogeneity and inordinately high correlation is baked into the structural logic of a stabilised financial system. Such a system also carries a higher risk of asset price bubbles – it may be more ‘rational’ for a bank to hold an overpriced ‘macro’ asset and follow the herd than to invest in an underpriced ‘micro’ asset. Douglas Diamond and Raghuram Rajan identified the damaging effects of the implicit commitment by central banks to reduce rates when liquidity is at a premium: “If the authorities are expected to reduce interest rates when liquidity is at a premium, borrowers will take on more short-term leverage or invest in more illiquid projects, thus bringing about the very states where intervention is needed, even if they would not do so in the absence of intervention.” Similarly, the incentives of the central bank to avoid losses at all costs perversely end up making the financial system less diverse and more fragile.

When viewed through this lens, the ECB’s actions also start to make sense and criticisms of its lack of courage seem misguided. In terms of liquidity support extended, the ECB has been at least as aggressive as the Fed. In fact, in terms of the risk of losses that it has chosen to bear, the ECB has been far more aggressive. Despite the losses it faces on its Greek debt holdings, it has nearly doubled its peripheral government bond holdings in recent times. This is despite the fact that the ECB runs a significant risk of losses on its government bond holdings in the absence of massive fiscal transfers from the core to the periphery, a policy for which there is little public or political appetite.

The ECB’s desire for the EFSF to take over the task of backstopping the periphery simply highlights the reality that the task is more fiscal than monetary in nature. Relying on the ECB to pick up the slack rather than constructing a fiscal solution also exacerbates the democratic deficit that is crippling the Eurozone. The ECB is not the first central bank that has pleaded to be relieved of duties that belong to the fiscal domain. Various Fed officials have made the same point regarding the Fed’s credit policies – drawing on Marvin Goodfriend’s research, Charles Plosser summarises this view as follows: “the Fed and the Treasury should agree that the Treasury will take the non-Treasury assets and non-discount window loans from the Fed’s balance sheet in exchange for Treasury securities. Such a new “accord” would transfer funding for these special credit programs to the Treasury — which would issue Treasury securities to fund the transfer — thus ensuring that these extraordinary credit policies are under the oversight of the fiscal authority, where such policies rightfully belong.” Of course, the incentives of the government are to preserve the status quo – what better than to let the central bank do the dirty work while reserving the right to criticise it for doing so!

This highlights a point that often gets lost in the monetary vs fiscal policy debate. Much of what has been implemented as monetary policy in recent times is not only not ‘neutral’ but is regressive in its distributional effects. In the current paradigm of central bank policy during crises, systemic fragility and inequality are inescapable structural problems. On the other hand, it is perfectly possible to construct a fiscal policy that is close to neutral, e.g. Steve Waldman’s excellent idea of simple direct transfers to individuals.


Written by Ashwin Parameswaran

September 12th, 2011 at 4:41 pm

Forest Fire Suppression and Macroeconomic Stabilisation

with 24 comments

In an earlier post, I compared Minsky’s Financial Instability Hypothesis with Buzz Holling’s work on ecological resilience and briefly touched upon the consequences of wildfire suppression as an example of the resilience-stability tradeoff. This post expands upon the lessons we can learn from the history of fire suppression and its impact on the forest ecosystem in the United States and draws some parallels between the theory and history of forest fire management and macroeconomic management.

Origins of Stabilisation as the Primary Policy Objective and Initial Ease of Implementation

The impetus for both fire suppression and macroeconomic stabilisation came from a crisis. In economics, this crisis was the Great Depression which highlighted the need for stabilising fiscal and monetary policy during a crisis. Out of all the initiatives, the most crucial from a systems viewpoint was the expansion of lender-of-last-resort operations and bank bailouts which tried to eliminate all disturbances at their source. In Minsky’s words: “The need for lender-of-last-resort operations will often occur before income falls steeply and before the well nigh automatic income and financial stabilizing effects of Big Government come into play.” (Stabilizing an Unstable Economy pg 46)

Similarly, the battle for complete fire suppression was won after the Great Idaho Fires of 1910. “The Great Idaho Fires of August 1910 were a defining event for fire policy and management, indeed for the policy and management of all natural resources in the United States. Often called the Big Blowup, the complex of fires consumed 3 million acres of valuable timber in northern Idaho and western Montana… The battle cry of foresters and philosophers that year was simple and compelling: fires are evil, and they must be banished from the earth. The federal Weeks Act, which had been stalled in Congress for years, passed in February 1911. This law drastically expanded the Forest Service and established cooperative federal-state programs in fire control. It marked the beginning of federal fire-suppression efforts and effectively brought an end to light burning practices across most of the country. The prompt suppression of wildland fires by government agencies became a national paradigm and a national policy” (Sara Jensen and Guy McPherson). In 1935, the Forest Service implemented the ‘10 AM policy’, a goal to extinguish every new fire by 10 AM the day after it was reported.

In both cases, the trauma of a catastrophic disaster triggered a new policy that would try to stamp out all disturbances at the source, no matter how small. This policy also had the benefit of initially being cheap and easy to implement. In the case of wildfires, “the 10 am policy, which guided Forest Service wildfire suppression until the mid 1970s, made sense in the short term, as wildfires are much easier and cheaper to suppress when they are small. Consider that, on average, 98.9% of wildfires on public land in the US are suppressed before they exceed 120 ha, but fires larger than that account for 97.5% of all suppression costs” (Donovan and Brown). As Minsky notes, macroeconomic stability was helped significantly by the deleveraged nature of the American economy from the end of WW2 till the 1960s. Even in the Federal Reserve’s interventions of the late 60s and 70s, the resources needed to shore up the system were limited.

Consequences of Stabilisation

Wildfire suppression in forests that are otherwise adapted to regular, low-intensity fires (e.g. understory fire regimes) causes the forest to become more fragile and susceptible to a catastrophic fire. As Holling and Meffe note, “fire suppression in systems that would frequently experience low-intensity fires results in the systems becoming severely affected by the huge fires that finally erupt; that is, the systems are not resilient to the major fires that occur with large fuel loads and may fundamentally change state after the fire”. This increased fragility arises from a few distinct patterns and mechanisms:

Increased Fuel Load: Just as the channelisation of a river increases the silt load within the river banks, the absence of fires leads to a buildup of fuel, making the eventual fire that much more severe. In Minskyian terms, this is analogous to the buildup of leverage and ‘Ponzi finance’ within the economic system.

Change in Species Composition: Species composition inevitably shifts towards less fire-resistant trees when fires are suppressed (Allen et al 2002). In an economic system, it is not simply that ‘Ponzi finance’ players thrive but that more prudently financed actors get outcompeted over the cycle. This has critical implications for the ability of the system to recover after the fire. It is an important problem in the financial sector where, as Richard Fisher observed, “more prudent and better-managed banks have been denied the market share that would have been theirs if mismanaged big banks had been allowed to go out of business”.

Reduction in Diversity: As I mentioned here, “In an environment free of disturbances, diversity of competing strategies must reduce dramatically as the optimal strategy will outcompete all others. In fact, disturbances are a key reason why competitive exclusion is rarely observed in ecosystems”. Contrary to popular opinion, the post-disturbance environment is incredibly productive and diverse. Even after a fire as severe as the Yellowstone fires of 1988, the regeneration of the system was swift and effective as the ecosystem was historically adapted to such severe fires.

Increased Connectivity: This is the least appreciated impact of eliminating all disturbances in a complex adaptive system. Disturbances perform a critical role by breaking connections within a network. Frequent forest fires result in a “patchy” modularised forest where no one fire can cause catastrophic damage. As Thomas Bonnicksen notes: “Fire seldom spread over vast areas in historic forests because meadows, and patches of young trees and open patches of old trees were difficult to burn and forced fires to drop to the ground…..Unlike the popular idealized image of historic forests, which depicts old trees spread like a blanket over the landscape, a real historic forest was patchy. It looked more like a quilt than a blanket. It was a mosaic of patches. Each patch consisted of a group of trees of about the same age, some young patches, some old patches, or meadows depending on how many years passed since fire created a new opening where they could grow. The variety of patches in historic forests helped to contain hot fires. Most patches of young trees, and old trees with little underneath did not burn well and served as firebreaks. Still, chance led to fires skipping some patches. So, fuel built up and the next fire burned a few of them while doing little harm to the rest of the forest”. Suppressing forest fires converts the forest into one connected whole, at risk of complete destruction from the eventual fire that cannot be suppressed.

In the absence of disturbances, connectivity builds up within the network, both within and between scales. Increased within-scale connectivity increases the severity of disturbances, while increased between-scale connectivity increases the probability of a disturbance at a lower level propagating up to higher levels and causing systemic collapse. Fire suppression in forests adapted to frequent undergrowth fires can cause an accumulation of ladder fuels which connect the undergrowth to the crown of the forest. The eventual undergrowth ignition then risks a crown fire through a process known as “torching”. Unlike understory fires, crown fires can spread across firebreaks such as rivers by a process known as “spotting”, in which the wind carries burning embers through the air – the fire can spread in this manner even without direct connectivity. Such fires can easily cause systemic collapse and a state from which natural forces cannot regenerate the forest. In this manner, stabilisation can trigger changes that fundamentally alter the nature of the system rather than simply increasing the severity of disturbances (a toy sketch of how connectivity governs the spread of a single disturbance follows this list). For example, “extensive stand-replacing fires are in many cases resulting in “type conversions” from ponderosa pine forest to other physiognomic types (for example, grassland or shrubland) that may be persistent for centuries or perhaps even millennia” (Allen 2007).

Long-Run Increase in Cost of Stabilisation and Area Burned: The initial low cost of suppression is short-lived and the cumulative effect of the fragilisation of the system has led to rapidly increasing costs of wildfire suppression and levels of area burned in the last three decades (Donovan and Brown 2007).
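As promised above, here is a toy sketch of the connectivity argument – a simple percolation exercise in Python, not a forest-fire model. The function name `burned_fraction` and all parameters are my own illustrative assumptions; “patchiness” is simply the fraction of cells that carry no fuel.

```python
# Toy percolation sketch of the "patchiness" argument above: on a landscape
# with fuel almost everywhere, a single ignition typically burns nearly
# everything, while scattering enough non-flammable patches keeps any one
# fire small. Purely illustrative; not a forest-fire model.
import random
from collections import deque

def burned_fraction(fuel_density, size=100, seed=1):
    """Fraction of all fuel cells burned by one ignition on a size x size grid."""
    rng = random.Random(seed)
    fuel = [[rng.random() < fuel_density for _ in range(size)] for _ in range(size)]
    # ignite the first fuel cell found and spread to 4-neighbours that also carry fuel
    start = next((r, c) for r in range(size) for c in range(size) if fuel[r][c])
    burned, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < size and 0 <= nc < size and fuel[nr][nc] and (nr, nc) not in burned:
                burned.add((nr, nc))
                queue.append((nr, nc))
    total_fuel = sum(map(sum, fuel))
    return len(burned) / total_fuel

print(burned_fraction(0.95))  # highly connected landscape: one ignition typically burns almost everything
print(burned_fraction(0.55))  # patchy landscape: the same ignition typically stays local
```

Past a connectivity threshold, a single ignition consumes essentially the whole landscape; below it, the same ignition stays local. The same qualitative logic applies to a financial network in which stabilisation has removed every firebreak.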

Dilemmas in the Management of a Stabilised System

In my post on river flood management, I claimed that managing a stabilised and fragile system is “akin to choosing between the frying pan and the fire”. This has been the case in many forests around the United States for the last few decades and is the condition towards which the economies of the developed world are heading. Once the forest ecosystem has become fragile, the resultant large fire exacerbates the problem, thus triggering a vicious cycle. As Thomas Bonnicksen observed, “monster fires create even bigger monsters. Huge blocks of seedlings that grow on burned areas become older and thicker at the same time. When it burns again, fire spreads farther and creates an even bigger block of fuel for the next fire. This cycle of monster fires has begun”. The system enters an “unending cycle of monster fires and blackened landscapes”.

Minsky of course understood this end-state very well: “The success of a high-private-investment strategy depends upon the continued growth of relative needs to validate private investment. It also requires that policy be directed to maintain and increase the quasi-rents earned by capital – i.e., rentier and entrepreneurial income. But such high and increasing quasi-rents are particularly conducive to speculation, especially as these profits are presumably guaranteed by policy. The result is experimentation with liability structures that not only hypothecate increasing proportions of cash receipts but that also depend upon continuous refinancing of asset positions. A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch… As high investment and high profits depend upon and induce speculation with respect to liability structures, the expansions become increasingly difficult to control; the choice seems to become whether to accommodate to an increasing inflation or to induce a debt-deflation process that can lead to a serious depression”. (John Maynard Keynes pg 163–164)

The evolution of the system means that turning back the clock to a previous era of stability is not an option. As Minsky observed in the context of our financial system, “the apparent stability and robustness of the financial system of the 1950s and early 1960s can now be viewed as an accident of history, which was due to the financial residue of World War 2 following fast upon a great depression”. Re-regulation is not enough because it cannot undo the damage done by decades of financial “innovation” in a manner that does not risk systemic collapse.

At the same time, simply allowing an excessively stabilised system to burn itself out is a recipe for disaster. For example, on the role that controlled burns could play in restoring America’s forests to a resilient state, Thomas Bonnicksen observed: “Prescribed fire would come closer than any tool toward mimicking the effects of the historic Indian and lightning fires that shaped most of America’s native forests. However, there are good reasons why it is declining in use rather than expanding. Most importantly, the fuel problem is so severe that we can no longer depend on prescribed fire to repair the damage caused by over a century of fire exclusion. Prescribed fire is ineffective and unsafe in such forests. It is ineffective because any fire that is hot enough to kill trees over three inches in diameter, which is too small to eliminate most fire hazards, has a high probability of becoming uncontrollable”. The same logic applies to a fragile economic system.

Update: corrected date of Idaho fires from 2010 to 1910 in para 3 thanks to Dean.


Written by Ashwin Parameswaran

June 8th, 2011 at 11:35 am

Monetary Policy and Financial Markets: A “Real Rates” Lens

with 13 comments

In recent years, central banks on both sides of the Atlantic have implemented a raft of monetary policy initiatives that many people view as having no precedent in history. This opinion is understandable when measured against recent history – during the Great Moderation, monetary policy was largely restricted to adjustments in short-term nominal rates. But when viewed in the context of the longer history of fiat-currency monetary policy, almost every policy implemented by central banks during this crisis has a historical precedent. In this post, I analyse fiat-currency monetary policy (conventional or unconventional) as an attempt to influence the real interest rate curve under the constraint of inflation and employment/GDP targets – this is not intended to be a comprehensive theory, simply a lens that I find useful in analysing the impact of monetary policy.

The primary dilemma faced by governments today is the tension between the need to rein in government indebtedness in the long run and the need to stimulate economic growth in the short run. The task of stimulating growth is complicated by the high level of consumer indebtedness. There are no easy solutions to this problem – reducing government indebtedness is itself critically dependent upon maintaining economic growth, i.e. ensuring that the GDP in the debt/GDP ratio grows at a healthy rate. In the fiat currency era (post 1945), one solution has usually been preferred to all others – the enforcement of prolonged periods of low or even negative real interest rates. In an excellent paper, Carmen Reinhart and Belen Sbrancia have analysed the role of “financial repression” in engineering negative real interest rates and reducing real government debt burdens between 1945 and 1980. Given the similarity of our current problems to the post-WW2 situation, it is no coincidence that a reduction in real rates has been a key component of central banks’ response to the crisis.
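To see the arithmetic of repression at work, here is a minimal sketch in Python. The numbers (2% administered nominal rates, 4% inflation, a constant debt stock, no new borrowing) are purely illustrative assumptions of mine, not figures from Reinhart and Sbrancia.

```python
# Minimal sketch (illustrative numbers only): how mildly negative real rates
# erode the real value of a fixed stock of government debt.
# Assumptions: debt is rolled over at the administered nominal rate,
# there is no new net borrowing, and the price level rises at a steady rate.

def real_debt_after(years, nominal_rate, inflation, debt=100.0):
    """Real value of the debt stock after `years` of financial repression."""
    for _ in range(years):
        debt *= (1 + nominal_rate)   # interest accrues at the capped nominal rate
        debt /= (1 + inflation)      # but inflation erodes the real value
    return debt

# e.g. nominal rates held at 2% while inflation runs at 4% (roughly -2% real)
print(real_debt_after(10, 0.02, 0.04))   # ~82.4: close to a fifth of the real burden gone in a decade
```

Even a seemingly gentle negative real rate of this size quietly liquidates a large share of the real debt burden over a decade, which is why the policy can be sustained without the disruption that a single large inflation spike would entail.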

As Paul Krugman notes, the textbook monetary policy response to a liquidity trap requires that central banks “credibly promise to be irresponsible…to commit to creating or allowing higher inflation…so as to get negative real interest rates”. In the context of the current crisis, the central bank response has involved two distinct phases. In the first phase of the crisis, the priority is to prevent a deflationary collapse. Short of untested schemes that try to enforce negative nominal rates, deflation is inconsistent with a reduced real interest rate. To mitigate the collapse, short rates were rapidly reduced to near-zero levels but, equally critically, a panoply of “liquidity” programs was introduced to refinance bank balance sheets and prevent a collapse in the shadow money supply. It is fair to critique the expansion of the Fed balance sheet for the backdoor bailout and the resulting incentive problems it engenders in the financial sector. But in purely monetary terms, the exercise simply brings hitherto privately funded assets into the publicly funded domain.

Even after a deflationary collapse has been averted, simply holding short rates at zero – and even a promise to hold rates at near-zero levels – may be insufficient to reduce real rates at the long end of the treasury curve. The market may simply not believe that the central bank is being credible when it promises to be “irresponsible”. Therefore the focus shifts to reducing the interest rates on longer-dated government bonds or even on chosen risky assets via direct market purchases – MBS in the case of QE1, but there is no reason why even corporate bonds and equities could not be used for this purpose. If a fiat-currency-issuing central bank does not care about inflation, it can enforce any chosen nominal rate at any maturity on the risk-free yield curve. Of course, in reality, central banks do care about inflation and therefore, instead of phrasing QE as a binding yield target, they limit themselves by the quantity of long-term bonds bought. As Perry Mehrling notes, QE2 is most similar to war finance and differs only in the choice of a quantity rather than a yield target. During WW2 the Fed essentially fixed the price of the entire government bond yield curve. Mehrling describes it well in his excellent book: “Throughout the war, the interest rate on Treasury debt was fixed at 3/8 percent for three-month bills and between 2 and 2½ percent for long-term bonds, and it was the job of the Fed to support these prices by offering two-way convertibility into cash…it was not until the Fed-Treasury Accord of March 1951 that the Fed was released from its wartime responsibility to peg the price of government debt.” The Fed-Treasury Accord of 1951 that signalled the end of this phase was a consequence of an outbreak of inflation brought on by the Korean War.

Arbitrage and Negative Real Rates

The textbook arbitrage response to ex-ante negative real rates is to buy and store the goods comprising one’s future consumption basket. In the real world, this is often not a realistic option and negative real rates can prevail for significant periods of time. This is especially true if inflation exceeds risk-free rates by only a small amount. Maintaining risk-free rates at low levels while running double-digit inflation risks demonetisation and hyperinflation, but a prolonged period of mildly negative real rates may achieve the dual objective of growth and reduced indebtedness without at any point running a significant risk of demonetisation. So long as the central bank’s “pocket picking” is not too aggressive, the risk of demonetisation is slim.

As Reinhart and Sbrancia note, the option of enforcing negative real rates was available in the post-1945 environment only because “debts were predominantly domestic and denominated in domestic currencies.” Therefore although the US and Britain may try to follow the same policy again, it is clear that this option is not available to the peripheral economies in the Eurozone. Reinhart and Sbrancia argue that “inflation is most effective in liquidating government debts (or debts in general), when interest rates are not able to respond to the rise in inflation and in inflation expectations. This disconnect between nominal interest rates and inflation can occur if: (i) the setting is one where interest rates are either administered or predetermined (via financial repression, as described); (ii) all government debts are fixed-rate and long maturities and the government has no new financing needs (even if there is no financial repression the long maturities avoid rising interest costs that would otherwise prevail if short maturity debts needed to be rolled over); and (iii) all (or nearly all) debt is liquidated in one “surprise” inflation spike.” Condition (ii) is not satisfied in either the US or Europe whereas attempting to liquidate debt with one surprise inflation spike risks losing the credibility that central banks have fought so hard to acquire. Which leaves only option (i).

But even if investors cannot store their future consumption basket, could they not simply move into commodities or currencies with higher real rates of return? As James Hamilton notes, “there’s an incentive to buy and hold those goods that are storable… episodes of negative real interest rates have usually been associated with rapidly rising commodity prices.” But the investment implications of negative real rate regimes are not quite so straightforward.

Implications for Financial Markets and Investment Strategies

In Reinhart and Sbrancia’s words, “inflation is most effective in liquidating government debts when interest rates are not able to respond to the rise in inflation and in inflation expectations.” If interest rates track the rise in inflation and real rates are positive, then a risk-averse investor simply needs to be invested in short-duration bonds (e.g. floating-rate bonds) to preserve his purchasing power. In countries such as Australia, floating-rate bonds and short-duration bonds may preserve purchasing power in the same manner that inflation linkers can. But this does not hold for countries such as the United States or the United Kingdom where real rates at the short end are negative. Ex-ante negative real interest rates ensure that there is no “risk-free” asset in the market that can preserve one’s purchasing power. As Bill Gross notes: “bond prices don’t necessarily have to go down for savers to get skunked during a process of debt liquidation.” The logical response is to move to real assets or, as Bill Gross suggests, “developing/emerging market debt at higher yields denominated in non-dollar currencies” with a positive real interest rate. But as always, there are no free lunches.

Let’s assume that the market expects no “real rate suppression” to start with and that the Fed surprises the market with an announcement that it intends to suppress rates to the extent of 20% over the next decade. Assuming that Australian and Brazilian monetary and fiscal policy expectations remain unchanged by this announcement, the market should immediately revalue the Australian Dollar (AUD) and the Brazilian Real (BRL) upwards by 20%, a revaluation that it will give back over the next decade. Anyone who invests in either currency afterwards will not earn a return superior to what is available to him in USD. This idealised example makes many assumptions (e.g. currency parity), ignores risk premiums and abstracts away from uncertainty. But the point that I am trying to make is simply this: once real rate suppression has commenced, all asset prices will necessarily adjust to reflect the expected amount of suppression. Even a cursory look at the extent of recent appreciation in the AUD or BRL tells us that much of this adjustment may have already taken place. In other words, there is no free lunch in moving away from the USD to any other asset – an investment in real assets or foreign currency bonds only makes sense if one believes that the actual extent of suppression will exceed the current estimate.
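A rough numerical translation of this idealised example in Python. The 20% figure, the ten-year horizon and the parity assumption come from the example above; everything else (in particular the even spreading of the suppression over the decade) is a simplification of mine.

```python
# Illustrative sketch of the idealised example above. Assumptions carried over
# from the text: parity holds, no risk premiums, no uncertainty; the annual
# numbers below are a rough translation of "20% over the next decade".

years = 10
suppression = 0.20                            # cumulative real-rate shortfall of USD vs. AUD/BRL
annual_excess = suppression / years           # ~2% a year of extra real return abroad

# The announcement triggers an immediate ~20% revaluation of the foreign currency...
initial_jump = 1 + suppression

# ...which is then given back as the foreign currency depreciates in real terms.
fx_path = [initial_jump / (1 + annual_excess) ** t for t in range(years + 1)]

# An investor who buys the foreign currency *after* the announcement earns the
# higher foreign real rate but loses it again on the currency:
fx_return = fx_path[-1] / fx_path[0] - 1      # ~ -18%
carry = (1 + annual_excess) ** years - 1      # ~ +22%
print((1 + fx_return) * (1 + carry) - 1)      # ~ 0: no excess return ex ante
```

The immediate revaluation eats the entire subsequent carry – which is the sense in which there is no free lunch once the suppression has been priced in.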

Risk premiums will not change the above analysis in any meaningful manner. The idea that one can earn higher returns simply by turning up a “risk” dial is tenuous at the best of times but in the absence of a truly risk-free asset that preserves purchasing power, the very idea of a “risk premium” is meaningless. In the language of Kahneman and Tversky, it is the category boundary between certainty and uncertainty that matters most to an investor.

But the key difference between the above idealised example and the real world is the uncertainty about the extent and pace of real rate suppression that a central bank will follow through with. The critical source of this uncertainty is the inflation and employment target that guides the central bank. The central bank may change its plans midway for a variety of reasons – a spike in inflation may put pressure on it to hike rates even if growth remains sluggish, or a revival in real GDP growth may allow it to unwind the program early. Even worse, inflation may slip below target despite the central bank’s best efforts to stimulate investment and consumption demand, i.e. the Japan scenario.

The expectation and distribution of real rate suppression influence the valuation of every asset, and changes in this expectation and distribution become a significant source of market volatility across asset classes. What is also clear is that for many real assets and foreign currency bonds, the present scenario – where the economy muddles through without either falling into the Japan scenario or managing a strong recovery – is the “best of all worlds”. To put it in the language of derivatives, if we define the amount of “real rate suppression” as the risk variable, then many real assets represent a “short volatility” trade. Obviously, this does not take into account the sensitivity of some of these assets to economic conditions, but this does not make the framing irrelevant even for assets such as US equities. The expected valuation uplift in equities from a strong economy may easily be at least partially negated by the reduced expectation of real rate suppression. This also illustrates how a jobless recovery that doesn’t turn into the Japan scenario is the ideal environment for equities. So long as monetary policy is guided by the level of employment, GDP growth without a pickup in employment maximises the expectation of rate suppression and, by extension, the valuation of equity markets.


Written by Ashwin Parameswaran

May 10th, 2011 at 6:28 am

Financial Market Regulation and The Art of War

with 14 comments

“The interaction between the market participants, and for that matter between the market participants and the regulators, is not a game, but a war.”

Rick Bookstaber recently compared the complexity of the financial marketplace to that observed in military warfare. Bookstaber focuses primarily on the interaction between market participants but as he mentions, the same analogy also holds for the interaction between market participants and the regulator. In this post, I analyse the role of the financial market regulator within this context. Bookstaber primarily draws upon the work of John Boyd but I will focus on Sun Tzu’s ‘Art of War’.

Much like John Boyd, Sun Tzu emphasised the role of deception in war: “All warfare is based on deception”. In the context of regulation, “deception” is best understood as the need for the regulator to be unpredictable. This is not uncommon in other war-like economic domains. Google, for example, must maintain the secrecy and ambiguity of its search algorithms in order to stay one step ahead of the SEO firms’ attempts to game them. An unpredictable regulator may seem like a crazy idea but in fact it is a well-researched option in the central banking policy arsenal. In a paper for the Federal Reserve Bank of Richmond in 1999, Jeffrey Lacker and Marvin Goodfriend analysed the merits of a regulator adopting a stance of ‘constructive ambiguity’. They concluded that a stance of constructive ambiguity was unworkable and could not prevent the moral hazard that arose from the central bank’s commitment to backstop banks in times of crisis. The reasoning was simple: constructive ambiguity is not time-consistent. As Lacker and Goodfriend note: “The problem with adding variability to central bank lending policy is that the central bank would have trouble sticking to it, for the same reason that central banks tend to overextend lending to begin with. An announced policy of constructive ambiguity does nothing to alter the ex post incentives that cause central banks to lend in the first place. In any particular instance the central bank would want to ignore the spin of the wheel.” Steve Waldman summed up the time-consistency problem in regulation well when he noted: “Given the discretion to do so, financial regulators will always do the wrong thing.” In fact, Lacker has argued that it was this stance of constructive ambiguity combined with the creditor bailouts since Continental Illinois that the market understood to be an implicit commitment to bail out TBTF banks.

As is clear from the war analogy, a predictable adversary is easily defeated. This of course is why Goodhart’s Law is such a big problem in regulation. Lacker’s suggestion that the regulator follow a “simple decision rule” is fatally flawed for the same reason. Lacker also suggests that “legal constraints limiting policymakers’ actions” could be imposed to mitigate the moral hazard problem. But attempting to lay out a comprehensive list of constraints suffers from the same problem i.e. they can be easily circumvented by a determined regulator. If the relationship between a regulator and the regulated is akin to war, then so is the relationship between the rule-making legislative body and the regulator. Bank bailouts can and have been carried out over the last thirty years under many different guises: explicit creditor bailouts, asset backstops a la Bear Stearns, “liquidity” support via expanded and lenient collateral standards, interest rate cuts as a bank recapitalisation mechanism etc.

Bookstaber asserts quite rightly that the military analogy stems from a view of human rationality that is at odds with both neoclassical and behavioural economics, a point that Gerd Gigerenzer has repeatedly emphasised. Homo economicus relies on a strangely simplistic version of the ‘computational theory of the mind’ that assumes man to be an optimising computer. Behavioural economics then compares the reality of human rationality to this computational ideal and finds man to be an inferior version of a computer, riddled with biases and errors. As Gigerenzer has argued, many heuristics and biases that appear to be irrational or illogical are entirely rational responses to an uncertain world. But clearly deception and unpredictability go beyond simply substituting the rationality of homo economicus with simple heuristics. In the ‘Art of War’, Sun Tzu insists that a successful general must “respond to circumstances in an infinite variety of ways”. Each battle must be fought in its unique context and “when victory is won, one’s tactics are not repeated”. To Sun Tzu, the expert general must be “serene and inscrutable”. In one of the most fascinating passages in the book, he describes the actions and decisions of the expert general: “How subtle and insubstantial, that the expert leaves no trace. How divinely mysterious, that he is inaudible.”

As Robert Wilkinson notes, in order to make any sense of these comments, one needs to appreciate the Taoist underpinnings of the ‘Art of War’. The “infinite variety” of tactics is not the variety that comes from making decisions based on the “spin of a roulette wheel” that Goodfriend and Lacker take to provide constructive ambiguity. It comes from an appreciation of the unique context in which each situation is placed and the flexibility, adaptability and novelty required to succeed. The “inaudibility” refers to the inability to translate such expertise into rules, algorithms or even heuristics. The ‘Taoist adept’ relies on the same intuitive tacit understanding that lies at the heart of what Hubert and Stuart Dreyfus call “expert know-how”1. In fact, rules and algorithms may paralyse the expert rather than aid him. The Dreyfus brothers observed of expert pilots that “rather than being aware that they are flying an airplane, they have the experience that they are flying. The magnitude and importance of this change from analytic thought to intuitive response is evident to any expert pilot who has had the experience of suddenly reflecting upon what he is doing, with an accompanying degradation of his performance and the disconcerting realization that rather than simply flying, he is controlling a complicated mechanism.” The same sentiment was expressed rather more succinctly by Laozi when he said:

“Having some knowledge
When walking the Great Tao
Only brings fear.”

I’m not suggesting that financial market regulation would work well if only we could hire “expert” regulators. The regulatory capture and the revolving door between the government and Wall Street that are typical of late-stage Olsonian demosclerosis mean that the real relationship between the regulator and the regulated is anything but adversarial. I’m simply asserting that there is no magical regulatory recipe or formula that will prevent Wall Street from gaming and arbitraging the system. This is the unresolvable tension in financial market regulation: discretionary policy falls prey to the time-consistency problem. The alternative, a systematic and predictable set of rules, is the worst possible way to fight a war.

  1. This Taoist slant to Hubert Dreyfus’ work is not a coincidence. Dreyfus was deeply influenced by the philosophy of Martin Heidegger who, although he never acknowledged it, was almost certainly influenced by Taoist thought.

Written by Ashwin Parameswaran

April 4th, 2011 at 10:29 am

The Great Recession through a Crony Capitalist Lens

with 9 comments

In this post, I apply the framework outlined previously to some empirical patterns in the financial markets and the broader economy. The objective is not to posit crony capitalism as the sole explanation of these patterns, but merely to argue that they are consistent with an increasingly crony capitalist economy.

The Paradox of Low Volatility and High Correlation

As many commentators have pointed out [1,2,3], the spike in volatility experienced during the depths of the financial crisis has largely reversed itself, but correlation within equities and between various risky asset classes has kept moving higher. The combination of high volatility and high correlation is associated with the process of collapse and is typical of the Minsky moment when the system undergoes a rapid deleveraging. However, the combination of high correlation and low volatility after the Minsky moment is unusual. In the absence of bailouts or protectionism, the economy should undergo a process of creative destruction and intense exploratory activity which, by its diffuse nature, results in low correlation. The combination of high correlation and low volatility instead signifies stasis and the absence of sufficient exploration in the economy, along with the presence of significant slack at firm level (micro-resilience).
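For concreteness, this is roughly what the two statistics being contrasted look like when computed – a minimal sketch in Python using simulated returns as a stand-in for actual index constituents. The factor structure and all parameters are invented purely to reproduce the pattern described above.

```python
# Minimal sketch of the two quantities contrasted above: realised index
# volatility and average pairwise correlation of a basket of assets.
# Simulated returns only; parameters chosen to mimic the post-crisis pattern.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_assets = 250, 20

# One common factor drives most of the movement (high correlation) but its
# variance is modest (low volatility).
common = rng.normal(0, 0.005, n_days)              # small common shocks
idio = rng.normal(0, 0.002, (n_days, n_assets))    # even smaller idiosyncratic shocks
returns = common[:, None] + idio

index_returns = returns.mean(axis=1)
annualised_vol = index_returns.std() * np.sqrt(252)

corr = np.corrcoef(returns.T)
avg_pairwise_corr = (corr.sum() - n_assets) / (n_assets * (n_assets - 1))

print(f"index vol ~ {annualised_vol:.1%}, avg pairwise correlation ~ {avg_pairwise_corr:.2f}")
# -> single-digit volatility alongside pairwise correlations above 0.8
```

A single small common factor is enough to generate exactly this signature: everything moves together, but nothing moves much.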

As I mentioned in a previous post, financing constraints faced by small businesses hinder new firm entry across industries. Expanding lending to new firms is an act of exploration and incumbent banks are almost certainly content with exploiting their known and low-risk sources of income instead.

The Paradox of High Corporate Profitability, Rising Productivity and High Unemployment and The Paradox of High Cash Balances and High Debt Issuance

Although corporate profitability is not at an all-time high, it has recovered at an unusually rapid pace compared to the nonexistent recovery in employment and wages. The recovery in corporate profits has been driven by a rise in worker productivity and increased efficiency but the lag between an output recovery and an employment recovery seems to have increased dramatically. So far, this increased profitability has led not to increased business investment but to increased cash holdings by corporates. Big corporates with easy access to debt markets have even chosen to tap the debt markets simply for the purpose of increasing cash holdings.

Again, incumbent corporates are eager to squeeze efficiencies out of their current operations, including downsizing the labour force, but instead of channeling the savings from this increased efficiency into exploratory investment, they choose to increase holdings of liquid assets. In an environment where incumbents are under limited threat of being superseded by exploratory new entrants, holding cash is an extremely effective way to retain optionality (a strategy that is much less effective if the pace of exploratory innovation is high, as an extended period of standing on the sidelines of exploratory activity can degrade the ability of the incumbent to rejoin the fray). Old jobs are being destroyed by the optimising activities of incumbents but the exploration required to create new jobs does not take place.

This discussion of profitability and unemployment echoes many of the common concerns of the far left. This is not a coincidence – one of the most damaging effects of Olsonian cronyism is its malformation of the economy from a positive-sum game into an increasingly zero-sum game. The dynamics of a predominantly crony capitalist economy are closer to a Marxian class struggle than they are to a competitive free-market economy. However, where I differ significantly from the left is in the proposed cure for the disease. For example, incumbent investment can be triggered by an increase in leverage by another sector – given the indebted state of the consumer, the government is the most likely candidate. But such a policy does nothing to tackle the reduced evolvability of the economy or the dominance of the incumbent special interest groups. Moreover, increased taxation and transfers of wealth to other organised groups such as labour only aggravate the ossification of the economic system into an increasingly zero-sum game. A sustainable solution must restore the positive-sum dynamics that are the essence of Schumpeterian capitalism. Such a solution involves reducing the power of the incumbent corporates and transferring wealth from incumbent corporates towards households not by taxation or protectionism but by restoring the invisible foot of new firm entry.


Written by Ashwin Parameswaran

November 30th, 2010 at 7:27 am

The Cause and Impact of Crony Capitalism: the Great Stagnation and the Great Recession

with 23 comments

STABILITY AS THE PRIMARY CAUSE OF CRONY CAPITALISM

The core insight of the Minsky-Holling resilience framework is that stability and stabilisation breed fragility and loss of system resilience. TBTF protection and the moral hazard problem are best seen as a subset of the broader policy of stabilisation, of which policies such as the Greenspan Put are much more pervasive and dangerous examples.

By itself, stabilisation is not sufficient to cause cronyism and rent-seeking. Once a system has undergone a period of stabilisation, the system manager is always tempted to prolong the stabilisation for fear of short-term disruption or even collapse. However, not all crisis-mitigation strategies involve bailouts and transfers of wealth to incumbent corporates. As Mancur Olson pointed out, society can confine its “distributional transfers to poor and unfortunate individuals” rather than bailing out incumbent firms and still hope to achieve the same results.

To fully explain the rise of crony capitalism, we need to combine the Minsky-Holling framework with Mancur Olson’s insight that extended periods of stability trigger a progressive increase in the power of special interests and rent-seeking activity. Olson also noted the self-preserving nature of this phenomenon. Once rent-seeking has achieved sufficient scale, “distributional coalitions have the incentive and…the power to prevent changes that would deprive them of their enlarged share of the social output”.

SYSTEMIC IMPACT OF CRONY CAPITALISM

Crony capitalism results in a homogenous, tightly coupled and fragile macroeconomy. The key question is: via which channels does this systemic malformation occur? As I have touched upon in some earlier posts [1,2], the systemic implications of crony capitalism arise from its negative impact on new firm entry. In the context of the exploration vs exploitation framework, the absence of new firm entry tilts the system towards over-exploitation1.

Exploration vs Exploitation: The Importance of New Firm Entry in Sustaining Exploration

In a seminal article, James March distinguished between “the exploration of new possibilities and the exploitation of old certainties. Exploration includes things captured by terms such as search, variation, risk taking, experimentation, play, flexibility, discovery, innovation. Exploitation includes such things as refinement, choice, production, efficiency, selection, implementation, execution.” True innovation is an act of exploration under conditions of irreducible uncertainty whereas exploitation is an act of optimisation under a known distribution.

The assertion that dominant incumbent firms find it hard to sustain exploratory innovation is not a controversial one. I do not intend to reiterate the popular arguments in the management literature, many of which I explored in a previous post. Moreover, the argument presented here is more subtle: I do not claim that incumbents cannot explore effectively but simply that they can explore effectively only when pushed to do so by a constant stream of new entrants. This is of course the “invisible foot” argument of Joseph Berliner and Burton Klein for which the exploration-exploitation framework provides an intuitive and rigorous rationale.

Let us assume a scenario where the entry of new firms has slowed to a trickle, the sector is dominated by a few large incumbents and the S-curve of growth is about to enter its maturity/decline phase. To trigger a new S-curve of growth, the incumbents need to explore. However, almost by definition, the odds that any given act of exploration will be successful are small. Moreover, the positive payoff from any exploratory search almost certainly lies far in the future. For an improbable shot at moving from a position of comfort to one of dominance in the distant future, an incumbent firm needs to divert resources from optimising and efficiency-increasing initiatives that will deliver predictable profits in the near future. Of course, if a significant proportion of its competitors adopt an exploratory strategy, even an incumbent firm will be forced to follow suit for fear of losing market share. But this critical mass of exploratory incumbents never comes about. In essence, the state where almost all incumbents are content to focus their energies on exploitation is a Nash equilibrium.

On the other hand, the incentives of any new entrant are almost entirely skewed in favour of exploratory strategies. Even an improbable shot at glory is enough to outweigh the minor consequences of failure2. It cannot be emphasised enough that this argument does not depend upon the irrationality of the entrant. The same incremental payoff that represents a minor improvement for the incumbent is a life-changing event for the entrepreneur. When there exists a critical mass of exploratory new entrants, the dominant incumbents are compelled to follow suit and the Nash equilibrium of the industry shifts towards the appropriate mix of exploitation and exploration.
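A back-of-the-envelope sketch of this payoff asymmetry in Python – all numbers are invented for illustration and nothing in the argument depends on their precise values.

```python
# Back-of-the-envelope sketch of the payoff asymmetry described above,
# with purely illustrative numbers. Exploration is a long shot: low odds,
# a large prize, and forgone near-term profits.

p_success = 0.05          # odds that any given exploratory bet pays off
prize = 100.0             # payoff to the winner of the new S-curve
exploit_profit = 10.0     # safe, near-term profit from optimising the status quo

# Incumbent: exploration means diverting resources from a sure 10 to a 5% shot at 100.
incumbent_explore = p_success * prize            # 5.0
incumbent_exploit = exploit_profit               # 10.0 -> exploitation dominates

# New entrant: there is no comfortable exploitation profit to give up, and the
# personal downside of failure is small.
entrant_explore = p_success * prize              # 5.0
entrant_outside_option = 0.5                     # whatever the entrant earns by not trying

print(incumbent_exploit > incumbent_explore)     # True: incumbents sit tight...
print(entrant_explore > entrant_outside_option)  # True: ...unless entrants force their hand
```

The incumbent's calculation only flips once enough entrants are exploring that the "safe" exploitation profit is itself at risk – which is the shift in the industry's Nash equilibrium described above.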

The Crony Capitalist Boom-Bust Cycle: A Tradeoff between System Resilience and Full Employment

Due to insufficient exploratory innovation, a crony capitalist economy is not diverse enough. But this does not imply that the system is fragile, either at the firm/micro level or at the level of the macroeconomy. In the absence of any risk of being displaced by new entrants, incumbent firms can simply maintain significant financial slack3. If incumbents do maintain significant financial slack, sustainable full employment is impossible almost by definition. However, full employment can be achieved temporarily in two ways: either incumbent corporates can gradually give up their financial slack and lever up as the period of stability extends, as Minsky’s Financial Instability Hypothesis (FIH) would predict, or the household or government sector can lever up to compensate for the slack held by the corporate sector.

Most developed economies went down the route of increased household and corporate leverage with the process aided and abetted by monetary and regulatory policy. But it is instructive that developing economies such as India faced exactly the same problem in their “crony socialist” days. In keeping with its ideological leanings pre-1990, India tackled the unemployment problem via increased government spending. Whatever the chosen solution, full employment is unsustainable in the long run unless the core problem of cronyism is tackled. The current over-leveraged state of the consumer in the developed world can be papered over by increased government spending but in the face of increased cronyism, it only kicks the can further down the road. Restoring corporate animal spirits depends upon corporate slack being utilised in exploratory investment, which as discussed above is inconsistent with a cronyist economy.

Micro-Fragility as the Key to a Resilient Macroeconomy and Sustainable Full Employment

At the appropriate mix of exploration and exploitation, individual incumbent and new entrant firms are both incredibly vulnerable. Most exploratory investments are destined to fail as are most firms, sooner or later. Yet due to the diversity of firm-level strategies, the macroeconomy of vulnerable firms is incredibly resilient. At the same time, the transfer of wealth from incumbent corporates to the household sector via reduced corporate slack and increased investment means that sustainable full employment can be achieved without undue leverage. The only question is whether we can break out of the Olsonian special interest trap without having to suffer a systemic collapse in the process.

  1. It cannot be emphasized enough that absence of new firm entry is simply the channel through which crony capitalism malforms the macroeconomy. Therefore, attempts to artificially boost new firm entry are likely to fail unless they tackle the ultimate cause of the problem, which is stabilisation.
  2. It is critical that the personal consequences of firm failure are minor for the entrepreneur – this is not the case for cultural and legal reasons in many countries around the world but is largely still true in the United States.
  3. It could be argued that incumbents could follow this strategy even when new entrants threaten them. This strategy however has its limits – an extended period of standing on the sidelines of exploratory activity can degrade the ability of the incumbent to rejoin the fray. As Brian Loasby remarked: “For many years, Arnold Weinberg chose to build up GEC’s reserves against an uncertain technological future in the form of cash rather than by investing in the creation of technological capabilities of unknown value. This policy, one might suggest, appears much more attractive in a financial environment where technology can often be bought by buying companies than in one where the market for corporate control is more tightly constrained; but it must be remembered that some, perhaps substantial, technological capability is likely to be needed in order to judge what companies are worth acquiring, and to make effective use of the acquisitions. As so often, substitutes are also in part complements.”

Written by Ashwin Parameswaran

November 24th, 2010 at 6:01 pm

Questioning the Benefits of Maturity Transformation

with 12 comments

William Dudley recounts the conventional story of how society benefits from maturity transformation here: “The need for maturity transformation arises from the fact that the preferred habitat of borrowers tends toward longer-term maturities used to finance long-lived assets such as a house or a manufacturing plant, compared with the preferred habitat of investors, who generally have a preference to be able to access their funds quickly. Financial intermediaries act to span these preferences, earning profits by engaging in maturity transformation—borrowing shorter-term in order to finance longer-term lending.” The debate on maturity transformation then focuses on comparing these benefits with its role in creating fragility and moral hazard in the financial system. This post explores a different tack and argues that even the purported benefits of maturity transformation are overstated – structural changes in the economy have drastically reduced, and possibly even eliminated, the need for society to promote and subsidise maturity transformation.

Ceteris paribus, most borrowers prefer to match the maturity of their liabilities to the maturity of their assets. For example, a corporate borrowing to fund a nuclear plant will seek to borrow long-term funds whose repayment schedule closely matches the expected cashflows from the project. It is important to realise that longer is not always better from the borrower’s perspective. A corporate that needs to fund its working capital will borrow on a short-term basis, and many borrowers (such as homeowners) are willing to pay a premium to retain prepayment options. So if we sum up the demand from all borrowers in an economy, we face a term structure of loan demand rather than a simplistic need to borrow long-term funds. This term structure of loan demand primarily depends on the nature of the investment opportunities available at any given point in time. For example, if the economy is undergoing a major upgrade of key infrastructure, as is the case in many emerging markets, loan demand will be skewed towards longer-term funds.

Now what about investors’ preference for shorter-term maturities? Again, it is too naive and simplistic to state that all investors simply prefer shorter-maturity investments – in particular, this ignores the increasing assets under management of pension funds and life insurers, who strongly prefer longer-tenor investments that match their naturally long-tenor liabilities. The growing role of pension funds at the long end is of course a relatively recent phenomenon driven by many factors such as increased longevity and the phase-out of defined-benefit and pay-as-you-go pension schemes, the result of which is to divert an increasing portion of investor funds into long-tenor investments. Indeed, pension funds and life insurers are the dominant players at the long end of the interest rate curve in Europe, even though Europe is behind the curve in the transition away from a defined-benefit, pay-as-you-go pension model – a situation that will be exacerbated by the adoption of Solvency 2. In the United Kingdom, pension demand at the long end meant that the interest rate curve was perennially inverted until the financial crisis hit and short rates plummeted.

The obvious objection to the above story is as follows: even if there is significant investor demand for long-tenor investments, won’t removing bank demand for them still lead to a catastrophic increase in long-end interest rates? The answer is no – as I explained in a previous post, the most significant proportion of the difference between long-end and short-end rates comes from the interest rate differential, which most banks hedge out to a large degree (ironically, with pension funds and insurers). The part that is usually left unhedged is the credit risk and the liquidity risk. Removing maturity-transformers from the long end will only lead to a small rise in rates, to the extent of this unhedged credit risk – what’s more, some of this lost demand may be made up for by pension funds who choose to allocate a higher proportion of their assets towards fixed income investments in response to the rise in rates. But more fundamentally, even if rates do rise at the long end, it is not at all clear that this reduces the welfare of society in any manner. Suppose financial intermediaries are forced to move away from the long end to the short end – the resultant reduction of rates at the short end may even be beneficial if the natural distribution of investment opportunities is more skewed towards the short end.
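To put rough numbers on this decomposition, here is a small sketch in Python where every rate is an illustrative assumption of mine rather than an estimate of any actual market.

```python
# Rough, purely illustrative decomposition of a long-term lending rate into the
# component banks typically hedge (the rate differential) and the components
# usually left unhedged (credit and liquidity premia). Not an estimate of any
# actual market.

expected_avg_short_rate = 0.030   # hedgeable with swaps (largely against pension funds/insurers)
term_premium = 0.005              # also part of the hedged rate differential
credit_premium = 0.010            # unhedged: compensation for default risk
liquidity_premium = 0.005         # unhedged: compensation for illiquidity

long_rate = expected_avg_short_rate + term_premium + credit_premium + liquidity_premium

# If maturity-transforming banks withdraw from the long end, the hedged component
# is unaffected (the swap counterparties are still there); only the unhedged
# premia need to be re-priced, so the rise in long rates is bounded by them.
max_rise_from_bank_withdrawal = credit_premium + liquidity_premium
print(f"long rate {long_rate:.1%}, bounded rise ~ {max_rise_from_bank_withdrawal:.1%}")
```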

It’s worth reiterating that my preferred solution is not to ban or artificially limit maturity transformation. As Rajiv Sethi points out, all firms can engage in maturity transformation and many do so after explicitly considering the risks involved – not surprising given that, even on an interest-rate-hedged basis, short-tenor loans usually cost less than long-tenor loans due to the usually upward-sloping credit spread curve. The question is whether we need to protect maturity-transforming banks against the liquidity risk inherent in their actions in order to prevent bank runs. Ideally not, but in a second-best world where the past weight of protected maturity-transforming actions by banks has made the system too fragile to remove this protection all at once, it is worth putting in place an explicit limitation on the practice of maturity transformation in the future.


Written by Ashwin Parameswaran

October 21st, 2010 at 7:54 am

Posted in Financial Crisis

Evolvability, Robustness and Resilience in Complex Adaptive Systems

with 14 comments

In a previous post, I asserted that “the existence of irreducible uncertainty is sufficient to justify an evolutionary approach for any social system, whether it be an organization or a macro-economy.” This is not a controversial statement – Nelson and Winter introduced their seminal work on evolutionary economics as follows: “Our evolutionary theory of economic change…is not an interpretation of economic reality as a reflection of supposedly constant “given data” but a scheme that may help an observer who is sufficiently knowledgeable regarding the facts of the present to see a little further through the mist that obscures the future.”

In microeconomics, irreducible uncertainty implies a world of bounded rationality where many heuristics become not signs of irrationality but rational and effective tools of decision-making. But it is the implications of human action under uncertainty for macro-economic outcomes that are the focus of this blog – in previous posts (1,2) I have elaborated upon the resilience-stability tradeoff and its parallels in economics and ecology. This post focuses on another issue critical to the functioning of all complex adaptive systems: the relationship between evolvability and robustness.

Evolvability and Robustness Defined

Hiroaki Kitano defines robustness as follows: “Robustness is a property that allows a system to maintain its functions despite external and internal perturbations….A system must be robust to function in unpredictable environments using unreliable components.” Kitano makes it explicit that robustness is concerned with the maintenance of functionality rather than specific components: “Robustness is often misunderstood to mean staying unchanged regardless of stimuli or mutations, so that the structure and components of the system, and therefore the mode of operation, is unaffected. In fact, robustness is the maintenance of specific functionalities of the system against perturbations, and it often requires the system to change its mode of operation in a flexible way. In other words, robustness allows changes in the structure and components of the system owing to perturbations, but specific functions are maintained.”

Evolvability is defined as the ability of the system to generate novelty and to innovate, thus enabling the system to “adapt in ways that exploit new resources or allow them to persist under unprecedented environmental regime shifts” (Whitacre 2010). At first glance, evolvability and robustness appear to be incompatible: the generation of novelty involves a leap into the dark, an exploration rather than an act of “rational choice”, and the search for a beneficial innovation carries with it a significant risk of failure. It’s worth noting that in social systems, this dilemma vanishes in the absence of irreducible uncertainty. If all adaptations are merely a realignment to a known systemic configuration (“known” in either a deterministic or a probabilistic sense), then an inability to adapt needs other explanations, such as organisational rigidity.

Evolvability, Robustness and Resilience

Although it is typical to equate resilience with robustness, resilient complex adaptive systems also need to possess the ability to innovate and generate novelty. As Allen and Holling put it: “Novelty and innovation are required to keep existing complex systems resilient and to create new structures and dynamics following system crashes”. Evolvability also enables the system to undergo fundamental transformational change – it could be argued that such innovations are even more important in a modern capitalist economic system than they are in the biological or ecological arena. The rest of this post will focus on elaborating upon how macro-economic systems can be both robust and evolvable at the same time – the apparent conflict between evolvability and robustness arises from a fallacy of composition where macro-resilience is assumed to arise from micro-resilience, when in fact it arises from the very absence of micro-resilience.

EVOLVABILITY, ROBUSTNESS AND RESILIENCE IN MACRO-ECONOMIC SYSTEMS

The pre-eminent reference on how a macro-economic system can be both robust and evolvable at the same time is the work of Burton Klein in his books “Dynamic Economics” and “Prices, Wages and Business Cycles: A Dynamic Theory”. But as with so many other topics in evolutionary economics, no one has summarised it better than Brian Loasby: “Any economic system which is to remain viable over a long period must be able to cope with unexpected change. It must be able to revise or replace policies which have worked well. Yet this ability is problematic. Two kinds of remedy may be tried, at two different system levels. One is to try to sensitize those working within a particular research programme to its limitations and to possible alternatives, thus following Menger’s principle of creating private reserves against unknown but imaginable dangers, and thereby enhancing the capacity for internal adaptation….But reserves have costs; and it may be better, from a system-wide perspective, to accept the vulnerability of a sub-system in order to exploit its efficiency, while relying on the reserves which are the natural product of a variety of sub-systems….
Research programmes, we should recall, are imperfectly specified, and two groups starting with the same research programme are likely to become progressively differentiated by their experience, if there are no strong pressures to keep them closely aligned. The long-run equilibrium of the larger system might therefore be preserved by substitution between sub-systems as circumstances change. External selection may achieve the same overall purpose as internal adaptation – but only if the system has generated adequate variety from which the selection may be made. An obvious corollary which has been emphasised by Klein (1977) is that attempts to preserve sub-system stability may wreck the larger system. That should not be a threatening notion to economists; it also happens to be exemplified by Marshall’s conception of the long-period equilibrium of the industry as a population equilibrium, which is sustained by continued change in the membership of that population. The tendency of variation is not only a chief cause of progress; it is also an aid to stability in a changing environment (Eliasson, 1991). The homogeneity which is conducive to the attainment of conventional welfare optima is a threat to the resilience which an economy needs.”

Uncertainty can be tackled at the micro-level by maintaining reserves and slack (liquidity, retained profits), but this comes at the price of slack at the macro-level in terms of lost output and employment. Note that this is essentially a Keynesian conclusion, similar to how individually rational saving decisions can lead to collectively sub-optimal outcomes. From a systemic perspective, it is preferable to substitute the micro-resilience with a diverse set of micro-fragilities. But how do we induce the loss of slack at firm-level? And how do we ensure that this loss of micro-resilience occurs in a sufficiently diverse manner?
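
The following toy Monte Carlo sketch (all parameters invented) illustrates the fallacy of composition at work: an economy of uniformly safe firms holding reserves produces less output than an economy of diverse, individually fragile firms, while a homogeneously fragile economy carries the worst tail risk because its failures are perfectly correlated:

```python
import numpy as np

# Toy comparison (all parameters invented) of three stylised economies,
# each made up of n_firms firms starting with one unit of capital.
rng = np.random.default_rng(0)
n_firms, n_sims = 1000, 5000

safe_return = 1.02            # low-yield strategy cushioned by reserves/slack
risky_success, risky_fail = 1.30, 0.70
p_success = 0.6               # per-approach probability of success

# (a) homogeneous micro-resilience: every firm holds slack and earns the safe return
resilient = np.full(n_sims, n_firms * safe_return)

# (b) diverse micro-fragility: each firm bets fully on its own independent approach
draws = rng.random((n_sims, n_firms)) < p_success
diverse = np.where(draws, risky_success, risky_fail).sum(axis=1)

# (c) homogeneous micro-fragility: every firm bets on the *same* approach,
#     so a single common shock hits all of them at once
common = rng.random(n_sims) < p_success
homogeneous = np.where(common, risky_success, risky_fail) * n_firms

for name, out in [("micro-resilient", resilient),
                  ("diverse fragile", diverse),
                  ("homogeneous fragile", homogeneous)]:
    print(f"{name:>20}: mean output {out.mean():8.1f}, "
          f"std {out.std():7.1f}, worst {out.min():8.1f}")
```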

The “Invisible Foot”

The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy: “Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels….But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.” To maintain an evolvable macro-economy, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition.”

Entry of New Firms and the Invisible Foot

Burton Klein’s great contribution, along with that of other dynamic economists of the time (notably Gunnar Eliasson), was to highlight the critical importance of the entry of new firms in maintaining the efficacy of the invisible foot. Klein believed that “the degree of risk taking is determined by the robustness of dynamic competition, which mainly depends on the rate of entry of new firms. If entry into an industry is fairly steady, the game is likely to have the flavour of a highly competitive sport. When some firms in an industry concentrate on making significant advances that will bear fruit within several years, others must be concerned with making their long-run profits as large as possible, if they hope to survive. But after entry has been closed for a number of years, a tightly organised oligopoly will probably emerge in which firms will endeavour to make their environments highly predictable in order to make their short-run profits as large as possible….Because of new entries, a relatively concentrated industry can remain highly dynamic. But, when entry is absent for some years, and expectations are premised on the future absence of entry, a relatively concentrated industry is likely to evolve into a tight oligopoly. In particular, when entry is long absent, managers are likely to be more and more narrowly selected; and they will probably engage in such parallel behaviour with respect to products and prices that it might seem that the entire industry is commanded by a single general!”

Again, it can’t be emphasised enough that this argument does not depend on incumbent firms leaving money on the table – on the contrary, they may redouble their attempts at static optimisation. From the perspective of each individual firm, innovation is an incredibly risky process, even though the result of such dynamic competition from the perspective of the industry or macro-economy may be reasonably predictable. Of course, firms can and do mitigate this risk by various methods, but this argument only claims that any single firm, however dominant, cannot replicate the “risk-free” innovation dynamics of a vibrant industry in-house.

Micro-Fragility as the Hidden Hand of Macro-Resilience

In an environment free of irreducible uncertainty, evolvability suffers, leading to reduced macro-resilience. “If firms could predict each others’ advances they would not have to insure themselves against uncertainty by taking risks. And no smooth progress would occur” (Klein 1977). Conversely, “because firms cannot predict each other’s discoveries, they undertake different approaches towards achieving the same goal. And because not all of the approaches will turn out to be equally successful, the pursuit of parallel paths provides the options required for smooth progress.”
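
A back-of-the-envelope way to see why parallel paths provide the options required for smooth progress: if each independent approach succeeds with probability p, the chance that an industry pursuing n distinct approaches makes no progress at all is (1 − p)^n, which falls off quickly in n. A minimal sketch with purely illustrative numbers:

```python
# Probability that at least one of n independent, parallel approaches to the
# same problem succeeds, when each succeeds with probability p.
# Purely illustrative; real R&D outcomes are neither independent nor binary.
p = 0.15
for n in (1, 2, 5, 10, 20):
    prob_progress = 1 - (1 - p) ** n
    print(f"{n:2d} parallel approaches -> P(some progress) = {prob_progress:.2f}")
```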

The Aftermath of the Minsky Moment: A Problem of Micro-Resilience

Within the context of the current crisis, the pre-Minsky-moment system was a homogeneous system with no slack, which enabled the attainment of “conventional welfare optima” but at the cost of an incredibly fragile and unevolvable condition. The logical evolution of such a system after the Minsky moment is of course still a homogeneous system, but one with significant firm-level slack built in, which is equally unsatisfactory. In such a situation, the kind of macro-economic intervention matters as much as the force of intervention. For example, in an ideal world, monetary policy aimed at reducing borrowing rates of incumbent banks and corporates would flow through into reduced borrowing rates for new firms. In a dynamically uncompetitive world, such a policy will only serve the interests of the incumbents.

The “Invisible Foot” and Employment

Vivek Wadhwa argues that startups are the main source of net job growth in the US economy and Mark Thoma links to research that confirms this thesis. Even if one disagrees, the “invisible foot” argument implies that if the old guard is to contribute to employment, it must be forced to give up its “slack” by the strength of dynamic competition, and dynamic competition is maintained by preserving conditions that encourage the entry of new firms.

MICRO-EVOLVABILITY AND MACRO-RESILIENCE IN BIOLOGY AND ECOLOGY

Note: The aim of this section is not to draw any falsely precise equivalences between economic resilience and ecological or biological resilience, but simply to highlight the commonality of the micro-macro fallacy of composition across complex adaptive systems – a detailed comparison will hopefully be the subject of a future post. I have tried to keep the section on biological resilience as brief and simple as possible, but an understanding of the genotype-phenotype distinction and neutral networks is essential to make sense of it.

Biology: Genotypic Variation and Phenotypic Robustness

In the specific context of biology, evolvability can be defined as “the capacity to generate heritable, selectable phenotypic variation. This capacity may have two components: (i) to reduce the potential lethality of mutations and (ii) to reduce the number of mutations needed to produce phenotypically novel traits” (Kirschner and Gerhart 1998). The apparent conflict between evolvability and robustness can be reconciled by distinguishing between genotypic and phenotypic robustness and evolvability. James Whitacre summarises Andreas Wagner’s work on RNA genotypes and their structure phenotypes as follows: “this conflict is unresolvable only when robustness is conferred in both the genotype and the phenotype. On the other hand, if the phenotype is robustly maintained in the presence of genetic mutations, then a number of cryptic genetic changes may be possible and their accumulation over time might expose a broad range of distinct phenotypes, e.g. by movement across a neutral network. In this way, robustness of the phenotype might actually enhance access to heritable phenotypic variation and thereby improve long-term evolvability.”
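
A toy model can make this concrete. The sketch below is a stylised illustration, not Wagner’s actual RNA model: a many-to-one genotype-to-phenotype map makes the phenotype robust to most single mutations, and neutral drift across the resulting neutral network accumulates cryptic genetic variation that steadily enlarges the set of novel phenotypes reachable by one further mutation:

```python
import random

random.seed(1)
GENOME_LEN, BLOCK = 30, 5   # phenotype = majority bit of each block of the genotype

def phenotype(g):
    # many-to-one map: each block of BLOCK bits contributes its majority bit
    return tuple(int(sum(g[i:i + BLOCK]) > BLOCK // 2) for i in range(0, GENOME_LEN, BLOCK))

def neighbours(g):
    # all genotypes one point-mutation away
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(GENOME_LEN)]

g = tuple(random.randint(0, 1) for _ in range(GENOME_LEN))
p0 = phenotype(g)

# robustness: fraction of single mutations that leave the phenotype unchanged
neutral_fraction = sum(phenotype(m) == p0 for m in neighbours(g)) / GENOME_LEN
initial_novel = {phenotype(m) for m in neighbours(g)} - {p0}

# neutral drift: accept only phenotype-preserving mutations (cryptic genetic change)
# and track how many distinct novel phenotypes become reachable in one further step
reachable = set(initial_novel)
for _ in range(200):
    candidate = random.choice(neighbours(g))
    if phenotype(candidate) == p0:
        g = candidate
    reachable |= {phenotype(m) for m in neighbours(g)} - {p0}

print(f"phenotypically neutral single mutations: {neutral_fraction:.0%}")
print(f"novel phenotypes adjacent to the starting genotype: {len(initial_novel)}")
print(f"novel phenotypes made adjacent at some point during neutral drift: {len(reachable)}")
```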

Ecology: Species-Level Variability and Functional Stability

The notion of micro-variability being consistent with and even being responsible for macro-resilience is an old one in ecology, as Simon Levin and Jane Lubchenco summarise here: “That the robustness of an ensemble may rest upon the high turnover of the units that make it up is a familiar notion in community ecology. MacArthur and Wilson (1967), in their foundational work on island biogeography, contrasted the constancy and robustness of the number of species on an island with the ephemeral nature of species composition. Similarly, Tilman and colleagues (1996) found that the robustness of total yield in high-diversity assemblages arises not in spite of, but primarily because of, the high variability of individual population densities.”
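
The statistical core of the Tilman result is the portfolio effect: when individual population densities fluctuate largely independently, the coefficient of variation of total yield falls roughly with the square root of the number of species, even though each population remains highly variable. A minimal sketch with made-up parameters:

```python
import numpy as np

# Portfolio effect: total yield of a many-species assemblage is far less
# variable (relative to its mean) than any individual population, even though
# each population fluctuates strongly. All parameters are invented.
rng = np.random.default_rng(42)
n_species, n_years = 50, 1000

# each species' annual density fluctuates widely around its own mean
means = rng.uniform(0.5, 2.0, n_species)
densities = rng.gamma(shape=2.0, scale=means / 2.0, size=(n_years, n_species))

def cv(x):
    # coefficient of variation: standard deviation relative to the mean
    return x.std() / x.mean()

species_cvs = [cv(densities[:, i]) for i in range(n_species)]
total_cv = cv(densities.sum(axis=1))

print(f"median CV of individual populations: {np.median(species_cvs):.2f}")
print(f"CV of total yield across all species: {total_cv:.2f}")
```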

The concept is also entirely consistent with the “Panarchy” thesis which views an ecosystem as a nested hierarchy of adaptive cycles: “Adaptive cycles are nested in a hierarchy across time and space which helps explain how adaptive systems can, for brief moments, generate novel recombinations that are tested during longer periods of capital accumulation and storage. These windows of experimentation open briefly, but the results do not trigger cascading instabilities of the whole because of the stabilizing nature of nested hierarchies. In essence, larger and slower components of the hierarchy provide the memory of the past and of the distant to allow recovery of smaller and faster adaptive cycles.”

Misc. Notes

1. It must be emphasised that micro-fragility is a necessary, but not a sufficient, condition for an evolvable and robust macro-system. The role of not just redundancy but degeneracy is critical, as is the size of the population.

2. Many commentators use resilience and robustness interchangeably. I draw a distinction primarily because my definitions of robustness and evolvability are borrowed from biology, while my definition of resilience is borrowed from ecology, which in my opinion defines a robust and evolvable system as a resilient one.

Written by Ashwin Parameswaran

August 30th, 2010 at 8:38 am