macroresilience

resilience, not stability

Archive for the ‘Resilience’ Category

A Simple Solution to the Eurozone Sovereign Funding Crisis

with 14 comments

In response to the sovereign funding crisis sweeping across the Eurozone, the ECB decided to “conduct two longer-term refinancing operations (LTROs) with a maturity of 36 months”. Combined with the commitment by Eurozone members to exclude the possibility of any more haircuts on private-sector holders of euro sovereign bonds, the aim of the current exercise is clear. As Nicolas Sarkozy put it rather bluntly,

Italian banks will be able to borrow [from the ECB] at 1 per cent, while the Italian state is borrowing at 6–7 per cent. It doesn’t take a finance specialist to see that the Italian state will be able to ask Italian banks to finance part of the government debt at a much lower rate.

In other words, the ECB will not finance fiscal deficits directly but will be more than happy to do so via the Eurozone banking system. But this plan still has a few critical flaws:

  • As Sony Kapoor notes, “By doing this, you are strengthening the link between banks and sovereigns, which has proven so dangerous in this crisis. Even if useful in the short term, it would seriously increase the vulnerability of both banks and sovereigns to future shocks.” In other words, if the promise to exclude the possibility of inflicting losses on sovereign debt-holders is broken at any point in the future, then sovereign default will coincide with the decimation of Europe’s incumbent banks.
  • European banks are desperately capital-constrained, as the latest EBA estimates of the capital shortfall faced by European banks show. In such a condition, banks will almost certainly take on increased sovereign debt exposures only at the expense of lending to the private sector and households. This can only exacerbate the recession in the Eurozone.
  • Sarkozy’s comment also hints at the deep unfairness of the current proposal. If default and haircuts are not on the table, then allowing banks to finance their sovereign debt holdings at a lower rate than the yield they earn on the sovereign bonds (at the same tenor) is simply a transfer of wealth from the Eurozone taxpayer to the banks (see the sketch below for the arithmetic of this subsidy). Such a privilege could only be justified if banking were a “perfectly competitive” sector, which it is far from being even in a boom economy. In the midst of an economic crisis, when so many banks are tottering, it is even further from that ideal.
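To make the scale of this subsidy concrete, here is a minimal sketch of the carry arithmetic implied by Sarkozy’s comment. The 1% refinancing rate and the 6–7% Italian yield come from the quote above; the notional position size is purely illustrative.

```python
# Sketch of the carry trade implied by the LTRO: borrow from the ECB at the
# refinancing rate, buy sovereign bonds at the prevailing yield, pocket the
# spread. Rates are from the Sarkozy quote; the notional is illustrative.

def annual_carry(notional, funding_rate, sovereign_yield):
    """Annual profit from funding a sovereign bond position at the ECB rate."""
    return notional * (sovereign_yield - funding_rate)

notional = 10_000_000_000   # EUR 10bn position (purely illustrative)
ecb_rate = 0.01             # 1% LTRO refinancing rate
italy_yield = 0.065         # mid-point of the 6-7% quoted for Italy

profit = annual_carry(notional, ecb_rate, italy_yield)
print(f"Annual carry on EUR {notional / 1e9:.0f}bn: EUR {profit / 1e6:.0f}m")
# -> Annual carry on EUR 10bn: EUR 550m per year, a pure transfer from the
#    taxpayer to the banks so long as default is genuinely off the table.
```

So long as the no-haircut commitment holds, this spread is riskless at the point of purchase, which is precisely why restricting access to it to incumbent banks is a subsidy.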

There is a simple solution that tackles all three of the above problems – extend the generous terms of refinancing sovereign debt to the entire populace of the Eurozone, such that the market for the “support of sovereign debt” is transformed into something close to perfectly competitive. In practice, this simply requires a program of fast-track banking licenses for new banks with low minimum size requirements, on the condition that they restrict their activities to a narrow mandate of buying sovereign debt. This plan can correct all the flaws of the current proposal:

  • Instead of being concentrated within the incumbent failing banks, the sovereign debt exposure of the Eurozone would be spread in a diversified manner across the population. This will also help make the “no more haircuts” commitment more time-consistent: a wider base of sovereign debt holders reduces the possibility that the commitment will be reversed by democratic means. The only argument against this plan is that such narrowly focused new banks would be too risky, but that objection assumes that Eurozone sovereign debt still carries default risk, i.e. that the commitment is not credible.
  • The plan effectively injects new capital into the banking sector, allowing incumbent bank capital to be deployed towards lending to the private sector and households. If sovereign debt spreads collapse, the plan will also shore up the financial position of the incumbent banks, freeing up yet more capital to be deployed.
  • The plan is fair. If the current crisis is indeed just a problem of high interest rates fuelling an increased risk of default, then interest rates will rapidly fall to a level much closer to the refinancing rate. To the extent that rates stay elevated and spreads do not converge, it will provide a much more accurate reflection of the real risk of default. No one will earn a supra-normal rate of return.

On this blog, I have criticised the indiscriminate provision of “liquidity” backstops by central banks on many occasions. I have also asserted that key economic functions must be preserved, not the incumbent entities that provide such functions. In times of crisis, central banking interventions are only fair when they are effectively accessible to the masses. At this critical juncture, the socially just policy may also be the only option that can save the single currency project.


Written by Ashwin Parameswaran

December 10th, 2011 at 12:57 am

The Great Recession, Business Investment and Crony Capitalism

with 8 comments

Paul Krugman points out that since 1985, business investment has been purely a demand story i.e. “a depressed economy led to low business investment” and vice versa. As he explains, “The Great Recession, in particular, was led by housing and consumption, with business investment clearly responding rather than leading”. But this does not imply that low business investment played no causal role in the conditions that led to the Great Recession, or that increased business investment has no role to play in the recovery.

As Steve Roth notes, business investment has been anaemic throughout the neo-liberal era. JW Mason reminds us that the neo-liberal transition also coincided with a dramatically increased financialisation of the real economy. Throughout my series of posts on crony capitalism, I have argued that the structural and cyclical problems of the developed world are inextricably intertwined. The anaemic trend in business investment is the reason why the developed world has been in a ‘great stagnation’ for so long. This ‘investment deficit’ manifests itself as the ‘corporate savings glut’ and an increasingly financialised economy. The cause of the investment deficit is an increasingly financialised, cronyist, demosclerotic system where incumbent corporates do not face competitive pressure to engage in risky exploratory investment.

Business investment can either operate upon the scale of operations (e.g. capacity, product mix) or change the fundamental character of operations (e.g. changes in process or product). Investments in scaling up operations are most easily influenced by monetary policy initiatives which reduce interest rates and raise asset prices, or by direct fiscal policy initiatives which operate via the multiplier effect. Investments in process innovation require the presence of price competition within the industry. Investments in exploratory product innovation require not only competition amongst incumbent firms but also competition from a constant and robust stream of new entrants into the industry.

In an economy where new entrants are stymied by an ever-growing ‘License Raj’ that costs the US economy an estimated $100 billion per year, a web of regulations that exist primarily to protect incumbent large corporates and a dysfunctional patent regime, it is not surprising that exploratory business investment has fallen so dramatically. A less cronyist and more dynamically competitive economy without the implicit asset-price protection of the Greenspan/Bernanke put will generate lower aggregate profits but more investment. Incumbents need to be compelled to take on risky ventures by the threat of extinction and obsolescence. Increased investment in risky exploratory ventures will not only drag the economy out of the ‘Great Stagnation’ but will also result in a reduced share of GDP flowing to corporate profits and an increased proportion flowing towards wages. In turn, this enables the economy to achieve a sustainable state of full employment and even a higher level of sustainable consumption, without households having to resort to increased leverage as they did during the Great Moderation.

Alexander Field has illustrated how even the growth of the Golden Age of the 50s and the 60s was built upon the foundations of pre-WW2 innovation. If this thesis is correct, the ‘Great Stagnation’ was inevitable and in fact understates how long ago the innovation deficit started. The Great Moderation, far from being the cure, was simply a palliative that postponed the inevitable end-point of the evolution of the macroeconomy through successive cycles of Minskyian stabilisation. As I noted in a previous post:

The neoliberal transition unshackled the invisible hand (the carrot of the profit motive) without ensuring that all key sectors of the economy were equally subject to the invisible foot (the stick of failure and losses and new firm entry)….“Order for all” became “order for the classes and disorder for the masses”….In this increasingly financialised economy, the increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation. This tilt towards exploitation/cost-reduction without exploration kept inflation in check but it also implied a prolonged period of sub-par wage growth and a constant inability to maintain full employment unless the consumer or the government levered up. For the neo-liberal revolution to sustain a ‘corporate welfare state’ in a democratic system, the absence of wage growth necessitated an increase in household leverage for consumption growth to be maintained. 

When commentators such as James Livingston claim that tax cuts for businesses will not solve our problems and that we need a redistribution of income away from profits towards wages to trigger increased aggregate demand via aggregate consumption, I agree with them. But I disagree with the conclusion that the secular decline in business investment is inevitable, acceptable and unrelated to the current cyclical downturn. The fact that business investment during the Great Moderation only increased when consumption demand went up is a symptom of the corporatist nature of the economy. When the household sector has reached a state of peak debt and the financial system has reached its point of peak elasticity, simply running increased fiscal deficits without permitting the corporatist superstructure to collapse takes us to the end-state that Minsky himself envisioned: an economy that attempts to achieve full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation – somewhat like a broken shower that only runs too hot or too cold. The only way in which the corporatist status quo can postpone collapse is to abandon the goal of full employment, which is exactly the path that the developed world has taken. This merely substitutes a deeper social fragility for economic fragility.
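The ‘broken shower’ dynamic can be illustrated with a toy control loop: a policymaker who responds to deviations from target with too much force pushes the system past the target in the opposite direction each period, instead of settling it. This is only an illustrative sketch; the target, gain and starting point are invented numbers, not estimates of anything.

```python
# Toy illustration of the 'broken shower' dynamic: a policy controller that
# overreacts to deviations from target overshoots in the opposite direction
# each period, yo-yoing between 'too hot' (inflation) and 'too cold'
# (debt-deflation). All numbers are invented for illustration.

target = 2.0     # target inflation, %
gain = 1.8       # policy response per unit of deviation; gain > 1 overcorrects

inflation = 4.0  # start 'too hot'
for period in range(8):
    deviation = inflation - target
    inflation -= gain * deviation  # overcorrecting policy response
    print(f"period {period}: inflation = {inflation:+.2f}%")

# With 1 < gain < 2 the deviation flips sign every period and shrinks only
# slowly; with gain > 2 the oscillations grow without bound. Either way the
# system lurches between overshoots rather than settling at the target.
```

The point of the analogy is not the specific numbers but the structure: a fragile, overleveraged system forces policy into ever larger corrections, each of which seeds the next overshoot.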

Stability for all is synonymous with an environment of permanent innovative stagnation. The Schumpeterian solution is to transform the system into one of instability for all. Micro-fragility is the key to macro-resilience but this fragility must be felt by all economic agents, labour and capital alike. In order to end the stagnation and achieve sustainable full employment, we need to allow incumbent banks and financialised corporations to collapse and dismantle the barriers to entry of new firms that pervade the economy. The risk of a deflationary contraction from allowing such a collapse can be prevented in a simple and effective manner with a system of direct transfers to individuals, as Steve Waldman has outlined. This solution also reverses the flow of rents that have exacerbated inequality over the past few decades.

Note: I went through a much longer version of the same argument, with an emphasis on the relationship between employment and technology in US economic history, in a previous post. The above logic explains my disagreements with conventional Keynesian theory and my affinity with Post-Keynesian theory. Minsky viewed his theory as an ‘investment theory of the cycle and a financial theory of investment’ and my views are simply a neo-Schumpeterian take on the same underlying framework.


Written by Ashwin Parameswaran

December 7th, 2011 at 5:44 pm

Posted in Cronyism, Resilience

Rent-Seeking, The Progressive Agenda and Cash Transfers

with 20 comments

In my posts on the subject of cronyism and rent-seeking, I have drawn heavily on the work of Mancur Olson. My views are also shaped by my experience of cronyism in India, compared with the Olsonian competitive sclerosis that afflicts most developed economies today. Although there are significant differences between cronyism in the developing and developed world, there is also very significant common ground. In some respects, the rent-extraction apparatus in the developed world is just a more sophisticated version of the open corruption and looting that is common in many developing economies. This post explores some of this common ground.

Mancur Olson predicted the inexorable rise of rent-seeking in a stable economy. But he also thought that once rent-seeking activities extracted too high a proportion of a nation’s GDP, the normal course of democracy and public anger would rein them in: small rent-seekers can fly under the radar but big rent-seekers are ultimately cut back to size. Although there is some truth to this assertion, Olson was likely too optimistic about the existence of such limits. After all, it can easily be argued that rents extracted by banks already swallow up a significant proportion of GDP, and there is no shortage of corrupt public programs that consume significant proportions of public budgets in the developing world. In a nutshell, my argument is that rent-extraction can avoid these limits by aligning itself with the progressive agenda – the very programs that purport to help the masses become the source of rents for the classes.

A transparent example of this phenomenon is the experience of the Mahatma Gandhi National Rural Employment Guarantee – a public program that guarantees 100 days of work for unskilled rural labourers in India. In little more than half a decade since inception, it has come to account for 3% of public spending, and economists estimate that anywhere from a quarter to two-thirds of the expenditure does not reach those whom it is intended to help. So how does a program such as this not only survive but thrive? The answer is simple – despite the corruption, the scheme does disburse significant benefits to a large rural electorate. When faced with the choice of either tolerating a corrupt program or cancelling the program, the rural poor clearly prefer the status quo.

A rather more sophisticated example of this phenomenon is the endless black hole of losses that is Freddie Mac and Fannie Mae – $175 billion and counting. The press focuses on the comparatively small bonus payments to Freddie and Fannie executives but ignores their much larger role in the back-door bailout of the banking sector. Again, the reason why this goes relatively uncriticised is simple – despite the significant contribution made by Fannie and Freddie to the rents extracted by the “1%”, their operations also put money into the pockets of a vast cross-section of homeowners. Simply shutting them down would almost certainly constitute an act of political suicide.

[Chart omitted; source h/t David Ruccio]

The masses become the shield for the very programs that enable a select few to extract significant rents from the system. Programs like Fannie/Freddie that are supposed to be part of the liberal social agenda become the weapons through which the cronyist corporate structure perpetuates itself, while the broad-based support for these programs makes them incredibly resilient and hard to reform once they have taken root.

Those who cherish the progressive agenda tend to argue that better implementation and regulation can solve the problem of rent extraction. But there is another option – complex programs with egalitarian aims should be replaced with direct cash transfers wherever feasible. This case has been argued persuasively in a recent book as an effective way to help the poor in developing countries, and such transfers are already being implemented in India. There is no reason why the same approach cannot work in the developed world.


Written by Ashwin Parameswaran

November 7th, 2011 at 2:25 am

Innovation, Stagnation and Unemployment

with 18 comments

All economists assert that wants are unlimited. From this follows the view that technological unemployment is impossible in the long run. Yet there are a growing number of commentators (such as Brian Arthur) who insist that increased productivity from automation and improvements in artificial intelligence has a part to play in the current unemployment crisis. At the same time, a growing chorus laments the absence of innovation – Tyler Cowen’s thesis that the recent past has been a ‘Great Stagnation’ is compelling.

But don’t the two assertions contradict each other? Can we have an increase in technological unemployment as well as an innovation deficit? Is the concept of technological unemployment itself valid? Is there anything about the current phase of labour-displacing technological innovation that is different from the past 150 years? To answer these questions, we need a deeper understanding of the dynamics of innovation in a capitalist economy, i.e. how exactly have innovation and productivity growth proceeded in a manner consistent with full employment in the past? In the process, I also hope to connect the long-run structural dynamic with the Minskyian business cycle dynamic. It is common to view the structural dynamic of technological change as a sort of ‘deus ex machina’ – if not independent of the business cycle, certainly unconnected with it. I hope to convince some of you that our choices regarding business cycle stabilisation have a direct bearing on the structural dynamic of innovation. I have touched upon many of these topics in a scattered fashion in previous posts but this post is an attempt to present them coherently, with all my assumptions explicitly laid out in relation to established macroeconomic theory.

Micro-Foundations

Imperfectly competitive markets are the norm in most modern economies. In instances where economies of scale or network effects dominate, a market may even be oligopolistic or monopolistic (e.g. Google, Microsoft). This assumption is of course nothing new to conventional macroeconomic theory. Where my analysis differs is in viewing the imperfectly competitive process as one that is permanently in disequilibrium. Rents or “abnormal” profits are a persistent feature of the economy at the level of the firm and are not competed away even in the long run. The primary objective of incumbent rent-earners is to build a moat around their existing rents, whereas the primary objective of competition from new entrants is not to drive rents down to zero but to displace the incumbent rent-earner. It is not the absence of rents but the continuous threat to the survival of the incumbent rent-earner that defines a truly vibrant capitalist economy, i.e. each niche must be continually contested by new entrants. Even if the market for labour is perfectly competitive, this does not imply that an abnormal share of GDP goes to “capital”: most new entrants fail and suffer economic losses in their bid to capture economic rents, and even a dominant incumbent may lose a significant proportion of past rents in futile attempts to defend its competitive position before its eventual demise.

This emphasis on disequilibrium points to the fact that the “optimum” state for a dynamically competitive capitalist economy is one of constant competitive discomfort and disorder. This perspective leads to a dramatically different policy emphasis from conventional theory, which universally focuses on increasing positive incentives to economic players and relying on the invisible hand to guide the economy to a better equilibrium. Both Schumpeter and Marx understood the importance of this competitive discomfort for the constant innovative dynamism of a capitalist economy – my point is simply that a universal discomfort of capital is also important to maintain distributive justice in a capitalist economy. In fact, it is the only way to do so without sacrificing the innovative dynamism of the economy.

Competition in monopolistically competitive markets manifests itself through two distinct forms of innovation: exploitation and exploration. Exploitation usually takes the form of what James Utterback identified as process innovation with an emphasis on “real or potential cost reduction, improved product quality, and wider availability, and movement towards more highly integrated and continuous production processes.” As Utterback noted, such innovation is almost always driven by the incumbent firms. Exploitation is an act of optimisation under a known distribution i.e. it falls under the domain of homo economicus. In the language of fitness landscapes, exploitative process innovation is best viewed as competition around a local peak. On the other hand, exploratory product innovation (analogous to what Utterback identified as product innovation) occurs under conditions of significant irreducible uncertainty. Exploration is aimed at finding a significantly higher peak on the fitness landscape and as Utterback noted, is almost always driven by new entrants (For a more detailed explanation of incumbent preference for exploitation and organisational rigidity, see my earlier post).
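The fitness-landscape language can be made concrete with a toy hill-climbing model: exploitation is a small local step around the current peak, exploration a long jump into untested territory. This is only an illustrative sketch; the one-dimensional landscape and all parameters are invented for the example.

```python
# Toy fitness-landscape model of exploitation vs exploration.
# Exploitation: small local steps that reliably climb the nearest peak.
# Exploration: occasional long random jumps that may land near a higher peak.
# The landscape and all parameters are invented for illustration.
import math
import random

random.seed(42)

def fitness(x):
    # A rugged landscape: a local peak near x=2, a higher global peak near x=8.
    return math.exp(-(x - 2) ** 2) + 2.0 * math.exp(-((x - 8) ** 2) / 4)

def search(explore_prob, steps=2000):
    x = 2.0  # start on the local peak, like an incumbent defending its niche
    for _ in range(steps):
        if random.random() < explore_prob:
            candidate = random.uniform(0, 10)       # exploratory long jump
        else:
            candidate = x + random.gauss(0, 0.1)    # exploitative local step
        if fitness(candidate) > fitness(x):         # keep only improvements
            x = candidate
    return x, fitness(x)

for p in (0.0, 0.05):
    x, fit = search(p)
    print(f"explore_prob={p:.2f}: ends at x={x:.2f}, fitness={fit:.2f}")
# Pure exploitation stays stuck on the local peak near x=2 (fitness ~1);
# even a little exploration finds the higher peak near x=8 (fitness ~2).
```

The asymmetry in the model mirrors the one in the text: local search is safe and reliably profitable, while the long jumps that find higher peaks fail most of the time.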

An Investment Theory of the Business Cycle

Soon after publishing the ‘General Theory’, Keynes summarised his thesis as follows: “given the psychology of the public, the level of output and employment as a whole depends on the amount of investment. I put it in this way, not because this is the only factor on which aggregate output depends, but because it is usual in a complex system to regard as the causa causans that factor which is most prone to sudden and wide fluctuation.” In Keynes‘ view, the investment decision was undertaken in a condition of irreducible uncertainty, “influenced by our views of the future about which we know so little”. Just how critical the level of investment is in maintaining full employment is highlighted by GLS Shackle in his interpretation of Keynes’ theory: “In a money-using society which wishes to save some of the income it receives in payment for its productive efforts, it is not possible for the whole (daily or annual) product to be sold unless some of it is sold to investors and not to consumers. Investors are people who put their money on time-to-come. But they do not have to be investors. They can instead be liquidity-preferrers; they can sweep up their chips from the table and withdraw. If they do, they will give no employment to those who (in face of society’s propensity to save) can only be employed in making investment goods, things whose stream of usefulness will only come out over the years to come.”

If we accept this thesis, then it is no surprise that the post–2008 recovery has been quite so anaemic. Investment spending has remained low throughout the developed world, nowhere more so than in the United Kingdom. What makes this low level of investment even more surprising is the strength of the rebound in corporate profits and balance sheets – corporate leverage in the United States is as low as it has been for two decades and the proportion of cash in total assets as high as it has been for almost half a century. Notably, the United States has also experienced an unusual increase in labour productivity during the recession, which has exacerbated the disconnect between the recovery in GDP and employment. Some of these unusual patterns have been with us for much longer than the 2008 financial crisis. For example, the disconnect between GDP and employment in the United States has been obvious since at least 1990, and the recession of the early 2000s also saw an unusual rise in labour productivity. The labour market has been slack for at least a decade. It is hard to differ from Paul Krugman’s intuition that the character of post–1980 business cycles has changed. Europe and Japan are not immune from these “structural” patterns either – the ‘corporate savings glut’ has been a problem in the United Kingdom since at least 2002, and Post-Keynesian economists have been pointing out the relationship between ‘capital accumulation’ and unemployment for a while, even attributing the persistently high unemployment in Europe to a lack of investment. Japan’s condition for the last decade is better described as a ‘corporate savings trap’ than a ‘liquidity trap’. Even in Greece, that poster child for fiscal profligacy, the recession is accompanied by a collapse in private sector investment.

A Theory of Business Investment

Business investments can typically either operate upon the scale of operations (e.g. capacity, product mix) or change the fundamental character of operations (e.g. changes in process or product). The degree of irreducible uncertainty in capacity and product mix decisions has fallen dramatically in the last half-century. The ability of firms to react quickly and effectively to changes in market conditions has improved dramatically with improvements in production processes and information technology – Zara being a well-researched example. Investments that change the very nature of business operations are what we typically identify as innovations. However, not all innovation decisions are subject to irreducible uncertainty either. In a seminal article, James March distinguished between “the exploration of new possibilities and the exploitation of old certainties. Exploration includes things captured by terms such as search, variation, risk taking, experimentation, play, flexibility, discovery, innovation. Exploitation includes such things as refinement, choice, production, efficiency, selection, implementation, execution.” Exploratory innovation operates under conditions of irreducible uncertainty whereas exploitation is an act of optimisation under a known distribution.

Investments in scaling up operations are most easily influenced by monetary policy initiatives which reduce interest rates and raise asset prices, or by direct fiscal policy initiatives which operate via the multiplier effect. In recent times, especially in the United States and the United Kingdom, the reduction in rates has also directly facilitated the levering up of consumer balance sheets and a reduction in the interest-servicing burden of past consumer debt. The resulting boost to consumer spending and demand also stimulates businesses to invest in expanding capacity. Exploitative innovation requires the presence of price competition within the industry, i.e. monopolies or oligopolies have little incentive to make their operations more efficient beyond the price point where demand for their product is essentially inelastic. This sounds like an exceptional case but is in fact very common in critical industries such as finance and healthcare. Exploratory innovation requires not only competition amongst incumbent firms but also competition from a constant and robust stream of new entrants into the industry. I outlined the rationale for this in a previous post:

Let us assume a scenario where the entry of new firms has slowed to a trickle, the sector is dominated by a few dominant incumbents and the S-curve of growth is about to enter its maturity/decline phase. To trigger off a new S-curve of growth, the incumbents need to explore. However, almost by definition, the odds that any given act of exploration will be successful is small. Moreover, the positive payoff from any exploratory search almost certainly lies far in the future. For an improbable shot at moving from a position of comfort to one of dominance in the distant future, an incumbent firm needs to divert resources from optimising and efficiency-increasing initiatives that will deliver predictable profits in the near future. Of course if a significant proportion of its competitors adopt an exploratory strategy, even an incumbent firm will be forced to follow suit for fear of loss of market share. But this critical mass of exploratory incumbents never comes about. In essence, the state where almost all incumbents are content to focus their energies on exploitation is a Nash equilibrium.
On the other hand, the incentives of any new entrant are almost entirely skewed in favour of exploratory strategies. Even an improbable shot at glory is enough to outweigh the minor consequences of failure. It cannot be emphasised enough that this argument does not depend upon the irrationality of the entrant. The same incremental payoff that represents a minor improvement for the incumbent is a life-changing event for the entrepreneur. When there exists a critical mass of exploratory new entrants, the dominant incumbents are compelled to follow suit and the Nash equilibrium of the industry shifts towards the appropriate mix of exploitation and exploration.
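A crude expected-utility calculation captures the asymmetry described in the passage above. All probabilities and payoffs below are invented, and logarithmic utility is an assumption; the point is simply that a fixed-size jackpot is life-changing at a small scale and a rounding error at a large one, while the sure profits from exploitation scale with the size of existing operations.

```python
# Crude expected-utility sketch of why entrants explore and incumbents exploit.
# All numbers are invented for illustration; log utility encodes diminishing
# marginal utility of wealth.
import math

def utility(w):
    return math.log(w)  # concave: diminishing marginal utility

P_SUCCESS = 0.05   # odds that an exploratory bet pays off
JACKPOT = 100.0    # fixed-size prize: dominance of a new market niche

def gain_from_exploring(wealth):
    # Expected utility gain from an improbable shot at the jackpot.
    return P_SUCCESS * (utility(wealth + JACKPOT) - utility(wealth))

def gain_from_exploiting(wealth):
    # Sure utility gain from optimising existing operations; the profit is
    # proportional to the scale of those operations, favouring the incumbent.
    return utility(wealth * 1.01) - utility(wealth)

for label, wealth in [("entrant", 5.0), ("incumbent", 500.0)]:
    explore = gain_from_exploring(wealth)
    exploit = gain_from_exploiting(wealth)
    choice = "explore" if explore > exploit else "exploit"
    print(f"{label:9s}: explore={explore:.4f}, exploit={exploit:.4f} -> {choice}")
# entrant  : explore=0.1522, exploit=0.0100 -> explore
# incumbent: explore=0.0091, exploit=0.0100 -> exploit
```

The monetary gamble is identical for both players; only the scale of their existing operations differs, and that alone flips the equilibrium strategy.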

A Theory of Employment

My fundamental assertion is that a constant and high level of uncertain, exploratory investment is required to maintain a sustainable and resilient state of full employment. And as I mentioned earlier, exploratory investment driven by product innovation requires a constant threat from new entrants.

Long-run increases in aggregate demand require product innovation. As Rick Szostak notes:

While in the short run government spending and investment have a role to play, in the long run it is per capita consumption that must rise in order for increases in per capita output to be sustained… the reason that we consume many times more than our great-grandparents is not to be found for the most part in our consumption of greater quantities of the same items which they purchased… The bulk of the increase in consumption expenditures, however, has gone towards goods and services those not-too-distant forebears had never heard of, or could not dream of affording… Would we as a society of consumers/workers have striven as hard to achieve our present incomes if our consumption bundle had only deepened rather than widened? Hardly. It should be clear to all that the tremendous increase in per capita consumption in the past century would not have been possible if not for the introduction of a wide range of different products. Consumers do not consume a composite good X. Rather, they consume a variety of goods, and at some point run into a steeply declining marginal utility from each. As writers as diverse as Galbraith and Marshall have noted, if declining marginal utility exists with respect to each good it holds over the whole basket of goods as well… The simple fact is that, in the absence of the creation of new goods, aggregate demand can be highly inelastic, and thus falling prices will have little effect on output.

Therefore, when cost-cutting and process optimisation enable a product to be sold at a lower price, the economy may not be able to reorganise back to full employment simply through increased demand for that particular product. In the early stages of a product’s life, when demand is sufficiently elastic, process innovation can increase employment. But as the product ages, process improvements have a steadily negative effect on employment (the sketch below works through the arithmetic).
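A back-of-the-envelope calculation illustrates the mechanism. Assume a constant-elasticity demand curve (an invented functional form, used purely for illustration): when demand is inelastic, a cost-reducing process innovation raises output less than proportionally to the productivity gain, so industry employment falls.

```python
# Back-of-the-envelope: process innovation in a mature industry with inelastic
# demand. Employment = output / productivity. With constant-elasticity demand
# Q = scale * price**(-elasticity), a cost (and price) cut raises output less
# than proportionally when elasticity < 1, so employment falls.
# All numbers are invented for illustration.

def employment(price, productivity, elasticity, scale=1000.0):
    quantity = scale * price ** (-elasticity)   # demand curve
    return quantity / productivity              # workers needed

ELASTICITY = 0.5  # mature product: demand is inelastic

before = employment(price=1.00, productivity=1.0, elasticity=ELASTICITY)
# Process innovation: 20% more output per worker, passed through as a 20% price cut.
after = employment(price=0.80, productivity=1.2, elasticity=ELASTICITY)

print(f"employment before: {before:.0f}, after: {after:.0f}")
# -> before: 1000, after: 932. Output rises ~12% but productivity rises 20%,
#    so employment falls.
```

Rerunning with a higher elasticity (say 1.5, a young product) reverses the result, matching the claim above that process innovation raises employment only while demand is still elastic.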

Eventually, a successful reorganisation back to full employment entails creating demand for new products. If such new products were simply an addition to the set of products that we consume, disruption would be minimal. But almost any significant new product that arises from exploratory investment also destroys an old product. The tablet cannibalises the netbook, the smartphone cannibalises the camera etc. This of course is the destruction in Schumpeter’s creative destruction. It is precisely because of this cannibalistic nature of exploratory innovation that established incumbents rarely engage in it, unless compelled to do so by the force of new entrants. Burton Klein put it well: “firms involved in such competition must compare two risks: the risk of being unsuccessful when promoting a discovery or bringing about an innovation versus the risk of having a market stolen away by a competitor: the greater the risk that a firm’s rivals take, the greater must be the risks to which it must subject itself for its own survival.” Even when new firms enter a market at a healthy pace, it is rare that incumbent firms are successful at bringing about disruptive exploratory changes. When the pace of dynamic competition is slow, incumbents can simply maintain slack and wait for a promising new technology to emerge, which they can then buy up rather than risking investment in some uncertain new technology.

We need exploratory investment because this expansion of the economy into its ‘adjacent possible’ does not derive its thrust from the consumer but from the entrepreneur. In other words, new wants are not demanded by the consumers but are instead created by entrepreneurs such as Steve Jobs. In the absence of dynamic competition from new entrants, wants remain limited.

In essence, this framework incorporates technological innovation into a distinctly “Chapter 12” Keynesian view of the business cycle. Although my views are far removed from macroeconomic orthodoxy, they are not quite so radical that they have no precedents whatsoever. My views can be seen as a simple extension of Burton Klein’s seminal work outlined in his books ‘Dynamic Economics’ and ‘Prices, wages, and business cycles: a dynamic theory’. But the closest parallels to this explanation can be found in Rick Szostak’s book ‘Technological innovation and the Great Depression’. Szostak uses an almost identical rationale to explain unemployment during the Great Depression, “how an abundance of labor-saving production technology coupled with a virtual absence of new product innovation could affect consumption, investment and the functioning of the labor market in such a way that a large and sustained contraction in employment would result.”

As I have hinted at in a previous post, this is not a conventional “structural” explanation of unemployment. Szostak explains the difference: “An alternative technological argument would be that the skills required of the workforce changed more rapidly in the interwar period than did the skills possessed by the workforce. Thus, there were enough jobs to go around; workers simply were not suited to them, and a painful decade of adjustment was required…I argue that in fact there simply were not enough jobs of any kind available.” In other words, this is a partly technological explanation for the shortfall in aggregate demand.

The Invisible Foot and New Firm Entry

The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy:

Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels… But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.

For disruptive innovation to persist, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition”. Burton Klein’s great contribution, along with Gunnar Eliasson’s, was to highlight the critical importance of the entry of new firms in maintaining the efficacy of the invisible foot. Klein believed that

the degree of risk taking is determined by the robustness of dynamic competition, which mainly depends on the rate of entry of new firms. If entry into an industry is fairly steady, the game is likely to have the flavour of a highly competitive sport. When some firms in an industry concentrate on making significant advances that will bear fruit within several years, others must be concerned with making their long-run profits as large as possible, if they hope to survive. But after entry has been closed for a number of years, a tightly organised oligopoly will probably emerge in which firms will endeavour to make their environments highly predictable in order to make their short-run profits as large as possible… Because of new entries, a relatively concentrated industry can remain highly dynamic. But, when entry is absent for some years, and expectations are premised on the future absence of entry, a relatively concentrated industry is likely to evolve into a tight oligopoly. In particular, when entry is long absent, managers are likely to be more and more narrowly selected; and they will probably engage in such parallel behaviour with respect to products and prices that it might seem that the entire industry is commanded by a single general!

This argument does not depend on incumbent firms leaving money on the table – on the contrary, they may redouble their attempts at cost reduction via process innovation in times of deficient demand. Rick Szostak documents how “despite the availability of a massive amount of inexpensive labour, process innovation would continue in the 1930s. Output per man-hour in manufacturing rose by 25% in the 1930s… national output was higher in 1939 than in 1929, while employment was over two million less.”

Macroeconomic Policy and Exploratory Product Innovation

Monetary policy has been the preferred cure for insufficient aggregate demand throughout and since the Great Moderation. The argument goes that lower real rates, higher inflation and higher asset prices will increase investment via Tobin’s Q and increase consumption via the wealth effect and the reduced reward to saving, all bound together in the virtuous cycle of the multiplier. If monetary policy is insufficient, fiscal policy may be deployed, with a focus on either directly increasing aggregate demand or providing businesses with supply-side incentives such as tax cuts.

There is a common underlying theme to all of the above policy options – they focus on the question “how do we make businesses want to invest?”, i.e. on positively incentivising incumbent businesses and startups and trusting that the invisible hand will do the rest. In the context of exploratory investments, the appropriate question is instead “how do we make businesses have to invest?”, i.e. how do we compel incumbent firms to invest in speculative projects in order to defend their rents, on pain of losing out to new entrants if they fail to do so. But the problem isn’t just that these policies are ineffectual. Many of the policies that focus on positive incentives weaken the competitive discomfort of the invisible foot by helping to entrench the competitive position of incumbent corporates and reducing their incentive to engage in exploratory investment. It is in this context that interventions such as central bank purchases of assets and fiscal stimulus measures that dole out contracts to the favoured do permanent harm to the economy.

The division that matters from the perspective of maintaining the appropriate level of exploratory investment and product innovation is not monetary vs fiscal but the division between existing assets and economic interests on the one hand and new firms and entrepreneurs on the other. Almost all monetary policy initiatives focus on purchasing existing assets from incumbent firms or reducing real rates for incumbent banks and their clients. A significant proportion of fiscal policy does the same. The implicit assumption is, as Nick Rowe notes, that there is “high substitutability between old and new investment projects, so the previous owners of the old investment projects will go looking for new ones with their new cash”. This assumption does not hold in the case of exploratory investments – asset-holders will chase after a replacement asset, but that asset will most likely be an existing investment project, not a new one. The result of the intervention will be an increase in the prices of such assets, but it will not feed into any “real” new investment activity. In other words, the Tobin’s Q effect is negligible for exploratory investments in the short run and in fact negative in the long run, as the accumulated rents derived from monetary and fiscal intervention reduce the need for incumbent firms to engage in such speculative investment.

A Brief History of the Post-WW2 United States Macroeconomy

In this section, I’m going to use the above framework to make sense of the evolution of the macroeconomy in the United States after WW2. The framework is relevant for post–70s Europe and Japan as well, which is why the ‘investment deficit problem’ afflicts almost the entire developed world today. But the details differ quite significantly, especially with regard to the distributional choices made in different countries.

The Golden Age

The 50s and the 60s are best characterised as a period of “order for all”: as Bill Lazonick put it, a system of “oligopolistic competition, career employment with one company, and regulated financial markets”. The ‘Golden Age’ delivered prosperity for a few reasons:

  • As Minsky noted, the financial sector had only just begun the process of adapting to and circumventing regulations designed to constrain and control it. As a result, the Fed had as much control over credit creation and bank policies as it would ever have.
  • The pace of both product and process innovation had slowed down significantly in the real economy, especially in manufacturing. Much of the productivity growth came from product innovations that had already been made prior to WW2. As Alexander Field explains (on the slowdown in manufacturing TFP): “Through marketing and planned obsolescence, the disruptive force of technological change – what Joseph Schumpeter called creative destruction – had largely been domesticated, at least for a time. Whereas large corporations had funded research leading to a large number of important innovations during the 1930s, many critics now argued that these behemoths had become obstacles to transformative innovation, too concerned about the prospect of devaluing rent-yielding income streams from existing technologies. Disruptions to the rank order of the largest U.S. industrial corporations during this quarter century were remarkably few. And the overall rate of TFP growth within manufacturing fell by more than a percentage point compared with the 1930s and more than 3.5 percentage points compared with the 1920s.”
  • Apart from the fact that the economy had to catch up to earlier product innovation, the dominant position of the US in the global economy post WW2 limited the impact from foreign competition.

It was this peculiar confluence of factors that enabled a system of “order and stability for all” without triggering a complete collapse in productivity or financial instability – a system where both labour and capital were equally strong and protected and shared in the rents available to all.

Stagflation

The 70s are best described as the time when this ordered, stabilised system could not be sustained any longer.

  • By the late 60s, the financial sector had adapted to the regulatory environment. Innovations such as the Fed Funds market and the Eurodollar market gradually came into being, making credit creation and bank lending increasingly difficult for the Fed to control. Reserves were no longer a binding constraint on bank operations.
  • The absence of real competition, either on price or from new entrants, meant that both process and product innovation remained low, just as during the Golden Age – but now there were no more low-hanging fruit to pick from past product innovations. A secular slowdown in productivity therefore took hold.
  • The rest of the world had caught up, and foreign competition began to intensify.

As Burton Klein noted, “competition provides a deterrent to wage and price increases because firms that allow wages to increase more rapidly than productivity face penalties in the form of reduced profits and reduced employment”. In the absence of adequate competition, demand is inelastic and there is little pressure to reduce costs. As the level of price/cost competition falls, more and more unemployment is required to keep inflation under control. Even worse, as Klein noted, it only takes the absence of competition in a few key sectors for the disease to afflict the entire economy. Controlling overall inflation in the macroeconomy when a few key sectors are sheltered from competitive discomfort requires monetary action that extracts a disproportionate amount of pain from the remainder of the economy. Stagflation is the inevitable consequence in a stabilised economy suffering from progressive competitive sclerosis.

The “Solution”

By the late 70s, the pressures and conflicts of the system of “order for all” meant that change was inevitable. The result was what is commonly known as the neoliberal revolution. There are many different interpretations of this transition. To right-wing commentators, neoliberalism signified a much-needed transition towards a free-market economy. Most left-wing commentators lament the resultant supremacy of capital over labour and rising inequality. For some, the neoliberal era started with Paul Volcker having the courage to inflict the required pain to break the back of inflationary forces and continued with central banks learning the lessons of the past which gave us the Great Moderation.

All these explanations are relevant but in my opinion, they are simply a subset of a larger and simpler explanation. The prior economic regime was a system where both the invisible hand and the invisible foot were shackled – firms were protected but their profit motive was also shackled by the protection provided to labour. The neoliberal transition unshackled the invisible hand (the carrot of the profit motive) without ensuring that all key sectors of the economy were equally subject to the invisible foot (the stick of failure and losses and new firm entry). Instead of tackling the root problem of progressive competitive and democratic sclerosis and cronyism, the neoliberal era provided a stop-gap solution. “Order for all” became “order for the classes and disorder for the masses”. As many commentators have noted, the reality of neoliberalism is not consistent with the theory of classical liberalism. Minsky captured the hypocrisy well: “Conservatives call for the freeing of markets even as their corporate clients lobby for legislation that would institutionalize and legitimize their market power; businessmen and bankers recoil in horror at the prospect of easing entry into their various domains even as technological changes and institutional evolution make the traditional demarcations of types of business obsolete. In truth, corporate America pays lip service to free enterprise and extols the tenets of Adam Smith, while striving to sustain and legitimize the very thing that Smith abhorred – state-mandated market power.”

The critical component of this doctrine is the emphasis on macroeconomic and financial sector stabilisation implemented primarily through monetary policy focused on the banking and asset price channels of policy transmission:
  • Any significant fall in asset prices (especially equity prices) has been met with a strong stimulus from the Fed, i.e. the ‘Greenspan Put’. In his plea for increased quantitative easing via purchases of agency MBS, Joe Gagnon captured the logic of this policy: “This avalanche of money would surely push up stock prices, push down bond yields, support real estate prices, and push up the value of foreign currencies. All of these financial developments would stimulate US economic activity.” In other words, prop up asset prices and the real economy will mend itself.
  • Similarly, Fed and Treasury policy has ensured that none of the large banks can fail. In particular, bank creditors have been shielded from any losses. The argument is that allowing banks to fail will cripple the flow of credit to the real economy and result in a deflationary collapse that cannot be offset by conventional monetary policy alone. This was the logic for giving banks access to a panoply of Federal Reserve liquidity facilities at the height of the crisis. In other words, prop up the banks and the real economy will mend itself.

In this increasingly financialised economy, “the increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation.” This tilt towards exploitation/cost-reduction without exploration kept inflation in check but it also implied a prolonged period of sub-par wage growth and a constant inability to maintain full employment unless the consumer or the government levered up. For the neo-liberal revolution to sustain a ‘corporate welfare state’ in a democratic system, the absence of wage growth necessitated an increase in household leverage for consumption growth to be maintained. The monetary policy doctrine of the Great Moderation exacerbated the problem of competitive sclerosis and the investment deficit but it also provided the palliative medicine that postponed the day of reckoning. The unshackling of the financial sector was a necessary condition for this cure to work its way through the economy for as long as it did.

It is this focus on the carrot of higher profits that also triggered the widespread adoption of high-powered incentives such as stock options and bonuses to align manager and stockholder interests. When the risk of being displaced by innovative new entrants is low, high-powered managerial incentives tilt the firm towards process innovation, cost reduction, optimisation of leverage etc. From the stockholders’ and managers’ perspective, the focus on short-term profits is a feature, not a bug.

The Dénouement

So long as unemployment and consumption could be propped up by increasing leverage from the consumer and/or the state, the long-run shortage of exploratory product innovation and the stagnation in wages could be swept under the rug and economic growth could be maintained. But there is every sign that the household sector has reached a state of peak debt and the financial system has reached its point of peak elasticity. The policy that worked so well during the Great Moderation is now simply focused on preventing the collapse of the cronyist and financialised economy. The system has become so fragile that Minsky’s vision is more correct than ever – an economy at full employment will yo-yo uncontrollably between a state of debt-deflation and high, variable inflation. Instead, the goal of full employment seems to have been abandoned in order to postpone the inevitable collapse. This merely substitutes a deeper social fragility for economic fragility.

The aim of full employment is made even harder with the acceleration of process innovation due to advances in artificial intelligence and computerisation. Process innovation gives us technological unemployment while at the same time the absence of exploratory product innovation leaves us stuck in the Great Stagnation.


The solution preferred by the left is to somehow recreate the golden age of the 50s and the 60s i.e. order for all. Apart from the impossibility of retrieving the docile financial system of that age (which Minsky understood), the solution of micro-stability for all is an environment of permanent innovative stagnation. The Schumpeterian solution is to transform the system into one of disorder for all, masses and classes alike. Micro-fragility is the key to macro-resilience but this fragility must be felt by all economic agents, labour and capital alike. In order to end the stagnation and achieve sustainable full employment, we need to allow incumbent banks and financialised corporations to collapse and dismantle the barriers to entry of new firms that pervade the economy (e.g. occupational licensing, the patent system). But this does not imply that the macroeconomy should suffer from a deflationary contraction. Deflation can be prevented in a simple and effective manner with a system of direct transfers to individuals as Steve Waldman has outlined. This solution reverses the flow of rents that have exacerbated inequality over the past few decades, as well as tackling the cronyism and demosclerosis that is crippling innovation and preventing full employment.


Written by Ashwin Parameswaran

November 2nd, 2011 at 7:29 pm

A Simple Policy Program for Macroeconomic Resilience

with 49 comments

The core logic behind my critique of macroeconomic stabilisation is that stability (and stabilisation) breeds systemic fragility. But this does not imply an opposition to all macroeconomic intervention, especially in a scenario when past stabilisation has left the macroeconomy in a fragile state. It simply insists on restricting our interventions to actions that preserve the essential adaptive character and creative destruction of our economic system.

A resilient framework of macroeconomic interventions must satisfy the following conditions:

  • a focus on mitigating the most damaging consequences of disturbances on the macroeconomy rather than stamping out the disturbance at its source.
  • a focus on discretionary interventions targeted at individuals rather than corporate limited-liability entities and limited to times of systemic crises.
  • an emphasis on maintaining general economic capacities and competences rather than protecting the specific incumbent entities that provide an economic function at any given point in time.

In theory, monetary and fiscal policy interventions can easily fulfil all these criteria. In practice, however, the history of both interventions is characterised by a systematic violation of all of them. The long history of propping up insolvent financial institutions via the TBTF guarantee and central bank ‘liquidity facilities’, combined with the doling out of fiscal favours to incumbent corporates, has left us with a fragile and unequal economic system. As Michael Lewis puts it, we have “socialism for the capitalists and capitalism for everybody else” and the system shows no signs of changing despite the abysmal results so far. To paraphrase Robert Reich, behind every potential “resolution” of a debt crisis lies yet another bailout for the banks.

The pro-bailout proponents argue that there is no other option. According to them, allowing the banks to fail will bring about a certain economic collapse. In this post, I will argue against the notion that bank bailouts are inevitable and unavoidable. I will also lay out a coherent and simple alternative policy program to get us out of the mess we are currently in without undergoing a systemic collapse.

My policy proposal has three legs all of which need to be implemented simultaneously:

  • Allow Failure: Allow insolvent banks and financialised corporations to fail.
  • The Helicopter Drop: Institute a system of direct transfers to individuals (a helicopter drop) to mitigate the deflationary fallout from bank failure.
  • Entry of New Banks: Allow fast-track approvals of new banks to restore banking capacity in the economy.

The argument against allowing bank and corporate failure is that it will trigger a catastrophic deflationary collapse in the economy while at the same time crippling the lending capacity available to businesses and households. The helicopter drop of direct transfers helps prevent a deflationary collapse and the entry of new banks helps maintain lending capacity, thus negating both concerns.

The Helicopter Drop

In order to promote system resilience and minimise moral hazard, any system of direct transfers must be directed only at individuals and it must be a discretionary policy tool utilised only to mitigate the risk of systemic crises. The discretionary element is crucial, as tail-risk protection directed at individuals has minimal moral hazard implications if it is uncertain even to the slightest degree. Transfers must not be directed to corporate entities – even uncertain tail-risk protection provided to corporates will eventually be gamed. The critical difference between individuals and corporates in this regard is the ability of stockholders and creditors to spread their bets across corporate entities and ensure that the failure of any one bet has only a limited impact on the individual investor's finances. In an individual's case, the risk of failure is by definition concentrated, and the uncertain nature of the transfer will ensure that moral hazard implications are minimal. This conception of transfers as a macro-intervention tool is very different from ideas that assume constant, regular transfers or a steady safety net such as an income guarantee, job guarantee or social credit.
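
The diversification asymmetry can be made concrete with a minimal Monte Carlo sketch (in Python; all probabilities and payoffs below are invented purely for illustration, not estimates). A creditor diversified across many corporate bets can treat even an uncertain backstop as a near-certain subsidy and will game it; an individual whose 'bet' is concentrated cannot.

    import random

    # Hypothetical parameters - illustrative only
    P_BAILOUT = 0.5    # chance the uncertain backstop actually materialises
    P_FAIL = 0.3       # chance a levered ("gamed") position fails
    PAYOFF_OK, PAYOFF_FAIL = 1.25, 0.0
    N_FIRMS, N_TRIALS = 100, 10_000

    def gamed_bet():
        """One levered bet: fails with P_FAIL unless the backstop rescues it."""
        if random.random() < P_FAIL and random.random() > P_BAILOUT:
            return PAYOFF_FAIL
        return PAYOFF_OK

    # A creditor spread across N_FIRMS entities: the law of large numbers turns
    # an *uncertain* backstop into a near-certain subsidy, so gaming pays.
    trials = [sum(gamed_bet() for _ in range(N_FIRMS)) / N_FIRMS
              for _ in range(N_TRIALS)]
    print(f"diversified creditor: mean {sum(trials)/N_TRIALS:.3f}, "
          f"worst trial {min(trials):.3f}")

    # An individual holds one concentrated bet: the worst case remains total
    # ruin, so the uncertain transfer cannot be banked upon ex ante.
    ruined = sum(gamed_bet() == PAYOFF_FAIL for _ in range(N_TRIALS)) / N_TRIALS
    print(f"individual: probability of ruin {ruined:.3f}")

Under these invented numbers, gaming has a positive expected payoff for the diversified creditor even though the backstop is a coin flip, while the individual still faces roughly a 15% chance of ruin – which is why uncertainty disciplines individuals but not corporates.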

Entry of New Banks

I have discussed in a previous post why entry of new banks allows us to preserve bank lending capacity without bailing out the incumbent banks. A similar idea has been laid out by David Merkel as a more resilient way to undertake TARP-like interventions. The fundamental principle is quite simple – system resilience refers to the ability to retain the same function while adapting to a disturbance. It does not imply that the function must be provided by the same incumbent entities. In fact, we are already beginning to see an expansion in non-bank credit as the era of low borrowing costs due to the implicit guarantee to bank creditors comes to an end. New banks unencumbered by the need to make up their past losses will be much better positioned to meet the credit demand from the real economy. The process of new firm entry in banking can be encouraged in many ways:

  • Fast-track approvals
  • Reduced capital requirements
  • TARP-like seed capital participation as David Merkel has laid out.

Many commentators have criticised the 'Occupy Wall Street' movement for not having an agenda and a list of demands. But as Michael Lewis points out, their protests are not without merit. The slogan 'We are the 99 percent' captures the essence of the problem, which is the explosion of the share of national income captured by the richest 1% of the population. If this inequality were perceived to be fair, or if it had occurred at a time of prosperity for the masses, it is unlikely that there would have been any protest at all. But as I have pointed out, the rise in income captured by the richest 1% is primarily driven by the rents captured by and through the financial sector. The same doctrine of macroeconomic stabilisation that acted as the source of these rents has also transformed the economy into a financialised and cronyist system unable to sustain a broad-based and sustainable recovery. Simply allowing the failure of insolvent banks and financialised corporations and putting an end to the flow of rents towards the banks will go a long way towards reducing the level of inequality in the economy. At the same time, the entry of new firms will restore the economy's competitive and innovative dynamism.

Written by Ashwin Parameswaran

October 5th, 2011 at 5:38 pm

Macroeconomic Stabilisation and Financialisation in The Real Economy

with 35 comments

The argument against stabilisation is akin to a broader, more profound form of the moral hazard argument. But the ecological ‘systems’ approach is much more widely applicable than the conventional moral hazard argument for a couple of reasons:

  • The essence of the Minskyian explanation is not that economic agents get fooled by the period of stability or that they are irrational. It is that there are sufficient selective forces (especially amongst principal-agent relationships) in the modern economy for the moral hazard outcome to be achieved even without any active intentionality on the part of economic agents to game the system.
  • The micro-prudential consequences of stabilisation and moral hazard are dwarfed by their macro-prudential systemic consequences. The composition of agents changes and becomes less diverse as those firms and agents that try to follow more resilient or less leveraged strategies are outcompeted and weeded out – this loss of diversity is exacerbated by banks' adaptation to the intervention strategies preferred by central banks in order to minimise their losses. And most critically, the suppression of disturbances increases the connectivity and reduces the 'patchiness' and modularity of the macroeconomic system. In the absence of disturbances, connectivity builds up within the network, both within and between scales. Increased within-scale connectivity increases the severity of disturbances and increased between-scale connectivity increases the probability that a disturbance at a lower level will propagate up to higher levels and cause systemic collapse (the toy simulation after this list illustrates the threshold effect).
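
A toy percolation simulation can make the connectivity point concrete. The sketch below (Python; network size, degree and transmission probability are all invented for illustration) seeds a single failure in a random network and measures the share of the system that topples: below a threshold level of connectivity, cascades stay local; above it, they become systemic.

    import random

    def cascade_size(n_nodes, avg_degree, p_transmit, seed_node=0):
        """Share of a random network toppled by a single initial failure."""
        nbrs = {i: [] for i in range(n_nodes)}
        for _ in range(int(n_nodes * avg_degree / 2)):   # wire up random edges
            i, j = random.sample(range(n_nodes), 2)
            nbrs[i].append(j)
            nbrs[j].append(i)
        failed, frontier = {seed_node}, [seed_node]
        while frontier:                                  # contagion wave
            node = frontier.pop()
            for nb in nbrs[node]:
                if nb not in failed and random.random() < p_transmit:
                    failed.add(nb)
                    frontier.append(nb)
        return len(failed) / n_nodes

    # Suppressing disturbances lets connectivity build up: sweep the average
    # degree and watch cascade size jump past a percolation-style threshold.
    for k in [1, 2, 4, 8]:
        runs = [cascade_size(500, k, p_transmit=0.4) for _ in range(200)]
        print(f"avg degree {k}: mean cascade = {100 * sum(runs) / len(runs):.1f}%")

With these invented parameters the threshold sits at an average degree of 2.5 (degree × transmission probability = 1), so cascades stay local at degree 1–2 and become systemic at 4–8.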

Macro-stabilisation therefore breeds fragility in the financial sector. But what about the real economy? One could argue that in the long run, it is creative destruction in the real economy that drives economic growth and surely macro-stabilisation does not impede the pace of long-run innovation? Moreover, even if non-financial economic agents were ‘Ponzi borrowers’, wouldn’t real economic shocks be sufficient to deliver the “disturbances” consistent with macroeconomic resilience? Unfortunately, the assumption that nominal income stabilisation has no real impact is too simplistic. Macroeconomic stabilisation is one of the key drivers of the process of financialisation through which it transmits financial fragility throughout the real economy and hampers the process of exploratory innovation and creative destruction.

Financialisation is a term with many definitions. Since my focus is on financialisation in the corporate domain (rather than in the household sector), Greta Krippner's definition of financialisation as a "pattern of accumulation in which profit making occurs increasingly through financial channels rather than through trade and commodity production" is closest to the mark. But from a resilience perspective, it is more accurate to define financialisation as a "pattern of accumulation in which risk-taking occurs increasingly through financial channels rather than through trade and commodity production".

In the long run, creating any source of stability in a capitalist economy incentivises economic agents to realign themselves to exploit that source of security and thereby reduce risk. Just as banks adapt to the intervention strategies preferred by central banks by taking on more "macro" risks, macro-stabilisation incentivises real economy firms to shed idiosyncratic micro-risks and take on financial risks instead. Suppressing nominal volatility encourages economic agents to shed real risks and take on nominal risks. In the presence of the Greenspan/Bernanke put, a strategy focused on "macro" asset price risks and leverage outcompetes strategies focused on "risky" innovation. Just as banks that exploit the guarantees offered by central banks outcompete those that don't, real economy firms that realign themselves to become more bank-like outcompete those that choose not to.

The poster child for this dynamic is the transformation of General Electric during the Jack Welch Era, when “GE’s no-growth, blue-chip industrial businesses were run for profits and to maintain the AAA credit rating which was then used to expand GE Capital.” Again, the financialised strategy outcompetes all others and drives out “real economy” firms. As Doug Rushkoff observed, “the closer to the creation of value you get under this scheme, the farther you are from the money”. General Electric’s strategy is an excellent example of how financialisation is not just a matter of levering up the balance sheet. It could just as easily be focused on aggressively extending leverage to one’s clients, a strategy that is just as adept at delivering low-risk profits in an environment where the central bank is focused on avoiding even the smallest snap-back in an elastic, over-extended monetary system. When central bankers are focused on preventing significant pullbacks in equity prices (the Greenspan/Bernanke put), then real-economy firms are incentivised to take on more systematic risk and reduce their idiosyncratic risk exposure.

Some Post-Keynesian and Marxian economists also claim that this process of financialisation is responsible for the reluctance of corporates to invest in innovation. As Bill Lazonick puts it, "the financialization of corporate resource allocation undermines investment in innovation". This 'investment deficit' has in turn led to the secular downturn in productivity growth across the Western world since the 1970s, a phenomenon that Tyler Cowen has dubbed 'The Great Stagnation'. This thesis, appealing though it is, is too simplistic. Financialisation does not suppress all innovation equally: increased market-sensitivity combined with the macro-stabilisation commitment encourages low-risk process innovation and discourages uncertain and exploratory product innovation. The collapse in high-risk, exploratory innovation is exacerbated by the rise in the influence of special interests that accompanies any extended period of stability, a dynamic that I discussed in an earlier post.

The easiest way to explain the above dynamic is to take a slightly provocative example. Let us assume that the Fed decides to make the 'Bernanke Put' more explicit by either managing a floor on equity prices or buying a significant amount of equities outright. The initial result may be positive but in the long run, firms will simply align their risk profile to that of the broader market. The end result will be a homogenous corporate sector free of any disruptive innovation – a state of perfect equilibrium but also a state of rigor mortis.
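
A stylised simulation of this thought experiment (Python; the distributions and the floor level are invented for illustration) shows why the floored market strategy would dominate: once the put strips out the downside, holding the index delivers roughly the same expected return as exploratory innovation at a fraction of the risk.

    import random, statistics

    random.seed(2)
    N = 100_000
    FLOOR = 0.95   # the hypothetical explicit floor on the equity index

    # Strategy 1: hold the floored index - downside truncated by the put
    market = [max(random.gauss(1.05, 0.20), FLOOR) for _ in range(N)]
    # Strategy 2: an exploratory innovation bet - binary and idiosyncratic
    innovation = [2.2 if random.random() < 0.5 else 0.0 for _ in range(N)]

    for name, xs in [("floored index", market), ("innovation bet", innovation)]:
        print(f"{name}: mean {statistics.fmean(xs):.2f}, "
              f"stdev {statistics.pstdev(xs):.2f}")

With these invented distributions the two strategies have similar means (roughly 1.09 versus 1.10) but the floored index carries a fraction of the dispersion, so risk-adjusted competition herds every firm into the same market exposure.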

Written by Ashwin Parameswaran

October 3rd, 2011 at 4:16 pm

Operation Twist and the Limits of Monetary Policy in a Credit Economy

with 53 comments

The conventional cure for insufficient aggregate demand, and the one that has been preferred throughout the Great Moderation, is monetary easing. The argument goes that lower real rates, higher inflation and higher asset prices will increase investment via Tobin's Q and increase consumption via the wealth effect and the reduced reward to saving, all bound together in the virtuous cycle of the multiplier. As I discussed in a previous post, QE2 and now Operation Twist are not as unconventional as they seem. They simply apply the logic of interest rate cuts to the entire yield curve rather than restricting central bank interventions to the short end of the curve as was the norm during the Great Moderation.

But despite asset prices and corporate profits having rebounded significantly from their crisis lows, and with real rates now negative out to the 10y tenor in the United States, a rebound in investment or consumption has not been forthcoming in the current recovery. This lack of responsiveness of aggregate demand to monetary policy is not as surprising as it first seems:

  • The responsiveness of consumption to monetary policy is diminished when the consumer is as over-levered as he currently is. The “success” of monetary policy during the Great Moderation was primarily due to consumers’ ability to lever up to maintain consumption growth in the absence of any tangible real wage growth.
  • The empirical support for the impact of real rates and asset prices on investment is inconclusive. Drawing on Keynes' emphasis on the uncertain nature of investment decisions, Shackle was skeptical about the impact of lower interest rates in stimulating business investment. He noted that businessmen, when asked, rarely cited the level of interest rates as a critical determinant. In an uncertain environment, estimated profits "must greatly exceed the cost of borrowing if the investment in question is to be made".

If the problem with reduced real rates were simply that they are likely to be ineffective, there could still be a case for pursuing monetary policy initiatives aimed at reducing real rates. One could argue that even a small positive effect is better than not trying anything. But this unfortunately is not the case. There is ample reason to believe that reduced real rates across the curve have perverse and counterproductive effects, especially when real rates are pushed to negative levels:

  • Prolonged periods of negative real rates may trigger increased savings and reduced consumption in an attempt to reach fixed real savings goals in the future, a tendency that may be exacerbated in an ageing population saving for retirement in an era where defined-benefit pensions have disappeared. An investor in a defined-contribution pension plan is unlikely to react to the absence of a truly risk-free investment alternative by taking on more risk or consuming more.
  • One of the arguments for how a program such as Operation Twist can provide economic stimulus is summarised here by Brad DeLong: "such policies work, to the extent that they work, by taking duration and other forms of risk onto the government's balance sheet, leaving the private sector with extra risk-bearing capacity that it can then use to extend loans to risky private borrowers." But duration is not a risk to a pension fund or life insurer; it is a hedge – one that it cannot shift out of in any meaningful manner without taking on other risks (equity, credit) in the process.
  • The ability of incumbent firms to keep their powder dry and hold cash as a defence against disruptively innovative threats is in fact enhanced by policies like 'Operation Twist' that flatten the yield curve. Firms find it worthwhile to issue bonds and hold cash due to the low negative carry of doing so when the yield curve is flat, a phenomenon that is responsible for the paradox of high corporate cash balances combined with simultaneous debt issuance (a back-of-the-envelope carry calculation follows this list).
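
The carry arithmetic in the last point is easy to check (all rates below are hypothetical, chosen only to illustrate the mechanism).

    # Annual cost of issuing a bond and parking the proceeds in cash:
    # the negative carry is simply the yield-curve slope times the notional.
    def annual_carry_cost(bond_yield, cash_yield, notional=100e6):
        return (bond_yield - cash_yield) * notional

    steep = annual_carry_cost(bond_yield=0.05, cash_yield=0.01)    # steep curve
    flat = annual_carry_cost(bond_yield=0.02, cash_yield=0.015)    # flattened curve
    print(f"steep: ${steep:,.0f} per year; flat: ${flat:,.0f} per year")

Under these invented rates, holding $100m of precautionary cash costs $4m a year on a steep curve but only $500,000 on the flattened one – cheap enough for incumbents to warehouse optionality against disruptive entrants.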

There is an obvious monetarist objection to this post and to my previous post. Despite the fact that the Fed also views its actions as providing stimulus via "downward pressure on longer-term interest rates", monetarists view this interest-rate view of monetary policy as fundamentally flawed. So why this interest rate approach rather than the monetarist money supply approach? In my opinion, the modern economy resembles a Wicksellian pure credit economy, a point that Claudio Borio and Piti Disyatat make in a recent paper, where they point out that

The amount of cash holdings by the public, one form of outside money, is purely demand-determined; as such, it provides no external anchor. And banks’ reserves with the central bank – the other component of outside money – cannot provide an anchor either: Contrary to what is often believed, they do not constrain the amount of inside credit creation. Indeed, in a number of banking systems under normal conditions they are effectively zero, regardless of the level of the interest rate. Critically, the existence of a demand for banks’ reserves, arising from the need to settle transactions, is essential for the central bank to be able to set interest rates, by exploiting its monopoly over their supply. But that is where their role ends. The ultimate constraint on credit creation is the short-term rate set by the central bank and the reaction function that describes how this institution decides to set policy rates in response to economic developments.

In a typically perceptive note written more than a decade ago, Axel Leijonhufvud mapped out and anticipated the evolution of the US monetary system into a pure credit economy during the 20th century:

The situation that Wicksell saw himself as confronting, therefore, was the following. The Quantity Theory was the only monetary theory with any claim to scientific status. But it left out the influence on the price level of credit-financed demand. This omission had become a steadily more serious deficiency with time as the evolution of both “simple” (trade) and “organized” (bank-intermediated) credit practices reduced the role of metallic money in the economy. The issue of small denomination notes had displaced gold coin from circulation and almost all business transactions were settled by check or by giro; the resulting transfers on the books of banks did not involve “money” at all. The famous model of the pure credit economy, which everyone remembers as the original theoretical contribution of Geldzins und Güterpreise, dealt with the hypothetical limiting case to this historical-evolutionary process……Wicksell’s “Day of Judgment” (if we may call it that) when the real demand for the reserve medium would shrink to epsilon was greatly postponed by regime changes already introduced before or shortly after his death. In particular, governments moved to monopolize the note issue and to impose reserve requirements on banks. The control over the banking system’s total liabilities that the monetary authorities gained in this way greatly reduced the potential for the kind of instability that preoccupied Wicksell. It also gave the Quantity Theory a new lease of life, particularly in the United States.
But although Judgment Day was postponed it was not cancelled….The monetary anchors on which 20th century central bank operating doctrines have relied are giving way. Technical developments are driving the process on two fronts. First, “smart cards” are circumventing the governmental note monopoly; the private sector is reentering the business of supplying currency. Second, banks are under increasing competitive pressure from nonbank financial institutions providing innovative payment or liquidity services; reserve requirements have become a discriminatory tax on banks that handicap them in this competition. The pressure to eliminate reserve requirements is consequently mounting.

Leijonhufvud’s account touches on a topic that is almost always left out in debates on the matter – the assertion that we are in a credit economy is not theoretical, it is empirical. In the environment immediately after WW2, reserves were most certainly a limitation on bank credit. But banks gradually “innovated” their way out of almost all restrictions that central banks and regulators could throw at them. The dominance of shadow-money in our current economic system is a culmination of a long series of bank “innovations” such as the Fed Funds market and the Eurodollar bond market.

As Borio and Disyatat note, in such a credit economy, "through the creation of deposits associated with credit expansion, banks can grant nominal purchasing power without reducing it for other agents in the economy. The banking system can both expand total nominal purchasing power and allocate it at terms different from those associated with full-employment saving-investment equilibrium. In the process, the system is able to stabilise interest rates at an arbitrary level. The quantity of credit adjusts to accommodate the demand at the prevailing interest rate." In such an economy, the conventional savings-investment framework has very little to say about either market interest rates or the abrupt breakdown in financing that characterises the Minsky Moment. The notion that our economic malaise can be cured by solving the problem of "excess savings" is therefore invalid. In Borio and Disyatat's words, "Investment, and expenditures more generally, require financing, not saving." A flatter yield curve therefore encourages incumbent firms to monopolise the limited financing/risk-taking capacity of the system (limited typically by bank capital) simply to increase cash holdings, in effect crowding out small firms and new entrants.
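
The Borio-Disyatat point about credit creation can be captured in a minimal double-entry sketch (Python; the balance-sheet figures are arbitrary): the loan and the deposit are created in the same stroke, and reserves constrain nothing.

    # "Loans create deposits": the bank marks up both sides of its own
    # balance sheet - no other agent's purchasing power is reduced.
    bank = {"assets": {"reserves": 10, "loans": 0},
            "liabilities": {"deposits": 0, "equity": 10}}

    def make_loan(bank, amount):
        """Extend a loan by simultaneously creating a deposit of equal size."""
        bank["assets"]["loans"] += amount
        bank["liabilities"]["deposits"] += amount   # new nominal purchasing power

    make_loan(bank, 100)
    assert sum(bank["assets"].values()) == sum(bank["liabilities"].values())
    print(bank)   # note: reserves are unchanged - they did not constrain the loan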

The problem in a credit economy is not so much excess savings but, as Borio and Disyatat put it, excess elasticity. Elasticity is defined as

the degree to which the monetary and financial regimes constrain the credit creation process, and the availability of external funding more generally. Weak constraints imply a high elasticity. A high elasticity can facilitate expenditures and production, much like a rubber band that stretches easily. But by the same token it can also accommodate the build-up of financial imbalances, whenever economic agents are not perfectly informed and their incentives are not aligned with the public good (“externalities”). The band stretches too far, and at some point inevitably snaps….In other words, to reduce the likelihood and severity of financial crises, the main policy issue is how to address the “excess elasticity” of the overall system, not “excess saving” in some jurisdictions.

If our financial system is a rubber band, the long arc of monetary system evolution from a metallic standard to a credit economy via the Bretton Woods regime has been largely a process of increasing the elasticity of this rubber band (excepting the period of financial repression post-WW2 when the trend reversed temporarily). Snap-backs are inevitable – the question is simply whether the snap-backs are “normal” or catastrophic. What is commonly referred to as the ‘Minsky Moment’ is the almost instantaneous process of the elastic snapping back. As Minsky has documented, the history of macroeconomic interventions post-WW2 has been the history of prevention of even the smallest snap-backs that are inherent to the process of creative destruction. The result is our current financial system which is as taut as it can be, in a state of fragility where any snap-back will be catastrophic.

The natural fix for the system as I have outlined is to allow small pull-backs and disturbances to play themselves out. But we have evolved far past the point where the system can be allowed to fail without any compensating actions. Just like in a forest where fire has been suppressed for too long or a river where floods have been avoided, it is not an option to let nature take its course.

So is there no way out that does not involve a deflationary collapse of the economy? I argue that there is, but that this requires a radical change in focus. The deflationary collapse of the current shadow money and credit superstructure, and correspondingly of much of the incumbent corporate structure adapted to this "taut rubber-band", is inevitable and if anything needs to be encouraged and accelerated. But this does not imply that the macroeconomy should suffer from a deflationary contraction. The effects of this snap-back can be mitigated in a simple and effective manner with a system of direct transfers to individuals as Steve Waldman has outlined. In fact, it is the deflationary collapse of the incumbent system that provides the leeway for significant fiscal intervention to be undertaken without sacrificing the central bank's inflation targets. This solution also has the benefit of reversing the flow of rents that have exacerbated inequality over the past few decades, as well as tackling the cronyism and demosclerosis that is crippling our system today. Of course, the collapse of incumbent crony interests inherent to this policy approach means that it will not be implemented anytime soon.

Note: hat tip to Yves Smith and Andrew Dittmer for directing me to the Borio-Disyatat paper.

Written by Ashwin Parameswaran

September 22nd, 2011 at 5:24 pm

Bagehot’s Rule, Central Bank Incentives and Macroeconomic Resilience

with 11 comments

It is widely accepted that in times of financial crisis, central banks should follow Bagehot’s rule which can be summarised as: “Lend without limit, to solvent firms, against good collateral, at ‘high rates’.” However, as I noted a few months ago, the Fed and the ECB seem to be following quite a different rule which is best summarised as: “Lend freely even on junk collateral at ‘low rates’.”

The Fed's response to allegations that it went beyond its mandate for liquidity provision is instructive. In the Fed's eyes, the absence of credit losses signifies that the collateral was sound, and the fact that nearly all the programs have now closed illustrates that the rate charged was clearly at a premium to 'normal rates'. This argument gives the Fed a significant amount of flexibility, as a rate that is at a premium to 'normal rates' can still quite easily be a bargain when offered in times of crisis. Nevertheless, the Fed can point to the absence of losses and claim that it only provided liquidity support. The absence of losses is also used to refute the claim that these programs create moral hazard. However, both these arguments ignore the fact that the creditworthiness of assets and the solvency of the banking system cannot be separated from the central bank's actions during a crisis. As the Fed's Brian Madigan notes: "In a crisis, the solvency of firms may be uncertain and even dependent on central bank actions."

However, the Fed's response does highlight just how important it is to any central bank that it avoid losses on its liquidity programs – not so much to avoid moral hazard but out of simple self-interest. A central bank that exposes itself to significant losses runs a serious reputational and political risk. Given the criticism that central banks receive even for programs which do not lose any money, it is quite conceivable that significant losses may even lead to a reduction in their independent powers. Whether or not these losses have any 'real' relevance in a fiat-currency economic system, they are undoubtedly relevant in a political context. The interaction of the central bank's desire to avoid losses and its ability to influence asset prices and bank solvency has some important implications for its liquidity policy choices. In a nutshell, the central bank strongly prefers to backstop assets whose valuation is largely dependent on "macro" systemic risks. Also, when it embarks upon a program of liquidity provision it will either limit itself to extremely high-quality assets or it will backstop the entire spectrum of risky assets from high-grade to junk. It will not choose an intermediate threshold for its intervention.

The first point is easily explained – by choosing to backstop 'macro' assets whose prices and performance are strongly reflexive with respect to credit availability, the program minimises the probability of loss. For example, a decision to backstop housing loans has a significant impact on loan-flow and the 'real' housing market. A decision to backstop small-business loans, on the other hand, can only have a limited impact on the realised business outcomes experienced by small businesses, given the idiosyncratic risk inherent in them. The negatively skewed payoff profile of such loans combined with their largely 'macro' risk profile makes them the ideal candidates for such programs – such assets are exposed to a tail risk of significant losses in the event of macroeconomic distress, which is the exact scenario that central banks are mandated to mitigate. The coincidence of such distress with deflationary forces enables central banks to eliminate losses on these assets without risking any overshooting of their inflation mandates. This also explains why central banks are reluctant to explicitly backstop equities even at the index level – the less skewed risk profile of equities means that the risk of losses is impossible to reduce to an acceptable level.
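
The role of skewness in this argument can be illustrated with a quick Monte Carlo sketch (Python; the payoff distributions are invented): a loan portfolio loses money only in the macro tail – precisely the state a central bank backstop can avert – whereas an equity payoff is two-sided and its losses cannot be switched off.

    import random, statistics

    random.seed(1)
    N = 100_000
    # A loan: small spread in most states, a large loss in the 3% macro tail
    loans = [-60 if random.random() < 0.03 else 2 for _ in range(N)]
    # An equity position: roughly symmetric, two-sided returns
    equities = [100 * (random.lognormvariate(0, 0.3) - 1) for _ in range(N)]

    def skew(xs):
        """Sample skewness: third central moment over stdev cubed."""
        mu, sd = statistics.fmean(xs), statistics.pstdev(xs)
        return sum((x - mu) ** 3 for x in xs) / (len(xs) * sd ** 3)

    print(f"loan skew: {skew(loans):.1f}, equity skew: {skew(equities):.1f}")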

The second point is less obvious – if the central bank can restrict itself to backstopping just extremely low-risk bonds and loans, it will do so. But in most crises, this is rarely enough. At the very least, the central bank is required to backstop average-quality assets, which is where the impact of uncertainty is greatest and the line between solvency and liquidity risk is blurriest. But this is not the strategy that minimises the risk of losses to the central bank. The contagious ripple effects of losses incurred on junk assets can cause moderate losses even on higher-quality assets. This incentivises the Fed to go far beyond the level of commitment that may be optimal for the economy and backstop almost the entire sphere of "macro" assets even if many of them are junk. In other words, it is precisely the desire of the Fed to avoid any losses that incentivised it to expand the scope of its liquidity programs to as large a scale and scope as it did during the crisis.

These preferences of the central bank have implications for the portfolios that banks will choose to hold – banks will prefer 'macro' assets without excessive micro risk, as these assets are more likely to be backstopped by the central bank during a crisis. This biases bank portfolios and lending towards large corporations, housing etc. and against small business loans and other idiosyncratic risks. The system also becomes less diverse and more highly correlated. The problem of homogeneity and inordinately high correlation is baked into the structural logic of a stabilised financial system. Such a system also carries a higher risk of asset price bubbles – it may be more 'rational' for a bank to hold an overpriced 'macro' asset and follow the herd than to invest in an underpriced 'micro' asset. Douglas Diamond and Raghuram Rajan identified the damaging effects of the implicit commitment by central banks to reduce rates when liquidity is at a premium: "If the authorities are expected to reduce interest rates when liquidity is at a premium, borrowers will take on more short-term leverage or invest in more illiquid projects, thus bringing about the very states where intervention is needed, even if they would not do so in the absence of intervention." Similarly, the incentives of the central bank to avoid losses at all costs perversely end up making the financial system less diverse and more fragile.

When viewed under this logic, the ECB's actions also start to make sense and criticisms of its lack of courage seem misguided. In terms of liquidity support extended, the ECB has been at least as aggressive as the Fed. In fact, in terms of the risk of losses that it has chosen to bear, the ECB has been far more aggressive. Despite the losses it faces on its Greek debt holdings, it has nearly doubled its peripheral government bond holdings in recent times. This is despite the fact that the ECB runs a significant risk of losses on its government bond holdings in the absence of massive fiscal transfers from the core to the periphery, a policy for which there is little public or political appetite.

The ECB's desire for the EFSF to take over the task of backstopping the periphery simply highlights the reality that the task is more fiscal than monetary in nature. Relying on the ECB to pick up the slack rather than constructing the fiscal solution also exacerbates the democratic deficit that is crippling the Eurozone. The ECB is not the first central bank that has pleaded to be relieved of duties that belong to the fiscal domain. Various Fed officials have made the same point regarding the Fed's credit policies – drawing on Marvin Goodfriend's research, Charles Plosser summarises this view as follows: "the Fed and the Treasury should agree that the Treasury will take the non-Treasury assets and non-discount window loans from the Fed's balance sheet in exchange for Treasury securities. Such a new "accord" would transfer funding for these special credit programs to the Treasury — which would issue Treasury securities to fund the transfer — thus ensuring that these extraordinary credit policies are under the oversight of the fiscal authority, where such policies rightfully belong." Of course, the incentives of the government are to preserve the status quo – what better than to let the central bank do the dirty work while reserving the right to criticise it for doing so!

This highlights a point that often gets lost in the monetary vs fiscal policy debate. Much of what has been implemented as monetary policy in recent times is not only not 'neutral' but is regressive in its distributional effects. In the current paradigm of central bank policy during crises, systemic fragility and inequality are inescapable structural problems. On the other hand, it is perfectly possible to construct a fiscal policy that is close to neutral, e.g. Steve Waldman's excellent idea of simple direct transfers to individuals.

Written by Ashwin Parameswaran

September 12th, 2011 at 4:41 pm

Forest Fire Suppression and Macroeconomic Stabilisation

with 24 comments

In an earlier post, I compared Minsky’s Financial Instability Hypothesis with Buzz Holling’s work on ecological resilience and briefly touched upon the consequences of wildfire suppression as an example of the resilience-stability tradeoff. This post expands upon the lessons we can learn from the history of fire suppression and its impact on the forest ecosystem in the United States and draws some parallels between the theory and history of forest fire management and macroeconomic management.

Origins of Stabilisation as the Primary Policy Objective and Initial Ease of Implementation

The impetus for both fire suppression and macroeconomic stabilisation came from a crisis. In economics, this crisis was the Great Depression, which highlighted the need for stabilising fiscal and monetary policy. Out of all the initiatives, the most crucial from a systems viewpoint was the expansion of lender-of-last-resort operations and bank bailouts, which tried to eliminate all disturbances at their source. In Minsky's words: "The need for lender-of-last-resort operations will often occur before income falls steeply and before the well nigh automatic income and financial stabilizing effects of Big Government come into play." (Stabilizing an Unstable Economy pg 46)

Similarly, the battle for complete fire suppression was won after the Great Idaho Fires of 1910. "The Great Idaho Fires of August 1910 were a defining event for fire policy and management, indeed for the policy and management of all natural resources in the United States. Often called the Big Blowup, the complex of fires consumed 3 million acres of valuable timber in northern Idaho and western Montana…..The battle cry of foresters and philosophers that year was simple and compelling: fires are evil, and they must be banished from the earth. The federal Weeks Act, which had been stalled in Congress for years, passed in February 1911. This law drastically expanded the Forest Service and established cooperative federal-state programs in fire control. It marked the beginning of federal fire-suppression efforts and effectively brought an end to light burning practices across most of the country. The prompt suppression of wildland fires by government agencies became a national paradigm and a national policy" (Sara Jensen and Guy McPherson). In 1935, the Forest Service implemented the '10 AM policy', a goal to extinguish every new fire by 10 AM the day after it was reported.

In both cases, the trauma of a catastrophic disaster triggered a new policy that would try to stamp out all disturbances at the source, no matter how small. This policy also had the benefit of initially being easy to implement and cheap. In the case of wildfires, “the 10 am policy, which guided Forest Service wildfire suppression until the mid 1970s, made sense in the short term, as wildfires are much easier and cheaper to suppress when they are small. Consider that, on average, 98.9% of wildfires on public land in the US are suppressed before they exceed 120 ha, but fires larger than that account for 97.5% of all suppression costs” (Donovan and Brown). As Minsky notes, macroeconomic stability was helped significantly by the deleveraged nature of the American economy from the end of WW2 till the 1960s. Even in interventions by the Federal Reserve in the late 60s and 70s, the amount of resources needed to shore up the system was limited.

Consequences of Stabilisation

Wildfire suppression in forests that are otherwise adapted to regular, low-intensity fires (e.g. understory fire regimes) causes the forest to become more fragile and susceptible to a catastrophic fire. As Holling and Meffe note, “fire suppression in systems that would frequently experience low-intensity fires results in the systems becoming severely affected by the huge fires that finally erupt; that is, the systems are not resilient to the major fires that occur with large fuel loads and may fundamentally change state after the fire”. This increased fragility arises from a few distinct patterns and mechanisms:

Increased Fuel Load: Just as the channelisation of a river results in an increased silt load within the river banks, the absence of fires leads to a fuel buildup, making the eventual fire that much more severe. In Minskyian terms, this is analogous to the buildup of leverage and 'Ponzi finance' within the economic system.

Change in Species Composition: Species compositions inevitably shift towards less fire-resistant trees when fires are suppressed (Allen et al 2002). In an economic system, it is not simply that 'Ponzi finance' players thrive but that more prudently financed actors get outcompeted in the cycle. This has critical implications for the ability of the system to recover after the fire. This is an important problem in the financial sector where, as Richard Fisher observed, "more prudent and better-managed banks have been denied the market share that would have been theirs if mismanaged big banks had been allowed to go out of business".

Reduction in Diversity: As I mentioned here, “In an environment free of disturbances, diversity of competing strategies must reduce dramatically as the optimal strategy will outcompete all others. In fact, disturbances are a key reason why competitive exclusion is rarely observed in ecosystems”. Contrary to popular opinion, the post-disturbance environment is incredibly productive and diverse. Even after a fire as severe as the Yellowstone fires of 1988, the regeneration of the system was swift and effective as the ecosystem was historically adapted to such severe fires.

Increased Connectivity: This is the least appreciated impact of eliminating all disturbances in a complex adaptive system. Disturbances perform a critical role by breaking connections within a network. Frequent forest fires result in a “patchy” modularised forest where no one fire can cause catastrophic damage. As Thomas Bonnicksen notes: “Fire seldom spread over vast areas in historic forests because meadows, and patches of young trees and open patches of old trees were difficult to burn and forced fires to drop to the ground…..Unlike the popular idealized image of historic forests, which depicts old trees spread like a blanket over the landscape, a real historic forest was patchy. It looked more like a quilt than a blanket. It was a mosaic of patches. Each patch consisted of a group of trees of about the same age, some young patches, some old patches, or meadows depending on how many years passed since fire created a new opening where they could grow. The variety of patches in historic forests helped to contain hot fires. Most patches of young trees, and old trees with little underneath did not burn well and served as firebreaks. Still, chance led to fires skipping some patches. So, fuel built up and the next fire burned a few of them while doing little harm to the rest of the forest”. Suppressing forest fires converts the forest into one connected whole, at risk of complete destruction from the eventual fire that cannot be suppressed.
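
A toy lattice fire model (Python; grid size and fuel densities are invented for illustration) captures Bonnicksen's quilt-versus-blanket distinction: as long as fuel density stays below the percolation threshold of the grid (roughly 0.59 for a square lattice), fires burn out locally; once suppression lets the density creep above it, a single ignition consumes the forest.

    import random

    def burned_fraction(n, p_fuel):
        """Ignite the centre cell; fire spreads to fuel-bearing 4-neighbours."""
        grid = [[random.random() < p_fuel for _ in range(n)] for _ in range(n)]
        stack, burned = [(n // 2, n // 2)], set()
        while stack:
            x, y = stack.pop()
            if 0 <= x < n and 0 <= y < n and grid[x][y] and (x, y) not in burned:
                burned.add((x, y))
                stack += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return len(burned) / (n * n)

    for p in [0.45, 0.55, 0.65, 0.75]:   # rising fuel load / connectivity
        runs = [burned_fraction(60, p) for _ in range(50)]
        print(f"fuel density {p:.2f}: mean area burned = {sum(runs)/len(runs):.1%}")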

In the absence of disturbances, connectivity builds up within the network, both within and between scales. Increased within-scale connectivity increases the severity of disturbances, while increased between-scale connectivity increases the probability of a disturbance at a lower level propagating up to higher levels and causing systemic collapse. Fire suppression in forests adapted to frequent undergrowth fires can cause an accumulation of ladder fuels which connect the undergrowth to the crown of the forest. The eventual undergrowth ignition then risks a crown fire by a process known as "torching". Unlike understory fires, crown fires can spread across firebreaks such as rivers by a process known as "spotting" where the wind carries burning embers through the air – the fire can spread in this manner even without direct connectivity. Such fires can easily cause systemic collapse and a state from which natural forces cannot regenerate the forest. In this manner, stabilisation can fundamentally change the nature of the system rather than simply increase the severity of disturbances. For example, "extensive stand-replacing fires are in many cases resulting in "type conversions" from ponderosa pine forest to other physiognomic types (for example, grassland or shrubland) that may be persistent for centuries or perhaps even millennia" (Allen 2007).

Long-Run Increase in Cost of Stabilisation and Area Burned: The initial low cost of suppression is short-lived and the cumulative effect of the fragilisation of the system has led to rapidly increasing costs of wildfire suppression and levels of area burned in the last three decades (Donovan and Brown 2007).

Dilemmas in the Management of a Stabilised System

In my post on river flood management, I claimed that managing a stabilised and fragile system is "akin to choosing between the frying pan and the fire". This has been the case in many forests around the United States for the last few decades and is the condition toward which the economies of the developed world are heading. Once the forest ecosystem has become fragile, the resultant large fire exacerbates the problem, thus triggering a vicious cycle. As Thomas Bonnicksen observed, "monster fires create even bigger monsters. Huge blocks of seedlings that grow on burned areas become older and thicker at the same time. When it burns again, fire spreads farther and creates an even bigger block of fuel for the next fire. This cycle of monster fires has begun". The system enters an "unending cycle of monster fires and blackened landscapes".

Minsky of course understood this end-state very well: "The success of a high-private-investment strategy depends upon the continued growth of relative needs to validate private investment. It also requires that policy be directed to maintain and increase the quasi-rents earned by capital – i.e., rentier and entrepreneurial income. But such high and increasing quasi-rents are particularly conducive to speculation, especially as these profits are presumably guaranteed by policy. The result is experimentation with liability structures that not only hypothecate increasing proportions of cash receipts but that also depend upon continuous refinancing of asset positions. A high-investment, high-profit strategy for full employment – even with the underpinning of an active fiscal policy and an aware Federal Reserve system – leads to an increasingly unstable financial system, and an increasingly unstable economic performance. Within a short span of time, the policy problem cycles among preventing a deep depression, getting a stagnant economy moving again, reining in an inflation, and offsetting a credit squeeze or crunch….As high investment and high profits depend upon and induce speculation with respect to liability structures, the expansions become increasingly difficult to control; the choice seems to become whether to accommodate to an increasing inflation or to induce a debt-deflation process that can lead to a serious depression". (John Maynard Keynes pg 163–164)

The evolution of the system means that turning back the clock to a previous era of stability is not an option. As Minsky observed in the context of our financial system, “the apparent stability and robustness of the financial system of the 1950s and early 1960s can now be viewed as an accident of history, which was due to the financial residue of World War 2 following fast upon a great depression”. Re-regulation is not enough because it cannot undo the damage done by decades of financial “innovation” in a manner that does not risk systemic collapse.

At the same time, simply allowing an excessively stabilised system to burn itself out is a recipe for disaster. For example, on the role that controlled burns could play in restoring America’s forests to a resilient state, Thomas Bonnicksen observed: “Prescribed fire would come closer than any tool toward mimicking the effects of the historic Indian and lightning fires that shaped most of America’s native forests. However, there are good reasons why it is declining in use rather than expanding. Most importantly, the fuel problem is so severe that we can no longer depend on prescribed fire to repair the damage caused by over a century of fire exclusion. Prescribed fire is ineffective and unsafe in such forests. It is ineffective because any fire that is hot enough to kill trees over three inches in diameter, which is too small to eliminate most fire hazards, has a high probability of becoming uncontrollable”. The same logic applies to a fragile economic system.

Update: corrected date of Idaho fires from 2010 to 1910 in para 3 thanks to Dean.

Written by Ashwin Parameswaran

June 8th, 2011 at 11:35 am

Financial Market Regulation and The Art of War

with 14 comments

“The interaction between the market participants, and for that matter between the market participants and the regulators, is not a game, but a war.”

Rick Bookstaber recently compared the complexity of the financial marketplace to that observed in military warfare. Bookstaber focuses primarily on the interaction between market participants but as he mentions, the same analogy also holds for the interaction between market participants and the regulator. In this post, I analyse the role of the financial market regulator within this context. Bookstaber primarily draws upon the work of John Boyd but I will focus on Sun Tzu’s ‘Art of War’.

Much like John Boyd, Sun Tzu emphasised the role of deception in war: "All warfare is based on deception". In the context of regulation, "deception" is best understood as the need for the regulator to be unpredictable. This is not uncommon in other war-like economic domains. Google, for example, must maintain the secrecy and ambiguity of its search algorithms in order to stay one step ahead of the SEO firms' attempts to game them. An unpredictable regulator may seem like a crazy idea but in fact it is a well-researched option in the central banking policy arsenal. In a paper for the Federal Reserve Bank of Richmond in 1999, Jeffrey Lacker and Marvin Goodfriend analysed the merits of a regulator adopting a stance of 'constructive ambiguity'. They concluded that a stance of constructive ambiguity was unworkable and could not prevent the moral hazard that arose from the central bank's commitment to backstop banks in times of crisis. The reasoning was simple: constructive ambiguity is not time-consistent. As Lacker and Goodfriend note: "The problem with adding variability to central bank lending policy is that the central bank would have trouble sticking to it, for the same reason that central banks tend to overextend lending to begin with. An announced policy of constructive ambiguity does nothing to alter the ex post incentives that cause central banks to lend in the first place. In any particular instance the central bank would want to ignore the spin of the wheel." Steve Waldman summed up the time-consistency problem in regulation well when he noted: "Given the discretion to do so, financial regulators will always do the wrong thing." In fact, Lacker has argued that it was this stance of constructive ambiguity combined with the creditor bailouts since Continental Illinois that the market understood to be an implicit commitment to bailout TBTF banks.
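
The time-consistency problem can be stated as a two-line backward induction (Python; the payoffs are invented for illustration): whatever randomisation the central bank announces ex ante, its ex post comparison of costs always resolves the same way, and banks lever up against the credible probability rather than the announced one.

    # Ex post costs to the central bank (hypothetical figures)
    COST_CONTAGION = 10   # cost of letting a systemic bank fail
    COST_BAILOUT = 4      # cost of lending: losses, political risk

    def ex_post_choice():
        # Whatever "spin of the wheel" was announced, this comparison rules ex post
        return "bailout" if COST_BAILOUT < COST_CONTAGION else "no bailout"

    announced_p = 0.5   # the announced 'constructive ambiguity'
    credible_p = 1.0 if ex_post_choice() == "bailout" else 0.0
    print(f"announced bailout probability: {announced_p}; credible: {credible_p}")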

As is clear from the war analogy, a predictable adversary is easily defeated. This of course is why Goodhart’s Law is such a big problem in regulation. Lacker’s suggestion that the regulator follow a “simple decision rule” is fatally flawed for the same reason. Lacker also suggests that “legal constraints limiting policymakers’ actions” could be imposed to mitigate the moral hazard problem. But attempting to lay out a comprehensive list of constraints suffers from the same problem i.e. they can be easily circumvented by a determined regulator. If the relationship between a regulator and the regulated is akin to war, then so is the relationship between the rule-making legislative body and the regulator. Bank bailouts can and have been carried out over the last thirty years under many different guises: explicit creditor bailouts, asset backstops a la Bear Stearns, “liquidity” support via expanded and lenient collateral standards, interest rate cuts as a bank recapitalisation mechanism etc.

Bookstaber asserts quite rightly that the military analogy stems from a view of human rationality that is at odds with both neoclassical and behavioural economics, a point that Gerd Gigerenzer has repeatedly emphasised. Homo economicus relies on a strangely simplistic version of the ‘computational theory of the mind’ that assumes man to be an optimising computer. Behavioural economics then compares the reality of human rationality to this computational ideal and finds man to be an inferior version of a computer, riddled with biases and errors. As Gigerenzer has argued, many heuristics and biases that appear to be irrational or illogical are entirely rational responses to an uncertain world. But clearly deception and unpredictability go beyond simply substituting the rationality of homo economicus with simple heuristics. In the ‘Art of War’, Sun Tzu insists that a successful general must “respond to circumstances in an infinite variety of ways”. Each battle must be fought in its unique context and “when victory is won, one’s tactics are not repeated”. To Sun Tzu, the expert general must be “serene and inscrutable”. In one of the most fascinating passages in the book, he describes the actions and decisions of the expert general: “How subtle and insubstantial, that the expert leaves no trace. How divinely mysterious, that he is inaudible.”

As Robert Wilkinson notes, in order to make any sense of these comments, one needs to appreciate the Taoist underpinnings of the 'Art of War'. The "infinite variety" of tactics is not the variety that comes from making decisions based on the "spin of a roulette wheel" that Goodfriend and Lacker take to provide constructive ambiguity. It comes from an appreciation of the unique context in which each situation is placed and the flexibility, adaptability and novelty required to succeed. The "inaudibility" refers to the inability to translate such expertise into rules, algorithms or even heuristics. The 'Taoist adept' relies on the same intuitive tacit understanding that lies at the heart of what Hubert and Stuart Dreyfus call "expert know-how"1. In fact, rules and algorithms may paralyse the expert rather than aid him. Hubert and Stuart Dreyfus observed of expert pilots that "rather than being aware that they are flying an airplane, they have the experience that they are flying. The magnitude and importance of this change from analytic thought to intuitive response is evident to any expert pilot who has had the experience of suddenly reflecting upon what he is doing, with an accompanying degradation of his performance and the disconcerting realization that rather than simply flying, he is controlling a complicated mechanism." The same sentiment was expressed rather more succinctly by Laozi when he said:

“Having some knowledge
When walking the Great Tao
Only brings fear.”

I’m not suggesting that financial markets regulation would work well if only we could hire “expert” regulators. The regulatory capture and the revolving door between the government and Wall Street that is typical of late-stage Olsonian demosclerosis means that the real relationship between the regulator and the regulated is anything but adversarial. I’m simply asserting that there is no magical regulatory recipe or formula that will prevent Wall Street from gaming and arbitraging the system. This is the unresolvable tension in financial markets regulation: Discretionary policy falls prey to the time-consistency problem. The alternative, a systematic and predictable set of rules, is the worst possible way to fight a war.

  1. This Taoist slant to Hubert Dreyfus' work is not a coincidence. Dreyfus was deeply influenced by the philosophy of Martin Heidegger who, although he never acknowledged it, was almost certainly influenced by Taoist thought.

Written by Ashwin Parameswaran

April 4th, 2011 at 10:29 am