resilience, not stability

The Reality of Abenomics: Qualitative Easing and Propping Up The Markets

with 6 comments

Noah Smith and David Andolfatto think that Abenomics conclusively proves that quantitative easing boosts inflation. But Abenomics has nothing to do with quantitative easing and everything to do with qualitative easing. Every week, the Bank of Japan (BoJ) purchases Topix and Nikkei 225 ETFs till it hits an annual limit of around 1 trillion yen (see table below for last month’s purchases). It also purchases a much smaller amount of real estate investment trusts (REITs). Abenomics has nothing to do with increasing the “money supply” and everything to do with propping up asset prices.

How does the BoJ decide when to buy ETFs? It is widely believed that the bank follows the ‘1% rule’, i.e. it “would buy ETFs when the Topix index of all issues on the first section of the Tokyo Stock Exchange fell more than 1% in the morning session”. Abenomics takes the Greenspan/Bernanke put to its logical conclusion – why restrict monetary policy to implicit protection of asset prices when it can serve as an explicit backstop to the stock market?
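The reported rule of thumb lends itself to a caricature in code. This is purely illustrative – the function name and threshold are my own, and the BoJ’s actual decision process is not public:

```python
def boj_buys_etfs(topix_morning_return, threshold=-0.01):
    """Caricature of the reported '1% rule': intervene when the Topix
    falls by more than the threshold in the morning session."""
    return topix_morning_return < threshold

print(boj_buys_etfs(-0.012))  # True: a 1.2% morning fall triggers a purchase
print(boj_buys_etfs(-0.005))  # False: a 0.5% fall does not (before late 2013, at least)
```

As the next paragraph notes, the threshold itself has reportedly drifted downwards over time – which is the whole point.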

What is the end-game of Abenomics? Obviously buying ETFs and REITs will increase inflation. But advocates of qualitative easing argue that such purchases will be a temporary policy that will be unwound when the economy achieves ‘lift-off’. This is pure fantasy. The BoJ will have to keep upping the ante to maintain even a small positive rate of inflation in a demographically challenged economy such as Japan. Already the ‘1% rule’ is no longer sufficient: “In the latter half of 2013, the bank apparently relaxed that rule, sometimes buying even when the decline was less than 0.5%”. In the long run the BoJ will end up owning an ever-increasing proportion of the private sector financial assets in the Japanese economy.

There is no lift-off, just an ever-increasing cost of intervention for the central bank and a correspondingly increasing drug addiction for the private sector.


Written by Ashwin Parameswaran

February 4th, 2014 at 12:48 pm

How to commit fraud and get away with it: A Guide for CEOs

with 14 comments

Shorter Version

A strategy to maximise bonuses and avoid personal culpability:

  • Don’t commit the fraud yourself.
  • Minimise information received about the actions of your employees.
  • Control employees through automated, algorithmic systems based on plausible metrics like Value at Risk.
  • Pay high bonuses to employees linked to “stretch” revenue/profit targets.
  • Fire employees when targets are not met.
  • …..Wait.

Longer Version

CEOs and senior managers of modern corporations possess the ability to engineer fraud on an organisational scale and capture the upside without running the risk of doing any jail time. In other words, they can reliably commit fraud and get away with it.

Imagine that you are the newly hired CEO of a large bank and by some improbable miracle your bank is squeaky clean and free of fraudulent practices. But you are unhappy about this. Your competitors are making more profits than you are by embracing fraud and coming out ahead of you even after paying tens of billions of dollars in fines to the regulators. And you want a piece of the action. But you’re a risk-averse person and don’t want to risk spending any time in jail for committing fraud. So how can you achieve this outcome?

Obviously you should not commit any fraudulent acts yourself. You want your junior managers to commit fraud in the pursuit of higher profits. One way to incentivise this behaviour is to adopt what are known as ‘high-powered incentives’. Pay your employees high bonuses tied to revenue/profits and maintain hard-to-meet ‘stretch’ targets. Fire ruthlessly if these targets are not met. And finally, ensure that you minimise the flow of information up to you about exactly how your employees meet these targets.

There is one problem with this approach. It allows you, as CEO, to use the “I knew nothing!” defense and claim ignorance about all the “deplorable” fraud taking place lower down the organisational food chain. But it may fall foul of another legal principle that has been tailored for exactly such situations – the principle of ‘wilful blindness’: “if there is information that you could have known, and should have known, but somehow managed not to know, the law treats you as though you did know it”. In a recent essay, Judge Rakoff uses exactly this principle to criticise the failure of regulators in the United States to prosecute senior bankers.

But wait – all hope is not lost. There is a way for you as CEO to argue that adequate controls and supervision were in place while at the same time making it easier for your employees to commit fraud. Simply perform the monitoring and control function through an automated system and restrict your role to signing off on the risk metrics that are the output of this automated system.

It is hard to explain how this can be done in the abstract so let me take a hypothetical example from the mortgage origination and securitisation industry. As a CEO of a mortgage originator in 2005, you are under a lot of pressure from your shareholders to increase subprime originations. You realise that the task would be a lot easier if your salespeople originated fraudulent loans where ineligible borrowers are given loans they can’t afford. You’ve followed all the steps laid out above but as discussed this is not enough. You may be accused of not having any controls in the organisation. Even if you try hard to ensure that no information regarding fraud filters through to you, you can never be certain. At the first sign of something unusual, a mortgage approval officer may raise an exception to his supervisor. Given that every person in the management hierarchy wants to cover his own back, how can you ensure that nothing filters up to you whilst at the same time providing a plausible argument that you aren’t wilfully blind?

The answer is somewhat counterintuitive – you should codify and automate the mortgage approval process. Have your salespeople input potential borrower details into a system that approves or rejects the loan application based on an algorithm without any human intervention. The algorithm does not have to be naive. In fact it would ideally be a complex algorithm, maybe even ‘learned from data’. Why so? Because the more complex the algorithm, the more opportunities it provides to the salespeople to ‘game’ and arbitrage the system in order to commit fraud. And the more complex the algorithm, the easier it is for you, the CEO, to argue that your control systems were adequate and that you cannot be accused of wilful blindness or even the ‘failure to supervise’.
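A toy version of such a rules-based approval system makes the gaming opportunity concrete. Everything here is invented for illustration – the field names and thresholds are hypothetical, and real underwriting systems are far more elaborate (which, as argued above, only widens the arbitrage surface):

```python
def approve_mortgage(stated_income, loan_amount, credit_score):
    """Toy automated approval: no human ever sees the file, only the
    algorithm's verdict. All cutoffs are illustrative inventions."""
    dti = loan_amount / stated_income      # crude loan-to-income proxy
    if credit_score < 620:                 # hard credit-score floor
        return False
    if dti > 4.0:                          # reject loans above 4x stated income
        return False
    return True

# An honest application fails...
print(approve_mortgage(stated_income=50_000, loan_amount=250_000, credit_score=640))   # False
# ...so the salesperson 'corrects' the stated income to just clear the cutoff.
print(approve_mortgage(stated_income=62_500, loan_amount=250_000, credit_score=640))   # True
```

Once the cutoffs are known on the sales floor, fraudulent files pass cleanly through the control system, and the system’s own logs certify that every loan was “approved per policy”.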

In complex domains, this argument is impossible to refute. No regulator/prosecutor is going to argue that you should have installed a more manual control system. And no regulator can argue that you, the CEO, should have micro-managed the mortgage approval process.

Let me take another example – the use of Value at Risk (VaR) as a risk measure for control purposes in banks. VaR is not ubiquitous because traders and CEOs are unaware of its flaws. It is ubiquitous because it allows senior managers to project the facade of effective supervision without taking the trouble, or assuming the legal risks, of actually monitoring what their traders are up to. It is sophisticated enough to protect against the charge of wilful blindness and it allows ample room for traders to load up on the tail risks that fund the senior managers’ bonuses during the good times. When the risk blows up, the senior manager can simply claim that he was deceived and fire the trader.
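A minimal simulation shows why VaR suits this arrangement so well. A trade that earns a small premium almost every day but occasionally blows up can report a 99% VaR showing no loss at all, because the blow-up lives beyond the percentile cutoff (all numbers below are invented for illustration):

```python
import random

random.seed(0)

# Daily P&L of a hypothetical tail-risk trade: earn +1 on roughly 99.5%
# of days, lose 300 on the remaining ~0.5%. Purely illustrative numbers.
pnl = [1 if random.random() < 0.995 else -300 for _ in range(10_000)]

# 99% one-day VaR looks at the 1st percentile of the P&L distribution.
p1 = sorted(pnl)[int(0.01 * len(pnl))]

print("1st-percentile daily P&L:", p1)  # +1: the 99% VaR metric reports no loss
print("worst day:", min(pnl))           # the blow-up sits beyond the cutoff
```

The metric is not “wrong” in its own terms; it simply cannot see a 0.5%-probability event at a 99% confidence level, and everyone in the chain of command can plausibly claim to have relied on it.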

What makes this strategy so much easier to implement today than even a decade ago is the ubiquity of fully algorithmic control systems. When the control function is performed by genuine human domain experts, obvious gaming of the control mechanism is much harder to achieve. Let me take another example to illustrate this. One of the positions that lost UBS billions of dollars during the 2008 financial crisis, known as ‘AMPS’, hedged billions of dollars of super-senior tranche bonds with a tiny sliver of equity tranche bonds so that the portfolio showed a zero VaR and a delta-neutral risk position. Even the most novice of controllers could have identified the catastrophic tail risk embedded in hedging a position that can lose billions with another that can only gain millions.
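A back-of-the-envelope scenario table makes the asymmetry obvious. Every number here is invented (the real AMPS trades were far more complex): in the benign scenarios the hedge nets the P&L to zero, and only the crash scenario exposes the gap between what the small sliver can gain and what the core position can lose:

```python
# Illustrative P&L in $mm for a super-senior position 'hedged' with a
# small equity-tranche sliver. All scenario numbers are invented.
scenarios = {
    #                  super-senior   equity-sliver hedge
    "calm markets":    (    +5,        -5),   # carry minus hedge cost: flat
    "mild widening":   (   -20,       +20),   # hedge works for small moves
    "systemic crash":  ( -2000,       +50),   # hedge gain capped by its small size
}

for name, (core, hedge) in scenarios.items():
    print(f"{name:>15}: net P&L = {core + hedge:+d} mm")
```

A risk system calibrated on the first two rows reports a perfectly hedged book; only a human asking “what is the worst case?” sees the third row.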

There is nothing new in what I have laid out in this essay – for example, Kenneth Bamberger has made much the same point on the interaction between technology and regulatory compliance:

automated systems—systems that governed loan originations, measured institutional risk, prompted investment decisions, and calculated capital reserve levels—shielded irresponsible decisions, unreasonably risky speculation, and intentional manipulation, with a façade of regularity….
Invisibility by design allows the engineering of fraudulent outcomes without anyone being held responsible for them – the “I knew nothing!” defense. And since the managers involved are often genuinely self-deceived, the defense is frequently even true.

But although the automation that enables this risk-free fraud is a recent phenomenon, the principle behind this strategy is one that is familiar to managers throughout the modern era – “How do I get things done the way I want to without being held responsible for them?”.

Just as the algorithmic revolution is simply a continuation of the control revolution, the ‘accountability gap’ due to automation is simply an acceleration of trends that have been with us throughout the modern era. Theodore Porter has shown how the rise of objectivity and bureaucracy was as much driven by the desire to avoid responsibility as by the desire for superior results. Many features of the modern corporate world only make sense when we understand that one of their primary aims is the avoidance of responsibility and culpability. Why are external consulting firms so popular even when the CEO knows exactly what he wants to do? So that the CEO can avoid responsibility if the ‘strategic restructuring’ goes badly. Why do so many firms delegate their critical control processes to a hotchpotch of outsourced software contractors? So that they can blame any failures on external counterparties who have explicitly been granted exemption from any liability¹.

Due to my experience in banking, my examples and illustrations are necessarily drawn from the world of finance. But it should be clear that nothing in what I’ve said is limited to banking. ‘Strategic ignorance’ is equally effective in many other domains. My arguments are also not a justification for not prosecuting bankers for fraud. It is an argument that CEOs of modern corporations can reap the benefits of fraud and get away with it. And they can do so very easily. Fraud is embedded within the very fabric of the modern economy.

Note: Venkat makes a similar point in his series on the ‘Gervais Principle’ on how sociopathic managers avoid responsibility for their actions. Much of what I have written above may make more sense if read in conjunction with his essay.

  1. Helen Nissenbaum makes this and many other relevant points in her paper about ‘accountability in a computerised society’.  ↩

Written by Ashwin Parameswaran

December 4th, 2013 at 4:19 pm

Capitalism For The Masses

with 16 comments

The post-2008 economic recovery has been a recovery of the capitalists. Growth in employment and real wages has been sluggish whereas profits have rebounded well past pre-recession highs. However, the decline of the share of labour in GDP is not a localised post-crisis phenomenon. It is a global phenomenon that started at least three decades ago. The Great Moderation has been a period of stable prosperity for the capitalists and a period of stagnation for labour.

Corporate Profits to GDP

The declining share of labour in national income is not a problem by itself. The underlying problem is the stagnation of real income experienced by the masses. One way to tackle this problem is to redistribute income away from high wage-earners and large capitalists towards low and medium wage-earners. But this approach is a losing battle against the march of technological progress and the inevitable substitution of labour by capital. Instead we should empower the low and medium wage-earners of today to become the capitalists of tomorrow, whilst providing a safety net that protects them as individuals rather than the firms and unions they belong to.

There are two reasons why a ‘capitalism for the masses’ is a viable proposition today. Entrepreneurs need less capital to start a viable business today than they did in the twentieth century and whatever capital they do need is much more freely available today than it has been in the past.

Entrepreneurs need less capital

Economies of scale and scope are collapsing across the economy. This has been the case for many years in the world of software. But it is also increasingly true for hardware. The advantages of the small player can overcome the cost disadvantage of operating at a lower scale. The agile small player can experiment and iterate in a manner that the large incumbent player cannot. As Luke Johnson observes in the case of the craft beer industry, the customisation and unique character of the smaller producer is well worth the small premium for many customers.

Throughout the ‘Control Revolution’, larger firms enjoyed a significant cost advantage over smaller players and oligopolies were the norm in most manufacturing sectors. Chris Anderson narrates the story of his grandfather, who was an inventor during the heyday of large conglomerates in the 20th century:

he was an inventor, but he could not become an entrepreneur because those additional steps of mass production, distribution, marketing, et cetera, were essentially inaccessible in those days. All you could do was patent, license and hope for the best. You had to lose control of your invention. You had to hand it off to somebody else.

On the other hand, an inventor today has a multitude of options to prototype and produce small quantities of his product. The logistics of selling and delivering the product to customers are also easily outsourced. As Luke Johnson identifies, the dynamics of capitalism “appear to be coming full circle and reverting to a structure that prevailed at the start of industrial capitalism” in the early part of the nineteenth century.

Capital is freely available

In our increasingly ageing and wealthy society, there is no shortage of capital available to fund new businesses. Household savings alone are sufficient to fund the required business investment¹.

We imagine that risky businesses can only be funded through the alchemy of the modern maturity-transforming banking system. But as I showed in my last essay, the explosion of peer-to-peer financing in the United Kingdom demonstrates that speculative equity ventures and business loans can be, and are being, funded by the man on the street.

Although significant progress has been made recently, many of the hurdles to achieving a genuinely decentralised financial system are regulatory. By trying to protect small investors from the consequences of investing in a failed venture, we end up denying financing to small businesses. It is insane that we allow small investors to “gift” as much money as they want on Kickstarter yet bar them from investing in the same venture as part-owners, with the legal protections against fraud that ownership affords.

Deregulate and Expand The Safety Net

Throughout the industrial era, most people did not have the option of becoming a capitalist. If you worked in a manufacturing plant, your best option was to join another firm. This is still the case today. But in many industries, the only things stopping laid-off employees from striking out on their own are regulatory economies of scale and the protections (such as bailouts, patent protections and licensing requirements) that large incumbent firms enjoy. In order to enable every person to become a capitalist, we need to reduce the regulatory burden on all aspiring capitalists as well as remove the protections enjoyed by incumbent large firms.

At the same time that we eliminate these localised firm-focused safety nets, we need to implement a broad-based safety net for individuals that will help mitigate the greater uncertainty of such an economic system. Every individual should be assured of access to an income that affords him the basic necessities of life, access to catastrophic healthcare protection and access to basic financial services such as the ability to hold a bank account and make payments.

However, no one is entitled to protection from the inherent instability of a competitive capitalist economy. Firms and workers should not be protected by bailouts. Individual investors should not be protected from the risks of investing their money in failed ventures. Everyone deserves a safety net but no one deserves a hammock.

The expanded safety net and increased deregulation go hand in hand. Increasing instability without a safety net will make the system more fragile. And a broad-based safety net by itself will simply dovetail with localised safety nets to reinforce an already sclerotic and stagnant economic system. By combining the two, we can achieve the best of both worlds – a robust economic system that can achieve disruptive economic progress whilst protecting individuals from the worst consequences of economic failure.

Note: For a more detailed explanation of my approach to economic policy and its rationale, see my earlier essay ‘Radical Centrism: Uniting the Radical Left and the Radical Right‘

  1. Data on household savings courtesy of Natixis, referred to in a previous post ↩

Written by Ashwin Parameswaran

November 8th, 2013 at 1:19 pm

Posted in Resilience

Financing Investment In A World Without Maturity Transformation

with 7 comments

Why do banks exist? The conventional wisdom goes like this – depositors prefer to hold liquid risk-free assets and borrowers prefer to borrow for the long-term to invest in risky projects. Banks sit in the middle of this process and perform a sort of alchemy. By performing this alchemy, banks leave themselves open to the risk of bank runs – if all the depositors seek to withdraw their money at the same time, even a bank with otherwise sound loans as assets can go bust. This perceived risk of a bank run is why governments and central banks provide deposit insurance and liquidity facilities to the banking sector, a privilege that is not typically available to other financial intermediaries. In other words, banks exist for the purpose of maturity transformation.

Maturity transformation is a nice catch-all phrase but it subsumes some very different lending activities conducted by banks. For example, banks can help businesses to finance their working capital needs and provide financing against customer invoices. This function is as old as banking itself. It is what bankers did in the prosperous city-states in Italy in the 15th century. But most of these debts are short-term debts with a maturity of less than a year. This is not the heroic maturity transformation of the bankers whom Schumpeter viewed as the ‘capitalists par excellence’. There is no reason to believe that society does not have the risk appetite to take on the default risk of such short-term debt and as I shall show later, there is significant evidence of this already happening in the United Kingdom today.

In the modern era, banks also provide a range of short-term lending options to consumers, such as credit card loans. Again this is short-term debt that forms part of a well-diversified pool of loans. For well over a decade, these have been amongst the most easily securitised parts of a bank’s balance sheet. So although this is also technically maturity transformation, it is not what most of us think of as the primary purpose of maturity transformation.

When most of us think of maturity transformation we think of banks’ ability to provide long-term loans (at least 3-5 years in maturity and as long as 30 years in maturity). Again this is not a homogeneous category. The most significant component of bank lending on such a long maturity in many countries is mortgage lending. Mortgage lending is undoubtedly an important part of the financial landscape. But very little of the maturity risk of mortgages actually stays with the originating banks. The interest rate risk is often hedged away with willing counterparties such as pension funds and life insurers and the credit risk is often securitised away.

What most people think of when they think of the role of banks is their role in providing long-term loans to businesses. The popular press is rife with complaints about the inability of banks to lend more to small and medium enterprises (SMEs) and about how this is holding back economic growth. It is an obvious truth that banks make very few unsecured loans to SMEs on even a 3-5 year maturity, let alone a 30-year maturity. But does this matter? And did banks ever engage in such lending?

To understand the modern mythology surrounding bank lending to businesses, we need to study the history of bank lending to industry. In the United Kingdom, bank lending has never formed a significant component of business funding for growth investment even during the high-growth periods of the 19th century. To the extent that banks have provided such loans in the modern era, it has typically been on the back of security in the form of owners’ property, company property or personal guarantees from company directors.

‘Heroic’ maturity transformation was born in the economies of mainland Europe that wanted to catch up with Britain in the middle of the 19th century. The first such bank was Crédit Mobilier, founded in France in 1852 to finance the railroads that banking had not touched till then. Although Crédit Mobilier collapsed in the financial crisis of 1867, the innovation took hold in the form of other banks in France such as Crédit Lyonnais and spread across Europe.

The quintessential example of banks as long-term investment institutions arose in Germany between 1870 and the First World War. In an era when financial crises were frequent and banking was risky, German banks figured out how to profitably fund long-term investment projects without bankrupting themselves in the process. Instead of just arranging share issuances, they bought a significant chunk of the equity of firms they lent to. German banks owned a significant stake in German industry and proceeded to engineer German industry such that the risk to them was minimised. This involved using their ownership stake and financing power to push through mergers, cartels and backward/forward integration. In other words, they de-risked German industry. Bankers may have been the driving force of capitalism in 19th century Germany but they were not risk-takers.

After Germany, this model of banking and development has been copied by a number of countries – Italy, Russia prior to 1917 and pretty much all of East Asia (starting with Japan) since World War Two. There are many aspects of this model that are not relevant to today’s world – what worked in the mass-production, heavy-industry dominant period of capitalism when economies of scale meant that most successful businesses were large will most likely not work in the world today. But the critical problem with this model is that it is a model suited to accelerating catch-up growth. It is also a fundamentally low-risk, low-reward model of banking and economic development. In the developed world that is going through the ‘Great Stagnation’, this model will not work.

Now you may argue – so what if we don’t need banks to conduct heroic maturity transformation that funds long-term investment? Surely we need maturity transformation to fund the more mundane activities that I described earlier – invoice financing, short-term business loans, mortgages etc. Until a couple of years ago, this question was literally unanswerable. The only honest answer would have been – who knows? But the explosion of activity in the peer-to-peer lending sector now enables us to arrive at some preliminary conclusions.

Intermediaries that facilitate peer-to-peer (P2P) lending are subject to very little regulation in the United Kingdom (unlike the process of starting a bank, which can take years and land you with a seven-figure legal bill). Unsurprisingly, there has been an explosion in the number of peer-to-peer lending platforms in the UK. Conventional wisdom would suggest that individuals who lend through such platforms would lend their money at higher rates than banks would. After all, they have nowhere near as privileged a position as banks do – no ability to create money ex nihilo, no access to the central bank’s repo window. But the reality is exactly the opposite. Lending rates in the industry are, if anything, too low. Individual lenders are falling over themselves to lend money to risky individuals and companies at rates far lower than banks would charge them (to take just one example, look at the borrowing rates at Zopa).

And P2P lending is not just a niche phenomenon – there are platforms that handle everything from invoice financing and bridge loans to mortgages and longer-term loans to individuals and businesses. The last couple of years have in effect given us a controlled experiment in what a non-maturity-transforming lending system would look like. And the answer is that rates would be lower than they would be in a maturity-transforming system. Maturity-transforming banking is redundant – it only gives us recurrent financial crises. The idea that in the absence of bank maturity transformation lending rates would explode has been disproven.

This still doesn’t give us any answers as to what we can do to stimulate genuine disruptive and risky investment that can drag us out of the ‘great stagnation’. The answer is simple – we need to do more to promote equity investment in disruptive new enterprises. The conventional wisdom states that there isn’t enough risk appetite for all the equity financing that new high-risk businesses require for their investment needs. Again, the growth in equity crowd-funding is slowly disproving this myth.

A common argument against opening up the possibility of SMEs funding their equity requirements from the masses is that the masses are ill-equipped to evaluate the quality of the SMEs that seek their financing. This may be true, but the Kickstarter approach is much worse in this respect. Recently there have been some significant Kickstarter-funded “failures”. I have nothing against the Kickstarter approach. But it is insane to allow individuals to collectively donate millions of dollars to ventures without any ownership stake while at the same time barring them from funding the same projects and receiving an ownership stake in return.

Enabling equity crowd-funding has another benefit that rarely gets mentioned. Left-wing critics of capitalism frequently criticise its “selfish” nature. What the growth of the Kickstarter funding model shows us is that at the individual level there is much more to capitalism than simple monetary interest. Almost all the criticisms of capitalism are derived from the pathologies of institutional fiduciary capitalism. The fact that “capitalism in the large” is selfish is a good thing – fund managers and venture capitalists have a fiduciary responsibility to their investors to focus exclusively on the monetary prospects of their investments, and this is exactly how it should be.

But when we invest our money directly in ventures that we care about, we are motivated by much more than just the prospect of riches. However we can do better than allowing individuals to donate money on a hope and a prayer. Expecting everybody to move to a ‘gift economy’ is unrealistic. But we can enable a genuine capitalism for the masses, where individuals can fund projects that provide them with a non-monetary payoff but with all the legal protections afforded by “corporate” capitalism. Institutional fiduciary capitalism is selfish by definition and design. If we want capitalism to become less selfish, we need to enable each individual to become a capitalist.



Note: Most of this essay is drawn from my experience in the financial industry, but the portion on the growth of ‘heroic’ banking in Europe from 1850 till the First World War is mostly drawn from ‘The Oxford History of Modern Europe’ (pg 64 onwards). Chapter 7 of David Blackbourn’s book ‘History of Germany 1780-1918: The Long Nineteenth Century’ is also excellent on the German model.


Written by Ashwin Parameswaran

October 8th, 2013 at 11:56 am

Posted in Resilience

A Lesson From Lehman and Bear Stearns

with 5 comments

Five years on, what can we learn from the collapse of Lehman Brothers? The conventional opinion is that we should have saved Lehman Brothers just like we saved the rest of the financial sector in the immediate aftermath of the Lehman collapse. But some critics assert that the decision to save Bear Stearns convinced everybody that Lehman would be saved when push came to shove. When this expectation was not met, chaos ensued.

Market data from March 2008 to September 2008 supports the critics. Lehman’s credit spreads halved between March and June 2008. Even when Lehman’s stock price started falling in May and June, its credit spreads barely reacted. The graph below (courtesy of the WSJ) captures just how dramatically Lehman credit spreads fell in the aftermath of the Bear Stearns bailout:

Lehman CDS

The Bear Stearns bailout convinced everybody that Lehman would be treated no differently, as a Wall Street Journal article from June 2008 explains:

The ouster of two top executives at Lehman Brothers Holdings Inc., including the person responsible for keeping the company’s books, sent the bank’s share price tumbling to a new six-year low, but the normally jittery bond market shrugged off the move.

While Lehman’s stock price fell 4.4%, investors were bidding up some of Lehman’s bonds, and the price of protection against default on Lehman debt ultimately declined on the day. It costs an investor $280,000 annually to protect against default on $10 million of Lehman debt for five years – down from $285,000 Wednesday, according to Phoenix Partners Group.

The tempered reaction in the bond markets underscores investors’ conviction the Federal Reserve won’t let a major U.S. securities dealer collapse and that Lehman Brothers may be ripe for a takeover. In March, when Bear Stearns was collapsing, protection on Lehman’s bonds cost more than twice as much as it does now.
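The quoted protection cost translates directly into a running CDS spread, a quick sanity check on the numbers in the article:

```python
annual_cost = 280_000        # quoted annual cost of protection
notional = 10_000_000        # on $10 million of Lehman debt

# Running spread in basis points: cost as a fraction of notional x 10,000.
spread_bp = annual_cost / notional * 10_000
print(f"{spread_bp:.0f} bp")  # 280 bp running spread
```

A 280bp five-year spread was still distressed territory for a major dealer, but far below the post-Bear-Stearns panic levels the article alludes to.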

Of course, allowing Bear Stearns’ creditors to take a loss may just have brought forward the chaos of September 2008 to March 2008. Given that bank creditors had been bailed out in the United States ever since Continental Illinois, this is entirely plausible. Nevertheless, the policy actions of 2008 made things worse. If an evil genius had taken over the world in March 2008 with the sole aim of causing financial chaos, he could not have done any better – bail out Bear Stearns, convince everyone that no failures will be allowed, and then renege on this implicit promise six months later.


Written by Ashwin Parameswaran

September 13th, 2013 at 12:34 pm

Macroeconomic Stimulus: Theory vs Practice

with 7 comments

There are many schools of macroeconomic thought. Most people agree that some form of stimulus is needed during a recession but what should this stimulus look like? Is monetary stimulus sufficient or do we need fiscal stimulus as well? What should this monetary stimulus look like? Do we need quantitative easing? Or is effective monetary stimulus largely about conditional forward guidance as Michael Woodford and Mark Carney seem to think?

In practice, however, macroeconomic stimulus is a one-trick pony that bears almost no relation to the theoretical debate. In the developed world since the Great Moderation, macroeconomic policy can be boiled down to one simple rule – prop up asset prices. In the Anglo-Saxon world, the rule is even simpler – prop up house prices. Mark Carney may grab all the headlines regarding UK economic policy but the only policies that matter to the UK economy are George Osborne's attempts to boost the housing market.

There is nothing novel about this. Alan Greenspan was always quite frank about his approach to monetary policy. For all the talk about Taylor rules and how central banking and monetary policy became a rule-based science in the 1980s, the reality of Greenspan-era monetary policy was much simpler and followed only one rule – do not allow asset prices to fall. Abenomics is simply the logical end-stage of Greenspan's monetary policy doctrine. Greenspan only needed to cut rates when stock markets tanked but the Bank of Japan needs to literally buy equity ETFs and real estate investment trusts (REITs) every week to prop up the markets. The Bank of Japan would love to provide more support to the housing market but unfortunately its purchases are already too large for the REIT market. Maybe the BOJ could buy up the housing stock of the country and rent it back to the people of Japan? Maybe eventually all assets in capitalist “free-market” economies will be owned by the central bank? What could possibly be objectionable about such an economic system?


Written by Ashwin Parameswaran

September 12th, 2013 at 8:31 pm

Posted in Monetary Policy

Minsky and Hayek: Connections

with one comment

As Tyler Cowen argues, there are many similarities between Hayek’s and Minsky’s views on business cycles. Fundamentally, they both describe the “fundamental impossibility in maintaining orderly credit relations over time”.

Minsky saw Keynes’ theory as an ‘investment theory of the business cycle’ and his contribution as being a ‘financial theory of investment’. This financial theory was based on the credit/financing-focused endogenous theory of money of Joseph Schumpeter, whom Minsky studied under. Schumpeter’s views are best described in Chapter 3 (’Credit and Capital’) of his book ‘Theory of Economic Development’. The gist of this view is that “investment, and expenditures more generally, require financing, not saving” (Borio and Disyatat).

Schumpeter viewed the ability of banks to create money ex nihilo as the differentia specifica of capitalism. He saw bankers as ‘capitalists par excellence’ and viewed this ‘elastic’ nature of credit as an unambiguously positive phenomenon. Many people see Schumpeter’s view of money and banking as the antithesis of the Austrian view. But as Agnes Festre has highlighted, Hayek had a very similar view on the empirical reality of the credit process. Hayek however saw this elasticity of the money supply as a negative phenomenon. The similarity between Hayek and Minsky comes from the fact that Minsky also focused on the downside of an elastic monetary system, in which the overextension of credit is inevitably brought to a halt by the violent snapback of the Minsky Moment.

Where Hayek and Minsky differed was that Minsky favoured a comprehensive stabilisation of the financial and monetary system through fiscal and monetary intervention after the Minsky moment. Hayek only supported the prevention of secondary deflationary spirals. Minsky supported aggressive and early monetary interventions (e.g. lender-of-last-resort programs) as well as fiscal stimulus. However, although Minsky supported stabilisation he was well aware of the damaging long-run consequences of stabilising the economic system. He understood that such a system would inevitably deteriorate into crony capitalism if fundamental reforms did not follow the stabilisation. Minsky supported a “policy strategy that emphasizes high consumption, constraints upon income inequality, and limitations upon permissible liability structures”. He also advocated “an industrial-organization strategy that limits the power of institutionalized giant firms”. Minsky was under no illusions that a stabilised capitalist economy could carry on with business as usual.

I disagree with Minsky on two fundamental points – I believe that a capitalist economy with sufficient low-level instability is resilient. Allow small failures of banks and financial players, tolerate small recessions and we can dramatically reduce the impact and probability of large-scale catastrophic recessions such as the 2008 financial crisis. A little bit of chaos is an essential ingredient in a resilient capitalist economy. I also believe that we must avoid stamping out the disturbance at its source and instead focus our efforts on mitigating the wider impact of the disturbance on the masses. In other words, bail out the masses with helicopter drops rather than bailing out the banks.

But although I disagree with Minsky, his ideas are coherent. The same cannot be said for the currently popular interpretation of Minsky, which holds that so long as we respond with sufficient force when the Minsky moment arrives, capitalism can carry on as usual. As Minsky argued in his book ‘John Maynard Keynes’, and as I have argued based on experiences in stabilising other complex adaptive systems such as rivers, forest fires and our brain, stabilised capitalism is an oxymoron.

What about Hayek’s views on credit elasticity? As I argued in an earlier post, “we live in a world where maturity transformation is no longer required to meet our investment needs. The evolution and malformation of the financial system means that Hayek’s analysis is more relevant now than it probably was during his own lifetime”. An elastic credit system is no longer beneficial to economic growth in the modern economy. This does not mean that we should ban the process of endogenous credit creation – it simply means that we must allow the maturity-transforming entities to collapse when they get in trouble1.

  1. Because we do not need an elastic, maturity-transforming financial system, we can firewall basic deposit banking from risky finance. This will enable us to allow the banks to fail when the next crisis hits us. The solution is not to ban casino banking but to suck the lifeblood out of it by constructing an alternative 100% reserve-like system. I have advocated that each resident should be given a deposit account with the central bank which can be backed by Treasuries, a ‘public option’ for basic deposit banking. John Cochrane has also argued for a similar system. In his words, “the Federal Reserve should continue to provide abundant reserves to banks, paying market interest. The Treasury could offer reserves to the rest of us—floating-rate, fixed-value, electronically-transferable debt. There is no reason that the Fed and Treasury should artificially starve the economy of completely safe, interest-paying cash”. ↩

Written by Ashwin Parameswaran

August 23rd, 2013 at 4:56 pm

Interest on Excess Reserves and Inflation

with 8 comments

Martin Feldstein tries to answer the question: “Why has the Federal Reserve’s printing of so much money not caused higher inflation?” and comes up with a seemingly obvious answer – because the Fed pays interest on excess reserves. Like many others, Feldstein sees the payment of interest on excess reserves (IOER) as a “fundamental” change in Fed policy. The reality however is that the payment of IOER is a necessary prerequisite for any regime in which the Fed wishes to sustain a positive Fed Funds rate in the presence of excess reserves. IOER is a red herring and there is simply no way in which the Fed can generate inflation by tinkering with it.

The easiest way to understand this is to look at all the possible configurations of the Fed Funds rate and IOER. Ignoring the ludicrous scenario in which IOER is greater than the Fed Funds rate, there are three other configurations:

  1. Fed Funds and IOER are both equal to 0%.
  2. Fed Funds is above 0% and IOER is equal to 0%.
  3. Fed Funds is equal to 0% and IOER is below 0%.

In the first configuration, payment of interest on reserves clearly does not matter. If the Fed Funds rate itself is at zero, then clearly banks have no incentive to try and get rid of excess reserves.

The second configuration is often invoked as a scenario that could generate inflation. But if the Fed Funds rate is above 0% and IOER is 0%, then there can be no excess reserves in the system. If the central bank wants to sustain a positive Fed Funds rate, it must either pay interest on reserves or mop up all excess reserves. If any excess reserves remain, the Fed Funds market rate immediately falls to 0% and we are back in configuration 1, where both the Fed Funds rate and IOER are equal to 0%.
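The arbitrage at work here can be sketched in a few lines. This is a deliberate toy, not a model of actual Fed operations: it simply encodes the claim that once excess reserves exist, interbank lending competes the market rate down to whatever idle reserves earn.

```python
def market_rate_with_excess_reserves(ioer: float) -> float:
    """With excess reserves in the system, any bank holding them will
    lend at any rate above IOER rather than let them sit idle, so
    competition drives the Fed Funds market rate down to IOER itself."""
    return ioer

# Configuration 2 cannot persist: a 2% target with IOER at 0% collapses
# to a 0% market rate as soon as excess reserves appear.
market_rate_with_excess_reserves(0.0)  # -> 0.0
```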

The third configuration is more interesting. Even if we have hit the zero-bound, why can’t the Fed enforce a negative IOER to force banks to try and get rid of their excess reserves and trigger the monetarist ‘hot potato’ effect? If the central bank charges a small negative rate on reserves, the effects will be negligible. Banks will pass on this cost to deposit-holders in the form of negative deposit rates or extra fees. In the absence of any alternative liquid and nominally safe investment options, most depositors will pay this safety premium.

But what if the Fed charges a significantly negative interest rate on reserves? For example, what if it costs 5% to hold excess reserves? In a world where all money is electronic, this may just work. But in a world where bank depositors possess the option to take their cash out in the form of bank notes, highly negative interest rates on reserves are impossible to enforce. In other words, if IOER is -5%, then you and I can earn a higher interest rate of 0% by simply taking our money out of the bank and holding currency instead.
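The currency-withdrawal option puts an effective floor under any negative IOER. The sketch below is illustrative; the 0.5% annual cost of storing and insuring banknotes is an assumed figure, not an estimate:

```python
def effective_reserve_rate(ioer: float, storage_cost: float = 0.005) -> float:
    """Banks (and ultimately depositors) can always swap reserves for
    banknotes yielding 0% minus storage and insurance costs, so the
    worst rate anyone actually bears is capped from below."""
    return max(ioer, -storage_cost)

# A -5% IOER cannot be enforced: holders switch to currency at ~ -0.5%.
effective_reserve_rate(-0.05)  # -> -0.005
```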

To summarise, there is no avalanche of inflation coming our way no matter what the Fed pays out as interest on excess reserves.


Written by Ashwin Parameswaran

July 24th, 2013 at 11:55 am

Posted in Monetary Policy

Invention Is Not The Same As Innovation

with 8 comments

As Reihan Salam argues, economic innovation is not just about basic research and technological breakthroughs. As Amar Bhide has said, “the willingness and ability of lower-level players to create new know-how and products is at least as important to an economy as the scientific and technological breakthroughs on which they rest”. History in fact provides us with at least two prominent examples where basic scientific research and invention did not translate into adequate economic innovation.

The first is the experience of the Soviet economic system. In the Soviet Union, most research and development was conducted by designated research institutes, which were also partially responsible for implementing the new discoveries and inventions within the relevant industrial enterprise. The Soviets were reasonably successful in coming up with new inventions in their research institutes. Yet even when new products and technologies had been invented, the Soviet research institutes struggled to convince incumbent firms to introduce them into production.

Now how is this example relevant to a capitalist economy? Some of you may argue that unlike the communist enterprises in the Soviet Union, capitalist enterprises are strongly incentivised to jump on any innovation coming out of a research institute. But in reality there was no shortage of positive incentives to innovate or increase production for managers of Soviet enterprises. Soviet managers were not motivated by the communist ideal but by that most capitalist of incentives, the bonus. The economist Joseph Berliner estimated that a director of a coal-mine could earn as much as 150% of his base salary as a bonus just for outperforming plan production targets by 5%. On top of this, Soviet managers were provided with ‘innovation’ bonuses as the Soviet planning authorities became increasingly concerned with the slow pace of productivity growth in the 1950s and 60s. But none of these bonuses worked. In fact the bonuses served to further discourage the rollout of any risky innovation that could endanger the fulfilment of short-term plan targets. Managers would focus on low-risk process innovation to fulfil their innovation targets and on maximising their short-term ‘plan fulfilment’ bonuses. Ultimately the Soviet system could not replicate the real threat of failure that compels firms in a free enterprise economy to chase disruptive innovation for fear that an upstart new entrant might overtake them.

The second prominent example is the history of modern capitalism itself. Invention and scientific research are not what define the modern era of rapid growth that started in Britain in the early 19th century. As Jack Goldstone has argued, the technical innovations underpinning the “engine revolution” that England underwent in the early 19th century were present elsewhere. Countries like France were even widely regarded as more advanced in the sciences than England. Yet it was in England that these innovations were so effectively put into economic use.

None of this is meant to undermine the importance of basic research funded by the government. But disruptive economic innovation also requires a truly competitive private sector where incumbents are faced with the threat of failure and barriers to entry for new firms are minimal. The ‘Great Stagnation’ is not driven by the lack of basic research and invention. It is driven by the lack of competition for incumbent large firms and the excessive barriers to entry that new firms and small businesses have to face in the neoliberal era.


Written by Ashwin Parameswaran

July 11th, 2013 at 1:01 pm

Posted in Resilience

Explaining The Neglect of Doug Engelbart’s Vision: The Economic Irrelevance of Human Intelligence Augmentation

with 8 comments

Doug Engelbart’s work was driven by his vision of “augmenting the human intellect”:

By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.

Alan Kay summarised the most common argument as to why Engelbart’s vision never came to fruition1:

Engelbart, for better or for worse, was trying to make a violin…most people don’t want to learn the violin.

This explanation makes sense within the market for mass computing. Engelbart was dismissive about the need for computing systems to be easy-to-use. And ease-of-use is everything in the mass market. Most people do not want to improve their skills at executing a task. They want to minimise the skill required to execute a task. The average photographer would rather buy an easy-to-use camera than teach himself how to use a professional camera. And there’s nothing wrong with this trend.

But why would this argument hold for professional computing? Surely a professional barista would be incentivised to become an expert even if it meant having to master a difficult skill and operate a complex coffee machine? Engelbart’s dismissal of the need for computing systems to be easy-to-use was not irrational. As Stanislav Datskovskiy argues, Engelbart’s primary concern was that the computing system should reward learning. And Engelbart knew that systems that were easy to use the first time around did not reward learning in the long run. There is no meaningful way in which anyone can be an expert user of most easy-to-use mass computing systems. And surely professional users need to be experts within their domain?

The somewhat surprising answer is: No, they do not. From an economic perspective, it is not worthwhile to maximise the skill of the human user of the system. What matters and needs to be optimised is total system performance. In the era of the ‘control revolution’, optimising total system performance involves making the machine smarter and the human operator dumber. Choosing to make your computing systems smarter and your employees dumber also helps keep costs down. Low-skilled employees are a lot easier to replace than highly skilled employees.

The increasing automation of the manufacturing sector has led to the progressive deskilling of the human workforce. For example, below is a simplified version of the empirical relationship between mechanisation and human skill that James Bright documented in 1958 (via Harry Braverman’s ‘Labor and Monopoly Capital’). However, although human performance has suffered, total system performance has improved dramatically and the cost of running the modern automated system is much lower than the preceding artisanal system.


Since the advent of the assembly line, the skill level required of manufacturing workers has declined. And in the era of increasingly autonomous algorithmic systems, the same is true of “information workers”. For example, since my time working within the derivatives trading businesses of investment banks, banks have made a significant effort to reduce the amount of skill and know-how required to price and trade financial derivatives. Trading systems have been progressively modified so that as much knowledge as possible is embedded within the software.

Engelbart’s vision runs counter to the overwhelming trend of the modern era. Moreover, as Thierry Bardini argues in his fascinating book, Engelbart’s vision was also neglected within his own field which was much more focused on ‘artificial intelligence’ rather than ‘intelligence augmentation’. The best description of the ‘artificial intelligence’ program that eventually won the day was given by J.C.R. Licklider in his remarkably prescient paper ‘Man-Computer Symbiosis’ (emphasis mine):

As a concept, man-computer symbiosis is different in an important way from what North has called “mechanically extended man.” In the man-machine systems of the past, the human operator supplied the initiative, the direction, the integration, and the criterion. The mechanical parts of the systems were mere extensions, first of the human arm, then of the human eye….

In one sense of course, any man-made system is intended to help man….If we focus upon the human operator within the system, however, we see that, in some areas of technology, a fantastic change has taken place during the last few years. “Mechanical extension” has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped. In some instances, particularly in large computer-centered information and control systems, the human operators are responsible mainly for functions that it proved infeasible to automate…They are “semi-automatic” systems, systems that started out to be fully automatic but fell short of the goal.

Licklider also correctly predicted that the interim period before full automation would be long and that for the foreseeable future, man and computer would have to work together in “intimate association”. And herein lies the downside of the neglect of Engelbart’s program. Although computers do most tasks, we still need skilled humans to monitor them and take care of unusual scenarios which cannot be fully automated. And humans are uniquely unsuited to a role where they exercise minimal discretion and skill most of the time but nevertheless need to display heroic prowess when things go awry. As I noted in an earlier essay, “the ability of the automated system to deal with most scenarios on ‘auto-pilot’ results in a deskilled human operator whose skill level never rises above that of a novice and who is ill-equipped to cope with the rare but inevitable instances when the system fails”.

In other words, ‘people make poor monitors for computers’. I have illustrated this principle in the context of airplane pilots and derivatives traders but Atul Varma finds an equally relevant example in the ‘near fully-automated’ coffee machine which is “comparatively easy to use, and makes fine drinks at the push of a button—until something goes wrong in the opaque innards of the machine”. Thierry Bardini quips that arguments against Engelbart’s vision always boiled down to the same objection – let the machine do the work! But in a world where machines do most of the work, how do humans become skilled enough so that they can take over during the inevitable emergency when the machine breaks down?


Written by Ashwin Parameswaran

July 8th, 2013 at 3:54 pm