resilience, not stability

Archive for the ‘Resilience’ Category

The Reality of Abenomics: Qualitative Easing and Propping Up The Markets


Noah Smith and David Andolfatto think that Abenomics conclusively proves that quantitative easing boosts inflation. But Abenomics has nothing to do with quantitative easing and everything to do with qualitative easing. Every week, the Bank of Japan (BoJ) purchases Topix and Nikkei 225 ETFs till it hits an annual limit of around 1 trillion yen (see table below for last month’s purchases). It also purchases a much smaller amount of real estate investment trusts (REITs). Abenomics has nothing to do with increasing the “money supply” and everything to do with propping up asset prices.

How does the BoJ decide when to buy ETFs? It is widely believed that they follow the ‘1% rule’ i.e. “it would buy ETFs when the Topix index of all issues on the first section of the Tokyo Stock Exchange fell more than 1% in the morning session”. Abenomics takes the Greenspan/Bernanke put to its logical conclusion – why restrict monetary policy to implicit protection of asset prices when it can serve as an explicit backstop to the stock market?
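To see just how mechanical such a backstop is, consider a stylised simulation of the rule. The trigger threshold, the size of each purchase and the random market returns below are my own illustrative assumptions; only the roughly 1 trillion yen annual limit comes from the BoJ's stated cap:

```python
import random

def simulate_boj_etf_rule(morning_returns, threshold=-0.01,
                          purchase_size=20e9, annual_cap=1e12):
    """Stylised '1% rule': buy a fixed amount of ETFs whenever the
    morning session falls past the threshold, until the annual cap
    would be breached. All parameters are illustrative assumptions,
    not the BoJ's actual operating parameters."""
    total_bought = 0.0
    interventions = 0
    for r in morning_returns:
        if r <= threshold and total_bought + purchase_size <= annual_cap:
            total_bought += purchase_size
            interventions += 1
    return interventions, total_bought

# A year of hypothetical morning sessions: slight drift, 1.2% volatility.
random.seed(42)
returns = [random.gauss(0.0002, 0.012) for _ in range(245)]
days, yen = simulate_boj_etf_rule(returns)
print(f"Intervened on {days} days, buying {yen / 1e12:.2f} trillion yen")
```

Lowering the threshold towards 0.5% makes the rule fire far more often and burn through the annual cap faster, which is precisely the 'upping the ante' dynamic that an explicit backstop invites.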

What is the end-game of Abenomics? Obviously buying ETFs and REITs will increase inflation. But advocates of qualitative easing argue that such purchases will be a temporary policy that will be unwound when the economy achieves ‘lift-off’. This is pure fantasy. The BoJ will have to keep upping the ante to maintain even a small positive rate of inflation in a demographically challenged economy such as Japan. Already the ‘1% rule’ is no longer sufficient: “In the latter half of 2013, the bank apparently relaxed that rule, sometimes buying even when the decline was less than 0.5%”. In the long run the BoJ will end up owning an ever-increasing proportion of the private sector financial assets in the Japanese economy.

There is no lift-off, just an ever-increasing cost of intervention for the central bank and a correspondingly increasing drug addiction for the private sector.


Written by Ashwin Parameswaran

February 4th, 2014 at 12:48 pm

Capitalism For The Masses


The post-2008 economic recovery has been a recovery of the capitalists. Growth in employment and real wages has been sluggish whereas profits have rebounded well past pre-recession highs. However, the decline of the share of labour in GDP is not a localised post-crisis phenomenon. It is a global phenomenon that started at least three decades ago. The Great Moderation has been a period of stable prosperity for the capitalists and a period of stagnation for labour.

Corporate Profits to GDP

The declining share of labour in national income is not a problem by itself. The underlying problem is the stagnation of real income experienced by the masses. One way to tackle this problem is to redistribute income away from high wage-earners and large capitalists towards the low and medium wage-earners. But this approach is a losing battle against the march of technological progress and the inevitable substitution of labour by capital. Instead we should empower the low and medium wage-earners of today to become the capitalists of tomorrow, whilst providing a safety net that protects them as individuals rather than the firms and unions they belong to.

There are two reasons why a ‘capitalism for the masses’ is a viable proposition today: entrepreneurs need less capital to start a business than they did in the twentieth century, and whatever capital they do need is more freely available than it has ever been.

Entrepreneurs need less capital

Economies of scale and scope are collapsing across the economy. This has been the case for many years in the world of software. But it is also increasingly true for hardware. The advantages of the small player can overcome the cost disadvantage of operating at a lower scale. The agile small player can experiment and iterate in a manner that the large incumbent player cannot. As Luke Johnson observes in the case of the craft beer industry, the customisation and unique character of the smaller producer is well worth the small premium for many customers.

Throughout the ‘Control Revolution’, larger firms enjoyed a significant cost advantage over smaller players and oligopolies were the norm in most manufacturing sectors. Chris Anderson narrates the story of his grandfather who was an inventor during the heyday of large conglomerates in the 20th century:

he was an inventor, but he could not become an entrepreneur because those additional steps of mass production, distribution, marketing, et cetera, were essentially inaccessible in those days. All you could do was patent, license and hope for the best. You had to lose control of your invention. You had to hand it off to somebody else.

On the other hand, an inventor today has a multitude of options to prototype and produce small quantities of his product. The logistics of selling and delivering the product to customers are also easily outsourced. As Luke Johnson identifies, the dynamics of capitalism “appear to be coming full circle and reverting to a structure that prevailed at the start of industrial capitalism” in the early part of the nineteenth century.

Capital is freely available

In our increasingly ageing and wealthy society, there is no shortage of capital available to fund new businesses. Household savings alone are sufficient to fund the required business investment[1].

We imagine that risky businesses can only be funded by the alchemy of the modern maturity-transforming banking system. But as I showed in my last essay, the explosion of peer-to-peer financing in the United Kingdom shows that speculative equity ventures and business loans can be, and are being, funded by the man on the street.

Although significant progress has been made recently, many of the hurdles to achieving a genuinely decentralised financial system are regulatory. By trying to protect small investors from the consequences of investing in a failed venture, we end up denying financing to small businesses. It is insane that we allow small investors to “gift” as much money as they want on Kickstarter but bar them from investing in the same venture as part-owners, with the legal protections against fraud that ownership affords.

Deregulate and Expand The Safety Net

Throughout the industrial era, most people did not have the option of becoming a capitalist. If you worked in a manufacturing plant, your best option was to join another firm. This is still the case today. But in many industries, the only thing stopping laid off employees from striking out on their own are regulatory economies of scale and protections (such as bailouts, patent protections and licensing requirements) that large incumbent firms enjoy. In order to enable every person to become a capitalist, we need to reduce the regulatory burden on all aspiring capitalists as well as removing the protections enjoyed by incumbent large firms.

At the same time that we eliminate these localised firm-focused safety nets, we need to implement a broad-based safety net for individuals that will help mitigate the greater uncertainty of such an economic system. Every individual should be assured of access to an income that affords him the basic necessities of life, access to catastrophic healthcare protection and access to basic financial services such as the ability to hold a bank account and make payments.

However, no one is entitled to protection from the inherent instability of a competitive capitalist economy. Firms and workers should not be protected by bailouts. Individual investors should not be protected from the risks of investing their money in failed ventures. Everyone deserves a safety net but no one deserves a hammock.

The expanded safety net and increased deregulation go hand in hand. Increasing instability without a safety net will make the system more fragile. And a broad-based safety net by itself will simply dovetail with localised safety nets to reinforce an already sclerotic and stagnant economic system. By combining the two, we can achieve the best of both worlds – a robust economic system that can achieve disruptive economic progress whilst protecting individuals from the worst consequences of economic failure.

Note: For a more detailed explanation of my approach to economic policy and its rationale, see my earlier essay ‘Radical Centrism: Uniting the Radical Left and the Radical Right’

  1. Data on household savings courtesy of Natixis, referred to in a previous post ↩

Written by Ashwin Parameswaran

November 8th, 2013 at 1:19 pm

Posted in Resilience

Financing Investment In A World Without Maturity Transformation


Why do banks exist? The conventional wisdom goes like this – depositors prefer to hold liquid risk-free assets and borrowers prefer to borrow for the long-term to invest in risky projects. Banks sit in the middle of this process and perform a sort of alchemy. By performing this alchemy, banks leave themselves open to the risk of bank runs – if all the depositors seek to withdraw their money at the same time, even a bank with otherwise sound loans as assets can go bust. This perceived risk of a bank run is why governments and central banks provide deposit insurance and liquidity facilities to the banking sector, a privilege that is not typically available to other financial intermediaries. In other words, banks exist for the purpose of maturity transformation.

Maturity transformation is a nice catch-all phrase but it subsumes some very different lending activities conducted by banks. For example, banks can help businesses to finance their working capital needs and provide financing against customer invoices. This function is as old as banking itself. It is what bankers did in the prosperous city-states of Italy in the 15th century. But most of these debts are short-term debts with a maturity of less than a year. This is not the heroic maturity transformation of the bankers whom Schumpeter viewed as the ‘capitalists par excellence’. There is no reason to believe that society does not have the risk appetite to take on the default risk of such short-term debt, and as I shall show later, there is significant evidence of this already happening in the United Kingdom today.

In the modern era, banks also provide a range of short-term lending options to consumers such as credit card loans. Again this is short-term debt that forms part of a well-diversified pool of loans. For well over a decade, these have been amongst the most easily securitised parts of a bank’s balance sheet. And although this is also technically maturity transformation, it is typically not what most of us think of as its primary purpose.

When most of us think of maturity transformation we think of banks’ ability to provide long-term loans (at least 3–5 years and as long as 30 years in maturity). Again this is not a homogeneous category. The most significant component of bank lending on such a long maturity in many countries is mortgage lending. Mortgage lending is undoubtedly an important part of the financial landscape. But very little of the maturity risk of mortgages actually stays with the originating banks. The interest rate risk is often hedged away with willing counterparties such as pension funds and life insurers and the credit risk is often securitised away.

When most people think of the role of banks, they think of long-term loans to businesses. The popular press is rife with stories of the inability of banks to lend more to small and medium enterprises (SMEs) and of how this is holding back economic growth. It is an obvious truth that banks make very few unsecured loans to SMEs on even a 3–5 year maturity, let alone a 30-year maturity. But does this matter? And did banks ever engage in such lending?

To understand the modern mythology surrounding bank lending to businesses, we need to study the history of bank lending to industry. In the United Kingdom, bank lending has never formed a significant component of business funding for growth investment even during the high-growth periods of the 19th century. To the extent that banks have provided such loans in the modern era, it has typically been on the back of security in the form of owners’ property, company property or personal guarantees from company directors.

‘Heroic’ maturity transformation was born in the economies of mainland Europe that wanted to catch up to Britain in the middle of the 19th century. The first such bank was Crédit Mobilier which was founded in France in 1852 to finance the railroads that banking had not touched till then. Although Crédit Mobilier collapsed in the financial crisis of 1867, the innovation took hold in the form of other banks in France such as Crédit Lyonnais and spread across Europe.

The quintessential example of banks as long-term investment institutions arose in Germany between 1870 and the First World War. In an era when financial crises were frequent and banking was risky, German banks figured out how to profitably fund long-term investment projects without bankrupting themselves in the process. Instead of just arranging share issuances, they bought a significant chunk of the equity of firms they lent to. German banks owned a significant stake in German industry and proceeded to engineer German industry such that the risk to them was minimised. This involved using their ownership stake and financing power to push through mergers, cartels and backward/forward integration. In other words, they de-risked German industry. Bankers may have been the driving force of capitalism in 19th century Germany but they were not risk-takers.

After Germany, this model of banking and development has been copied by a number of countries – Italy, Russia prior to 1917 and pretty much all of East Asia (starting with Japan) since World War Two. There are many aspects of this model that are not relevant to today’s world – what worked in the mass-production, heavy-industry-dominant period of capitalism, when economies of scale meant that most successful businesses were large, will most likely not work today. But the critical problem with this model is that it is a model suited to accelerating catch-up growth. It is also a fundamentally low-risk, low-reward model of banking and economic development. In the developed world that is going through the ‘Great Stagnation’, this model will not work.

Now you may argue – so what if we don’t need banks to conduct heroic maturity transformation that funds long-term investment? Surely we need maturity transformation to fund the more mundane activities that I described earlier – invoice financing, short-term business loans, mortgages etc. Until a couple of years ago, this question was literally unanswerable. The only honest answer would have been – who knows? But the explosion of activity in the peer-to-peer lending sector now enables us to arrive at some preliminary conclusions.

Intermediaries that facilitate peer-to-peer (P2P) lending are subject to very little regulation in the United Kingdom (unlike the process of starting a bank which can take years and land you with a seven-figure legal bill). Unsurprisingly, there has been an explosion in the number of peer-to-peer lending platforms in the UK. Conventional wisdom would suggest that individuals who lend through such platforms would lend their money at higher rates than banks would. After all, they have nowhere near as privileged a position as banks do – no ability to create money ex nihilo, no access to the central bank’s repo window. But the reality is exactly the opposite. The lending rates in the industry are, if anything, too low. Individual lenders are falling over themselves to lend money to risky individuals and companies at rates far lower than what banks would lend to them at (to take just one example, take a look at the borrowing rates at Zopa).

And P2P lending is not just a niche phenomenon – there are platforms that handle everything from invoice financing and bridge loans to longer-term loans to individuals and businesses, and even mortgages. The last couple of years have in effect given us a controlled experiment in what a non-maturity-transforming lending system would look like. And the answer is that rates would be lower than they would be in a maturity-transforming system. Maturity-transforming banking is redundant – it only gives us recurrent financial crises. The idea that in the absence of bank maturity transformation, lending rates would explode has been disproven.
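The arithmetic of a well-diversified pool of short-term loans suggests why such low rates can still be rational for individual lenders. A rough sketch, where the rates, default frequency and recovery rate are entirely my assumptions rather than any platform's figures:

```python
def pool_net_yield(gross_rate, annual_default_rate, recovery_rate=0.4):
    """Expected net yield on a large, diversified pool of one-year loans:
    gross interest minus expected credit losses. Assumes enough broadly
    independent loans that realised losses stay close to the expected
    loss rate, which is the point of diversification."""
    expected_loss = annual_default_rate * (1 - recovery_rate)
    return gross_rate - expected_loss

# Hypothetical numbers: a 6% gross rate with 2% annual defaults and a
# 40% recovery still nets roughly 4.8% before platform fees.
print(f"net yield: {pool_net_yield(0.06, 0.02):.2%}")
```

The point is not the specific numbers but the shape of the calculation: the default risk of short-term, diversified lending is modest enough for individual savers to bear directly, without a maturity-transforming bank standing in the middle.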

This still doesn’t give us any answers as to what we can do to stimulate the genuinely disruptive and risky investment that can drag us out of the ‘great stagnation’. The answer is simple – we need to do more to promote equity investment in disruptive new enterprises. The conventional wisdom states that there isn’t enough risk appetite for all the equity financing that new high-risk businesses require for their investment needs. Again, the growth in equity crowd-funding is slowly disproving this myth.

A common argument against opening up the possibility of SMEs funding their equity requirements from the masses is that the masses are ill-equipped to evaluate the quality of the SMEs that seek their financing. This may be true but the “Kickstarter” approach is much worse in this respect. Recently there have been some significant Kickstarter-funded “failures”. I have nothing against the Kickstarter approach. But it is insane to allow individuals to collectively donate millions of dollars to ventures without any ownership stake while at the same time barring them from funding the same projects and receiving an ownership stake in return.

Enabling equity crowd-funding has another benefit that rarely gets mentioned. Left-wing critics of capitalism frequently criticise the “selfish” nature of capitalism. What the growth of the Kickstarter funding model shows us is that at the individual level there is much more to capitalism than simple monetary interest. Almost all the criticisms of capitalism are derived from the pathologies of institutional fiduciary capitalism. The fact that “capitalism in the large” is selfish is a good thing – fund managers and venture capitalists have a fiduciary responsibility to their investors to focus exclusively on the monetary prospects of their investments and this is exactly how it should be.

But when we invest our money directly in ventures that we care about, we are motivated by much more than just the prospect of riches. However we can do better than allowing individuals to donate money on a hope and a prayer. Expecting everybody to move to a ‘gift economy’ is unrealistic. But we can enable a genuine capitalism for the masses, where individuals can fund projects that provide them with a non-monetary payoff but with all the legal protections afforded by “corporate” capitalism. Institutional fiduciary capitalism is selfish by definition and design. If we want capitalism to become less selfish, we need to enable each individual to become a capitalist.



Note: Most of this essay is drawn from my experience in the financial industry but the portion on the growth of ‘heroic’ banking in Europe from 1850 till the First World War is mostly drawn from ‘The Oxford History of Modern Europe’ (pg 64 onwards). Chapter 7 of David Blackbourn’s book ‘History of Germany 1780-1918: The Long Nineteenth Century’ is also excellent on the German model.


Written by Ashwin Parameswaran

October 8th, 2013 at 11:56 am

Posted in Resilience

Minsky and Hayek: Connections


As Tyler Cowen argues, there are many similarities between Hayek’s and Minsky’s views on business cycles. Fundamentally, they both describe the “fundamental impossibility in maintaining orderly credit relations over time”.

Minsky saw Keynes’ theory as an ‘investment theory of the business cycle’ and his contribution as being a ‘financial theory of investment’. This financial theory was based on the credit/financing-focused endogenous theory of money of Joseph Schumpeter, whom Minsky studied under. Schumpeter’s views are best described in Chapter 3 (‘Credit and Capital’) of his book ‘Theory of Economic Development’. The gist of this view is that “investment, and expenditures more generally, require financing, not saving” (Borio and Disyatat).

Schumpeter viewed the ability of banks to create money ex nihilo as the differentia specifica of capitalism. He saw bankers as ‘capitalists par excellence’ and viewed this ‘elastic’ nature of credit as an unambiguously positive phenomenon. Many people see Schumpeter’s view of money and banking as the antithesis of the Austrian view. But as Agnes Festre has highlighted, Hayek had a very similar view on the empirical reality of the credit process. Hayek however saw this elasticity of the monetary supply as a negative phenomenon. The similarity between Hayek and Minsky comes from the fact that Minsky also focused on the downside of an elastic monetary system in which overextension of credit was inevitably brought back to a halt by the violent snapback of the Minsky Moment.

Where Hayek and Minsky differed was that Minsky favoured a comprehensive stabilisation of the financial and monetary system through fiscal and monetary intervention after the Minsky moment. Hayek only supported the prevention of secondary deflationary spirals. Minsky supported aggressive and early monetary interventions (e.g. lender-of-last-resort programs) as well as fiscal stimulus. However, although Minsky supported stabilisation he was well aware of the damaging long-run consequences of stabilising the economic system. He understood that such a system would inevitably deteriorate into crony capitalism if fundamental reforms did not follow the stabilisation. Minsky supported a “policy strategy that emphasizes high consumption, constraints upon income inequality, and limitations upon permissible liability structures”. He also advocated “an industrial-organization strategy that limits the power of institutionalized giant firms”. Minsky was under no illusions that a stabilised capitalist economy could carry on with business as usual.

I disagree with Minsky on two fundamental points – I believe that a capitalist economy with sufficient low-level instability is resilient. Allow small failures of banks and financial players, tolerate small recessions and we can dramatically reduce the impact and probability of large-scale catastrophic recessions such as the 2008 financial crisis. A little bit of chaos is an essential ingredient in a resilient capitalist economy. I also believe that we must avoid stamping out the disturbance at its source and instead focus our efforts on mitigating the wider impact of the disturbance on the masses. In other words, bail out the masses with helicopter drops rather than bailing out the banks.

But although I disagree with Minsky, his ideas are coherent. The same cannot be said for the current popular interpretation of Minsky, which holds that so long as we respond with sufficient force when the Minsky moment arrives, capitalism can carry on as usual. As Minsky argued in his book ‘John Maynard Keynes’, and as I have argued based on experiences in stabilising other complex adaptive systems such as rivers, forest fires and our brain, stabilised capitalism is an oxymoron.

What about Hayek’s views on credit elasticity? As I argued in an earlier post, “we live in a world where maturity transformation is no longer required to meet our investment needs. The evolution and malformation of the financial system means that Hayek’s analysis is more relevant now than it probably was during his own lifetime”. An elastic credit system is no longer beneficial to economic growth in the modern economy. This does not mean that we should ban the process of endogenous credit creation – it simply means that we must allow the maturity-transforming entities to collapse when they get in trouble[1].

  1. Because we do not need an elastic, maturity-transforming financial system, we can firewall basic deposit banking from risky finance. This will enable us to allow the banks to fail when the next crisis hits us. The solution is not to ban casino banking but to suck the lifeblood out of it by constructing an alternative 100% reserve-like system. I have advocated that each resident should be given a deposit account with the central bank which can be backed by Treasuries, a ‘public option’ for basic deposit banking. John Cochrane has also argued for a similar system. In his words, “the Federal Reserve should continue to provide abundant reserves to banks, paying market interest. The Treasury could offer reserves to the rest of us—floating-rate, fixed-value, electronically-transferable debt. There is no reason that the Fed and Treasury should artificially starve the economy of completely safe, interest-paying cash”. ↩

Written by Ashwin Parameswaran

August 23rd, 2013 at 4:56 pm

Invention Is Not The Same As Innovation


As Reihan Salam argues, economic innovation is not just about basic research and technological breakthroughs. As Amar Bhide has said, “the willingness and ability of lower-level players to create new know-how and products is at least as important to an economy as the scientific and technological breakthroughs on which they rest”. History in fact provides us with at least two prominent examples where basic scientific research and invention did not translate into adequate economic innovation.

The first is the experience of the Soviet economic system. In the Soviet Union, most research and development was conducted by designated research institutes, which were also partially responsible for implementing the new discoveries and inventions within the relevant industrial enterprise. The Soviets were reasonably successful in coming up with new inventions in their research institutes. Yet even when new products and technologies had been invented, the Soviet research institutes struggled to convince incumbent firms to introduce them into production.

Now how is this example relevant to a capitalist economy? Some of you may argue that unlike the communist enterprises in the Soviet Union, capitalist enterprises are strongly incentivised to jump on any innovation that comes out of a research institute. But in reality there was no shortage of positive incentives to innovate or increase production for managers of Soviet enterprises. Soviet managers were not motivated by the communist ideal but by that most capitalist of incentives, the bonus. The economist Joseph Berliner estimated that a director of a coal mine could earn as much as 150% of his base salary as a bonus just for outperforming plan production targets by 5%. On top of this, Soviet managers were provided with ‘innovation’ bonuses as the Soviet planning authorities became increasingly concerned with the slow pace of productivity growth in the 1950s and 60s. But none of these bonuses worked. In fact the bonuses served to further discourage the rollout of any risky innovation that could endanger the fulfilment of short-term plan targets. Managers would focus on low-risk process innovation to fulfil their innovation targets and on maximising their short-term ‘plan fulfilment’ bonuses. Ultimately the Soviet system could not replicate the real threat of failure that compels firms in a free enterprise economy to chase disruptive innovation for fear that an upstart new entrant may overtake them.
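A back-of-the-envelope expected-value calculation makes the incentive problem concrete. The 150% plan-fulfilment bonus is Berliner's figure; the size of the innovation bonus and the success probabilities below are entirely hypothetical:

```python
def expected_pay(base, plan_bonus_rate, innovation_bonus_rate, p_plan_met):
    """Expected pay for a manager whose plan-fulfilment bonus is earned
    only if short-term plan targets are met (probability p_plan_met).
    The innovation bonus is assumed, hypothetically, to be paid either way."""
    return base * (1 + plan_bonus_rate * p_plan_met + innovation_bonus_rate)

BASE = 100  # salary in arbitrary units

# Safe routine production: targets almost certainly met, no innovation bonus.
safe = expected_pay(BASE, 1.5, 0.0, 0.95)

# Risky disruptive innovation: a 20% innovation bonus, but a real chance
# that retooling derails this year's plan targets.
risky = expected_pay(BASE, 1.5, 0.2, 0.60)

print(f"safe: {safe:.1f}, risky: {risky:.1f}")
```

Under almost any plausible numbers the safe route dominates: no innovation bonus of reasonable size can compensate for jeopardising a plan-fulfilment bonus worth one and a half times the base salary.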

The second prominent example is the history of modern capitalism itself. Invention and scientific research are not what define the modern era of rapid growth that started in Britain in the early 19th century. As Jack Goldstone has argued, the technical innovations underpinning the “engine revolution” that England underwent in the early 19th century were present elsewhere. Countries like France were even widely regarded to be more advanced in the sciences than England. Yet it was in England that these innovations were so effectively put into economic use.

None of this is meant to undermine the importance of basic research funded by the government. But disruptive economic innovation also requires a truly competitive private sector where incumbents are faced with the threat of failure and barriers to entry for new firms are minimal. The ‘Great Stagnation’ is not driven by the lack of basic research and invention. It is driven by the lack of competition for incumbent large firms and the excessive barriers to entry that new firms and small businesses have to face in the neoliberal era.


Written by Ashwin Parameswaran

July 11th, 2013 at 1:01 pm

Posted in Resilience

Explaining The Neglect of Doug Engelbart’s Vision: The Economic Irrelevance of Human Intelligence Augmentation


Doug Engelbart’s work was driven by his vision of “augmenting the human intellect”:

By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.

Alan Kay summarised the most common argument as to why Engelbart’s vision never came to fruition[1]:

Engelbart, for better or for worse, was trying to make a violin…most people don’t want to learn the violin.

This explanation makes sense within the market for mass computing. Engelbart was dismissive about the need for computing systems to be easy-to-use. And ease-of-use is everything in the mass market. Most people do not want to improve their skills at executing a task. They want to minimise the skill required to execute a task. The average photographer would rather buy an easy-to-use camera than teach himself how to use a professional camera. And there’s nothing wrong with this trend.

But why would this argument hold for professional computing? Surely a professional barista would be incentivised to become an expert even if it meant having to master a difficult skill and operate a complex coffee machine? Engelbart’s dismissal of the need for computing systems to be easy-to-use was not irrational. As Stanislav Datskovskiy argues, Engelbart’s primary concern was that the computing system should reward learning. And Engelbart knew that systems that were easy to use the first time around did not reward learning in the long run. There is no meaningful way in which anyone can be an expert user of most easy-to-use mass computing systems. And surely professional users need to be experts within their domain?

The somewhat surprising answer is: No, they do not. From an economic perspective, it is not worthwhile to maximise the skill of the human user of the system. What matters and needs to be optimised is total system performance. In the era of the ‘control revolution’, optimising total system performance involves making the machine smarter and the human operator dumber. Choosing to make your computing systems smarter and your employees dumber also helps keep costs down. Low-skilled employees are a lot easier to replace than highly skilled employees.

The increasing automation of the manufacturing sector has led to the progressive deskilling of the human workforce. For example, below is a simplified version of the empirical relationship between mechanisation and human skill that James Bright documented in 1958 (via Harry Braverman’s ‘Labor and Monopoly Capital’). However, although human performance has suffered, total system performance has improved dramatically and the cost of running the modern automated system is much lower than the preceding artisanal system.


Since the advent of the assembly line, the skill level required by manufacturing workers has reduced. And in the era of increasingly autonomous algorithmic systems, the same is true of “information workers”. For example, since my time working within the derivatives trading businesses of investment banks, banks have made a significant effort to reduce the amount of skill and know-how required to price and trade financial derivatives. Trading systems have been progressively modified so that as much knowledge as possible is embedded within the software.

Engelbart’s vision runs counter to the overwhelming trend of the modern era. Moreover, as Thierry Bardini argues in his fascinating book, Engelbart’s vision was also neglected within his own field which was much more focused on ‘artificial intelligence’ rather than ‘intelligence augmentation’. The best description of the ‘artificial intelligence’ program that eventually won the day was given by J.C.R. Licklider in his remarkably prescient paper ‘Man-Computer Symbiosis’ (emphasis mine):

As a concept, man-computer symbiosis is different in an important way from what North has called “mechanically extended man.” In the man-machine systems of the past, the human operator supplied the initiative, the direction, the integration, and the criterion. The mechanical parts of the systems were mere extensions, first of the human arm, then of the human eye….

In one sense of course, any man-made system is intended to help man….If we focus upon the human operator within the system, however, we see that, in some areas of technology, a fantastic change has taken place during the last few years. “Mechanical extension” has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped. In some instances, particularly in large computer-centered information and control systems, the human operators are responsible mainly for functions that it proved infeasible to automate…They are “semi-automatic” systems, systems that started out to be fully automatic but fell short of the goal.

Licklider also correctly predicted that the interim period before full automation would be long and that for the foreseeable future, man and computer would have to work together in “intimate association”. And herein lies the downside of the neglect of Engelbart’s program. Although computers do most tasks, we still need skilled humans to monitor them and take care of unusual scenarios which cannot be fully automated. And humans are uniquely unsuited to a role where they exercise minimal discretion and skill most of the time but nevertheless need to display heroic prowess when things go awry. As I noted in an earlier essay, “the ability of the automated system to deal with most scenarios on ‘auto-pilot’ results in a deskilled human operator whose skill level never rises above that of a novice and who is ill-equipped to cope with the rare but inevitable instances when the system fails”.

In other words, ‘people make poor monitors for computers’. I have illustrated this principle in the context of airplane pilots and derivatives traders but Atul Varma finds an equally relevant example in the ‘near fully-automated’ coffee machine which is “comparatively easy to use, and makes fine drinks at the push of a button—until something goes wrong in the opaque innards of the machine”. Thierry Bardini quips that arguments against Engelbart’s vision always boiled down to the same objection – let the machine do the work! But in a world where machines do most of the work, how do humans become skilled enough so that they can take over during the inevitable emergency when the machine breaks down?


Written by Ashwin Parameswaran

July 8th, 2013 at 3:54 pm

Implementing The Helicopter Drop

with 17 comments

Why are helicopter drops off limits for modern central banks and governments? Why do central banks and governments prefer to buy assets from banks and the rich rather than send money to the masses? There are three major reasons:

  1. If the central bank simply prints money out of thin air and credits it to the people, then it suffers a loss. If the helicopter drop is sufficiently large, then the central bank may even become technically insolvent. Although this has very few technical implications for the functioning of a central bank, the political implications are significant. Opponents of the stimulus will latch on to the losses as a sign of monetary irresponsibility. The political implications and fear of loss of central banking independence may even have a negative impact on the economy. Understandably, central banks prefer to avoid such a situation. By buying financial assets, central bank governors can at least postpone losses for long enough that it becomes the next governor’s headache.
  2. If the helicopter drop is financed by a bond issuance by the government, then many market participants fear that the government debt will increase to unsustainable levels that cannot be paid back.
  3. If the helicopter drop is financed by a bond issued by the government and bought by the central bank, then some commentators fear that we will have crossed the Rubicon into the dangerous world of monetised fiscal deficits.
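The first and third concerns can be made concrete with a stylised balance-sheet sketch. This is purely illustrative — the account names and numbers are invented, not any actual institution's books — but it shows why a money-printed drop pushes central-bank equity negative while a bond-financed drop bought by the central bank leaves equity untouched:

```python
def equity(assets, liabilities):
    """A central bank's equity is simply assets minus liabilities."""
    return sum(assets.values()) - sum(liabilities.values())

# Stylised opening balance sheet (all numbers hypothetical).
assets = {"government_bonds": 50.0}
liabilities = {"bank_reserves": 45.0}
print(equity(assets, liabilities))   # 5.0: solvent

# Point 1: a drop of 10 printed out of thin air adds reserves
# (a liability) with no offsetting asset, so equity turns negative.
liab_after_drop = {"bank_reserves": liabilities["bank_reserves"] + 10.0}
print(equity(assets, liab_after_drop))   # -5.0: technically insolvent

# Point 3: the same drop financed by a government bond that the central
# bank buys. Reserves still rise by 10, but so do bond holdings,
# so equity is unchanged.
assets_with_bond = {"government_bonds": assets["government_bonds"] + 10.0}
print(equity(assets_with_bond, liab_after_drop))   # 5.0: solvent
```

The accounting is trivial, but it is precisely this sign flip in equity — not any operational constraint — that drives the political objections listed above.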

I have written in the past about why these concerns are largely mistaken. But perceptions matter, not least because markets are reflexive. So the question is: How can we design a systematic program of helicopter drops that tackles the above concerns? Below is one possible solution:

  • The helicopter drop should be financed by a perpetual bond issued by the government and bought by the central bank. The perpetual bond pays an overnight floating interest rate equivalent to the Federal Funds rate.

The perpetual nature of the bond means that the government never has to pay it back. The floating rate means that the interest-rate risk on the central bank’s balance sheet is negligible, so it can hold the bond at par value. The central bank’s inflation target is left unchanged, which means that fiscal deficits cannot be monetised without limit.
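The claim that the floating coupon eliminates interest-rate risk and lets the bond sit at par can be checked with a minimal sketch. A note whose coupon resets each period to the prevailing short rate, and is discounted at that same rate, backward-inducts to face value along any rate path whatsoever (the code is illustrative, not any pricing library's API):

```python
import random

FACE = 100.0

def floater_value(rates, terminal=FACE):
    """Backward-induct the value of a floating-rate note whose coupon
    each period equals the prevailing short rate times face value."""
    v = terminal
    for r in reversed(rates):
        coupon = r * FACE            # coupon resets to the period's short rate
        v = (coupon + v) / (1 + r)   # discount one period at that same rate
    return v

# Any path of overnight rates gives the same answer: par.
path = [random.uniform(0.0, 0.10) for _ in range(500)]
print(round(floater_value(path), 4))   # 100.0
```

Each step computes (rF + F)/(1 + r) = F, so the coupon reset exactly offsets the discounting — which is why the central bank can carry the bond at par regardless of where the Federal Funds rate moves.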


Written by Ashwin Parameswaran

July 2nd, 2013 at 3:35 pm

Creation, Destruction and Stagnation

with 2 comments

Many people think that Joseph Schumpeter was the first person to explore the idea of ‘creative destruction’. But as Hugo and Erik Reinert explain in a fascinating paper, the idea of ‘creative destruction’ has a long history. In the same paper, they also provide some subtle insights into the meaning of creative destruction.

First, creation implies destruction. When Apple created the smartphone, it necessarily annihilated the sales of dumb-phones and cameras. However “this relationship exists only in one direction and does not function when reversed. Denial does not imply affirmation, destruction itself does not lead to creation”. Nietzsche often explored this theme in aphorisms such as:

Whoever must be a creator always annihilates.

affirmation requires denial and annihilation.

You must wish to consume yourself in your own flame: how could you wish to become new unless you had first become ashes!

Second, the opposite of creative destruction is stagnation. The ‘Great Stagnation’ is the logical consequence of an economic environment where both job creation and destruction are falling.



Written by Ashwin Parameswaran

July 1st, 2013 at 6:15 pm

Posted in Resilience

Asymmetric Nature of Greenspan/Bernanke Put Monetary Policy

with 2 comments

Via Mark Thoma I came across a post by Antonio Fatas, who complains:

I find it surprising that those who argued that QE had very little effect in the economy are now ready to blame the central bank for all the damage they will do to the economy when they undo those measures. So they seem to have a model of the effectiveness of central banks that is very asymmetric – I would like to see that model.

One possible model that contains such an asymmetric response is the model of addiction. Let me provide an analogy that I have explored in detail in an earlier post – the history of psychotropic medication in the United States and its usage to combat an ever-increasing laundry list of mental “disorders”. You keep taking the pills and you hang on; you barely function, albeit in a somewhat dysfunctional manner. If you increase the dosage, the benefits are negligible. But if you stop taking the pills and do nothing else to break the fall, then you risk a catastrophic collapse. That is the asymmetric response of addiction.
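The asymmetry can be captured in a purely illustrative toy response function — every number here is invented, and this is a caricature of the addiction model rather than anyone's calibrated macro model. Marginal increases in the stimulus add almost nothing, while full withdrawal triggers a discontinuous collapse:

```python
import math

def output(stimulus, addicted=True):
    """Toy response function: once the system is addicted, extra
    stimulus yields sharply diminishing returns, but withdrawing the
    stimulus entirely causes a discontinuous crash."""
    BASELINE = 100.0
    if stimulus <= 0:
        # cold turkey: a large one-off collapse, but only if addicted
        return BASELINE - (40.0 if addicted else 0.0)
    # diminishing returns: log-shaped benefit from further dosage
    return BASELINE + math.log1p(stimulus)

print(round(output(10.0), 2))   # 102.4: hanging on, barely above baseline
print(round(output(20.0), 2))   # 103.04: doubling the dose adds little
print(round(output(0.0), 2))    # 60.0: catastrophic collapse on withdrawal
```

The point of the sketch is simply that a near-zero derivative on the way up is perfectly consistent with a large discontinuity on the way down — which is the "asymmetric model" Fatas asks for.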

Unlike the critics that Fatas refers to, I’m opposed not to the withdrawal of monetary stimulus but to the stimulus itself. In particular, I am opposed to the nature of the stimulus, which focuses all its efforts on propping up asset prices. However, unlike most Fed critics, who tend to be conventional “austerians”, I’m a strong critic of asset-price-based monetary policy and an equally strong advocate of combined monetary-fiscal stimulus in the form of direct cash transfers to households. I support helicopter drops not just because they are fairer and more “neutral” in their impact on income distribution than quantitative easing. I support helicopter drops because they are the parachute that prevents a hard landing if we stop quantitative easing. I support helicopter drops because they are the most free-market of all macro-stabilisation policies. Rather than bailing out banks and firms and propping up asset prices, helicopter drops simply mitigate the consequences of macroeconomic volatility for the people. I support helicopter drops because they help us build a resilient economic system, as opposed to chasing the utopian aim of perfect macroeconomic stability.


Written by Ashwin Parameswaran

June 27th, 2013 at 2:52 pm

Deskilling and The Cul-de-Sac of Near Perfect Automation

with 5 comments

One of the core ideas in my essay ‘People Make Poor Monitors For Computers’ was the deskilling of human operators whose sole responsibility is to monitor automated systems. The ability of the automated system to deal with most scenarios on ‘auto-pilot’ results in a deskilled human operator whose skill level never rises above that of a novice and who is ill-equipped to cope with the rare but inevitable instances when the system fails. As James Reason notes1 (emphasis mine):

Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills. One of the consequences of automation, therefore, is that operators become de-skilled in precisely those activities that justify their marginalised existence. But when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions. Duncan (1987, p. 266) makes the same point: “The more reliable the plant, the less opportunity there will be for the operator to practise direct intervention, and the more difficult will be the demands of the remaining tasks requiring operator intervention.”

‘Humans monitoring near-autonomous systems’ is not just one way to make a system more automated. It is in fact the most common strategy to increase automation within complex domains. For example, drone warfare largely consists of providing robots with increasing autonomy such that “the human operator is only responsible for the most strategic decisions, with robots making every tactical choice”2.

But if this model of automation deskills the human operator, then why does anyone choose it in the first place? The answer is that the deskilling, and the fragility that comes with it, is not an instantaneous phenomenon. The first-generation automated system piggybacks upon the existing expertise of human operators who became experts by operating within a less-automated domain. In fact, expert human operators are often the most eager to automate away parts of their role and are most comfortable with a monitoring role. The experience of having learnt on less automated systems gives them adequate domain expertise to manage the strategic decisions and edge cases.

The fragility arises when second-generation human operators, who have no experience of ever having practised routine tactical activities and interventions, have to take over the monitoring role. This problem can be mitigated by retaining the less-automated domain as a learning tool to train new human operators. But in many domains there is no substitute for the real thing, and most of the learning happens ‘on the job’. This is certainly true of financial markets and trading, and it is almost certainly true of combat/war. Derivative traders who have spent most of their careers hacking away at simple tool-like models can usually sense when their complex pricing/trading system is malfunctioning. But what about the novice trader who has spent his entire career working with a complex, illegible system?

In some domains, like finance and airplane automation, this problem is already visible. But there are many other domains in which we can expect the same pattern to arise in the future. An experienced driver today is probably competent enough to monitor a self-driving car, but what about a driver twenty years from now who will likely not have spent any meaningful amount of time driving a manual car? An experienced teacher today is probably good enough to extract good results from a classroom where so much of the process of instruction and evaluation is automated, but what about the next generation of teachers? An experienced soldier or pilot with years of real combat experience is probably competent enough to manage a fleet of drones, but what about the next generation of combat soldiers whose only experience of warfare is through a computer screen?

Near-autonomous systems are perfect for ‘machine learning’ but almost useless for ‘human learning’. The system generates increasing amounts of data to improve the performance of the automated component within the system. But the system cannot provide the practice and experience that are required to enable human expertise.

Automation is often seen as a way to avoid ‘irrational’ or sloppy human errors. By deskilling the human operator, this justification becomes a self-fulfilling prophecy. By making it harder for the human operator to achieve expertise, automation increases the proportion of apparently irrational errors. Failures are inevitably taken as evidence of human failure, upon which the system is made even more automated, further exacerbating the problem of deskilling.

The delayed deskilling of the human operators also means that the transition to a near-automated system is almost impossible to reverse. Simply reverting to the old, less-automated, tool-like system actually makes things worse, as the second-generation human operators have no experience with using these tools. Compared to carving out an increased role for the now-deskilled human operator, more automation always looks like the best option. If we eventually get to the dream of perfectly autonomous robotic systems, then the deskilling may be just a temporary blip. But what if we never get to the perfectly autonomous robotic system?

Note: Apart from ‘People Make Poor Monitors For Computers’, ‘The Control Revolution And Its Discontents’ also touches upon similar topics but within the broader context of how this move to near-perfectly algorithmic systems fits into the ‘Control Revolution’.

  1. ‘Human Error’ by James Reason (1990), pg 180. ↩

  2. ‘Robot Futures’ by Illah Reza Nourbakhsh (2013), pg 76. ↩


Written by Ashwin Parameswaran

May 9th, 2013 at 5:35 pm