Heuristics and Robustness in Asset Allocation: The 1/N Rule, “Hard” Constraints and Fractional Kelly Strategies
Harry Markowitz received the Nobel Prize in Economics in 1990 for his work on mean-variance optimisation, which provided the foundations for Modern Portfolio Theory (MPT). Yet as Gerd Gigerenzer notes, when it came to investing his own money, Markowitz relied on a simple heuristic: the "1/N Rule", which allocates equally across all N funds under consideration. At first glance, this may seem an irrational strategy. Yet there is compelling empirical evidence backing even a heuristic as simple as the 1/N Rule. Gigerenzer points to a study conducted by DeMiguel, Garlappi and Uppal (DMU) which, after comparing many asset-allocation strategies including Markowitz mean-variance optimisation, concludes that "there is no single model that consistently delivers a Sharpe ratio or a CEQ return that is higher than that of the 1/N portfolio, which also has a very low turnover."
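The rule itself is almost trivially simple, which is rather the point. A minimal sketch (the fund names are hypothetical, purely for illustration):

```python
def one_over_n_weights(funds):
    """Return the equal-weight (1/N) allocation across the given funds."""
    n = len(funds)
    return {fund: 1.0 / n for fund in funds}

# With four funds under consideration, each receives 25% of the portfolio.
weights = one_over_n_weights(["Fund A", "Fund B", "Fund C", "Fund D"])
```

Note that the rule requires no return forecasts and no covariance estimates, which is precisely why DMU find its out-of-sample turnover to be so low: there is nothing to re-estimate, so rebalancing only corrects drift back to equal weights.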
Before exploring exactly what the DMU study and Gigerenzer's work imply, it is worth emphasizing what they do not imply. First, as both DMU and Gigerenzer stress, the purpose of this post is not to argue for the superiority of the 1/N Rule over all other asset-allocation strategies. The aim is simply to illustrate how simple heuristics can outperform apparently complex optimisation strategies under certain circumstances. Second, the 1/N Rule does not apply when allocating across securities with significant idiosyncratic risk, e.g. single stocks. In the DMU study, for example, the N assets are equity portfolios constructed on the basis of industry classification, countries, firm characteristics etc.
So in what circumstances does the 1/N Rule outperform? Gigerenzer provides a few answers here, as do DMU in the above-mentioned study, but in my opinion they all come down to "the predictive uncertainty of the problem". When faced with significant irreducible uncertainty, the robustness of an approach is more relevant to its future performance than its optimality. As Gigerenzer notes, this is not about computational intractability – indeed, a more uncertain environment calls for a simpler approach, not a more complex one. In his words: "The optimization models performed better than the simple heuristic in data fitting but worse in predicting the future."
Again, it is worth reiterating that neither study implies that we should abandon all attempts at asset allocation – the DMU study evaluates the 1/N Rule and all other strategies based on their risk-adjusted returns as defined under MPT, i.e. by their Sharpe ratio. Given that most active asset management implies a certain absence of faith in the canonical assumptions underlying MPT, some strategies could outperform if evaluated differently. Nevertheless, the fundamental conclusion regarding the importance of a robust approach holds, and robust asset allocation can be achieved in other ways. For example, when allocating across 20 asset categories, any preferred asset-allocation algorithm could be used with a constraint that the maximum allocation to any category cannot exceed 10%. Such "hard limits" are commonly used by fund managers and, although they may not have any justifying rationale under MPT, this does not mean that they are "irrational".
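One simple way to implement such a hard limit is to cap each weight and redistribute the excess among the uncapped categories, repeating until no weight breaches the cap. This is a sketch of that idea, not any specific fund manager's procedure; it assumes positive input weights summing to 1 and a cap high enough that a feasible allocation exists (e.g. 20 categories with a 10% cap):

```python
def cap_weights(weights, cap=0.10):
    """Cap each allocation at `cap`, redistributing the excess
    proportionally among the assets still below the cap.
    Assumes positive weights summing to 1 and cap * len(weights) >= 1."""
    w = dict(weights)
    while True:
        over = {k: v for k, v in w.items() if v > cap}
        if not over:
            return w
        # Clip the breaching weights and pool the surplus.
        excess = sum(v - cap for v in over.values())
        for k in over:
            w[k] = cap
        # Spread the surplus pro rata over the remaining headroom holders;
        # this may push another weight over the cap, hence the loop.
        under = {k: v for k, v in w.items() if v < cap}
        total_under = sum(under.values())
        for k in under:
            w[k] += excess * under[k] / total_under

# Illustrative only: a concentrated allocation tamed by a 25% hard limit.
alloc = {"Equities": 0.40, "Bonds": 0.25, "Gold": 0.20, "Cash": 0.15}
capped = cap_weights(alloc, cap=0.25)
```

The cap does nothing clever in the MPT sense; its value is exactly the robustness argument above: it bounds the damage from any single mis-estimated category.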
The need to favour robustness over optimisation when faced with uncertainty is also one of the reasons why the Kelly Criterion is so often implemented in practice as a "Fractional Kelly" strategy. The Kelly Criterion determines the size of sequential bets/investments that maximises the expected growth rate of the portfolio. It depends crucially upon the estimate of the "edge" that the trader possesses. In an uncertain environment, this estimate is less reliable and, as Ed Thorp explains here, the edge will most likely be overestimated. In Ed Thorp's words: "Estimates….in the stock market have many uncertainties and, in cases of forecast excess return, are more likely to be too high than too low. The tendency is to regress towards the mean….The economic situation can change for companies, industries, or the economy as a whole. Systems that worked may be partly or entirely based on data mining….Systems that do work attract capital, which tends to push exceptional [edge] down towards average values."
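For the simplest case, a repeated binary bet, the full-Kelly stake is given by the standard formula f* = (bp − q)/b, where p is the win probability, q = 1 − p, and b is the net odds received on a win. A fractional-Kelly strategy simply scales this down by a chosen fraction (half-Kelly being a common choice), which is a crude but robust hedge against the overestimated edge Thorp describes:

```python
def kelly_fraction(p, b):
    """Full-Kelly stake for a binary bet: win probability p,
    net odds b (you win b per unit staked). f* = (b*p - q) / b."""
    q = 1.0 - p
    return (b * p - q) / b

def fractional_kelly(p, b, fraction=0.5):
    """Scale the Kelly stake down (e.g. half-Kelly) to reduce the
    damage done when the estimated edge p turns out to be too high."""
    return fraction * kelly_fraction(p, b)

# An even-money bet (b = 1) with an estimated 55% win probability:
full = kelly_fraction(0.55, 1.0)        # 10% of bankroll
half = fractional_kelly(0.55, 1.0, 0.5)  # 5% of bankroll
```

The asymmetry that justifies the fraction is worth noting: because the growth-rate curve is flat near the Kelly optimum but falls off steeply beyond it, betting less than full Kelly sacrifices a little growth, while betting more than full Kelly (which is what happens when the edge is overestimated) can be ruinous.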