Bias-Variance Trade-off in Portfolio Optimization

According to current international regulation, financial institutions are obliged to calculate the risk in their trading book on the basis of expected shortfall (ES), a risk measure which aims to capture the risk of rare, extreme market events more effectively than earlier measures. It is in the interest of financial institutions to optimize their ES, in order to reduce their capital requirements. But this optimization is particularly hard for ES, because it relies on only the small fraction of the data that corresponds to large market movements. The optimization suffers from an instability: the statistical estimation error increases rapidly with the ratio r = N/T, where N is the dimension of the portfolio (the number of different assets or risk factors) and T the sample size (the length of the available time series). As the dimension of institutional portfolios is large, the estimation error can easily become so large that the whole optimization exercise is rendered meaningless. Moreover, at a certain critical ratio rc the problem undergoes a phase transition at which the estimation error diverges; beyond rc it is impossible to carry out the optimization at all.
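To make this instability concrete, the short sketch below (an illustration added here, not taken from the paper) sets up the standard linear-programming formulation of historical ES minimization, due to Rockafellar and Uryasev, for simulated i.i.d. Gaussian returns and solves it for a few values of r = N/T. The sample size T = 200, the 97.5% confidence level and the portfolio dimensions are arbitrary choices. As r grows, the solver increasingly reports the problem as unbounded, which is how the phase transition described above shows up in a finite sample.

```python
# Illustrative sketch (not from the paper): in-sample expected-shortfall (ES)
# minimization via the Rockafellar-Uryasev linear programme, for i.i.d.
# standard-normal returns.  As r = N/T grows, the LP tends to come back
# unbounded, which signals the instability discussed in the text.
import numpy as np
from scipy.optimize import linprog

def min_es_insample(X, alpha=0.975):
    """Minimize historical ES over weights w with sum(w) = 1 (shorting allowed).

    Decision variables are [w_1..w_N, c, u_1..u_T]: c plays the role of VaR
    and u_t >= max(-w.x_t - c, 0) are the tail excesses entering the ES.
    """
    T, N = X.shape
    cost = np.r_[np.zeros(N), 1.0, np.full(T, 1.0 / ((1 - alpha) * T))]
    A_ub = np.hstack([-X, -np.ones((T, 1)), -np.eye(T)])      # -w.x_t - c - u_t <= 0
    b_ub = np.zeros(T)
    A_eq = np.r_[np.ones(N), 0.0, np.zeros(T)].reshape(1, -1)  # budget: sum(w) = 1
    bounds = [(None, None)] * (N + 1) + [(0, None)] * T        # w, c free; u_t >= 0
    return linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                   bounds=bounds, method="highs")

rng = np.random.default_rng(1)
T = 200
for N in (20, 60, 180):                                        # r = 0.1, 0.3, 0.9
    res = min_es_insample(rng.standard_normal((T, N)))
    if res.status == 0:
        print(f"r = {N / T:.2f}: in-sample ES = {res.fun:.3f}")
    else:
        print(f"r = {N / T:.2f}: no finite optimum (status {res.status}, 3 = unbounded)")
```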
In a new paper, LML Fellows Imre Kondor and Fabio Caccioli, working with Gábor Papp of Eötvös University, Budapest, study how adding a regularizer to the ES risk measure alters the optimization. In the financial context, the regularization they study can be seen either as a diversification pressure or as a convenient way of taking into account, already when the portfolio is constructed, the future market impact of its eventual liquidation. Assuming independent, identically distributed Gaussian returns, and working in the limit where both the dimension and the sample size are large but their ratio r = N/T is fixed, the authors show that the regularizer prevents the phase transition from taking place. The results reveal a clear distinction between a region of parameter space where the data dominate and the regularizer plays only a minor role, and another regime, corresponding to insufficient data, where the regularizer stabilizes the estimate but does so by effectively suppressing the data. These results, the authors note, provide an explicit example of what statisticians call the “bias-variance trade-off” that lies at the heart of the regularization procedure. By tuning the strength of the regularization it is possible to identify the parameter values where this trade-off is most efficient. It turns out that the regularizer allows a gain of about a factor of four in the acceptable ratio r, a fairly satisfactory achievement.
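As a rough numerical illustration of this trade-off (again a sketch added here, not the authors' analytic calculation), one can add an ℓ2 penalty of adjustable strength lam to the same historical ES objective and compare the in-sample risk with the risk measured on a large independent sample. To make the bias term visible, the toy below departs from the paper's i.i.d. setting and gives the assets unequal volatilities, so that the heavily regularized, near-equal-weight portfolio is no longer the true optimum; the dimensions, sample sizes, confidence level and penalty grid are all arbitrary choices.

```python
# Illustrative numerical sketch of the bias-variance trade-off (not the
# paper's analytic calculation): historical ES minimization plus an l2
# penalty of strength lam.  The assets are given unequal volatilities so
# that the fully regularized (near-equal-weight) portfolio is biased away
# from the true optimum; all numerical choices below are illustrative.
import numpy as np
import cvxpy as cp

def regularized_es_weights(X, lam, alpha=0.975):
    """Rockafellar-Uryasev ES objective plus lam * ||w||^2, with sum(w) = 1."""
    T, N = X.shape
    w = cp.Variable(N)
    c = cp.Variable()                                   # plays the role of VaR
    es = c + cp.sum(cp.pos(-X @ w - c)) / ((1 - alpha) * T)
    cp.Problem(cp.Minimize(es + lam * cp.sum_squares(w)),
               [cp.sum(w) == 1]).solve()
    return w.value

def historical_es(losses, alpha=0.975):
    """Average of the worst (1 - alpha) fraction of the losses."""
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))
    return np.sort(losses)[-k:].mean()

rng = np.random.default_rng(0)
N, T_in, T_out = 100, 250, 100_000                      # r = N/T_in = 0.4
sigma = np.linspace(0.5, 2.0, N)                        # unequal volatilities
X_in = rng.standard_normal((T_in, N)) * sigma           # estimation sample
X_out = rng.standard_normal((T_out, N)) * sigma         # large evaluation sample

for lam in (1e-3, 1e-2, 1e-1, 1.0, 10.0):
    w = regularized_es_weights(X_in, lam)
    if w is None:                                       # solver failed to converge
        print(f"lam = {lam:8.4f}   solver failed")
        continue
    print(f"lam = {lam:8.4f}   in-sample ES = {historical_es(-X_in @ w):6.3f}"
          f"   out-of-sample ES = {historical_es(-X_out @ w):6.3f}")
```

With a very small penalty the weights simply follow the noise in the estimation sample, while a very large penalty pushes them toward equal weights regardless of the data; typically the out-of-sample ES is lowest somewhere in between, which is the efficient point of the trade-off.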
With the spread of machine learning methods into the financial industry, various regularization schemes have recently been put forward in the context of portfolio optimization. Most of these are essentially numerical experiments, whose success is often not completely understood. The analytic approach in the Papp-Caccioli-Kondor paper provides this understanding in a concrete example, and the group is working to extend it to a number of related problems.
The paper is available at http://iopscience.iop.org/article/10.1088/1742-5468/aaf108/pdf
LML is a charity. If you enjoyed reading this, please consider supporting us.
