Empirical limits in the estimation of financial risk measures

Following the financial crisis of 2007-2008, governments and financial regulators all over the globe pledged to reduce risk in the financial system. Many have raised capital requirements on banks, while others have tried to force financial institutions to be less opaque in communications with their clients. In two recent essays – here and here – looking at European regulations, economist John Kay expresses concerns about their effectiveness. Some of the new rules, he suggests, seem almost designed to make investors more confused.
 
Kay looks at the Packaged Retail and Insurance-based Investment Products (PRIIPs) regulations, in force since the start of 2018 and aimed at improving the way financial institutions communicate risks to their clients. The rules set strict guidelines for communications and require firms to provide “a short and consumer-friendly Key Information Document” to convey the realistic risks of various possible investments. Superficially, that seems sensible. Practically, not so much. As a board director of a £6bn investment trust based in the UK, Kay notes:
 
“… You can receive material about the trust through its newly published key information document (Kid). But please, please, do not Google or download this document. And if you have received a hard copy, burn it before reading. Above all, keep it out of the hands of widows and orphans.
 
The Kid tells you that if you invest in the shares … you might in a “moderate” scenario earn more than 20 per cent a year over the next five years, and over 30 per cent in a “favourable” one. Even in “unfavourable” circumstances, you could anticipate an annual return of over 10 per cent.
 
The Kid document does not explain what “moderate”, “favourable” and “unfavourable” mean, but a reasonable person might infer that “moderate” would not be as good for investors as the past few years have been and that “unfavourable” might describe a market downturn — perhaps similar to that experienced in 2000-2 or 2008-9.
 
And the icing on the cake is that these returns can be expected with only moderate risk. … Irresistible though the prospect might seem, do not on any account max out your credit card to invest. As any but the most inexperienced investor should understand, the Kid’s assessment of risk is thoroughly misleading.”
 
What’s the problem? As Kay goes on to describe, it is simply the misuse of statistics. The scenarios in the Kid come from naively projecting into the future statistics estimated from the very recent past – the past few years, a period over which markets have done exceptionally well. Such a short sample offers no real perspective on what the future might look like, and so the sunny optimism of the communication is wholly illusory.
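To make the mechanism concrete, here is a minimal Python sketch (the numbers and the lognormal projection are assumptions chosen for illustration, not the actual KID methodology): fit a mean and volatility to an unusually good recent period, project five years forward, and read off percentile “scenarios”.

```python
import numpy as np

# Illustrative inputs only: suppose the recent past was a strong bull
# market, say ~15% annualized mean return with ~12% annualized volatility.
mu, sigma = 0.15, 0.12
horizon = 5  # years, as in the KID projections

# Project terminal wealth under a lognormal model fitted to that sample,
# then read off percentiles as "scenarios".
rng = np.random.default_rng(0)
log_growth = rng.normal((mu - 0.5 * sigma**2) * horizon,
                        sigma * np.sqrt(horizon), size=100_000)
terminal_wealth = np.exp(log_growth)  # per unit invested

for label, pct in [("unfavourable", 10), ("moderate", 50), ("favourable", 90)]:
    annualized = np.percentile(terminal_wealth, pct) ** (1 / horizon) - 1
    print(f"{label:>12}: {annualized:6.1%} per year")

# Because the inputs come from an unusually good period, even the
# "unfavourable" percentile looks attractive: the projection simply
# cannot see outcomes unlike the recent past.
```

With these inputs even the 10th-percentile “unfavourable” scenario still shows a positive return of several percent a year, echoing the pattern Kay describes.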
 
Unfortunately, similar problems also trouble other areas of financial regulation, although it takes rather more work to make this clear. In a series of recent mathematical studies, LML researchers Imre Kondor, Fabio Caccioli and colleagues have shown that a similar problem afflicts other new regulatory measures. Prior to the financial crisis, financial institutions had come to rely on a risk measure known as Value at Risk (VaR) to give a crude estimate of how much they might lose in a given period. Because this measure turned out to be easy to manipulate to hide risks, regulators have more recently pushed to replace VaR with another measure known as Expected Shortfall (ES), which better captures the risk of rare but severe losses.
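For concreteness, here is a sketch of the standard historical estimators of the two measures (the Gaussian data is an assumption purely for illustration): VaR reads off a single quantile of the loss distribution, while ES averages all the losses beyond it.

```python
import numpy as np

def historical_var_es(returns, alpha=0.975):
    """Plain historical estimators: VaR is the alpha-quantile of the
    loss distribution; ES is the average loss beyond that quantile."""
    losses = -np.asarray(returns)      # losses are negative returns
    var = np.quantile(losses, alpha)   # Value at Risk
    es = losses[losses >= var].mean()  # Expected Shortfall (a.k.a. CVaR)
    return var, es

# Example on simulated daily returns (Gaussian purely for illustration)
rng = np.random.default_rng(1)
returns = rng.normal(0.0003, 0.01, size=2500)  # roughly 10 years, daily
var, es = historical_var_es(returns)
print(f"VaR(97.5%): {var:.4f}   ES(97.5%): {es:.4f}")
```

ES is always at least as large as VaR, and because it averages over the whole tail it responds to rare extreme losses that a single quantile can miss – which is exactly why estimating it demands more data.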
 
Yet both of these measures suffer from a lack-of-data problem much like the one John Kay identifies. In the first paper listed below (Caccioli, Kondor and Papp 2017), for example, the researchers give some explicit estimates in the case of Expected Shortfall. Their approach is logically straightforward: assume a simple market in which the return distribution is Gaussian, and calculate the “true” optimal portfolio that minimizes ES. Then compare this baseline with estimates made from finite samples of return data, to see how close the estimates come to the known true result. Of course, returns in real markets aren’t Gaussian, which makes estimation even more difficult; the analysis therefore presents a rather more optimistic view of the estimates than is realistic. But even this isn’t very optimistic.
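The logic of the experiment can be sketched in a few lines of Python (a rough Monte Carlo illustration, not the authors’ analytic replica calculation; i.i.d. unit-variance Gaussian assets are assumed, and the in-sample ES minimization uses the standard Rockafellar–Uryasev linear program):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

def sample_es_optimal_weights(x, alpha=0.975):
    """Minimize in-sample ES via the Rockafellar-Uryasev linear program:
    minimize  zeta + sum(u) / ((1 - alpha) * T)
    over weights w (N), VaR level zeta, tail excesses u (T),
    subject to u_t >= -w.x_t - zeta, u_t >= 0, sum(w) = 1."""
    T, N = x.shape
    c = np.concatenate([np.zeros(N), [1.0], np.full(T, 1 / ((1 - alpha) * T))])
    A_ub = np.hstack([-x, -np.ones((T, 1)), -np.eye(T)])  # loss constraints
    A_eq = np.concatenate([np.ones(N), [0.0], np.zeros(T)])[None, :]
    bounds = [(None, None)] * (N + 1) + [(0, None)] * T
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(T),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    # For large N/T the in-sample problem can even become unbounded;
    # here we simply check that the solver succeeded.
    assert res.status == 0, res.message
    return res.x[:N]

alpha = 0.975
# For i.i.d. unit-variance Gaussian assets the true ES-optimal portfolio
# is equal weights, and the true ES of any portfolio w is proportional
# to its standard deviation sqrt(w.w).
es_factor = norm.pdf(norm.ppf(alpha)) / (1 - alpha)
rng = np.random.default_rng(2)
N = 10

for T in (1000, 100, 50):                 # r = N/T grows down the list
    x = rng.normal(size=(T, N))           # simulated return history
    w = sample_es_optimal_weights(x, alpha)
    true_es = np.sqrt(w @ w) * es_factor  # true ES of estimated portfolio
    best_es = np.sqrt(1 / N) * es_factor  # true ES of the true optimum
    print(f"r = N/T = {N/T:5.2f}: relative error = {true_es / best_es - 1:.1%}")
```

For small r the estimated portfolio lands close to the true optimum; as r grows, the true ES of the estimated portfolio drifts far above the achievable minimum, even though its in-sample ES looks deceptively good.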
 
A key part of the paper is Section 6, which explores practical numbers for the relative estimation error of ES as a function of a parameter r = N/T, the ratio of the number of assets in the portfolio, N, to the length of the data sample, T. For r close to zero – the regime where data are plentiful relative to the portfolio’s complexity – the relative error in ES turns out to be fairly small, less than a few percent. In this regime, practical estimation of ES is possible and gives meaningful results. But as r increases – for more complex portfolios, or for estimates based on shorter data samples – the error grows much larger. A table of figures (see below) illustrates the problem.
[Table 1 of the paper: sample lengths T required to reach a given relative error in the estimated ES of an N-asset portfolio.]
As the authors summarise:
“This table (Table 1) demonstrates that in order to have a 10% or 5% relative error in the estimated Expected Shortfall of a moderately large portfolio of, say, N=100 stocks at (a regulatory confidence level of) α = 0.975 we must have time series of length T = 3500 resp. T = 7200. These figures are totally unrealistic: they would correspond to 14 resp. 28.8 years even if the time step were taken as a day (rather than a week or month).”
 
Indeed, perusal of the figures in their Table 1 shows that achieving even a relative error of 50% would require a sample of length T = 500, corresponding to two years of business days for daily data, and something like ten years for weekly data. Hence, given the limits on available data, practical estimates of ES cannot pin down the true value with anything like percent-level accuracy. Belief that estimates of ES give a highly accurate picture of risk is mostly illusion. The estimated figures may well give a rough picture of how much is at risk, as in “probably within a factor of 2 or so.”
 
In a series of papers, the authors have considered other risk measures besides ES to see whether the estimation task is any easier. They find much the same result in all cases. Estimation is easiest when the risk measure is the old textbook choice, variance; the data problem is considerably worse for Value at Risk, and worst of all for Expected Shortfall. But the differences across measures are quantitative rather than qualitative. The amount of data needed to make reasonably accurate estimates of risk measures for a modest portfolio of 100 assets is beyond anything achievable in real markets.
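The qualitative reason is easy to see in a small Monte Carlo experiment (an illustration under Gaussian assumptions, not the papers’ analytic treatment): the sample variance uses every observation, while ES at α = 0.975 is computed from only the worst 2.5% of them.

```python
import numpy as np

# How noisy are sample estimates of variance vs Expected Shortfall for a
# single Gaussian return series, as the sample length T varies?
rng = np.random.default_rng(3)
alpha, trials = 0.975, 2000

def relative_error(estimates):
    """Sampling scatter of an estimator, relative to its mean."""
    return np.std(estimates) / np.mean(estimates)

for T in (250, 1000, 4000):
    var_est, es_est = [], []
    for _ in range(trials):
        losses = -rng.normal(size=T)               # unit-variance returns
        q = np.quantile(losses, alpha)
        var_est.append(np.var(losses))             # uses all T points
        es_est.append(losses[losses >= q].mean())  # uses ~(1-alpha)*T points
    print(f"T={T:5d}: variance {relative_error(var_est):5.1%}   "
          f"ES {relative_error(es_est):5.1%}")

# At alpha = 0.975 a year of daily data leaves only about six observations
# in the tail, so the ES estimate converges far more slowly than variance.
```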
 
F. Caccioli, I. Kondor and G. Papp: Portfolio optimization under Expected Shortfall: Contour maps of estimation error, Quantitative Finance (2017) 1-19, available at http://arxiv.org/abs/1510.04943
I. Varga-Haszonits, F. Caccioli and I. Kondor: Replica approach to mean-variance portfolio optimization, J. Stat. Mech. (2016) 123404, available at http://arxiv.org/abs/1606.08679
I. Kondor, G. Papp and F. Caccioli: Analytic solution to variance optimization with no short positions, J. Stat. Mech. (2017) 123402, https://doi.org/10.1088/1742-5468/aa9684, also available at https://arxiv.org/pdf/1612.07067
