Insights

The tales of VaR

This is one of a series of columns that were produced for Moneyweb Investor in which Stuart Theobald explores the intersection of philosophy of science and finance. This followed an earlier series for Business Day Investors Monthly on the same theme. This column was first published in March 2014. 

If there is any one equation that caused the financial crisis, it is value-at-risk modelling. Its spectacular failure has triggered a rethink of the basic statistical philosophy at work in all of financial theory.

Called VaR for short, it was developed by JP Morgan in the 1990s as a way of capturing how much market risk an investment bank was exposed to at any time. The idea spread to other investment banks and before long regulators were using it too. Our own banks also cottoned on to it – by the time of the financial crisis Standard Bank and Rand Merchant Bank had taken to including a slide on VaR in their results presentations.

There are various techniques to measure it, but they basically work by taking the historic price movements of the assets in a portfolio, and the correlations between those assets, to create a synthetic price history for that portfolio. This “history” could then be examined to see what the biggest daily losses had been. The real magic was creating a full probability distribution by fitting that sample to a “normal” curve – the familiar bell shape, with a hump in the middle around a mean and tails extending in either direction from it. The same can be done, for instance, with people’s heights in a population: lots of people are about average height, with increasingly rare heights extending out from that mean.
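As a rough sketch of those mechanics – not any bank’s actual model; the returns, covariances and portfolio weights below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented daily returns for two assets; in practice you would use
# years of real price history for everything in the portfolio
asset_returns = rng.multivariate_normal(
    mean=[0.0005, 0.0003],              # made-up mean daily returns
    cov=[[0.0001, 0.00006],             # made-up variances and covariance
         [0.00006, 0.0002]],
    size=1000,
)

weights = np.array([0.6, 0.4])          # made-up portfolio weights

# The synthetic "history": what the portfolio would have returned each day
portfolio_returns = asset_returns @ weights

# Fitting the normal curve is just estimating a mean and standard deviation
mu, sigma = portfolio_returns.mean(), portfolio_returns.std()
```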

By looking at this curve, one could supposedly determine how likely a portfolio was to suffer losses of a given size, and make statements like “the bank has a 95% VaR of R10m”, which meant that on 95 days out of 100 the portfolio would lose no more than R10m. Of course, this is logically equivalent to saying “the bank has a 5% chance of losing more than R10m on any day”, with no upper limit on how much more, but it was seldom put that way.
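In that framework, the 95% VaR is simply the 5th percentile of the fitted curve, scaled by the size of the portfolio. Continuing the sketch above (the R200m portfolio value is, again, invented):

```python
from scipy.stats import norm

portfolio_value = 200_000_000   # invented: a R200m trading book

# The 5th percentile of the fitted curve: 95% of days should beat this return
worst_5pct_return = norm.ppf(0.05, loc=mu, scale=sigma)

# 95% one-day VaR: the loss not exceeded on 95 days out of 100, per the model
var_95 = -worst_5pct_return * portfolio_value
print(f"95% one-day VaR: R{var_95:,.0f}")
```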

It was this kind of thinking that was behind statements you’d hear during the financial crisis like “this is a 1-in-10 000-year event”, usually spoken by a sleepless hedge fund manager. They were thinking of that curve, from which they could read off the 99.99% annual VaR – a loss the model says should be exceeded in only 0.01% of years, which is to say once every 10 000.

VaR was a logical extension of some of the early ideas in finance theory developed in the 1950s, at a time when huge swathes of market data were being gathered for the first time. Statistical analysis seemed to make finance much more scientific and credible than it had been. This statistical approach is often called “frequentism” because it focuses on determining frequencies in a sample of data in order to come up with probabilities. It’s like determining the probability of throwing a six with a die by doing 1 000 throws and discovering that six came up a sixth of the time. Harry Markowitz’s 1952 paper on portfolio theory, which sparked the movement, was actually quite cagey about probabilities, but as computers capable of crunching big data developed, that caution disappeared. Before long the financial world was behaving as if sampling historic data really did reveal the chance of any price movement in the future.
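The dice version takes only a few lines to reproduce. A minimal sketch of the frequentist move – read the probability straight off the observed frequency (the 1 000-throw count is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
throws = rng.integers(1, 7, size=1_000)   # 1 000 simulated throws of a fair die

# The frequentist estimate: the probability of a six is its observed frequency
p_six = (throws == 6).mean()
print(f"Estimated P(six) = {p_six:.3f}  (true value: {1/6:.3f})")
```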

The role of frequentism in the crisis has, in my view, been underappreciated. The one exception is the Turner Review, Britain’s effort to explain the crisis, which declared that it had raised “fundamental questions about our ability in principle to infer future risk from past observed patterns” and dissed VaR specifically.

The response is often that there is nothing better available. That’s wrong. In fact, there is a whole other approach to probability called Bayesianism. At its heart, Bayesianism sees risk not as some hard feature of the world, but as a problem of knowledge about the world. The question then becomes what it is rational to believe. Bayesianism is so called because its key formula – one that tells us how we should update our beliefs when we get new information – was proposed by an 18th-century priest named Thomas Bayes (google it). Probability is then something that changes over time as new information comes to us, so we constantly experiment and test to improve our probability beliefs. A key tenet, though, is that we should always be modest about those beliefs.
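The formula itself is compact: the probability of a hypothesis H given evidence E is P(H|E) = P(E|H) × P(H) / P(E). A toy sketch of what that updating looks like – the prior and the observations below are invented, and a real risk application would be far richer:

```python
# Belief about p, the chance of a "bad day" for a portfolio, expressed
# as a Beta(2, 8) prior: roughly "around 20%, but highly uncertain"
alpha, beta = 2.0, 8.0

# New information arrives: of 30 observed trading days, 3 were bad
bad_days, total_days = 3, 30

# For a Beta prior and binomial data, Bayes' rule reduces to addition
alpha += bad_days
beta += total_days - bad_days

# The updated (posterior) belief about p, ready to be revised again
posterior_mean = alpha / (alpha + beta)
print(f"Updated belief: P(bad day) is roughly {posterior_mean:.2f}")
```

The point of the exercise is the last comment: the answer is never final, only the current rational belief given everything seen so far.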

Bayesianism is seeing significant growth in academia, with some even arguing that frequentism should be stripped out of undergraduate statistics courses entirely. Changing the frequentist culture of finance would go a long way to eliminating the overconfidence that set us up for the crisis.