LONDON (Reuters) - Professor Zari Rachev scorns the idea that market cataclysms cannot be forecast. He says his statistical models have predicted them, and his customers agree.
His daughter is now president of New York-based company FinAnalytica, which uses his models to provide investors and risk managers with a risk indicator that takes into account the worst-case scenarios.
As those who have so far survived the financial crisis pick over the wreckage to develop enhanced predictors of market risk, Rachev’s are among the offerings for people who believe statistical models can help.
“This past year was very important for us, because it validated everything that we worked for,” Boryana Racheva-Iotova told Reuters by telephone from Bulgaria.
Her firm’s risk measure, fat-tailed expected tail loss or ETL, gave investors advance notice of a sharp fall in the Dow Jones Industrial Average among other markets: the Dow fell from a life high in November 2007 to a 12-year low in March 2009, sliding sharpest after Lehman Brothers failed in September 2008.
Fat-tailed ETL builds on the statistical phenomenon popularized by former options trader Nassim Nicholas Taleb, whose work focuses on the massively unexpected.
Think of the bell curve on a statistician’s chart that reflects “normal distribution.” It is tall and wide in the middle, where most events fall, and it drops and flattens out at the edges, where fewer things happen. When those edges, or tails, swell instead of nearly vanishing, they are called “heavy” or “fat.”
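The difference between thin and fat tails can be made concrete with a small simulation. The sketch below, which is purely illustrative and not FinAnalytica's model, compares how often a "4-sigma" move occurs under a normal distribution versus a Student-t distribution with 3 degrees of freedom, a common textbook stand-in for fat-tailed market returns:

```python
import math
import random

random.seed(0)

def student_t(df):
    """Draw one Student-t variate with `df` degrees of freedom:
    a standard normal divided by sqrt of an independent chi-square / df."""
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(df / 2.0, 2.0)  # chi-square with df degrees of freedom
    return z / math.sqrt(chi2 / df)

N = 200_000
THRESHOLD = 4.0  # a "4-sigma" move under the standard normal

# Count moves larger than the threshold under each distribution.
# Note the t(3) is not rescaled to unit variance; the point is only
# that its tails carry far more probability than the normal's.
normal_hits = sum(abs(random.gauss(0.0, 1.0)) > THRESHOLD for _ in range(N))
fat_hits = sum(abs(student_t(3)) > THRESHOLD for _ in range(N))

print(f"|move| > 4 under normal: {normal_hits} of {N}")
print(f"|move| > 4 under t(3):   {fat_hits} of {N}")
```

Under the normal curve such a move is vanishingly rare; under the fat-tailed curve it happens hundreds of times more often — which is why a model that assumes normality can dramatically understate crash risk.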
Streams of financial commentators have over the past year reveled in portraying the crash as coming out of the blue to math whizzes paid a fortune to study the statistical stars and presage such events.
But Rachev is one of those who say they saw it coming — because his models took the worst possible events into account.
In the case of the Dow Jones index, his fat-tailed expected tail loss — a measure of the potential daily average loss in the worst 1 percent of scenarios — gave investors notice of rising risk.
Boston-based Henderson Capital Management, which does not disclose its funds under management, said it used FinAnalytica models to help identify investment funds with significant downside risk through 2008.
“Some of these managers subsequently suffered large losses,” Managing Partner Mark Pearl said in a statement to Reuters. One is now closing down, he added, declining to name it.
For the Dow Jones index, the fat-tailed ETL stayed below 2 percent in the three years to late 2006, but it began rising in 2007, reaching 4 percent in March, then 8 percent in April 2008 and 10 percent in mid-September 2008.
By contrast, a typical Wall Street indicator that takes into account all cases but the worst 1 percent — known as value at risk (VaR) 99 — was signaling potential daily losses of no more than 2 percent until April 2008 and 5 percent in late-September.
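The mechanics behind the two measures are simple to sketch. The snippet below computes a historical VaR 99 and expected tail loss from a daily return series; it is a minimal illustration using simulated data, whereas FinAnalytica's actual approach fits fat-tailed distributions rather than relying on the raw historical sample:

```python
import random

def var_and_etl(returns, alpha=0.01):
    """Historical VaR and ETL at tail level alpha (0.01 = the worst 1 percent).

    VaR is the loss threshold exceeded with probability alpha;
    ETL (also called expected shortfall) is the average loss
    across that worst-alpha slice of outcomes.
    """
    losses = sorted((-r for r in returns), reverse=True)  # largest losses first
    n_tail = max(1, int(len(losses) * alpha))             # size of the worst-alpha slice
    tail = losses[:n_tail]
    var = tail[-1]               # smallest loss still inside the tail
    etl = sum(tail) / len(tail)  # average loss inside the tail
    return var, etl

# Simulated daily returns: mostly calm days plus a few crash days,
# a crude stand-in for a fat-tailed market.
random.seed(42)
returns = [random.gauss(0.0005, 0.01) for _ in range(2500)]
returns += [-0.05, -0.08, -0.10]

var99, etl99 = var_and_etl(returns, alpha=0.01)
print(f"VaR 99: {var99:.2%}  ETL 99: {etl99:.2%}")
```

Because ETL averages the losses beyond the VaR threshold, it is always at least as large as VaR, and the few crash days pull it up sharply — which is why the two indicators diverge when extreme outcomes loom.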
Racheva-Iotova highlights the gap between the two indicators from late 2006 to mid-September 2008.
“Now consider a market that understands the extreme risk for a key market driver such as the Dow Jones is in fact 40 percent higher for a period of more than a year and a half,” she said, implying investors would have been much more cautious.
London-based Aviva Investors, with 236.5 billion pounds ($359.6 billion) of assets under management as of end-2008, started using the model two years ago.
“Particularly during the recent market crisis, this approach has delivered value by helping us to proactively manage tail risk and mitigate the potential for extreme loss in crisis conditions,” said Julie Griffiths, its head of portfolio risk.
Today, the fat-tailed ETL figure is at about the same level as it was in May 2008, and it and VaR 99 have come closer together, indicating reduced risk of another imminent crash.
“The fact that the two methods converge may indicate that the markets are calming down,” Racheva-Iotova said.
Of course, any statistical model of financial markets faces skepticism. Mathematical sophistication, as experts said in a March review for British regulators, “ended up not containing risk but providing false assurance that other prima facie indicators of increasing risk could be safely ignored.”
Drawing a distinction between risk and uncertainty, which arises from shifts in society and natural resources, the review commissioned by Financial Services Authority head Adair Turner questioned our ability to infer future risk from past patterns.
At the most basic level is the question of whether data from 1929 remain relevant in the internet age, when currencies are no longer tied to gold.
Paul Wilmott, who founded Oxford University’s diploma in mathematical finance and is a well-known critic of Wall Street models, said that while it is possible to forecast a market crash, “it’s difficult, statistically speaking, to decide whether you are lucky or not. These things happen very rarely.”
He himself warned in 2000 of the risk of a mathematician-led market meltdown, and in 2006 of the dangers of credit derivatives — blamed in part for skewing measures of risk in the run-up to 2008. “But I could have just been lucky.”
However powerful such skepticism, it is not stopping quantitative analysts from creating new models to replace VaR, which has become widely discredited.
The flaws of VaR are well known, said Rohan Douglas, chief executive of Quantifi Inc., which supplies models to analyze and price debt instruments and does not compete with FinAnalytica.
“Nobody using VaR thought it was perfect, but it was a clear, simple way of communicating risk that everybody understood,” he said, adding that now a few people are coming up with alternative risk measures, and the question is whether any will be adopted as an industry standard.
Henderson said it had picked FinAnalytica after a comprehensive search. Rachev, a professor at the University of Karlsruhe and University of California at Santa Barbara, says simply that his models provide “a better magnifying glass.
“You see some indications the market is starting to behave in a more volatile way before you see the largest negative shock.”
Rachev first turned to the problem of calculating the probabilities of extreme events after the 1987 market crash.
No model then could explain the crash, the probability of which “was computed at slightly more than the life of the universe,” he said. The FinAnalytica model now predicts a crash of that size can occur every 30 years on average.
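The arithmetic behind such a return-period statement is straightforward: if a model assigns a daily probability p to a crash of a given size, the expected waiting time between such crashes is 1/p trading days. The figure below is an illustrative probability chosen to yield roughly 30 years, not a number from FinAnalytica:

```python
TRADING_DAYS_PER_YEAR = 252

def return_period_years(daily_prob):
    """Expected waiting time, in years, for an event with the
    given daily probability, assuming independent trading days."""
    return 1.0 / (daily_prob * TRADING_DAYS_PER_YEAR)

# A daily crash probability of about 1.32e-4 implies roughly a 30-year event.
print(round(return_period_years(1.32e-4)))  # → 30
```

By the same arithmetic, a model that puts the probability near zero, as the pre-1987 models effectively did, implies a waiting time longer than the age of the universe.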
The Bulgarian-born mathematician has tackled the problem from four different directions, including theories on fractals and clustering of volatility, and produced his models by 2002.
Racheva-Iotova said investors will still need to exercise judgment: “Even with the best output, the model must be used properly in the decision-making process.”
And the model needs data that give as full a risk picture as possible: “The problem of data-entry implementation should not be underestimated.”
Where would you put the chance of market cataclysm being sparked by human error in data entry? The tail thickens.
Editing by Sara Ledwith