The most interesting — and potentially promising — reaction to the current financial crisis has been the clamoring to fix, or even replace, one of the basic tenets underlying our understanding of economics, markets and risk management: the Efficient Market Hypothesis. The problems with the assumptions made by the EMH — which is the notion that market prices incorporate information instantaneously and rationally — have been well documented. But if markets don’t have instantaneous access to perfectly correct information, if the behavior of all market participants is not totally rational and if price movements are not totally independent of all previous movements, then why use the EMH when it makes such obviously wrong assumptions? Regardless of how the answer is worded, it usually boils down to: “Without these assumptions we couldn’t do the math.”
Fortunately, several new areas of research are showing promise in filling the gaps in the EMH, and, if successful, they may pave the way to creating a truly useful framework for our understanding of markets. Among these ideas is MIT Sloan School of Management professor Andrew Lo’s Adaptive Market Hypothesis, which incorporates practical lessons from behavioral economics and evolution to provide a much more realistic model — one that assumes, for example, that individuals not only make mistakes from time to time but that they can learn and adapt too.
Recent discoveries in two seemingly unrelated fields are also influencing next-generation economists. Cognitive science provides insight into how the human brain interprets — and misinterprets — information. Complexity theory explains how quasi-independent agents exhibit collective behaviors that are impossible to deduce from their individual motives and actions — a topic that has direct impact on markets and risk management. Although it may take years or decades before these fields coalesce and provide tools for practitioners, we are already beginning to glimpse what the future holds. Two important lessons we’ve learned: Models’ predictions come with varying degrees of accuracy, and humans are not particularly good at estimating probabilities of future events.
All model makers know qualitatively that their models work better at certain times than at others. They can also quantify how much to rely on the numbers from a particular model in a particular situation: Confidence intervals, goodness of fit, variances of outputs and false-positive or false-negative rates are all quantitative ways to assess the quality of a given model’s output. Imagine a physician telling you that the results of a blood test are 100 percent accurate or an engineer suggesting that a bridge or airplane he has designed has a 0.00 percent margin of error. Nonsense. Similarly, every risk management model has an associated degree of accuracy that can be calculated and that should be known by its users. Without such information a risk manager can easily be lulled into a false sense of security (by a false negative) or end up worrying about bogeymen (by a false positive).
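The false-positive and false-negative rates mentioned above can be computed directly from a model's track record. A minimal sketch, using invented data (the alarm and breach histories here are hypothetical, not from any real model):

```python
# Hypothetical example: assessing the quality of a risk model's daily alarms.
# 'alarms' are the model's loss warnings; 'breaches' are the losses that
# actually occurred. All data below is invented for illustration.
alarms   = [True, False, True, True, False, False, True, False]
breaches = [True, False, False, True, False, True, True, False]

true_pos  = sum(a and b for a, b in zip(alarms, breaches))
false_pos = sum(a and not b for a, b in zip(alarms, breaches))  # bogeymen
false_neg = sum(b and not a for a, b in zip(alarms, breaches))  # missed losses

# Share of alarms that never materialized (worrying about bogeymen).
false_positive_rate = false_pos / sum(alarms)
# Share of real breaches the model missed (a false sense of security).
false_negative_rate = false_neg / sum(breaches)
print(false_positive_rate, false_negative_rate)
```

A user who tracks these two rates over time knows how much to trust a model's next warning, and its next silence.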
We know from both cognitive science and basic anthropology that people did not develop a good instinct for estimating small probabilities — we essentially equate them with zero — because being accurate did not materially increase early humans’ survival. But the modern environment is different from the one faced by our Paleolithic ancestors, and financial survival today does require an accurate assessment of the probabilities of such events — or at least it requires suppressing the urge to dismiss them as impossible. To that end extreme events and presumably unlikely scenarios should be tested regularly.
One of the easiest stress tests to implement is setting all correlations to one — an unforgiving phenomenon that is observed in real-world financial crises. Another is to create nightmare scenarios that stress multiple markets and determine the likely impact on the portfolio.
This type of analysis is vital because it doesn’t limit us to the mathematically probable scenarios; it allows us to consider what would happen if several improbable events occurred simultaneously. A rigorous probability analysis might suggest that such a multifaceted event would happen only once every 10,000 years, but humans have a natural blind spot when it comes to making sense of small probabilities, like the chance of a prime broker (or two) collapsing. Better to let the risk manager create scenarios that our guts tell us are extremely unlikely. The evidence is overwhelming: Improbable scenarios happen much more often than we think.
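Simple arithmetic shows why "once every 10,000 years" is less comforting than it sounds. The horizon and the count of independent exposures below are assumptions chosen for illustration, not figures from the text:

```python
# A "once every 10,000 years" event has annual probability p = 1/10,000.
p = 1.0 / 10_000

def prob_at_least_one(p, trials):
    """Probability of at least one occurrence across independent trials."""
    return 1.0 - (1.0 - p) ** trials

# Over a hypothetical 40-year career, a single such risk stays remote...
print(prob_at_least_one(p, 40))       # roughly 0.4%
# ...but a portfolio exposed to, say, 25 independent "impossible" risks
# faces 40 * 25 = 1,000 chances, and the odds stop looking negligible.
print(prob_at_least_one(p, 40 * 25))  # roughly 9.5%
```

A one-in-ten chance of a supposedly impossible event during a career is precisely the kind of number our guts round down to zero.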
Damian Handzy is chairman and CEO of Investor Analytics, a New York–based firm that provides risk management services and software for investment managers.