Bayesian Nets for Stress Testing and Scenario Analysis



1 Stress Testing and Its Engineering Challenges

Risk management, prudential macro- and microregulation, portfolio allocation and, in general, the strategic analysis of financial and economic outcomes share the common unstated assumption that the past conveys useful statistical information about the future. Indeed, a large part of contemporary finance rests on
modern portfolio theory, which in turn places at center stage the statistically determined vector of asset expected returns and their covariance matrix.
In normal market conditions, the frequentist techniques that underpin these statistical analyses work well, and are perfectly justifiable. In these contexts, the role played by domain knowledge and subjective inputs in the determination of the statistical quantities of interest is limited, and such inputs are often regarded with suspicion.

In recent years policy makers, regulators, portfolio managers and economic agents in general have been faced more and more frequently with situations of quasi-Knightian uncertainty. Take, as a salient example, the possible demise of the Euro. It is not clear which patches of past history could be relevant to provide ‘objective’ (frequentist) guidance about the expected outcomes of economic and financial variables. In these situations, subjective judgement and expert domain knowledge are forced to the fore. One enters the relatively uncharted territories of stress testing and scenario analysis.
As unprecedented financial and macropolitical events (or at least the fear thereof) seem to have visited the third millennium with disconcerting regularity, it comes as little surprise that there should have been a renewed interest in tail events in general, and stress testing in particular. As the quote that opens this chapter eloquently shows, the financial crisis of 2007-2009 has put under the spotlight the failures of traditional (frequentist) risk management techniques such as VaR.

“…purely statistical measures of risk by themselves have been proven to be inadequate to quantify the amount of capital financial institutions should hold”

An additional reason for this renewed interest in stress testing can be traced to the association that the new regulatory regime has established between the capital held by systemically important financial institutions (SIFIs) and the outcomes of stress-testing exercises. This development has been welcome, because purely statistical measures of risk by themselves have proven inadequate to quantify the amount of capital financial institutions should hold. Stress testing can, in this respect, provide a useful complement to techniques such as VaR, and can fulfill additional functions (such as signalling in periods of market distress).

Unfortunately, before the financial crisis of 2007-2009, for a variety of reasons stress testing had been regarded as the ‘poor relation’ of the statistical measures of risk — such as VaR — that were supposed to provide the most useful assessment of the risk of a financial institution. The following extensive quote from Aragones, Blanco and Dowd (2001) describes well the ‘purgatorial’ state of stress testing before the crisis:

“. . . traditional stress testing is done on a stand-alone basis, and the results of the stress test are evaluated side-by-side with the results of traditional market risk (or VaR) models. This creates a problem for risk managers, who then have to choose which set of risk exposures to ‘believe’. Risk managers often don’t know whether to believe their stress test results, because the stress test exercises give them no idea of how likely or unlikely stress-test scenarios might be. . . ”

“. . . in the absence of such information we often don’t know what to do with them. Suppose for instance that stress testing reveals that our firm will go bust under a particular scenario. Should we act on this information? The answer is that we can’t say. . . As Berkowitz (1999) nicely puts it, this absence of probabilities puts ‘stress testing in a statistical purgatory. We have some loss numbers but who is to say whether we should be concerned about them?’ ”


“. . . two sets of separate risk estimates — probabilistic estimates (eg, such as VaR), and the loss estimates produced by the stress tests — and no way of combining them. How can we combine a probabilistic risk estimate with the estimate that such-and-such a loss will occur if such-and-such happens? The answer, of course, is that we can’t. We therefore have to work with these estimates more or less independently of each other, and the best we can do is use one set of estimates to check for prospective losses that the other might have underestimated or missed. . . ”

It was perhaps because of this purgatorial status of stress testing that in the pre-crisis period no direct capital allocation was calculated on the basis of the output of stress-testing exercises, and the whole stress-testing programme had a purely ‘Pillar II’ dimension — and a rather limited one at that. Against this background, stress-testing practices developed in a haphazard manner, with little thought given to whether, as implemented, they actually served a useful purpose — and, indeed, to what this purpose actually was. Given the much more limited time and resources that before the 2007-2009 crisis had been devoted to stress testing compared to statistical measures of risk, little thought had been given to which features a useful stress-testing programme should display, which functions it should fulfill, and how it should in practice be integrated with existing risk-management practices.

Admittedly, the regulators produced a long ‘laundry list’ of the perceived shortcomings of pre-crisis stress-testing practices and prescribed or suggested a number of desirable features that a stress-testing programme should display (see, eg, International Monetary Fund (2012), Haldane, Hall and Pezzini (2007)). However, in keeping with the then-current (and, to a large extent, still prevailing) regulatory philosophy, the desiderata expressed by the regulators fell well short of being prescriptive, and gave precious little indication as to how the multitude of practical problems associated with the implementation of stress testing should be solved.

In short, it is fair to say that the conceptual ‘case for stress testing’ has been won. One can also claim that, at least at a theoretical and ‘proof-of-concept’ level, the features a well-designed stress-testing programme should display are reasonably well understood. However, the financial community is still faced with a number of pressing ‘engineering’ problems to build a bridge between the enticing ideas that have been proposed, and the complex portfolios that need stressing.
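To make the ‘statistical purgatory’ problem concrete, the sketch below shows, in miniature, how a Bayesian net attaches subjective probabilities to stress scenarios, so that scenario losses can be probability-weighted rather than reported as naked loss numbers. All probabilities and losses are hypothetical illustrative inputs chosen for this example, not calibrated values or the book's worked case study.

```python
# Minimal sketch: a two-node Bayesian net attaching subjective
# probabilities to stress scenarios. Inputs below are hypothetical.

# Root node: a macro stress event E (eg, a sovereign crisis).
p_event = 0.05  # subjective marginal probability P(E = true)

# Child node: a market move M (eg, a large spread widening),
# with conditional probabilities P(M = true | E).
p_move_given_event = {True: 0.80, False: 0.10}

# Portfolio loss (in $m) attached to each joint state (E, M).
loss = {
    (True, True): 120.0, (True, False): 30.0,
    (False, True): 40.0, (False, False): 0.0,
}

# Joint distribution P(E, M) = P(E) * P(M | E), by the chain rule
# that underlies any Bayesian net.
joint = {}
for e in (True, False):
    p_e = p_event if e else 1.0 - p_event
    for m in (True, False):
        p_m = p_move_given_event[e] if m else 1.0 - p_move_given_event[e]
        joint[(e, m)] = p_e * p_m

# With probabilities attached, scenario losses can be combined into
# a single probability-weighted number, alongside VaR-style measures.
expected_loss = sum(joint[s] * loss[s] for s in joint)
print(f"P(crisis and widening) = {joint[(True, True)]:.3f}")
print(f"Probability-weighted scenario loss = ${expected_loss:.2f}m")
```

Even in this toy form, the construction answers Berkowitz's question: once each scenario carries a probability, one can say whether its loss number should be a cause for concern.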






Riccardo Rebonato is Professor of Finance at EDHEC Business School, Member of EDHEC-Risk Institute and author of journal articles and books on Mathematical Finance, covering derivatives pricing, risk management and asset allocation.



The webinar took place on 5 September 2017 at 3.00pm CEST.


