Technical Introduction

Abstract and Keywords

This article begins with a brief discussion of how the current global credit crisis differs from previous ones, specifically in the form of the underlying relationship between borrowers and lenders, and then sets out the purpose of the book, which is to provide an overview of the complex and relatively novel area of mathematical modelling. Given the fact that mathematics entering into the analysis of credit risk was contemporaneous with unprecedented growth in the credit markets and thus preceded the crash, it is not surprising that many view mathematics as the cause of the crisis. It is argued that, despite the inevitable shortcomings of each mathematical model used, mathematics was not the cause of the current difficulties.

Keywords: credit risk, mathematical modelling, credit crisis

‘Modern man drives a mortgaged car over a bond‐financed highway on credit‐card gas’

Earl Wilson 1907–1987

Et in Arcadia ego

Traditionally ascribed to Death

1 Introduction

We write this introduction as a deep and global credit crisis continues to unfold, following years where spectacular growth in the credit markets coupled with benign conditions convinced many that we had permanently entered a new paradigm of plentiful cheap credit. The cyclical nature of credit is nothing new; the struggle between borrowers and lenders is documented throughout recorded history and each generation adds new examples of lenders led by transient periods of good behaviour into extending credit until they collapse as borrowers' conditions abruptly change for the worse. For a story that has repeated itself many hundreds of times with the same outcome, perhaps the only useful observation is how quickly it is forgotten. This time is always different.

There is, however, one way in which this current crisis does differ. Not in its inevitability, but in the form of the underlying relationship between borrowers and lenders. In the past, the credit risk embedded in this relationship was analysed and managed in traditional ways. Lenders performed detailed examinations of the business models, assets, and liabilities of particular obligors as well as the characteristics and quality of collateral offered up as security for loans. After terms were agreed and loans extended, they were then carried to term on the books of the lender, usually a commercial bank. Corporate debt and other subordinated debt were also carried to term in institutions such as banks, mutual and pension funds. Secondary trading was limited and markets effectively non‐existent. The concept of price as the sine qua non of credit quality was almost unknown.

In the late nineties, another complementary approach to credit began to appear in which derivatives‐inspired ideas allowed spectacular growth in the size and liquidity of secondary markets. The derivative counterpart of the corporate bond, the credit default swap (CDS), appeared first and rapidly achieved dominance in the market. In its simplest form, the CDS is an over‐the‐counter agreement between a protection buyer (PB) and a protection seller (PS) in which the PB agrees to pay fees in a regular periodic schedule to the PS in exchange for the PS paying a lump sum in the event of the default of a reference entity (RE). The size of the lump sum—effectively the insurance payout compensating for the RE defaulting—is determined by agreement on the recovery level for the appropriate subordination of debt. Though during the current credit crisis the high level of risk led the fees to be paid up‐front, conventionally these periodic fees were quoted as a spread in basis points. And as so often when a single numerical measure emerges, this spread became reified and arguably supplanted more complex and subtle traditional measures. Price and the market became increasingly important.

Following on from the CDS came the introduction of synthetic collateralized debt obligations (CDOs). These derivative‐style contracts were imitative of the much older subordination rules that govern equity and debt claims over the assets and liabilities of a conventional corporation. However, instead of claims over real assets and liabilities, the arranger of a CDO defined a basket of reference entities (often a hundred or so) and provided periodic payments to investors in exchange for compensation in the event of losses being incurred by the basket of entities, calculated in a similar manner to the synthetic losses in CDSs. The CDO structure came from the fact that all investors were not equally liable for losses (and principal payments)—instead, as in a standard capital structure, they fell into categories each subordinate in turn to the previous one. Each class of investors would make payments only once the basket's total loss reached a particular trigger (the attachment) and would cease payments once the basket loss passed another, higher, trigger (the detachment). These classes are often referred to by terms from the language of corporate debt—equity, mezzanine, senior, super‐senior—and the CDO structure determined by a set of attachment and detachment points, αθ, βθ, θ = e, m, s, ss, which described the percentage of losses attributed to each tranche and formed a disjoint cover of the capital structure—i.e. αe = 0, αm = βe etc. If N is the total notional of the basket and L the sum of calculated losses from inception to maturity, then the total payment demanded from an investor in a θ tranche would change, as L increased, from 0 for L ≤ αθN to (βθ − αθ) × N for L ≥ βθN, with the rise linear between the two levels. As with CDSs, a liquid market developed for these tranches, though given the many thousand possible REs and basket sizes in the hundreds, combinatorics alone rapidly led to massive standardization with just two main investment‐grade (IG) baskets achieving dominance—DJ CDX and iTRAXX—representing respectively agreed‐upon sets of 125 American and European BBB corporates. A few other baskets were less liquid but also traded—High Vol (HV), Crossover (XO), and High Yield (HY).
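To make the tranche payout concrete, the minimal sketch below clamps the basket loss to the attachment and detachment points exactly as described above; it is an illustration only, and the capital structure used in the example is a set of hypothetical round numbers rather than any particular market standard.

import numpy as np

def tranche_payment(L, alpha, beta, N):
    """Total protection payment demanded from an investor in a tranche with
    attachment alpha and detachment beta (fractions of the basket notional N),
    given cumulative basket loss L.  Rises linearly from 0 at L = alpha*N to
    (beta - alpha)*N at L = beta*N, as described above."""
    return np.clip(L - alpha * N, 0.0, (beta - alpha) * N)

# Hypothetical capital structure for illustration only.
tranches = {"equity": (0.00, 0.03), "mezzanine": (0.03, 0.06),
            "senior": (0.06, 0.09), "super-senior": (0.09, 0.22)}
N = 1.0
for L in (0.02, 0.05, 0.15):
    print(L, {name: round(float(tranche_payment(L, a, b, N)), 4)
              for name, (a, b) in tranches.items()})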

Unlike the single reference entity CDSs, single tranches of CDOs represent exposure to the decorrelation or otherwise of many reference entities. Thus instead of a quoted spread becoming totemic, a form of correlation derived from the first and most popular model of joint defaults of obligors, the Gaussian copula, became the quoted number that represented the markets. Its relatively low value became symbolic of the period before the current crisis when systemic risk was believed abolished.

Other, more complex, derivative products also thrived in this period, but the notable feature of the times was the rise of mathematical modelling of credit and the primacy of the act of pricing via increasingly sophisticated algorithms calculated on ever larger arrays of computer processors. Thousands of technical papers were devoted to the new discipline of credit risk modelling and in the case of the most popular models such as Gaussian copulae, their parameters became real features of day‐to‐day markets.

This growth of mathematical modelling is the main impetus behind this handbook. We aim to provide an overview of this complex and relatively novel area of financial engineering. There is also another motivation. Given the fact that mathematics entering into the analysis of credit risk was contemporaneous with unprecedented growth in the credit markets and thus preceded the crash, it is not surprising that many view mathematics as the cause of the crisis. We believe strongly that despite the inevitable shortcomings of each mathematical model used, mathematics was not the cause of the current difficulties. That is not to say that it was completely innocent—before the current crisis broke we argued, along with others, that Gaussian copula modelling was oversimplistic and structurally underplayed systemic risks. Many models struggled to adapt as the weather changed from good to bad. Yet this credit crisis bears striking similarities to earlier ones, each achieved without the current mathematical framework. Moreover, the widely acknowledged worst culprits—sub‐prime mortgages securitized into so‐called ABS CDOs—were primarily analysed with traditional methods, often involving zipcode‐by‐zipcode examination of borrower behaviour. Until recently, there were almost no mathematical papers on ABS CDOs and the two chapters included in this book on house price modelling and pricing ABS CDOs are post‐crisis attempts to introduce mathematics into this area. We can only hope that our attempt to describe the mathematics of credit risk, dealing openly with its successes and deficiencies, is timely, necessary, and useful.

2 Empirical analysis of obligors

Before we can focus on theoretical aspects of modelling, we introduce a set of chapters that deal directly with either empirical analysis of actual markets or the statistical tools required to do so.

In Chapter 3, Altman considers the relationship between the probability of default of an individual obligor (PD) and the recovery rate in the event of default (RR). He surveys a number of models (whose theoretical bases are described in subsequent chapters) and categorizes the resulting implied relationships between PD and RR before turning to empirical analysis of corporate defaults. Previous statistical attempts to link PD and RR via joint observation of default rates and corresponding recovery values of bonds have led to contradictory results, sometimes showing little correlation, sometimes negative correlation. Altman argues that this is due in large part to credit showing ‘regime’‐like behaviour with quite different behaviours between ‘good’ periods and ‘bad’. In contrast to good times, during downturns there is significant negative correlation between PD and RR. This should be taken into account when designing regulatory frameworks or implementing VaR‐like schemas.

In Chapter 4, Berd also considers the behaviour of recovery rates, but this time from the perspective of corporate bond valuation. Arguing from empirical grounds that the most realistic approach to recovery is to treat it as a fraction of the par value of the bond, he observes that this assumption rules out the traditional approach of stripping bonds into risky zeros. Instead he lays out a different approach to the valuation of bonds via survival probabilities and recovery fractions and shows how survival probabilities can be derived from credit default swap spreads and recovery swap prices, or estimated via regression on the observed prices of bonds of similar type. For completeness, he surveys a number of other simpler bond valuation measures as well as deriving suitable risk measures.

In Chapter 5, Wei also analyses default, but from the theoretical perspective of Cox's model of proportional hazard rates (Cox 1972). Just as Altman stresses the importance of considering different regimes for default intensity or hazard rates, Wei considers a Cox model for hazard rates h(t) of the form:

h(t) = h_0(t)\exp\!\left(x^{T}\beta\right)
(1)

for some fundamental baseline function h0(t) where x represents some state vector and β provides the weighting between this state and the resulting hazard rate shocks. A comprehensive range of algorithms are described for estimation of β via various forms of maximum or partial likelihood functions before the author demonstrates a concrete example in the form of a spline basis for a trading strategy on credit indices.
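As a minimal illustration of how β can be estimated without ever specifying the baseline h0(t), the sketch below implements the Cox negative log partial likelihood in plain NumPy (Breslow‐style risk sets, no ties correction); it is a toy version of the estimation machinery surveyed in the chapter, and the variable names are ours.

import numpy as np

def neg_log_partial_likelihood(beta, times, events, X):
    """Cox (1972) negative log partial likelihood.

    times  : (n,) observed times (event or censoring)
    events : (n,) 1 if a default was observed, 0 if censored
    X      : (n, p) covariate (state-vector) matrix
    beta   : (p,) coefficient vector weighting the covariates

    The baseline hazard h0(t) cancels out of the partial likelihood, which is
    why beta can be estimated without specifying it."""
    eta = X @ beta                      # linear predictor x^T beta
    nll = 0.0
    for i in np.where(events == 1)[0]:
        at_risk = times >= times[i]     # risk set at the i-th event time
        nll -= eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return nll

# Usage sketch: minimise over beta with any generic optimiser, e.g.
# scipy.optimize.minimize(neg_log_partial_likelihood, x0=np.zeros(p),
#                         args=(times, events, X)).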

3 Theory of individual obligors

The second set of chapters deals first with the theory of single obligors before extending to deal with the theory of many and the behaviour of obligors in aggregate. In general there are two approaches to describing the default of individual obligors—the reduced form model pioneered by Lando (1998) and Jarrow and Turnbull (1995) and the older firm‐value model originated by Merton (1974) and extended by Black and Cox (1976), Leland (1994), and Longstaff and Schwartz (1995). Traditionally, they have represented quite different approaches to the problem of individual default. The reduced form model focuses carefully on forming a sub‐filtration devoid of default time information, whilst the firm‐value model explicitly includes default times. In its original form, with continuous behaviour of asset value, the firm‐value model had no choice—default was completely predictable given knowledge of the asset process and indeed instantaneously was either impossible or inevitable. Having said this, recent extensions to include discontinuous behaviour have clouded this distinction—default can now be unpredictable in the firm‐value model and the differences are less pronounced than before. It remains, however, constructive to compare and contrast the formalism.

We start with the reduced‐form model—the inspiration comes from extending the short‐rate formulation of interest rate modelling to the intensity λt of a Cox process whose first jump represents the default of the obligor. Recent practice has emphasized the careful separation of filtration between a background ℱt containing all information available except for the explicit observation of default and a default filtration Ɗt which merely observes the default indicator. The combined filtration 𝒢t = ℱt ∨ Ɗt represents the full market but in order to avoid technical difficulties such as risky numeraires becoming zero‐valued, the theory aims to recast as much as possible of the pricing in terms of conditioning on ℱt rather than 𝒢t. Conditioned on ℱt, default is unpredictable.

A simple concrete example is the following—given a short rate rt adapted to the filtration ℱt produced by Brownian motion W1(t) and described by the stochastic differential equation (SDE)

dr_t = \mu_1(t, r_t)\,dt + \sigma_1(t, r_t)\,dW_1(t)
(2)

we can invoke a numeraire B_t = exp(∫_0^t r_s ds) and a pricing equation for an ℱT‐adapted claim XT, given by the usual

X_t = B_t\,E^{\mathbb{Q}}\!\left[B_T^{-1} X_T \,\middle|\, \mathcal{F}_t\right]
(3)

where ℚ represents the appropriate pricing measure.

We then have the value of a risk‐free zero coupon bond D(t, T) corresponding to the claim XT = 1 given by

X_t = B_t\,E^{\mathbb{Q}}\!\left[B_T^{-1} \,\middle|\, \mathcal{F}_t\right] = E^{\mathbb{Q}}\!\left[\exp\!\left(-\int_t^T r_s\,ds\right) \middle|\, \mathcal{F}_t\right]
(4)

Other more complex interest rate claims follow similarly and the model parameters μ1 and σ1 could be calibrated to the yield curve and chosen vols from the swaption market. We now consider the extension to risky claims by increasing the filtration ℱt to include a second Brownian motion W2(t) and driving the stochastic intensity λt via the SDE:

d\lambda_t = \mu_2(t, \lambda_t)\,dt + \sigma_2(t, \lambda_t)\,dW_2(t)
(5)

Note that within ℱt, we have no explicit knowledge of the default time—indeed conditioned on ℱt, the event of default in any finite interval has probability strictly less than 1.

Within the extended filtration 𝒢t = ℱt ∨ Ɗt, the pricing equation for the 𝒢T‐adapted risky claim of the form 1_{τ>T}XT is the usual

X_t = B_t\,E^{\mathbb{Q}}\!\left[B_T^{-1}\,1_{\tau>T}\,X_T \,\middle|\, \mathcal{G}_t\right]
(6)

However, if we wish to work entirely conditioned on sub‐filtration ℱt, this becomes

X_t = \frac{1_{\tau>t}}{E^{\mathbb{Q}}\!\left[1_{\tau>t} \,\middle|\, \mathcal{F}_t\right]}\,B_t\,E^{\mathbb{Q}}\!\left[B_T^{-1}\,1_{\tau>T}\,X_T \,\middle|\, \mathcal{F}_t\right]
(7)

Thus the value of the risky zero coupon bond Q(t, T) is

Q(t,T) = 1_{\tau>t}\,\exp\!\left(\int_0^t (\lambda_s + r_s)\,ds\right) E^{\mathbb{Q}}\!\left[\exp\!\left(-\int_0^T (\lambda_s + r_s)\,ds\right) \middle|\, \mathcal{F}_t\right] = 1_{\tau>t}\,E^{\mathbb{Q}}\!\left[\exp\!\left(-\int_t^T (\lambda_s + r_s)\,ds\right) \middle|\, \mathcal{F}_t\right]
(8)

Again more complex claims can be valued, and with appropriate recovery assumptions μ2 and σ2 can be calibrated to the CDS term structure and CDS options.
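A brute‐force way to see equation (8) at work is to simulate the two driving processes and average the discount factor exp(−∫(r+λ)). The sketch below assumes, purely for illustration, Vasicek dynamics for r and CIR‐type dynamics (with full truncation) for λ, with independent Brownian drivers; none of these parameter choices come from the text.

import numpy as np

rng = np.random.default_rng(0)

def risky_zcb_mc(T=5.0, steps=500, paths=20000,
                 r0=0.03, kr=0.2, theta_r=0.04, sig_r=0.01,
                 lam0=0.02, kl=0.5, theta_l=0.03, sig_l=0.1):
    """Monte Carlo estimate of Q(0,T) = E[exp(-int_0^T (r_s + lambda_s) ds)]
    under illustrative dynamics: Vasicek for r, CIR (full truncation) for
    lambda, independent Brownian drivers.  Parameter values are arbitrary."""
    dt = T / steps
    r = np.full(paths, r0)
    lam = np.full(paths, lam0)
    integral = np.zeros(paths)
    for _ in range(steps):
        integral += (r + lam) * dt
        dW1 = rng.standard_normal(paths) * np.sqrt(dt)
        dW2 = rng.standard_normal(paths) * np.sqrt(dt)
        r = r + kr * (theta_r - r) * dt + sig_r * dW1
        lam_pos = np.maximum(lam, 0.0)
        lam = lam + kl * (theta_l - lam_pos) * dt + sig_l * np.sqrt(lam_pos) * dW2
    return np.exp(-integral).mean()

print(risky_zcb_mc())   # compare with exp(-(r0 + lam0) * T) as a sanity check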

In contrast, in the firm‐value model we consider the evolution of a process representing an abstract firm value where default occurs because this value hits a down‐and‐out barrier representing the point at which the firm is no longer solvent. Typically, the firm value vt might be governed by an SDE of the form:

dv_t = r_t\,v_t\,dt + \sigma_t\,v_t\,dW_t
(9)

with the firm defaulting if and when vt crosses a time‐dependent barrier Ht, with H0 < v0.

In this framework, we have the value of a risky claim 1_{τ>T}XT given by

X_t = 1_{\{v_s > H_s\ \forall s\in[0,t]\}}\,B_t\,E^{\mathbb{Q}}\!\left[B_T^{-1}\,1_{\{v_s > H_s\ \forall s\in[t,T]\}}\,X_T \,\middle|\, \mathcal{F}_t\right]
(10)

where this time the filtration ℱt contains the default indicator and often (but not necessarily) the numeraire B_t = exp(∫_0^t r_s ds) is derived from a non‐stochastic short rate rt.

If we hold all parameters constant, and choose a barrier Ht = H0 exp(rt), then the risky zero‐recovery discount bond becomes

Q(t,T) = 1_{\{v_s > H_s\ \forall s\in[0,t]\}}\,B_t B_T^{-1}\left[\Phi\!\left(\frac{\ln(v_t/H_t) - \tfrac{1}{2}\sigma^2(T-t)}{\sqrt{\sigma^2(T-t)}}\right) - \frac{v_t}{H_t}\,\Phi\!\left(\frac{\ln(H_t/v_t) - \tfrac{1}{2}\sigma^2(T-t)}{\sqrt{\sigma^2(T-t)}}\right)\right]
(11)

However, as is obvious from the construction of the model, if vt > Ht, the continuity of Brownian motion prevents any possibility of default in the interval [t, t + dt]. Until default becomes inevitable, the instantaneous CDS spread st is always zero. Specifically, s0 = 0, which is emphatically not an observed feature of the market. Introduction of curvilinear barriers such that vt − Ht ↓ 0 as t ↓ 0 improves calibration to the short end of the CDS curve, but the problem simply re‐emerges when conditioned on future firm values. This has led to suggestions of uncertain barriers (Duffie and Lando 2001) or, more naturally, adding discontinuous Lévy behaviour such as Poisson jumps into the firm‐value process. Introducing such discontinuities clearly allows default to become unpredictable even when vt > Ht and thus allows more natural calibration to the initial CDS term structure and more natural evolution of that curve. It also has another useful consequence when the theory is extended to multiple reference entities: allowing simultaneous jumps in firm values can dramatically increase the amount of coupling between defaults, allowing necessary freedom when calibrating to markets.
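For reference, a direct implementation of the zero‐recovery bond formula (11) above, with the pre‐default indicator set to one and a constant short rate so that B_t B_T^{-1} = e^{-r(T-t)}; the numerical inputs are arbitrary.

import numpy as np
from scipy.stats import norm

def barrier_survival(v_t, H_t, sigma, tau):
    """Survival probability over [t, t + tau] for the constant-parameter
    firm-value model with a barrier growing at the risk-free rate, i.e. the
    bracketed term in formula (11)."""
    x = np.log(v_t / H_t)
    denom = sigma * np.sqrt(tau)
    return (norm.cdf((x - 0.5 * sigma**2 * tau) / denom)
            - (v_t / H_t) * norm.cdf((-x - 0.5 * sigma**2 * tau) / denom))

def risky_zero_recovery_bond(v_t, H_t, sigma, r, tau):
    """Q(t, T) with T - t = tau, assuming no default so far (the indicator in
    front of (11) equals 1) and a deterministic short rate r."""
    return np.exp(-r * tau) * barrier_survival(v_t, H_t, sigma, tau)

print(risky_zero_recovery_bond(v_t=1.0, H_t=0.6, sigma=0.25, r=0.03, tau=5.0))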

In Chapter 6, Schloegl lays out the detailed theoretical basis for CDS market models—reduced‐form models that define the stochastic properties of market observables such as forward CDS spreads. He sets out the theoretical underpinning of the filtration separation 𝒢t = ℱt ∨ Ɗt implicit in reduced‐form models before describing the necessary change of numeraires away from the standard rolling numeraire B_t = exp(∫_0^t r_s ds) towards risky numeraires. He demonstrates that a numeraire such as a risky annuity allows a model in which options on CDSs have Black‐Scholes type price solutions. Extending these ideas further, Schloegl shows how Libor‐market models (where the behaviour of forward Libor rates is controlled) can be extended to include carefully defined forward default intensities. With IR and credit correlated, the coupling between CDS spreads and the IR and intensity processes prevents forward CDS spreads being treated in a similar manner. However, as Schloegl shows, if interest rates and intensities are independent, this coupling becomes unimportant and a full CDS market model can be constructed.

Within Chapter 7, Lipton and Shelton extend the toy reduced‐form model above to consider the following

dX_t = \kappa(\theta_t - X_t)\,dt + \sigma\sqrt{X_t}\,dW_t + J\,dN_t
(12)

where in addition to Wt as Brownian motion, we have Nt as a Poisson process with intensity νt and J a positive jump distribution with jump values occurring in the set {0, 1,…, M}, all processes and distributions mutually independent. This forms an affine jump diffusion process capable, with careful parameter choice, of remaining strictly positive. From this the authors derive prices for coupon and protection legs for CDSs as well as describing practical extensions to market instruments such as index default swaps. This affine jump diffusion model extends very naturally to multiple obligors but this, as well as further description of multiple obligor theory, we will describe in the next section.

4 Theory of multiple obligors

Just as reduced‐form and firm‐value modelling emerged to analyse the default of single obligors, a series of models have arisen to analyse the joint default of multiple obligors. And just as the spectacular growth of the CDS market drove the first, the growth of CDO issuance, and specifically the challenge of warehousing single‐tranche CDOs, has driven the second.

To motivate the modelling discussion, consider pricing a single CDO tranche, covering the losses between the two thresholds 0 ≤ α < β ≤ 1. If Ri is the recovery associated with the default of obligor i, the portfolio loss process Lt is given by

L_t = \frac{1}{n}\sum_{i=1}^{n}(1-R_i)\,1_{\tau_i < t}
(13)

and the loss on the tranche (α, β), M_t^{α,β}, is then given by

M_t^{\alpha,\beta} = \min\!\left(\max\!\left(L_t - \alpha,\, 0\right),\, \beta - \alpha\right)
(14)

Note that Lt and M_t^{α,β} are both jump processes, with the payments on the CDO protection leg corresponding to the jumps of M_t^{α,β}.

Given a numeraire Bt and associated pricing measure ℚ, we have the value of the protection leg V1 given by

V_1 = E^{\mathbb{Q}}\!\left[\int_0^T B_t^{-1}\,dM_t^{\alpha,\beta}\right]
(15)

and the value of the corresponding premium leg V2 given by

V_2 = E^{\mathbb{Q}}\!\left[\sum_{i=1}^{N} s_i\,B_{T_i}^{-1}\left((\beta-\alpha) - M_{T_i}^{\alpha,\beta}\right)\right]
(16)

where (0 = T0, T1,…, TN = T) represent the schedule of dates associated with the premium leg and si the (full) premium associated with the period [Ti−1, Ti ].

If we assume a deterministic numeraire Bt, then it is clear that the value of the premium leg is fully determined by knowledge of E[M_t^{α,β}]. The protection leg is superficially less clear, but integration by parts yields

V_1 = B_T^{-1}\,E\!\left[M_T^{\alpha,\beta}\right] - \int_0^T \frac{dB_t^{-1}}{dt}\,E\!\left[M_t^{\alpha,\beta}\right]dt
(17)

and again the value depends only on E[M_t^{α,β}]. Knowledge of the marginal distribution for loss is sufficient to price a single‐tranche CDO.
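The observation that both legs are determined by E[M_t^{α,β}] translates directly into code. The sketch below values the protection leg via the integration‐by‐parts identity (17) and the premium leg via (16), given user‐supplied callables for the expected tranche loss and the deterministic discount factor; the flat curve and the ad hoc expected‐loss profile in the usage example are illustrative assumptions only.

import numpy as np

def tranche_legs(EL, df, alpha, beta, T, pay_dates, spread, n_grid=1000):
    """Protection and premium legs of a single tranche from the expected
    tranche loss alone.

    EL(t)     : expected tranche loss E[M_t^{alpha,beta}] (a callable)
    df(t)     : deterministic discount factor B_t^{-1} (a callable)
    pay_dates : premium payment dates T_1 < ... < T_N = T
    spread    : the (full) premium per period, here taken constant

    The protection leg uses the integration-by-parts identity (17) so that
    only E[M_t] is needed; d(B_t^{-1})/dt is taken by finite differences."""
    ts = np.linspace(0.0, T, n_grid)
    dfs = np.array([df(t) for t in ts])
    ddf_dt = np.gradient(dfs, ts)                     # d B_t^{-1} / dt
    els = np.array([EL(t) for t in ts])
    V1 = df(T) * EL(T) - np.trapz(ddf_dt * els, ts)   # protection leg
    V2 = sum(spread * df(Ti) * ((beta - alpha) - EL(Ti)) for Ti in pay_dates)
    return V1, V2

# Hypothetical usage: flat 3% curve and an ad hoc expected-loss curve.
df = lambda t: np.exp(-0.03 * t)
EL = lambda t: 0.03 * (1 - np.exp(-0.4 * t))          # illustrative only
print(tranche_legs(EL, df, alpha=0.03, beta=0.06, T=5.0,
                   pay_dates=np.arange(0.25, 5.25, 0.25),
                   spread=0.01 * 0.25))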

Given this simplification, it is unsurprising that an early and subsequently dominant model used for pricing single‐tranche CDOs was the (in)famous Gaussian copula model. Sklar's theorem (1959) shows that given a known multivariate cumulant F(x1,…, xn), with corresponding individual cumulants F1(),…, Fn(), we can manufacture a multivariate cumulant G(y1,…, yn) from individual cumulants G1(),…, Gn() via

G(y_1,\ldots,y_n) = F\!\left[F_1^{-1}(G_1(y_1)),\ldots,F_n^{-1}(G_n(y_n))\right]
(18)

Thus for arbitrary G1,…, Gn, we can construct a coherent joint distribution via the scaffolding provided by F. Though it can be argued that subsequent history has shown it to be a poor choice, the initial scaffolding chosen was the familiar multivariate Gaussian Φ_ρ^n[·] with constant pair‐wise correlation ρ.

Given the simple correlation structure, the multivariate Gaussian X1,…, Xn can be rewritten in terms of n + 1 IID Gaussians Y0,Y1, …, Yn via

X_i = \sqrt{\rho}\,Y_0 + \sqrt{1-\rho}\,Y_i
(19)

and thus the joint cumulant of default times

\mathbb{P}(\tau_1 \le T_1,\ldots,\tau_n \le T_n) = \Phi_\rho^{(n)}\!\left[\Phi^{-1}(\mathbb{P}_1(\tau_1 \le T_1)),\ldots,\Phi^{-1}(\mathbb{P}_n(\tau_n \le T_n))\right]
(20)

can be written as

\mathbb{P}(\tau_1 \le T_1,\ldots,\tau_n \le T_n) = \int_{-\infty}^{\infty}\prod_{i=1}^{n}\Phi\!\left(\frac{\Phi^{-1}(\mathbb{P}_i(\tau_i \le T_i)) - \sqrt{\rho}\,y}{\sqrt{1-\rho}}\right)\phi(y)\,dy
(21)

where Φ, Φ−1 and ϕ are the standard Gaussian cumulant, its inverse and density respectively.

In this way, we can construct a coherent joint distribution for default times, with a single parameter ρ which by inspection plays a plausible role in inducing coupling between obligors. If, for example, all the ℙi are identical, then as ρ → 1, the default times of the obligors on any sample path coincide. And from this distribution of default times, numerical methods such as FFT or convolution coupled with 1D quadrature can produce effective estimates for E[M_t^{α,β}].
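Concretely, the conditional‐independence structure of (21) is what makes the numerics cheap: conditional on the common factor, defaults are independent, so the conditional loss distribution can be built by a simple convolution recursion and then integrated over the factor with one‐dimensional quadrature. The sketch below does exactly this for a homogeneous basket with a common recovery; basket size, default probability, and ρ are illustrative.

import numpy as np
from scipy.stats import norm

def gaussian_copula_loss_dist(p, rho, n_quad=64):
    """Distribution of the number of defaults under the one-factor Gaussian
    copula: conditional on the factor Y, obligor i defaults independently
    with probability Phi((Phi^{-1}(p_i) - sqrt(rho) Y) / sqrt(1 - rho)); the
    conditional distributions are built by recursion (convolution) and then
    integrated over Y with Gauss-Hermite quadrature."""
    n = len(p)
    x, w = np.polynomial.hermite_e.hermegauss(n_quad)  # weight exp(-y^2/2)
    w = w / np.sqrt(2 * np.pi)                         # standard normal density
    c = norm.ppf(p)
    dist = np.zeros(n + 1)
    for y, wy in zip(x, w):
        p_cond = norm.cdf((c - np.sqrt(rho) * y) / np.sqrt(1 - rho))
        q = np.zeros(n + 1)
        q[0] = 1.0
        for pi in p_cond:                              # add obligors one by one
            q[1:] = q[1:] * (1 - pi) + q[:-1] * pi
            q[0] *= (1 - pi)
        dist += wy * q
    return dist                        # dist[k] = P(k defaults by the horizon)

def expected_tranche_loss(p, rho, R, alpha, beta):
    """E[M^{alpha,beta}] at the horizon, assuming a common recovery R and
    equal notionals so that k defaults give a fractional loss k(1-R)/n."""
    n = len(p)
    dist = gaussian_copula_loss_dist(p, rho)
    losses = np.arange(n + 1) * (1 - R) / n
    return float(np.dot(dist, np.clip(losses - alpha, 0.0, beta - alpha)))

print(expected_tranche_loss(p=np.full(125, 0.02), rho=0.3, R=0.4,
                            alpha=0.03, beta=0.06))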

As discussed earlier in this introduction, this single correlation ρ became totemic in the markets replacing other measures of diversification inherent in baskets of obligors. And as with any simplification, it has its dangers. Though some (see Salmon 2009) have gone as far as blaming the Gaussian copula for the current crisis—its simplicity inducing complacency in those warehousing increasing levels of risk in CDO form—it is worth noting again the inevitability of the credit cycle and the perhaps surprising fact that the analysis of ABS CDOs often involved no stochastic model whatsoever, let alone a Gaussian copula.

Indeed, as a model, it successfully weathered the ‘correlation crisis’ of May 2005 and remained dominant as the model of choice for single‐tranche CDOs. Yet its disadvantages should not be understated. The first and most obvious is that the single correlation parameter ρ, or its variant base correlation (the ρθ corresponding to tranches with attachment point 0), is far from constant across either detachment point or maturity. Models in other markets also demonstrate ‘skew’ and ‘term structure’, of course, but it is sufficiently pronounced in the CDO market to render extrapolating the Gaussian copula to other products perilous. Moreover, the choice of the multivariate Gaussian as scaffolding means that the model inherits its distinctive and unusually weak tail dependence. Specifically for multivariate Gaussian X1,…, Xn with pairwise correlation ρ < 1,

\lim_{k\to\infty}\mathbb{P}\!\left[X_i > k \,\middle|\, X_j > k\right] = 0,\qquad i \ne j
(22)

This inability of the multivariate Gaussian to give weight to simultaneous extremes impairs the ability of the Gaussian copula to price super‐senior tranches. This effect also brings into doubt the model's general capacity to cope with less benign credit conditions.

In recognition of this, much has been suggested in the form of extensions to the Gaussian copula—different choices of the scaffolding multivariate with improved tail dependence such as the multivariate t‐copula (or indeed non‐parametric formulations) as well as introducing explicit states corresponding to ‘Doomsday’ scenarios via additional Poisson processes. In addition to the purely static approach of copulae, there has also been substantial focus on multivariate extensions to reduced‐form or firm‐value models. One such interesting extension to the reduced‐form approach is the Marshall‐Olkin framework (Marshall and Olkin 1967), where coupling between obligors is induced in a particularly natural way. Instead of a one‐to‐one mapping, the framework proposes a more complex mapping between m Poisson processes with intensity λj and n obligors, with m > n. The mapping of defaults is intermediated via {0,1}‐valued Bernoulli variables with probabilities pi,j ∈ [0, 1] which decide whether obligor i can default given a jump in Poisson process j. More formally, this can be thought of as a set of independent Poisson processes (N_π)_{π∈Π_n} associated with all possible subsets of the obligors π ∈ Π_n, with intensities λπ given by

\lambda_\pi = \sum_{j=1}^{m}\left(\prod_{i\in\pi} p_{i,j}\prod_{i\notin\pi}\bigl(1-p_{i,j}\bigr)\right)\lambda_j
(23)

The default of individual obligor i then becomes the first jump across all Nπ with i ∈ π. This framework is rich enough to induce many different types of coupling and yet remains analytically tractable.
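A small sketch of equation (23): given the m × n matrix of Bernoulli probabilities and the intensities of the m driving Poisson processes, it enumerates the subset intensities λ_π (feasible only for small n, since there are 2^n subsets); the three‐obligor example with one idiosyncratic driver per obligor plus a common ‘systemic’ driver is our own toy configuration.

import numpy as np
from itertools import combinations

def marshall_olkin_subset_intensities(p, lam):
    """Intensities lambda_pi of the independent Poisson processes attached to
    each subset pi of obligors in the Marshall-Olkin construction (23).

    p   : (m, n) array, p[j, i] = probability that obligor i defaults given a
          jump of the j-th driving Poisson process
    lam : (m,) intensities of the m driving Poisson processes"""
    m, n = p.shape
    out = {}
    for size in range(n + 1):
        for pi in combinations(range(n), size):
            in_pi = np.zeros(n, dtype=bool)
            in_pi[list(pi)] = True
            # product over i in pi of p_{i,j} times product over i not in pi
            # of (1 - p_{i,j}), for each driving process j
            factors = np.where(in_pi, p, 1.0 - p).prod(axis=1)
            out[pi] = float(factors @ lam)
    return out

# Toy configuration: one idiosyncratic driver per obligor plus one common
# 'systemic' driver that hits each obligor with probability 0.5.
p = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.5]])
lam = np.array([0.01, 0.01, 0.01, 0.002])
lams = marshall_olkin_subset_intensities(p, lam)
print(lams[(0,)], lams[(0, 1, 2)])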

These approaches, known as ‘bottom‐up’ models because of their focus on modelling the obligors individually, have proved a rich source for understanding the range of possible aggregate loss distributions. In contrast, there is also activity in the form of ‘top‐down’ modelling where the loss distribution is tackled directly. The motivation behind these models is the desire to use couplings such as contagion, where one default changes the likelihood of others, but to avoid working with the 2^n‐sized filtration arising when the defaults of individual obligors are tracked. Given suitable homogeneity assumptions, top‐down modelling can often use a filtration of size O(n) and then apply results from continuous‐time Markov chains.

Consider such a Markov chain Ct, defined on K = {0, 1,…, n} with time‐homogeneous transition probabilities pij(t) given by

\mathbb{P}\bigl(C_{s+t} = j \,\big|\, C_s = i\bigr) = p_{ij}(t),\qquad i, j \in K
(24)

Then imposing sufficient conditions for it to exist, we can define a transition intensity λij given by

\lambda_{ij} = \lim_{t\downarrow 0}\frac{p_{ij}(t) - p_{ij}(0)}{t}
(25)

and then form the generator matrix Λ = [λij].

This generator matrix uniquely determines the behaviour of the Markov chain—the probability density of states at time t being given by the solution to the linearly coupled ODEs

\frac{d}{dt}p(t) - p(t)\Lambda = 0
(26)

where p(t) = (p0(t),…, pn(t)) is the row vector representing the density of states at time t and p(0) is set appropriately. This or the corresponding backwards equation can be solved either via conventional ODE techniques or by directly approximating the matrix exponential e^{Λt} via Padé approximants.

Both approaches are relatively efficient and allow respectable calculation times for state spaces several hundred in size. Given the typical basket size for single‐tranche CDOs of a hundred or so, this allows a reasonable range of models including, as mentioned before, contagion models with filtrations of size 2n + 1 or similar.
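As a concrete (and deliberately simple) illustration of the matrix‐exponential route, the sketch below builds the generator of a default‐counting chain for n homogeneous obligors with a constant intensity and no contagion, and propagates the state distribution with SciPy's expm, which is itself based on Padé approximants; the intensity and horizon are arbitrary.

import numpy as np
from scipy.linalg import expm

def state_distribution(generator, p0, t):
    """Solve d/dt p(t) = p(t) Lambda for a row vector p(t) by computing the
    matrix exponential exp(Lambda t)."""
    return p0 @ expm(generator * t)

# Illustrative generator on K = {0, 1, ..., n}: the chain counts defaults and
# moves from k to k+1 at rate (n - k) * h, i.e. n interchangeable obligors
# each with a constant intensity h (no contagion yet).
n, h = 100, 0.02
Lam = np.zeros((n + 1, n + 1))
for k in range(n):
    Lam[k, k + 1] = (n - k) * h
    Lam[k, k] = -(n - k) * h

p0 = np.zeros(n + 1)
p0[0] = 1.0
p5 = state_distribution(Lam, p0, t=5.0)
print(p5[:5], p5.sum())      # distribution of the number of defaults at t = 5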

There are consequences, however, to reducing the state space to O(n). Strict homogeneity assumptions are often needed—all obligors, for example, might be forced to be interchangeable. And this is in many ways a significant shortcoming compared to the ‘bottom‐up’ approach. Not only is it not true in practice that baskets contain identically behaving obligors—indeed during crises it is not unknown for the spreads of obligors in a single basket to diverge by two orders of magnitude—but this variation of behaviour has significant effect on the relative value of junior and senior tranches. More significantly, perhaps, if the identity of individual obligors is lost, it remains an outstanding, non‐trivial problem to derive hedging strategies.

Returning to Chapter 7, Lipton and Shelton provide a comprehensive overview of topics in multiple obligor modelling which this introduction only touches upon. They demonstrate a multivariate extension of their example affine jump‐diffusion model which induces a wide range of correlations between defaults of obligors and gives a concrete example of pricing first‐ and second‐to‐default baskets within a two‐factor model.

The authors also devote considerable attention to calculation of portfolio loss distributions for multiple obligors giving a comparison of FFT, recursion and various analytic approximations. For homogeneous portfolios, there is a surprising result attributed to Panjer (1981), but likely to date back to Euler. For the restricted class of distributions on the non‐negative integers m such that

p_m = p_{m-1}\left(a + \frac{b}{m}\right)
(27)

where pm can be thought of as the probability of m defaults, then given a distribution for individual losses f(j) discretized onto the positive integers j ∈ ℤ+ we have the remarkable result that the probability pn(i) of the sum of n obligors resulting in loss i is

p_n(0) = p_0
p_n(i) = \sum_{j=1}^{i}\left(a + \frac{bj}{i}\right) f(j)\,p_n(i-j),\qquad i = 1, 2, 3, \ldots
(28)

which is O(n^2) compared to the more usual O(n^3).
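The recursion (28) is short enough to state in code. The sketch below implements it for loss distributions f supported on the positive integers, and checks it on the compound Poisson case (a = 0, b = λ), which is one member of the Panjer class; the loss‐size distribution in the example is made up.

import numpy as np

def panjer(a, b, p0, f, max_loss):
    """Panjer (1981) recursion for the aggregate loss distribution when the
    count probabilities satisfy p_m = p_{m-1} (a + b/m) and individual losses
    take positive-integer values with probabilities f[1], f[2], ...

    Returns P with P[i] = probability that the total loss equals i (assumes
    f[0] = 0, i.e. every default produces a strictly positive loss)."""
    P = np.zeros(max_loss + 1)
    P[0] = p0
    for i in range(1, max_loss + 1):
        j = np.arange(1, min(i, len(f) - 1) + 1)
        P[i] = np.sum((a + b * j / i) * f[j] * P[i - j])
    return P

# Example: Poisson(lam) counts correspond to a = 0, b = lam, p0 = exp(-lam);
# each default loses 1, 2, or 3 units with equal probability (made up).
lam = 2.0
f = np.array([0.0, 1/3, 1/3, 1/3])
P = panjer(a=0.0, b=lam, p0=np.exp(-lam), f=f, max_loss=30)
print(P[:5], P.sum())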

In Chapter 8, Elouerkhaoui gives details of a Marshall‐Olkin model where for n obligors a number m > n of Poisson processes are coupled to the default of the obligors by Bernoulli variables which decide, for the rth jump of a given Poisson process, which (if any) of the obligors default. By careful specification of the probabilities of these Bernoulli variables, the Poisson jumps become categorized into inducing patterns of default ranging from idiosyncratic, through enhanced defaulting in sectors, to group ‘Doomsday’ style defaults. Elouerkhaoui shows a range of efficient numerical techniques, including the Panjer recursion detailed above, before demonstrating the properties of a particular choice of Marshall‐Olkin model. As he observes, suitable choices for the Poisson intensities and corresponding sets of Bernoulli variables can result in a relaxed, natural calibration to single‐tranche CDOs and have fewer of the term‐structure and skew issues of Gaussian copulae.

In Chapter 9, Davis sets out a taxonomy of continuous‐time Markov modelling, giving three main categories—factor, frailty, and contagion models. In a factor model, the intensity of a given obligor is driven by a (common) multivariate process Xt often representing the macro‐economy via a finite‐state Markov chain. As usual, suitable homogeneity assumptions for the obligors render the filtration manageable. In frailty models, this is extended to include additional random variables corresponding to ‘hidden’ unobservable features which further modify, usually multiplicatively, the factor‐driven intensities. In contagion models, as described above, the intensities are altered not by exogenous factors or frailties but by the events of defaults of the obligors themselves.

Davis showcases an elegant example of a contagion model with an O(n) filtration in which homogeneous obligors shuttle back and forth spontaneously between a ‘normal’ state and an ‘enhanced’ risk state (with higher intensities), augmented by forced transition into the enhanced state if defaults occur. A filtration of size 2n + 1 again allows fast computation.

In Chapter 10, Bielecki, Crépey, and Herbertsson provide a comprehensive overview of continuous‐time Markov chains, before reviewing a number of Markovian models for portfolio credit risk. They give examples of both ‘top‐down’ and ‘bottom‐up’ modelling with the corresponding divergence in size of filtration. They detail Monte Carlo methods for large filtrations and compare and contrast a range of matrix exponential and direct ODE solvers for models with smaller filtrations. Focusing on the latter, they introduce a contagion model for intensities with interesting properties. Inevitably homogeneity is a critical assumption, with all obligors interchangeable and the default times of individual obligors {τ1, τ2, …, τn} replaced by the portfolio default times {T1, T2,…, Tn} representing the ordering of the individual τi. The obligors' intensities are specified by

\lambda_t = a + \sum_{k=1}^{n-1} b_k\,1_{\{T_k \le t\}}
(29)

where a is the common base intensity and {bk} are the increments to the obligors' intensities caused by the number of defaults in the portfolio rising to k.

Given that a typical index such as iTraxx or CDX has n = 125, but provides far fewer market observed tranche prices, the model induces further parsimony by grouping the bk into partitions {μ1, μ2,…,μ6 = n} via

b_k = \begin{cases} b^{(1)} & 1 \le k < \mu_1 \\ b^{(2)} & \mu_1 \le k < \mu_2 \\ \quad\vdots & \\ b^{(6)} & \mu_5 \le k < \mu_6 = n \end{cases}
(30)

This grouping into blocks allows tuning of different tranche values via the b^{(i)}, producing a natural calibration; the relatively small filtration of size n + 1 gives respectable computation times.
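Continuing the matrix‐exponential sketch above, the generator for this contagion model can be written down directly if one assumes, as the homogeneity of the model suggests, that the default‐counting chain jumps from k to k + 1 at the total rate (n − k)(a + b_1 + … + b_k), i.e. the sum of the surviving obligors' intensities; the flat choice of b_k below is purely illustrative (in practice the b_k would come from the blockwise b^{(i)}).

import numpy as np
from scipy.linalg import expm

def contagion_generator(n, a, b):
    """Generator of the default-counting chain for the contagion intensities
    (29): in state k (k defaults so far) each of the n - k survivors has
    intensity a + b_1 + ... + b_k, so the chain moves from k to k+1 at the
    total rate (n - k) * (a + sum_{j<=k} b_j).

    b is the length n-1 vector (b_1, ..., b_{n-1})."""
    Lam = np.zeros((n + 1, n + 1))
    cum_b = np.concatenate(([0.0], np.cumsum(b)))   # sum_{j<=k} b_j for k = 0..n-1
    for k in range(n):
        rate = (n - k) * (a + cum_b[k])
        Lam[k, k + 1] = rate
        Lam[k, k] = -rate
    return Lam

n = 125
b = np.full(n - 1, 0.001)                # illustrative flat increments
Lam = contagion_generator(n, a=0.01, b=b)
p0 = np.eye(n + 1)[0]
print((p0 @ expm(Lam * 5.0))[:6])        # loss distribution at t = 5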

5 Counterparty credit

So far we have focused on the default of reference entities, this being the most obvious source of credit risk in products such as CDSs and CDOs. There are, however, two other obligors in a derivative transaction that should be considered—the buyer and the seller. The fact that a derivative represents an exchange of (generally) contingent cash flows, some owed by buyer to seller, some owed by seller to buyer, exposes the risk that non‐payment might occur. This risk, known as counterparty risk, can have a sizeable effect on the value, particularly if the reference entity or entities and one or other counterparty have coupled likelihoods of default. This exposes the other counterparty to the risk of non‐payment precisely when the amount owed is large—this ‘wrong‐way’ risk is not uncommon in real‐world trades.

Models used for counterparty risk valuation have to be rich enough to deal not only with the relative order of defaults of reference entities and counterparties but also the conditional value of the contracts at the time of default of a counterparty. This requirement arises through the natural asymmetry caused by a counterparty in default being unable to make full payment if net monies are owed, but nonetheless still demanding full payment if net monies are due. Specifically, given just one risky counterparty with default time τ and recovery rate R entered into a contract with value at time t of Vt, we can integrate across the time of default of the counterparty to give the difference in value now, ΔV0, caused by the possibility of default as

\Delta V_0 = E^{\mathbb{Q}}\!\left[\int_0^T B_t^{-1}\,(1-R)\,V_t^{+}\,1_{\tau = t}\,dt\right]
(31)

Multivariate firm‐value or stochastic reduced‐form models are capable of a filtration rich enough to capture the joint dependency on V_t^+ and 1_{τ=t} dt, but simpler marginal default distribution models such as copulae require careful extension to allow calculation of the necessary conditioned V_t^+.
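For orientation, the sketch below evaluates the adjustment (31) under the strong simplifying assumption that the exposure and the counterparty default time are independent, so the expectation factorizes into an expected positive exposure profile times the default density; this deliberately ignores the wrong‐way risk and conditioning issues that Chapter 11 is about, and every numerical input is hypothetical.

import numpy as np

def cva(epe, df, hazard, R, T, n_grid=1000):
    """Unilateral counterparty-risk adjustment under the simplifying
    assumption that the exposure V_t and the counterparty default time are
    independent, so that

        Delta V_0 = int_0^T B_t^{-1} (1 - R) E[V_t^+] f_tau(t) dt,

    with f_tau(t) = hazard(t) * exp(-int_0^t hazard) the default density.
    With wrong-way risk this factorisation fails and a joint model is needed."""
    ts = np.linspace(0.0, T, n_grid)
    h = np.array([hazard(t) for t in ts])
    cum_h = np.concatenate(([0.0],
                            np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(ts))))
    f_tau = h * np.exp(-cum_h)
    integrand = np.array([df(t) * (1 - R) * epe(t) for t in ts]) * f_tau
    return np.trapz(integrand, ts)

# Hypothetical inputs: flat 4% hazard, flat 3% curve, humped expected exposure.
print(cva(epe=lambda t: 0.05 * np.sqrt(t) * np.exp(-0.1 * t),
          df=lambda t: np.exp(-0.03 * t),
          hazard=lambda t: 0.04, R=0.4, T=5.0))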

In Chapter 11, Gregory takes up the challenges posed by marginal default distribution models—for illustrative purposes he chooses a straightforward Gaussian copula—and details counterparty risk corrections for CDSs, index CDSs, and CDOs. As he points out, the static copula approach fixes the expected value of the product conditional on counterparty default, but not its distribution. As a consequence, he provides only bounds for the value of the correction, but in many cases these are tight enough to be useful. He provides a number of numerical examples including a timely reminder that ‘risk‐free’ super‐senior tranches are particularly prone to counterparty risk.

In Chapter 12, Lipton and Sepp provide a multivariate time‐inhomogeneous firm‐value model where the coupling between firms is driven not only by the familiar correlative coupling between Brownian diffusive components Wi(t) but also by inducing jumps in the firm values via Poisson processes Ni(t) which are also coupled together by a Marshall‐Olkin inspired common ‘systemic’ Poisson process. The authors consider two different distributions for the jump sizes—one with a single value −ν and the other an exponential distribution with mean −1/ν. The first induces strong coupling, the second weaker since, though the likelihood of jumps is correlated, the sizes of the jumps remain independent. This choice allows the model to fit both regimes where the default correlations are low and regimes where they are high.

One attraction of a firm‐value model is the capacity to model equity as well as credit and the authors exploit this. Formally, given a firm value v(t), they assume a down‐and‐out barrier

H_t = H_0\exp\!\left(\int_0^t (r_s - \zeta_s)\,ds\right)
(32)

where rt is the deterministic short rate and ζt the deterministic dividend rate for the firm's assets. They can then regard the equity price as vt − Ht prior to default; the resulting displaced diffusion with jumps being a realistic model for equity derivative valuation.

Concluding that the current market state with relatively high default correlations favours the discrete jump model, the authors choose a suitable level of default correlation and via analytic and numerical results for CDSs, CDS options, and equity options demonstrate a detailed calibration to the market with realistic skews for equity and CDS options.

6 Beyond normality

In much of the preceding discussion, it is apparent that restricting modelling to Gaussian marginals or Brownian processes fails to capture the ‘fat‐tailed’ nature of credit. The probability of large events in a Gaussian framework seems too low to fit the observed frequency of extremes in credit and the strong restrictions on local structure that result from pricing with just the continuous Brownian part of generic Lévy processes similarly hamper attempts to match actual market prices. To some extent, this is an inevitable consequence of a popular and not unreasonable philosophy of pricing. This philosophy views limiting the number of parameters and imposing a restrictive structure via process choice such as Brownian as essential to safely pricing claims outside the linear span of liquidly observed market instruments. As a market fragments during a crisis, it is not surprising that such a stiff model struggles to fit but, more positively, it forces acknowledgement that the safety of pricing ‘difficult’ claims depends on a well‐behaved market.

There are, however, different responses. One, as described in provocative fashion by Ayache in Chapter 13, is to abandon the philosophy altogether and instead of a process‐based model which underfits reality, move to an overfitted ‘regime’‐based model with many parameters. With such a model, the calibration net can be spread wide enough both to fit difficult conditions and to enlarge the linear span of calibrated instruments sufficiently to reduce extrapolation in pricing. Arguably the non‐parametric regime model which Ayache describes is one of the few ways to allow a market to speak freely, unencumbered by artificial restrictions. But it remains controversial amongst those who doubt that pricing can ever fully escape extrapolation and who hold that, without an a priori justifiable structure underpinning a model, extrapolation is lethal.

A second less controversial approach remains. The Gaussian distribution and Brownian motion may be limiting but they are far from inevitable. Within the framework of Lévy processes, Brownian motion is just one sub‐species of process—coherent discontinuous processes with both finite (i.e. Poisson) and infinite (i.e. Gamma) activity also exist, providing much greater access to extreme behaviour. And Gaussian distributions are distinctively ‘thin‐tailed’ compared to almost any other choice without finite support. More formally, given X1, X2, … IID, and following Fisher and Tippett (1928), if there exist norming constants cn > 0, dn ∈ ℝ such that

\frac{\max(X_1,\ldots,X_n) - d_n}{c_n} \xrightarrow{d} H \quad \text{as } n \to \infty
(33)

for some non‐degenerate H, then H must be of the following type

H_\xi(x) = \begin{cases}\exp\!\left(-(1+\xi x)^{-1/\xi}\right) & \text{if } \xi \ne 0,\ (1+\xi x) > 0 \\ \exp\!\left(-\exp(-x)\right) & \text{if } \xi = 0\end{cases}
(34)

These extreme value distributions—Weibull for ξ < 0, Gumbel for ξ = 0, and Fréchet for ξ > 0—represent a universe of non‐degenerate distributions for max(X1,…, Xn) that far exceeds the restricted behaviour of Gaussians. Indeed if X1,…,Xn are Gaussian, the corresponding Hξ is the Gumbel with ξ = 0, the thinnest‐tailed distribution in the family that still retains infinite support.

As Chavez‐Demoulin and Embrechts detail in Chapter 14, the existence of a prescriptive family Hξ has two critical implications.

The first, as already observed, raises awareness of a much larger fat‐tailed world outside the familiar Gaussian. Ten‐ or twenty‐standard‐deviation events are not necessarily as ‘impossible’ as the Gaussian predicts and it is hard to overstate the importance of this observation for anyone such as regulators interested in the extremes of distributions. The Gaussian distribution is commonplace when modelling the ‘middle’ of distributions—the not unlikely events—but it forces a very particular and aggressive reduction of probability as the size of observations increases. This is far from inevitable; indeed empirical study suggests a ξ > 0 across much of the financial markets and for applications such as VaR, a Gaussian choice can lead to massive understatement of risk.

The second is that though there is more freedom to extremes than Gaussians would suggest, this freedom is not without limit. Indeed for large enough values, tails have a surprisingly restricted structure, which opens up the possibility of meaningful parameter estimation from data and thus investigation of tails empirically. Chavez‐Demoulin and Embrechts provide an overview of some of the relevant statistical techniques.

The first implication is uncontroversial; the second, as the authors themselves warn, should be treated with more caution. By its nature, extreme‐value theory deals with events that are rare and thus only marginally present (if at all) in any data; any estimation will almost inevitably involve alarmingly few data points. Moreover, the theory itself is silent on how far out in the tails we must go for convergence to happen; some base distributions for Xi involve extremely slow convergence into Hξ.

In Chapter 15, Martin continues with this theme with results that provide good approximations for the tail of the sum of n i.i.d. random variables even in non‐Gaussian cases. Specifically, if X1, X2, …, Xn are i.i.d. with cumulant K_X(ω) = ln E[exp(iωX)] valid for ω ∈ ℂ in some band around the real axis, then the density f_Y of Y = Σ_{i=1}^n X_i is given by

f_Y(y) = \frac{1}{2\pi}\int_C e^{nK_X(\omega) - i\omega y}\,d\omega
(35)

for any contour C equivalent to integration along the real axis.

With careful choice of C, specifically one that lies along the path of steepest descent, the integrand is well behaved and Watson's Lemma provides a high‐quality approximation, namely

f_Y(y) \approx \frac{e^{nK_X(\hat\omega) - i\hat\omega y}}{\sqrt{-2\pi n\,K_X''(\hat\omega)}}
(36)

where ω̂ is such that nKX(ω) − iωy is stationary, indeed forming a saddlepoint.

Given analytic knowledge of K_X(ω) for a useful range of distributions, plus the ability to induce coupling in models via conditional independence, thus preserving the above conditional on one (or more) variable(s), Martin shows how the saddlepoint method can be a computationally efficient technique for VaR and other tail‐related values that avoids the use of Gaussian approximation.
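A minimal sketch of the saddlepoint approximation, written in the equivalent real form obtained by substituting s = iω, so that K(s) = ln E[exp(sX)] is the ordinary cumulant generating function and the approximation reads f_Y(y) ≈ exp(nK(s*) − s*y)/√(2πnK''(s*)) with nK'(s*) = y; the exponential example is chosen only because the exact Gamma density is available for comparison.

import numpy as np
from scipy.stats import gamma
from scipy.optimize import brentq

def saddlepoint_density(K, K1, K2, n, y, s_lo=-50.0, s_hi=0.999):
    """Saddlepoint approximation to the density of Y = X_1 + ... + X_n for
    i.i.d. X_i, in terms of the real cumulant generating function
    K(s) = ln E[exp(s X)] and its first two derivatives K1, K2:

        f_Y(y) ~= exp(n K(s*) - s* y) / sqrt(2 pi n K''(s*)),

    where s* solves the saddlepoint equation n K'(s*) = y."""
    s_star = brentq(lambda s: n * K1(s) - y, s_lo, s_hi)
    return np.exp(n * K(s_star) - s_star * y) / np.sqrt(2 * np.pi * n * K2(s_star))

# Check against an exact case: X_i ~ Exp(1), so Y ~ Gamma(n, 1) and
# K(s) = -ln(1 - s) for s < 1.
K  = lambda s: -np.log(1.0 - s)
K1 = lambda s: 1.0 / (1.0 - s)
K2 = lambda s: 1.0 / (1.0 - s) ** 2
n, y = 20, 30.0
print(saddlepoint_density(K, K1, K2, n, y), gamma(a=n).pdf(y))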

7 Securitization

The final part of the book is devoted to the asset‐backed securities (ABS) which played (and continue to play) such a pivotal role in the recent banking crisis. This part consists of three chapters discussing ABS from several complementary viewpoints.

In Chapter 16, Alexander Batchvarov describes the rise and fall of the parallel banking system based on securitization. In essence, the parallel banking system emerged as a result of de‐regulation and disintermediation and led to a period of very high and, in retrospect, unsustainable leverage. Its development was facilitated by the creation of new financial instruments (notably CDSs and CDOs). Instead of keeping loans on their books, banks and other financial players were originating them, bundling them together, slicing them into tranches, and selling these tranches to other investors, such as pension funds, insurers, etc., with different and complementary risk appetites. While this system supported the real economy for a long time—especially consumer, real estate, and leveraged finance—it contributed to the build‐up of leverage in the global capital markets and subsequent worldwide crisis.

Batchvarov convincingly demonstrates that the collapse of the parallel banking system was precipitated by an asset/liability mismatch which could not be avoided due to the absence of a lender of last resort (central bank)—a critical role in the traditional banking system. This collapse, which led to a severe systemic crisis, has been extremely traumatic for both the real and financial economy and resulted in re‐regulation and re‐intermediation. As of this writing, both governments and private investors are looking for a better structure of the markets based on lower leverage levels and more sensible regulations.

Batchvarov makes another interesting point, which is the ‘fallacy of the historical data series’. Specifically, he points out that during the boom of the parallel banking system market participants (especially and disastrously rating agencies) worked with historical data, which, by their very nature, did not account for the effects of the leverage facilitated by the creation of the parallel banking system. By the same token, in the future, financial data series are going to incorporate effects created by the parallel banking system, while the system itself is no longer in place. Thus, blind reliance on historical data is dangerous and has to be avoided; instead, one should augment historical information with a healthy dose of common sense.

Undoubtedly, residential and commercial mortgages form the most important class of financial instruments which are used as building blocks of ABS. In Chapter 17, Alexander Levin discusses the nature of housing prices and their dynamics. More specifically, he presents a qualitative and quantitative overview of the formation of the housing price bubble in the United States and its subsequent collapse. Levin argues that home prices are stochastic in nature and proposes modelling them in two complementary ways: (a) based on purely empirical considerations; (b) based on general risk‐neutral arguments using the fact that several purely financial home price indices (HPI) are traded at various exchanges. He starts with a detailed overview of financial indices such as the Case‐Shiller futures and Radar Logic (RPX) forwards. Next, Levin introduces the so‐called equilibrium HPI, HPIeq, which he determines using a simple postulate: a mortgage loan payment has to constitute a constant part of the household income. While the corresponding constant is difficult to determine, it turns out that one can build a meaningful dynamic model without knowledge of it, since the real quantity of interest is changes in the HPI, i.e. δ ln(HPIeq)/δt rather than ln(HPIeq) itself.

In order to describe the evolution of HPI in quantitative terms, Levin introduces the so‐called home price appreciation measure (HPA) which represents the rate of return on HPI (in recent times, home price depreciation might be a more accurate term) and proceeds to model it as a mean‐reverting stochastic process. By its nature, it is convenient to view HPA as a quantity defined on a discrete time grid with, say, a quarterly time step. Then one can model the evolution for xk = x(tk), tk = kh, in the well‐known Kalman filter framework as follows

x_k = X_{1k} + i_{\mathrm{inf},k} + v_k
(37)
X_k = F X_{k-1} + u_k + w_k
(38)

where Xk = (X1k, X2k)^T is the vector of unobservable state variables, i_inf,k is the income inflation, u_k = h(k1, k3)^T δ ln(HPIeq)/δt, and vk, wk are appropriately normalized normal variables. The corresponding transition matrix F has the form

F = \begin{pmatrix} 1 & h \\ -bh & 1-ah \end{pmatrix}
(39)

where constants a, b are chosen in such a way that the matrix F possesses stable complex conjugate roots −α ± iβ. Levin analyses the corresponding system in detail and decomposes HPA into rate‐related volatility, non‐systematic volatility, and systematic volatility due to other economic and social factors. He then develops an HPI forecast with an emphasis on the growth of the real estate bubble and its subsequent crash. The key observation is that the bubble was caused to a large degree by the very large discrepancy between the nominal mortgage rate and the effective mortgage rate attributable to the proliferation of cheap financing in the form of Option ARMs (adjustable‐rate mortgages), IO (interest‐only) ARMs, and the like. Next, Levin considers HPI futures and forwards and develops a traditional risk‐neutral model for their dynamics based on the classical Heath‐Jarrow‐Morton (HJM) premisses. Overall, Levin provides a detailed description of housing prices and their dynamics.
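The stability condition on F is easy to check numerically. The sketch below builds the transition matrix (39) for a quarterly step and arbitrary illustrative constants a and b, verifies that its eigenvalues are a complex‐conjugate pair inside the unit circle, and traces the resulting damped‐oscillation impulse response of the state system; none of the numbers are Levin's.

import numpy as np

# Transition matrix of the HPA state system (39) for a quarterly step h and
# illustrative damping/frequency constants a, b (chosen only so that the
# eigenvalues form a stable complex-conjugate pair).
h, a, b = 0.25, 1.0, 2.0
F = np.array([[1.0,      h],
              [-b * h, 1.0 - a * h]])
eig = np.linalg.eigvals(F)
print(eig, np.abs(eig))            # complex pair with modulus < 1 => stable

# Deterministic impulse response of X_k = F X_{k-1}: a damped oscillation,
# the qualitative HPA behaviour the Kalman filter is built around.
X = np.array([1.0, 0.0])
path = []
for _ in range(40):
    X = F @ X
    path.append(X[0])
print(np.round(path[:8], 3))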

Finally, in Chapter 18, Manzano et al. develop a pricing model for ABS CDO tranches backed by mortgage collateral (including prime, sub‐prime, and other property types) via the Monte Carlo method. Traditional pricing of such tranches is done along the following lines. First, a very small set of possible scenarios describing the future evolution of collateral is chosen (often this set is reduced to a single scenario). These scenarios are characterized by different levels of CPR (Constant Prepayment Rate), CDR (Constant Default Rate), and Severity S (100% minus the price at liquidation as a percentage of the loan balance) for prime properties. Base case parameters are adjusted as appropriate for other property types. Each scenario is run through INTEX subroutines that model waterfall rules applied to prepayment, delinquency, loss, and IR scenarios specified by the user, and appropriate cash flows are generated. (INTEX is a software provider used as a de facto market standard; applying INTEX subroutines is very computationally costly.) Finally, the corresponding cash flows are averaged with (somewhat arbitrary) weights assigned to different scenarios and the tranche under consideration is priced.

It is clear that this approach has many deficiencies, mainly due to the small number of scenarios used and their arbitrary weighting. Manzano et al. develop an alternative approach. First, they generate a much larger number of scenarios (about 10,000) while taking particular care to correctly model the behaviour of all the relevant property types (from prime to sub‐prime). Next, they use INTEX to generate cash flows for the tranche under consideration and a set of other instruments with known market prices which are used for calibration purposes. Using an entropy regularizer, Manzano et al. find the probability distribution for the set of scenarios that best describes the corresponding market quotes. Once the weights for individual scenarios are found, the model is used to price the target ABS CDO tranche and compute fast price sensitivities to all the calibration instruments.

References

Black, F., and Cox, J. (1976). ‘Valuing corporate securities: some effects of bond indenture provisions’. Journal of Finance, 31: 351–67.

Cox, D. R. (1972). ‘Regression models and life tables’. Journal of the Royal Statistical Society, Series B 34/2: 187–220.

Duffie, D., and Lando, D. (2001). ‘Term structures of credit spreads with incomplete accounting information’. Econometrica, 69/3: 633–64.

Fisher, R., and Tippett, L. (1928). ‘Limiting forms of the frequency distribution of the largest or smallest member of a sample’. Proceedings of Cambridge Philosophical Society, 24: 180–90.

Jarrow, R., and Turnbull, S. (1995). ‘Pricing derivatives on financial securities subject to credit risk’. Journal of Finance, 50/1: 53–85.

Lando, D. (1998). ‘On Cox processes and credit risky securities’. Review of Derivatives Research, 2: 99–120.

Leland, H. (1994). ‘Risky debt, bond covenants and optimal capital structure’. Journal of Finance, 49: 1213–52.

Longstaff, F., and Schwartz, E. (1995). ‘A simple approach to valuing risky fixed and floating rate debt’. Journal of Finance, 50: 789–819.

Marshall, A. W., and Olkin, I. (1967). ‘A multivariate exponential distribution’. Journal of the American Statistical Association, 2: 84–98.

Merton, R. (1974). ‘On the pricing of corporate debt: the risk structure of interest rates’. Journal of Finance, 29: 449–70.

Panjer, H. (1981). ‘Recursive evaluation of a family of compound distributions’. ASTIN Bulletin, 12: 22–6.

Salmon, F. (2009). ‘Recipe for disaster: the formula that killed Wall Street’. Wired, 17/03.