
PRINTED FROM OXFORD HANDBOOKS ONLINE (www.oxfordhandbooks.com). (c) Oxford University Press, 2015. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy).


# Marshall‐Olkin Copula‐Based Models

## Abstract and Keywords

This article, which shows that the Marshall–Olkin model can be a viable alternative to the standard Gaussian copula, is organized as follows. Section 2 introduces the Marshall–Olkin model. Section 3 derives the copula function of default times. Section 4 studies the aggregate default distribution. Section 5 discusses the model calibration. Section 6 compares Marshall–Olkin with the Gaussian and t-copula, and Section 7 uses the Marshall–Olkin copula to reproduce the correlation skew in the collateralised debt obligation market.

# 1 Introduction

The problem of correlating multiple credits boils down to the specification of a copula function, which links the marginal default distributions. There is a growing literature that addresses this problem. The main approach that has emerged as the market standard is the Gaussian copula approach. The idea of using a Gaussian copula to model the default times' dependence in basket products goes back to Li (2000). It was also used implicitly in the CreditMetrics framework and the KMV (Kealhofer, McQuown, Vasicek) firm‐value approach. The t‐copula is an extension of the Gaussian copula with a higher tail dependence. It allows for a better modelling of extreme events' risk. A good reference on t‐copulas and the pricing of small baskets can be found in Mashal and Naldi (2002a, 2002b).

In this chapter, we study another alternative: the Marshall‐Olkin copula. This latter was first used in the context of basket credit derivatives pricing by Duffie (1998), then by Duffie and Garleanu (2001). The Marshall‐Olkin copula was traditionally used in reliability theory to model the failure of multi‐component systems. In this set‐up, the failure of each component is assumed to be contingent on some independent Poisson shocks. This is also known as a multivariate Poisson model. A good description of these models can be found in Barlow and Proschan (1981). The Marshall‐Olkin copula can also be viewed as the limiting distribution of a multivariate binary model (Wong 2000). One practical feature of the Marshall‐Olkin approach is the simplicity of its Monte Carlo implementation. In addition, it has a number of useful analytical results for aggregate portfolio distributions (see Lindskog and McNeil 2003).

The purpose of this chapter is primarily to show that the Marshall‐Olkin (MO) model can be a viable alternative to the standard Gaussian copula. This is achieved in three steps. First, we present the MO framework. Second, we propose a parameterization procedure of the model based on market intuition and observed market prices. And third, we compare the Marshall‐Olkin copula with its elliptical counterparts: the Gaussian and the t‐copula.

The rest of the chapter is structured as follows. In section 2, we introduce the Marshall‐Olkin model. In section 3, we derive the copula function of default times. In section 4, we study the aggregate default distribution. In section 5, we discuss the model calibration. In section 6, we compare Marshall‐Olkin with the Gaussian and t‐copula. In section 7, we use the Marshall‐Olkin copula to reproduce the correlation skew in the CDO market.

# 2 The model

We work on a probability space (Ω, 𝔖, ℙ), on which is given a set of n non‐negative random variables (τ1,…, τn) representing the default times of a basket of obligors.

We introduce, for each obligor i, the right‐continuous process $D_t^i \triangleq 1_{\{\tau_i \le t\}}$ indicating whether the firm has defaulted or not.

We assume that there exists a set of m independent Poisson processes $(N_t^{c_j})_{t \ge 0}$ with intensities $\lambda^{c_j} \in \mathbb{R}_+$, which can trigger simultaneous joint defaults.

Each Poisson process $N^{c_j}$ can be equivalently represented by the sequence of event trigger times $\{\theta_r^{c_j}\}_{r \in \{1,2,\dots\}}$.

At the rth occurrence of an event of type $c_j$, we draw a set of independent {0, 1}-valued Bernoulli variables $(A_{\theta_r^{c_j}}^{1,j},\dots,A_{\theta_r^{c_j}}^{n,j})$ with probabilities $(p_{1,j},\dots,p_{n,j})$, $p_{i,j} \in [0,1]$. The variable $A_{\theta_r^{c_j}}^{i,j}$ indicates whether a default of type i has occurred or not.

The process $(N_t^i)_{t \ge 0}$ defined as

$$N_t^i \triangleq \sum_{j=1}^{m} \sum_{r:\, \theta_r^{c_j} \le t} A_{\theta_r^{c_j}}^{i,j}$$
(1)

is also a Poisson process with intensity

$$\lambda^i = \sum_{j=1}^{m} p_{i,j}\, \lambda^{c_j}$$
(2)

It is obtained by superposing m independent (thinned) Poisson processes.

The default time τi is defined as the first jump time of the Poisson process $(N_t^i)_{t \ge 0}$:

$$\tau_i \triangleq \inf\{t \ge 0 : N_t^i > 0\}$$
(3)

This common shock model can also be described formally by the following stochastic differential equation (SDE)

$Display mathematics$
(4)

This description was used, for instance, in Duffie (1998).

# 3 The copula function

In this section, we derive the copula function of the shock model described above. To this end, we shall use the ‘equivalent fatal shock model’ of Lindskog and McNeil (2003).

## 3.1 Equivalent fatal shock model

Let $\Pi^n$ be the set of all subsets of $\{1,\dots,n\}$, excluding the empty set $\emptyset$. For each $\pi \in \Pi^n$, we introduce the point process $N_t^\pi$, which counts the number of shocks in (0, t] resulting in joint defaults of the obligors in π only:

$$N_t^\pi \triangleq \sum_{j=1}^{m} \sum_{r:\, \theta_r^{c_j} \le t} A_{\theta_r^{c_j}}^{\pi,j}$$
(5)

where, for each trigger time $\theta_r^{c_j}$, $A_{\theta_r^{c_j}}^{\pi,j}$ is a Bernoulli variable, which is equal to 1 if all obligors $i \in \pi$ default and all the others, $i \notin \pi$, survive:

$$A_{\theta_r^{c_j}}^{\pi,j} \triangleq \prod_{i \in \pi} A_{\theta_r^{c_j}}^{i,j} \prod_{i \notin \pi} \left(1 - A_{\theta_r^{c_j}}^{i,j}\right)$$
(6)

At the occurrence of the rth common shock, of type $c_j$, at time $\theta_r^{c_j}$, the point process $N_t^\pi$ gets incremented by $\Delta N_{\theta_r^{c_j}}^{\pi} = A_{\theta_r^{c_j}}^{\pi,j}$. For example, if π = {1, 2}, then the process $N_t^{\{1,2\}}$ counts the shocks that trigger simultaneous defaults of obligors 1 and 2 but not of the other obligors 3 to n.

We have the following key result, which gives the fatal shock representation. We refer to Lindskog and McNeil (2003, Proposition 4) for details.

Proposition 3.1 (Fatal shock representation). The processes $(N^\pi)_{\pi \in \Pi^n}$ are independent Poisson processes with intensities

$$\lambda^\pi = \sum_{j=1}^{m} \lambda^{c_j}\, p^{\pi,j}$$

where

$$p^{\pi,j} \triangleq \prod_{i \in \pi} p_{i,j} \prod_{i \notin \pi} \left(1 - p_{i,j}\right)$$

This provides a fatal shock representation of the original not‐necessarily‐fatal shock set‐up. It will allow us to analyse the multivariate distribution of the default times.
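For a small basket, the fatal shock intensities of Proposition 3.1 can be computed by brute force over all subsets. The sketch below is ours (function and variable names are illustrative, not from the chapter); it also lets us check the consistency identity of Lemma 3.2, $\lambda^i = \sum_{\pi:\, i \in \pi} \lambda^\pi$.

```python
from itertools import combinations

def fatal_shock_intensities(shock_intensities, loadings):
    """Map a not-necessarily-fatal shock model (lambda^{c_j}, p_{i,j}) to the
    equivalent fatal-shock intensities of Proposition 3.1:
    lambda^pi = sum_j lambda^{c_j} * prod_{i in pi} p_{i,j} * prod_{i not in pi} (1 - p_{i,j}).
    loadings[i][j] = p_{i,j}; shock_intensities[j] = lambda^{c_j}."""
    n = len(loadings)
    m = len(shock_intensities)
    lam = {}
    for size in range(1, n + 1):
        for pi in combinations(range(n), size):
            total = 0.0
            for j in range(m):
                prob = 1.0
                for i in range(n):
                    p = loadings[i][j]
                    # shock j kills exactly the obligors in pi
                    prob *= p if i in pi else (1.0 - p)
                total += shock_intensities[j] * prob
            lam[pi] = total
    return lam
```

Summing `lam` over all subsets containing obligor i recovers the marginal intensity $\sum_j p_{i,j}\lambda^{c_j}$, which is a useful unit test for any implementation.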

For $\pi \in \Pi^n$, let $\tau^\pi$ denote the first jump time of the Poisson process $N^\pi$:

$$\tau^\pi \triangleq \inf\{t \ge 0 : N_t^\pi > 0\}$$

Each obligor i can be equivalently described using the fatal shock representation.

Lemma 3.2 (Obligor description using the fatal shock representation).

1. The Poisson process $N^i$ can be expressed as

$$N^i = \sum_{\pi \in \Pi^n:\, i \in \pi} N^\pi$$

and its intensity is given by

$$\lambda^i = \sum_{\pi \in \Pi^n:\, i \in \pi} \lambda^\pi$$

2. The default time τi is given by

$$\tau_i = \min_{\pi \in \Pi^n:\, i \in \pi} \tau^\pi$$

## 3.2 Multivariate exponential distribution

Since we have

$$\tau_i = \min_{\pi \in \Pi^n:\, i \in \pi} \tau^\pi, \quad i = 1,\dots,n,$$

the multivariate distribution of $(\tau_1,\dots,\tau_n) = \left(\min_{\pi:\, 1 \in \pi} \tau^\pi, \dots, \min_{\pi:\, n \in \pi} \tau^\pi\right)$ can be computed as follows.

Proposition 3.3 (Multivariate Exponential Distribution). The multivariate distribution of the default times (τ1,…, τn) is

$$\mathbb{P}\left(\tau_1 > T_1,\dots,\tau_n > T_n\right) = \exp\left(-\sum_{\pi \in \Pi^n} \Lambda^{\pi}_{\max_{i \in \pi} T_i}\right)$$
(7)

where $\Lambda_T^\pi \triangleq \int_0^T \lambda^\pi\, ds$.

This is the Multivariate Exponential Distribution developed by Marshall and Olkin (1967). We refer to Barlow and Proschan (1981), Joe (1997) or Nelsen (1999) for a detailed study of this distribution function:

$Display mathematics$

Proof. We proceed as follows:

$Display mathematics$

the third equality is from the definition of $\tau^\pi$, the fourth equality is due to the independence of the Poisson processes $(N^\pi)_{\pi \in \Pi^n}$.        □

The Multivariate Exponential distribution is ‘memoryless’, i.e. it has the property that

$Display mathematics$

for all T1 > t1,…, Tn > tn. This is the multi‐dimensional version of the well‐known property for the exponential distribution.

Example. For n = 2, if we set $u_1 \triangleq \mathbb{P}(\tau_1 > t_1)$, $u_2 \triangleq \mathbb{P}(\tau_2 > t_2)$ and $\alpha_1 \triangleq \lambda^{\{1,2\}}/\lambda^1$, $\alpha_2 \triangleq \lambda^{\{1,2\}}/\lambda^2$, we get the bivariate Marshall‐Olkin survival copula

$$\hat C(u_1, u_2) = \min\left(u_1^{1-\alpha_1} u_2,\; u_1 u_2^{1-\alpha_2}\right)$$
(8)

The copula function (8) has an absolutely continuous part on the upper and lower triangles: {u1 < u2} and {u2 < u1}, and has a singular component on the diagonal {u1 = u2}.
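As a numerical sanity check, the copula (8) can be evaluated directly and compared against the joint survival probability of the underlying shock model. A minimal sketch (names are ours):

```python
import math

def mo_survival_copula(u1, u2, alpha1, alpha2):
    """Bivariate Marshall-Olkin survival copula of equation (8):
    C(u1, u2) = min(u1**(1 - alpha1) * u2, u1 * u2**(1 - alpha2))."""
    return min(u1 ** (1.0 - alpha1) * u2, u1 * u2 ** (1.0 - alpha2))
```

With fatal shock intensities $\lambda^{\{1\}} = 2\%$, $\lambda^{\{2\}} = 3\%$, $\lambda^{\{1,2\}} = 1\%$, marginals $u_i = e^{-\lambda^i t_i}$ (so $\lambda^1 = 3\%$, $\lambda^2 = 4\%$) and $\alpha_i = \lambda^{\{1,2\}}/\lambda^i$, the copula reproduces $\mathbb{P}(\tau_1 > t_1, \tau_2 > t_2) = \exp(-\lambda^{\{1\}} t_1 - \lambda^{\{2\}} t_2 - \lambda^{\{1,2\}} \max(t_1, t_2))$.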

# 4 The aggregate default distribution

The central question in credit portfolio modelling is the study of the aggregate default distribution of a given portfolio. Let Xt denote the total number of defaults, for a fixed time horizon t:

$$X_t \triangleq \sum_{i=1}^{n} D_t^i$$
(9)

The distribution of Xt is referred to as the aggregate default distribution at time t. In this section, we derive the aggregate default distribution in the Marshall‐Olkin model.

## 4.1 Poisson approximation

In Lindskog and McNeil (2003), the default indicators $D_t^i$ are approximated by their corresponding Poisson counters $N_t^i$. For low default probabilities this is a reasonable approximation. For t ≥ 0, the total number of defaults $X_t$ is then approximated by the random variable $Z_t$ defined as

$$Z_t \triangleq \sum_{i=1}^{n} N_t^i$$
(10)

This is known in the actuarial literature as the approximation of the individual model with the collective model; for low individual default probabilities, the likelihood of multiple jumps in the Poisson process is small, and is neglected for the purposes of estimating the aggregate portfolio distribution.

The total number of losses $Z_t \triangleq \sum_{i=1}^n N_t^i$ is a compound Poisson process. It is the sum of m independent compound Poisson processes $Z_t^{c_j}$:

$$Z_t = \sum_{j=1}^{m} Z_t^{c_j}$$
(11)

Next, we derive the distribution of Zt, first, using its moment generating function, then using Panjer's algorithm.

## 4.2 Moment generating function (MGF)

The aggregate portfolio counter Zt is a compound Poisson process, which is obtained as the sum of m independent compound Poisson processes:

$$Z_t = \sum_{j=1}^{m} Z_t^{c_j}$$
(12)

The distribution of each compound Poisson $Z_t^{c_j}$ is not available in closed form, but one can compute its moment generating function $\mathcal{L}_{Z_t}(\alpha) \triangleq E[e^{-\alpha Z_t}]$. Since the processes $Z_t^{c_j}$ are independent, the MGF of $Z_t$ is given by

$$\mathcal{L}_{Z_t}(\alpha) = \prod_{j=1}^{m} E\left[e^{-\alpha Z_t^{c_j}}\right]$$

$Z_t^{c_j}$ is defined by the Poisson counter $N_t^{c_j}$ and its compounding distribution $X^{c_j}$, i.e.

$$Z_t^{c_j} = \sum_{r=1}^{N_t^{c_j}} X_r^{c_j}$$
(13)

where $X_1^{c_j},\dots,X_{N_t^{c_j}}^{c_j}\ (\overset{d}{=} X^{c_j})$ are i.i.d., independent of $N_t^{c_j}$. When the jump sizes $X_r^{c_j}$ are discrete random variables taking values in $\{a_1, a_2, \dots\}$, one can write

$$Z_t^{c_j} = \sum_{k} a_k\, N_t^{a_k}$$

where $N_t^{a_k}$ are independent Poisson processes with intensities

$$\lambda^{a_k} = \lambda^{c_j}\, \mathbb{P}\left(X^{c_j} = a_k\right)$$

and the MGF of the compound Poisson process is obtained immediately as

$$\mathcal{L}_{Z_t^{c_j}}(\alpha) = \exp\left(-t \sum_{k} \lambda^{a_k}\left(1 - e^{-\alpha a_k}\right)\right)$$

Here, the jump sizes take values in {0, 1,…, n} and the distribution of $X^{c_j} = \sum_{i=1}^{n} A^{i,j}$, where $A^{i,j}$ is a Bernoulli variable with probability $p_{i,j}$, can be computed by inverting its Fourier transform, which is given by the product

$$E\left[e^{iuX^{c_j}}\right] = \prod_{i=1}^{n}\left(1 - p_{i,j} + p_{i,j}\, e^{iu}\right)$$

The moment generating function of Zt is then given by

$$\mathcal{L}_{Z_t}(\alpha) = \exp\left(-t \sum_{j=1}^{m} \lambda^{c_j}\left(1 - E\left[e^{-\alpha X^{c_j}}\right]\right)\right)$$
(14)
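In practice, the distribution of $X^{c_j}$ (a sum of independent, non-identical Bernoulli variables) can also be obtained by direct convolution rather than Fourier inversion. A hedged sketch, with our own helper name:

```python
def jump_size_distribution(p):
    """Distribution of X = sum of independent Bernoulli(p_i) variables
    (the number of defaults triggered by one shock), by direct convolution.
    Returns dist[k] = P(X = k) for k = 0..n."""
    dist = [1.0]
    for pi in p:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1.0 - pi)   # obligor i survives the shock
            new[k + 1] += q * pi       # obligor i defaults
        dist = new
    return dist
```

The convolution costs O(n²) per shock type, which is usually negligible next to the Laplace inversion itself.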

## 4.3 Panjer's recursion

As shown in Lindskog and McNeil (2003), the distribution of the compound Poisson $Z_t$ can also be derived using Panjer's recursion. The total number of losses $Z_t$ has the following representation (see Proposition 6 in Lindskog and McNeil 2003):

$$Z_t = \sum_{r=1}^{\tilde N_t} \tilde X_r$$
(15)

where Ñt is a Poisson process with intensity

$$\tilde\lambda = \sum_{j=1}^{m} \lambda^{c_j}\, \mathbb{P}\left(X^{c_j} \ge 1\right)$$

It counts any loss‐causing shock in (0, t]. $\tilde X_1,\dots,\tilde X_{\tilde N_t}\ (\overset{d}{=} \tilde X)$ are independent identically distributed (i.i.d.) and independent of $\tilde N_t$. The distribution of $\tilde X$ is given by

$$\mathbb{P}\left(\tilde X = k\right) = \frac{1}{\tilde\lambda} \sum_{j=1}^{m} \lambda^{c_j}\, \mathbb{P}\left(X^{c_j} = k\right), \quad k = 1,\dots,n$$

The distribution of Zt can then be computed with Panjer's algorithm (see Panjer 1981):

$$\mathbb{P}(Z_t = 0) = e^{-\tilde\lambda t}, \qquad \mathbb{P}(Z_t = l) = \frac{\tilde\lambda t}{l} \sum_{k=1}^{\min(l,n)} k\, \mathbb{P}(\tilde X = k)\, \mathbb{P}(Z_t = l - k), \quad l \ge 1$$

This recursive algorithm offers a more efficient method for computing the probabilities $\mathbb{P}(Z_t = l)$ than the inversion of the moment generating function.
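The recursion is a few lines of code. A minimal sketch, assuming a Poisson frequency $\tilde N_t$ with intensity $\tilde\lambda$ and a loss-causing severity $\tilde X \in \{1,\dots,n\}$ as in representation (15); the function name is ours:

```python
import math

def panjer_poisson(lam_t, severity, lmax):
    """Panjer recursion for a compound Poisson sum Z = X_1 + ... + X_N,
    N ~ Poisson(lam_t), severity[k] = P(X = k) for k = 1..n (severity[0] unused,
    since every counted shock causes at least one loss).
    Returns [P(Z = 0), ..., P(Z = lmax)]."""
    pz = [math.exp(-lam_t)]           # no loss-causing shock in (0, t]
    for l in range(1, lmax + 1):
        s = 0.0
        for k in range(1, min(l, len(severity) - 1) + 1):
            s += k * severity[k] * pz[l - k]
        pz.append(lam_t * s / l)
    return pz
```

With a severity concentrated on jumps of size 1, the recursion collapses to the plain Poisson distribution, which makes a convenient unit test.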

## 4.4 Duffie's approximation

Duffie and Pan (2001) suggested another approximation of the aggregate default distribution. They neglect the probability of multiple jumps of the common market factors and assume that the solution of the SDE (4) can be approximated as

$Display mathematics$
(16)

Using equation (16), we can easily compute the Laplace transform of $X_t \triangleq \sum_{i=1}^{n} D_t^i$ as a product of conditional market factor Laplace transforms:

$Display mathematics$

if m > n, so that we have n idiosyncratic factors and m − n common market factors, i.e.,

$Display mathematics$

where $N_t^{0,i} \triangleq N_t^{c_{m-n+i}}$ is the idiosyncratic factor of obligor i. Then, the Laplace transform collapses to

$Display mathematics$
(17)

where

$Display mathematics$

A direct inversion of the Laplace transform (17) gives the aggregate default distribution.

## 4.5 Monte Carlo

The aggregate default distribution can also be estimated using a Monte Carlo method. A good reference on simulating multivariate exponential default times can be found, for instance, in Duffie and Singleton (1999b).

We are interested in simulating the correlated default times (τ1,…, τn) only if they occur before a fixed time horizon T. The basic algorithm proceeds as follows:

1. Simulate the jump times of the market factor Poisson processes $(N_T^{c_1},\dots,N_T^{c_m})$: $\{\theta_r^{c_j}\}_{r \in \{1,2,\dots\}}$, for 1 ≤ j ≤ m,

(a) Initialize $\theta_0^{c_j} = 0$,

(b) While $\theta_r^{c_j} \le T$, simulate a uniformly distributed variable U, find the inter-jump time S such that $1-\exp\left(-\left(\Lambda^{c_j}_{\theta^{c_j}_{r-1}+S}-\Lambda^{c_j}_{\theta^{c_j}_{r-1}}\right)\right)=U$, set $\theta_r^{c_j}=\theta_{r-1}^{c_j}+S$;

2. For each market factor jump time $\theta_r^{c_j}$, simulate the individual default Bernoulli variables $(A_{\theta_r^{c_j}}^{1,j},\dots,A_{\theta_r^{c_j}}^{n,j})$:

(a) Simulate a set of n independent uniformly distributed variables $(U_1,\dots,U_n)$, set $A_{\theta_r^{c_j}}^{i,j} = 1_{\{U_i \le p_{i,j}\}}$, for 1 ≤ i ≤ n;

3. Set the individual default times:

$$\tau_i = \min\left\{\theta_r^{c_j} \le T : A_{\theta_r^{c_j}}^{i,j} = 1\right\}$$

(p. 266) A variant of this simulation uses the fact that the process$Ntc≜Ntc1+…+Ntcm$ is a Poisson process and its intensity is given by the sum of intensities$λc≜λc1+…+λcm$. The probability of having a market factor jump of type {c j } is given by the ratio$pjc=λcjλc$. Conditional on a market factor jump at$θrc$, the identity of the market factor that triggered follows a multinomial distribution with parameters$(λc1λc,…,λcmλc)$. We have the following algorithm:

1. Simulate the jump times of ‘any type’ of market factor events in the interval (0, T]: $\{\theta_r^c\}_{r \in \{1,2,\dots\}}$,

(a) Initialize $\theta_0^c = 0$,

(b) While $\theta_r^c \le T$, simulate a uniformly distributed variable U, find the inter-jump time S such that $1-\exp\left(-\left(\Lambda^{c}_{\theta^{c}_{r-1}+S}-\Lambda^{c}_{\theta^{c}_{r-1}}\right)\right)=U$, set $\theta_r^c = \theta_{r-1}^c + S$;

2. For each jump time $\theta_r^c$, simulate the identity of the market factor that has triggered it, $J_r^c$:

(a) Simulate a uniformly distributed variable U, find the index $J_r^c$ in {1,…, m} such that

$$\sum_{j=1}^{J_r^c - 1} p_j^c < U \le \sum_{j=1}^{J_r^c} p_j^c$$

3. For each pair $(\theta_r^c, J_r^c)$, simulate the individual Bernoulli variables $(A_r^{1,J_r^c},\dots,A_r^{n,J_r^c})$:

(a) Simulate a set of n independent uniformly distributed variables $(U_1,\dots,U_n)$, set $A_r^{i,J_r^c} = 1_{\{U_i \le p_{i,J_r^c}\}}$, for 1 ≤ i ≤ n;

4. Set the individual default times:

$$\tau_i = \min\left\{\theta_r^c \le T : A_r^{i,J_r^c} = 1\right\}$$

The second algorithm is more efficient than the first since we restrict the simulation to a single stream of market factor event times. Although the intensity of the ‘any-type’ market factor event is potentially m times larger than each individual market factor intensity, and hence produces more jump times per stream, drawing the identity of the market event from the conditional probabilities $(p_j^c)_{1 \le j \le m}$ is much cheaper than simulating m separate Poisson processes.
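The second algorithm can be sketched in a few lines. This is an illustrative implementation under the assumption of constant intensities (all names are ours); for time-dependent intensities the exponential draw in the loop would be replaced by inverting the cumulative intensity, as in step 1(b).

```python
import math
import random

def simulate_default_times(lam_c, loadings, T, rng):
    """One path of the second algorithm: draw 'any-type' shock times with
    total intensity lambda^c = sum_j lambda^{c_j}, draw the identity of the
    triggering factor from the multinomial (lambda^{c_j}/lambda^c), then draw
    the individual Bernoulli defaults.  loadings[i][j] = p_{i,j}.
    Returns tau[i], with math.inf meaning 'no default in (0, T]'."""
    n = len(loadings)
    lam_total = sum(lam_c)
    weights = [l / lam_total for l in lam_c]
    tau = [math.inf] * n
    t = 0.0
    while True:
        t += rng.expovariate(lam_total)     # inter-jump time of N^c
        if t > T:
            return tau
        u, j = rng.random(), 0              # multinomial identity draw
        while j < len(weights) - 1 and u > weights[j]:
            u -= weights[j]
            j += 1
        for i in range(n):                  # Bernoulli thinning into defaults
            if tau[i] == math.inf and rng.random() <= loadings[i][j]:
                tau[i] = t
```

Averaging the indicator $1_{\{\tau_i \le T\}}$ over many paths should recover the marginal probability $1 - e^{-\lambda^i T}$ with $\lambda^i = \sum_j p_{i,j}\lambda^{c_j}$, which is a useful convergence check.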

# 5 Calibration

The Marshall‐Olkin copula model offers a very rich correlation structure, which can be used to reproduce some observable measures of interdependence such as estimates of default correlations or basket credit derivative prices. In this section, we discuss the calibration of the model parameters.

In general, one needs to specify m(n + 1) parameters, corresponding to the vector of market factor intensities and the matrix of factor loadings. Market factor events can be classified in two categories: (a) ‘common’ market factors affecting a subset of credits, which share some common characteristics; (b) idiosyncratic factors specific to individual obligors. Suppose m > n, so that we have n idiosyncratic factors $N_t^{0,i}$, with intensities $\lambda^{0,i}$, and $m_c \triangleq m - n$ common factors. The decomposition (2) becomes

$$\lambda^i = \sum_{j=1}^{m_c} p_{i,j}\, \lambda^{c_j} + \lambda^{0,i}$$
(18)

Each idiosyncratic factor $N^{0,i}$ triggers the default of obligor i only, with probability 1. The number of unknown parameters is $m_c(n+1)$; the individual idiosyncratic intensities are obtained directly as the residuals $\lambda^{0,i} = \lambda^i - \sum_{j=1}^{m_c} p_{i,j}\,\lambda^{c_j}$.

The aim of the calibration procedure is threefold:

1. define the common market factor events that constitute the backbone of the correlation structure;

2. specify a parameterization of the matrix of factor loadings;

3. calibrate the intensity levels of the pre‐specified market factors in the model.

Next, we explore each point in turn.

## 5.1 Choice of common market factors

The first step consists in specifying the common market factors that explain joint default events. Clearly, this choice is market specific and depends on macro‐ and microeconomic factors, which prevail at a particular point in time. The set of economic drivers that explain the default correlation ‘sentiment’ for investment grade credits, for example, are different from the ones that affect emerging market or high-yield credits. On one hand, one would find that the behaviour of investment‐grade credits is most likely to be explained by industry‐sector events; on the other hand, in emerging markets the joint behaviour of credit entities is better explained by regional and country factors. Taking investment‐grade credits as an example, one can highlight three distinct types of market behaviours:

• Intra‐sector segment: it is commonly accepted that credit spreads of reference entities, belonging to the same industry‐sector, have a tendency to move in tandem. This would seemingly imply the existence of a sector factor, which is generally stable in time and jumps occasionally. Sector factor shocks are observed through the joint co‐movements of credit spreads in this particular sector. The sector factor itself cannot be observed but we can observe its effect.

• Inter‐sector segment: historically, we observe that credit spreads in different industries also have a tendency to move together. The dependence between credits from different sectors is weaker than the one observed intra‐sector. This would correspond to general economy‐wide events such as economic cycles, recessions, etc. Using the equity market terminology, we refer to the inter‐sector driver as the ‘Beta’ driver.

• Super senior risk: using the market‐standard Gaussian copula model, one finds that the value attributed to a super senior CDO tranche is equal to zero. This is due to the ‘zero‐tail‐dependence’ property of the Gaussian copula. The credit CDO market, however, has a different view. Indeed, super senior tranches are priced and traded at a premium of the order of a few basis points. This suggests that the market is pricing the highly unlikely global Armageddon risk (a situation where everyone defaults), and attributes an insurance‐like premium to this ‘catastrophe’ risk. Unlike the Gaussian copula, the Marshall‐Olkin model is capable of capturing this effect. By assuming a low‐probability global ‘World’ driver and letting every credit in our universe have a factor‐loading equal to 1, we ensure that the premiums of super senior CDO tranches are floored at the world driver spread. We can view this as a background radiation effect: the world driver sits silently in the background, and would never be active (under the real probability measure), but if the event occurs, every credit entity would default almost surely.

Summary 4 In this specification of the Marshall‐Olkin model, we have the following decomposition:

$$\lambda^i = \lambda_W + p_{i,B}\,\lambda_B + p_{i,S_j}\,\lambda_{S_j} + \lambda^{0,i}$$
(19)

where

• λW is the intensity of the ‘World’ driver;

• λB is the intensity of the ‘Beta’ driver, and pi,B is the loading on that driver;

• $\lambda_{S_j}$ is the intensity of the ‘Sector’ driver $S_j$, and $p_{i,S_j}$ is the loading on that sector;

• λ0,i is the intensity of the idiosyncratic event.

## 5.2 Specification of the factor loadings

The second step consists in fixing the matrix of factor loadings $[p_{i,j}]$. It is clear that given the large number of credits that one has to deal with, it is crucial to reduce the dimensionality of the parameters to be specified. A natural approach, as suggested in Lindskog and McNeil (2003) and Duffie and Pan (2001), is to assume that the contribution of each market factor component $p_{i,j}\,\lambda^{c_j}$, in equation (19), is a fixed percentage $\alpha_j$ of the total intensity $\lambda^i$, i.e. for all 1 ≤ i ≤ n,

$$p_{i,j}\,\lambda^{c_j} = \alpha_j\,\lambda^i$$

The market factor contributions (α1,…, αm) are chosen such that the residual idiosyncratic intensities are positive:

$$\lambda^{0,i} = \lambda^i\left(1 - \sum_{j} \alpha_j\right) > 0$$

One consequence of this choice is that the loadings are completely specified by the individual intensities and the corresponding market factor intensities

$$p_{i,j} = \frac{\alpha_j\,\lambda^i}{\lambda^{c_j}}$$

Unfortunately, as the intensity λi increases, the loading pi,j increases and can breach the condition that it is a probability:

$$0 \le p_{i,j} \le 1$$
(20)

A suggested parameterization is to impose condition (20) by writing pi,j as a conditional probability function

$$p_{i,j} = 1 - e^{-\gamma_{i,j}}$$
(21)

where the ‘hazard’ rate $\gamma_{i,j}$ is defined as

$$\gamma_{i,j} = \frac{\alpha_j\,\lambda^i}{\lambda^{c_j}}$$

Expanding the exponential to first order, we find, for $\alpha_j \lambda^i / \lambda^{c_j} \ll 1$,

$$p_{i,j} \approx \frac{\alpha_j\,\lambda^i}{\lambda^{c_j}}$$

and as λi goes to +∞, pi,j converges asymptotically to 1.
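The capped parameterization (21) is easy to code and to check against its first-order behaviour. A minimal sketch (the function name is ours):

```python
import math

def factor_loading(alpha_j, lam_i, lam_cj):
    """Loading parameterization of equation (21): p_{i,j} = 1 - exp(-gamma_{i,j})
    with gamma_{i,j} = alpha_j * lam_i / lam_cj, so that p_{i,j} stays in [0, 1],
    behaves like alpha_j * lam_i / lam_cj for small intensities, and converges
    to 1 as lam_i grows."""
    return 1.0 - math.exp(-alpha_j * lam_i / lam_cj)
```

By construction the loading is increasing in the individual intensity and can never breach condition (20), unlike the naive linear rule.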

## 5.3 Calibration of the market factor intensities

Finally, given the parametric form (21), the only unknown parameters left are the common market factor intensities, which can be recovered from benchmark basket instruments such as first‐to‐default swaps, or CDO tranches. Another possibility for calibrating the driver intensities is to use empirical default correlations. For example, one can use the average inter‐sector default correlation to fit the Beta driver, then for each sector driver use the average intra‐sector default correlation.

Note that the calibration method presented in this section resembles the one used for an HJM (Heath‐Jarrow‐Morton) model of yield curve dynamics,

$$df(t,T) = \mu(t,T)\, dt + \sum_{i=1}^{n} \sigma_i(t,T)\, dW_t^i$$

which is also done in three steps:

1. define the number of drivers that explain the dynamics of the yield curve, e.g. n = 3;

2. specify a parameterization of the volatility curve for each driver:

$Display mathematics$

3. calibrate the instantaneous volatility levels σi (t) on a set of benchmark swaption or cap instruments.

# 6 Gauss vs. Marshall‐Olkin

In this section, we compare the Marshall‐Olkin copula with the standard Gaussian and Student copulas.

## 6.1 Overview

The Gaussian copula is defined as

$$C(u_1,\dots,u_n) = \Phi_n\left(\Phi^{-1}(u_1),\dots,\Phi^{-1}(u_n)\right)$$

where Φn (∙) is the joint distribution function of a normally distributed random vector with unit variances and zero means, and Φ−1 (∙) is the inverse function of the univariate standard normal distribution. The Gaussian copula can be viewed as the copula function of a set of correlated Gaussian variables transformed back into ‘uniform’ space with the inverse normal function.
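For reference, the bivariate case can be evaluated with elementary numerical integration, conditioning on the first variable. A self-contained sketch (not production code; the bisection inverse and step counts are for illustration only):

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_ppf(u, lo=-10.0, hi=10.0):
    # inverse normal CDF by bisection (illustrative, not production-grade)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def gaussian_copula_2d(u1, u2, rho, steps=4000):
    """Bivariate Gaussian copula C(u1, u2) = Phi_2(Phi^{-1}(u1), Phi^{-1}(u2); rho),
    computed by conditioning on the first variable:
    C = int_{-inf}^{a} phi(x) * Phi((b - rho x) / sqrt(1 - rho^2)) dx."""
    a, b = norm_ppf(u1), norm_ppf(u2)
    s = math.sqrt(1.0 - rho * rho)
    lo = -10.0
    h = (a - lo) / steps
    total = 0.0
    for k in range(steps + 1):           # trapezoidal rule over x
        x = lo + k * h
        phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        w = 0.5 if k in (0, steps) else 1.0
        total += w * phi * norm_cdf((b - rho * x) / s)
    return total * h
```

At ρ = 0 the copula reduces to the independence copula C(u1, u2) = u1 u2, which gives a direct correctness check.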

The t‐copula is defined as

$$C(u_1,\dots,u_n) = t_\nu^n\left(t_\nu^{-1}(u_1),\dots,t_\nu^{-1}(u_n)\right)$$

where $t_\nu^n(\cdot)$ is the joint distribution function of the multivariate Student‐t distribution and $t_\nu^{-1}(\cdot)$ is the inverse function of the univariate Student‐t distribution. The t‐copula is obtained from the multivariate dependence of a set of correlated Student‐t variables.

Some key differences between Marshall‐Olkin, Gaussian, and t‐copula are summarized below.

1. Tail dependence: the upper tail dependence is defined as

$$\lambda_U \triangleq \lim_{u \to 1^-} \mathbb{P}\left(U_1 > u \mid U_2 > u\right) = \lim_{u \to 1^-} \frac{1 - 2u + C(u,u)}{1 - u}$$
(22)

(see Embrechts, Lindskog, and McNeil 2003). The expression of the tail dependence (22) for the three copula functions, Gaussian, t‐copula, and MO, is given by:

$$\lambda_U^{\text{Gauss}} = 0 \ (\rho < 1), \qquad \lambda_U^{t} = 2\, t_{\nu+1}\left(-\sqrt{\frac{(\nu+1)(1-\rho)}{1+\rho}}\right), \qquad \lambda_U^{\text{MO}} = \min(\alpha_1, \alpha_2)$$

The tail dependence for a Gaussian copula is always equal to zero. The t‐copula and the Marshall‐Olkin copula can be parameterized to fit non‐zero tail dependence and to capture more extreme tail events.

2. Elliptical copulas do not allow for multiple defaults in the interval [t, t + dt), i.e. 𝕡 (τi = τj) = 0, for ij. In a Marshall‐Olkin model, the probability of instantaneous joint defaults can be non‐zero. In fact, the foundation of the correlation profile in MO is based on joint instantaneous defaults.

3. The mixed partial derivatives of a copula function, $\partial^k C / \partial u_1 \cdots \partial u_k$, exist for almost all $u \in [0,1]^n$. The copula function can then be decomposed into its absolutely continuous part A (u1,…, un) and its singular part S (u1, …, un):

$$C(u_1,\dots,u_n) = A(u_1,\dots,u_n) + S(u_1,\dots,u_n)$$

Elliptical copulas, by construction, are absolutely continuous. The Marshall‐Olkin copula has a singular part and a continuous part (see, for example, Embrechts, Lindskog, and McNeil, 2003).

4. Many analytical results are available for the Marshall‐Olkin copula, and the pricing of credit derivatives such as first‐to‐default swaps can be implemented in closed form. In general, for the Gaussian and t‐copula, one needs to use a Monte Carlo simulation, except in simplified one‐factor models (as in Schonbucher 2000 or Frey and McNeil 2003), where semi‐analytical results are available by using the conditional independence property and integrating over values of the conditioning latent variable.
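The tail-dependence comparison in point 1 can be checked numerically: rewriting (22) through the copula gives $\lambda_U = \lim_{u \to 1^-}(1 - 2u + C(u,u))/(1-u)$, and a finite-u evaluation recovers $\min(\alpha_1, \alpha_2)$ for the Marshall‐Olkin copula and 0 for the independence copula. A sketch (names are ours):

```python
def mo_copula(u1, u2, a1, a2):
    # bivariate Marshall-Olkin copula: min(u1**(1-a1) * u2, u1 * u2**(1-a2))
    return min(u1 ** (1.0 - a1) * u2, u1 * u2 ** (1.0 - a2))

def upper_tail_dependence(copula, u=1.0 - 1e-4):
    """Finite-u estimate of lambda_U = lim_{u->1} (1 - 2u + C(u,u)) / (1 - u)."""
    return (1.0 - 2.0 * u + copula(u, u)) / (1.0 - u)
```

Taking u too close to 1 eventually loses all precision to floating-point cancellation, so the offset 1e-4 is a deliberate compromise.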

## 6.2 Modes of the aggregate default distribution


Figure 8.1 Default distribution for a portfolio of 100 credits: $\lambda^i = 2\%$, $\lambda_W = 0.05\%$, $\lambda_B = 5\%$, $\lambda_{S_j} = 2.5\%$, $p_{i,B} = 0.24$ and $p_{i,S_j} = 0.16$.

Consider a portfolio of 100 obligors with 10 credits per sector. Set the intensities of the individual credits to $\lambda^i = 2\%$, the intensity of the World driver to $\lambda_W = 0.05\%$, the intensity of the Beta driver to $\lambda_B = 5\%$, and all the sector intensities to $\lambda_{S_j} = 2.5\%$. Attribute 60% of the credit intensity to the Beta factor, 20% to the sector factor, and the remaining 20% to the idiosyncratic factor. This implies the following values of the factor loadings: $p_{i,B} = 0.24$ and $p_{i,S_j} = 0.16$. The five‐year default correlation in this model is 19.25% intra‐sector, and 16.16% inter‐sector.

To begin with let us consider the aggregate default distribution at the five‐year time horizon.

We plot the distribution of the portfolio specified here with $(p_{i,B} = 0.24;\ p_{i,S_j} = 0.16)$, and we compare it with the distribution of a portfolio with similar marginals but a different multivariate dependence $(p_{i,B} = 0;\ p_{i,S_j} = 0)$.

In Figure (8.1), we observe that the default distribution has four different modes: the first big hump corresponds to the idiosyncratic component, the second hump corresponds to the Beta contribution, the third hump is the sector contribution, and the last spike at the far end of the distribution is due to the world driver.


Figure 8.2 Default distribution for a portfolio of 100 credits: $\lambda^i = 2\%$, $\lambda_W = 0.05\%$, $\lambda_B = 5\%$, $\lambda_{S_j} = 2.5\%$, $p_{i,B} = 0$ and $p_{i,S_j} = 0$.

In the second model depicted in Figure (8.2), the Beta and sector factors are turned off. The joint dependence is built in via the world driver. Thus, the default distribution has only a single idiosyncratic mode and the world driver spike. Compared with Figure (8.1), the idiosyncratic mode has shifted to the right since the idiosyncratic default probabilities are higher in this case.

Next, we compare the distribution in Figure (8.1) with the ones of a Gaussian copula and a t‐copula. In order to do a meaningful comparison, we impose that the five‐year default correlation is the same for the various models. Since the marginal distributions are unchanged, the mean of the aggregate default distribution is fixed independently from the copula function. The additional requirement to have the same pair‐wise default correlations corresponds to keeping the variance fixed as well. In this example, the five‐year default correlations of the Marshall‐Olkin model are $\rho^D_{i,j} = 19.25\%$ intra‐sector, and $\rho^D_{i,j} = 16.16\%$ inter‐sector. A direct inversion of the default correlation formula, with a Gaussian copula dependence, gives the following values of the Gaussian asset correlation: $\rho^A_{i,j} = 41.68\%$ intra‐sector, and $\rho^A_{i,j} = 36.39\%$ inter‐sector. Doing a similar calibration for a t‐copula with a parameter ν = 9, we get $\rho^A_{i,j} = 35.92\%$ intra‐sector, and $\rho^A_{i,j} = 30.12\%$ inter‐sector. Note that to arrive at the same level of default correlation, the equivalent asset correlation in the t‐copula is lower than the one in the Gaussian copula. This is natural since the t‐copula has higher tail dependence. In fact, as pointed out in Mashal and Naldi (2002a), even with zero asset correlation, the implied default correlation with the t‐copula is non‐zero.


Figure 8.3 Comparison of the default distributions for the calibrated Marshall‐Olkin, Gaussian and t‐copula.

Figure (8.3) depicts the default distributions of the three calibrated copula models. Having matched the first two moments of the default distribution, the key difference between the Marshall‐Olkin copula and the elliptical copulas is the shape of the distribution function. Marshall‐Olkin implies a multi‐modal distribution. Gaussian and Student copulas imply uni‐modal distributions.

## 6.3 Tail of the distribution

The Marshall‐Olkin and the Gaussian copulas have very different behaviours in the tail of the portfolio distribution. To highlight this difference, we plot the cumulative default distribution of the example portfolio in log‐space. We introduce the re‐scaled log variable hk, for k = 1,2,…, n,

$$h_k \triangleq -\frac{1}{T} \ln \mathbb{P}\left(X_T < k\right)$$

The re‐scaled tail measure $h_k$ can be interpreted as the ‘hazard’ rate of the kth‐to‐default time $\tau_{[k]}$,

$$\mathbb{P}\left(\tau_{[k]} > T\right) = \mathbb{P}\left(X_T < k\right) = e^{-h_k T}$$


Figure 8.4 Tail of the portfolio default distribution for the Marshall‐Olkin, Gaussian, and t‐copula.

Figure (8.4) shows that, for the Gaussian and t‐copula, hk converges to zero as we move further into the tail of the distribution. In the Marshall‐Olkin model, the values of hk are floored at 0.05%. This is the effect of the World driver: it suffices to observe that

$$h_k \ge h_n = -\frac{1}{T} \ln \mathbb{P}\left(X_T < n\right) = -\frac{1}{T} \ln \mathbb{P}\left(D_T^W = 0\right) = \lambda_W$$

Here, we have used the property that all credits default if and only if the World driver triggers, i.e. $\{D_T^W = 1\}$. This part of the distribution is precisely the one that determines the value of the extreme events and catastrophe risk. The World driver plays a unique role since it can be used to match insurance premiums of super senior risk. This cannot be achieved with a Gaussian copula.

## 6.4 Time invariance

Another major difference between Marshall‐Olkin and the Gaussian copula is the ‘time’ behaviour. Consider an example with two obligors, and a one‐factor MO model:

$$\lambda^i = p_{i,c}\,\lambda^c + \lambda^{0,i}, \quad i = 1, 2$$

Set the intensities to λ1 = λ2 = λc = 1%, and the factor loadings to p1,c = p2,c = 0.3915. The five‐year default correlation is equal to 15%. For the equivalent Gaussian copula, set the asset correlation to ρA = 41.04% in order to match the 15% default correlation at the five‐year time horizon. With this specification, compute the default correlation at other time horizons between 0 and 5 years, and compare the implied term structures.
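The claimed stability of the MO default correlation can be reproduced in a few lines: in the one-factor model above, the intensity of the joint fatal shock is $\lambda^{\{1,2\}} = p_{1,c}\, p_{2,c}\,\lambda^c$, and the pairwise default correlation follows from the bivariate exponential survival function. An illustrative sketch (our names):

```python
import math

def mo_default_correlation(lam, lam_12, T):
    """Pairwise default correlation at horizon T for two exchangeable names,
    each with total default intensity lam, where lam_12 is the intensity of
    the common fatal shock (lam_12 = p1c * p2c * lam_c in the one-factor model).
    Uses P(tau1 > T, tau2 > T) = exp(-(2*lam - lam_12) * T)."""
    p = 1.0 - math.exp(-lam * T)                       # marginal default prob
    both = 1.0 - 2.0 * math.exp(-lam * T) + math.exp(-(2.0 * lam - lam_12) * T)
    return (both - p * p) / (p * (1.0 - p))
```

With λ = 1% and the loading 0.3915, the five-year figure lands at the quoted 15%, and the value barely moves across horizons, in line with the memoryless property.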


Figure 8.5 Default correlation as a function of the time horizon for the Gaussian and Marshall‐Olkin copulas.

Figure (8.5) shows that the Marshall‐Olkin default correlation is stable through time. This is not surprising, since, as mentioned before, the multivariate exponential distribution is memoryless, therefore the T‐default correlation estimated at time t would be the same as the (T − t)‐default correlation at time 0. The Gaussian copula, on the other hand, is highly time dependent. The upward sloping shape of its default correlation term structure means that a first‐to‐default swap, for example, would become cheaper as time goes by, even if the underlying credit spreads remain unchanged. At time t = 0, the five‐year FTD (first‐to‐default) basket would be priced at 15% default correlation. Then, after one year, the maturity of the FTD becomes four years, which corresponds to a default correlation of 13.8%. And at time t = 4 years, the same basket becomes a 1‐year trade and would be marked at 8% default correlation. Rogge and Schonbucher (2003) point out the same deficiency of the Gaussian copula by analysing the size of the default contagion as a function of time.

# 7 Correlation skew

In this section, we discuss how the Marshall‐Olkin copula can be used to match the correlation skew of the CDO market.

## 7.1 Overview

Over the last few years, we have seen increased liquidity in CDO tranche trading, which has resulted in an observable market for default correlation. Dealers are starting to quote a two-way market on a pre-specified set of tranches referenced to a given index portfolio. By inverting the Gaussian copula formula, one finds the implied level of correlation that matches the quoted tranche premiums. As with the Black-Scholes option model, there is no single correlation number that matches all tranches at the various attachment points. Supply and demand factors, combined with the credit views and risk appetite of market participants, explain the discrepancy of correlations across the capital structure. The example below gives the market bid/offer premiums of the European iTraxx index tranches.

$Display mathematics$

The index level is 37 bps. The (0–3%) tranche is quoted in points upfront for a tranche paying 500 bps running.

Next, we define the concepts of compound correlation and base correlation in a formal manner.

## 7.2 One‐factor Gaussian copula

We give a formal definition of the one‐factor Gaussian copula function.

Definition 7.1 (One‐factor Gaussian Copula). The one‐factor Gaussian copula with parameter ρ ∈ [0, 1) is defined as

$$C(u_1,\ldots,u_n)=\int_{-\infty}^{+\infty}\prod_{i=1}^{n}\Phi\!\left(\frac{\Phi^{-1}(u_i)-\sqrt{\rho}\,y}{\sqrt{1-\rho}}\right)\phi(y)\,dy$$

where Φ(·), Φ−1(·), and ϕ(·) are the standard normal distribution function, its inverse, and its density function respectively.

This formal definition can be understood by considering a simplified firm-value model, as in Schonbucher (2000) for example. The default of obligor i is triggered when the asset value of the firm, denoted $V_i$, falls below a given threshold. $V_i$ is assumed to be normally distributed. The relationship between default and the asset value is given by

$$\tau_i \le t \iff V_i \le \Phi^{-1}\big(\mathbb{P}(\tau_i \le t)\big)$$

The asset values of different obligors are correlated. Their joint dependence is defined via a common factor Y, which follows a standard normal distribution, and idiosyncratic standard normal noises ϵ1,…, ϵn:

$$V_i=\sqrt{\rho}\,Y+\sqrt{1-\rho}\,\epsilon_i,\qquad i=1,\ldots,n$$

where Y and ϵ1,…, ϵn are independent standard normal variables. The linear correlation between the asset values of any two obligors is ρ. This coefficient, which is used to parameterize the family of one-factor Gaussian copulas, is sometimes called the asset correlation. Conditional on a given value of the systemic factor Y, the asset values are independent; hence, the default times are conditionally independent as well. This is the set-up of a conditionally independent defaults model.

One can write down the default times' copula function by conditioning on Y and using the law of iterated expectations:

$$C(u_1,\ldots,u_n)=\mathbb{E}\left[\prod_{i=1}^{n}\mathbb{P}\big(V_i\le\Phi^{-1}(u_i)\mid Y\big)\right]=\int_{-\infty}^{+\infty}\prod_{i=1}^{n}\Phi\!\left(\frac{\Phi^{-1}(u_i)-\sqrt{\rho}\,y}{\sqrt{1-\rho}}\right)\phi(y)\,dy$$

The one‐factor Gaussian copula is the standard model used to quote CDO tranches in the market.
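The factor construction above can be sketched with a minimal Monte Carlo simulation. The parameters are borrowed from the time-invariance example of Section 6.4, with a 5% marginal default probability standing in for a 1% intensity over five years; this is an illustration, not a production sampler:

```python
import random
from statistics import NormalDist

random.seed(42)
N = NormalDist()
rho = 0.4104                 # asset correlation from the time-invariance example
p = 0.05                     # roughly the five-year default probability at 1% intensity
threshold = N.inv_cdf(p)     # default barrier: V_i <= Phi^{-1}(p)
n_sims = 200_000

d1 = d2 = both = 0
for _ in range(n_sims):
    Y = random.gauss(0.0, 1.0)                                   # systemic factor
    V1 = rho**0.5 * Y + (1 - rho)**0.5 * random.gauss(0.0, 1.0)  # asset values
    V2 = rho**0.5 * Y + (1 - rho)**0.5 * random.gauss(0.0, 1.0)
    e1, e2 = V1 <= threshold, V2 <= threshold                    # default indicators
    d1 += e1
    d2 += e2
    both += e1 and e2

q1, q2, p11 = d1 / n_sims, d2 / n_sims, both / n_sims
def_corr = (p11 - q1 * q2) / (q1 * (1 - q1) * q2 * (1 - q2)) ** 0.5
print(round(q1, 3), round(q2, 3), round(def_corr, 3))
```

The simulated default-event correlation comes out near the 15% the asset correlation of 41.04% was chosen to produce, illustrating the gap between asset and default correlation.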

## 7.3 Pricing CDOs

Let us consider the pricing of a CDO tranche, which covers the losses of a given portfolio between two thresholds $0\le K_1<K_2\le 1$.

Letting $\delta_i$ denote the recovery rate of obligor i, we define the portfolio loss process as

$$L_t=\frac{1}{n}\sum_{i=1}^{n}(1-\delta_i)\,\mathbf{1}_{\{\tau_i\le t\}}$$

The loss on the tranche $(K_1,K_2)$ is defined as

$$M_t^{K_1,K_2}=\min(L_t,K_2)-\min(L_t,K_1)=(L_t-K_1)^{+}-(L_t-K_2)^{+}$$
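In code, the tranche loss is a call spread on the portfolio loss. A minimal sketch, with hypothetical attachment points:

```python
def tranche_loss(L, K1, K2):
    """Loss absorbed by the (K1, K2) tranche for a portfolio loss level L."""
    return min(L, K2) - min(L, K1)

def tranche_loss_call_spread(L, K1, K2):
    """Equivalent call-spread form: (L - K1)^+ - (L - K2)^+."""
    return max(L - K1, 0.0) - max(L - K2, 0.0)

# The two forms agree for any loss level (3-6% tranche as an example)
for L in (0.0, 0.02, 0.045, 0.10):
    assert abs(tranche_loss(L, 0.03, 0.06) - tranche_loss_call_spread(L, 0.03, 0.06)) < 1e-12

print(tranche_loss(0.045, 0.03, 0.06))   # half of the 3-6% tranche is wiped out
```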

The processes $L_t$ and $M_t^{K_1,K_2}$ are pure jump processes. The CDO protection payments correspond to the increments of $M_t^{K_1,K_2}$, i.e. there is a payment when the process $M_t^{K_1,K_2}$ jumps, which happens at every default time that affects the tranche. The payoff of the protection leg of a CDO is therefore defined as the Stieltjes integral

$$\int_0^T e^{-\int_0^t r_s\,ds}\,dM_t^{K_1,K_2}$$

Letting $(T_0=0,T_1,\ldots,T_N)$ denote the cash-flow dates, $\Delta_{T_i}\triangleq T_i-T_{i-1}$ the payment fractions, and S the tranche premium, the payoff of the premium leg is defined as:

$$S\sum_{i=1}^{N}\Delta_{T_i}\,e^{-\int_0^{T_i}r_s\,ds}\Big[(K_2-K_1)-M_{T_i}^{K_1,K_2}\Big]$$

The value of the CDO tranche is given by the expected value of the discounted payoff under a risk neutral measure.

Assume deterministic interest rates, and let $B(0,T)\triangleq\exp\!\big(-\int_0^T r_s\,ds\big)$ denote the discount factor maturing at time T. Using the integration by parts formula and Fubini's theorem to interchange the order of integration, we can rewrite the protection integral as

$$\mathbb{E}\left[\int_0^T B(0,t)\,dM_t^{K_1,K_2}\right]=B(0,T)\,\mathbb{E}\Big[M_T^{K_1,K_2}\Big]+\int_0^T r_t\,B(0,t)\,\mathbb{E}\Big[M_t^{K_1,K_2}\Big]\,dt$$
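The integration-by-parts step can be verified numerically on a toy piecewise-constant loss path; the jump times, jump sizes, and the constant short rate below are arbitrary:

```python
import math

r, T = 0.03, 5.0
jumps = [(1.2, 0.010), (2.7, 0.015), (4.1, 0.005)]   # hypothetical (default time, loss increment)

def B(t):
    """Deterministic discount factor exp(-r t)."""
    return math.exp(-r * t)

# Protection leg as a Stieltjes integral: sum of discounted loss increments
lhs = sum(B(t) * dm for t, dm in jumps)

# Integration-by-parts form on this path: B(0,T) M_T + integral_0^T r B(0,t) M_t dt,
# where M_t is piecewise constant so the time integral is computed exactly
M_T = sum(dm for _, dm in jumps)
rhs = B(T) * M_T
level, prev = 0.0, 0.0
for t, dm in jumps:
    rhs += level * (B(prev) - B(t))   # integral of r*exp(-r*s) ds over [prev, t]
    level += dm
    prev = t
rhs += level * (B(prev) - B(T))

print(abs(lhs - rhs) < 1e-12)
```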

Similarly, to compute the value of the premium leg, we need to know the expected tranche losses at the payment dates $T_i$: $\mathbb{E}\big[M_{T_i}^{K_1,K_2}\big]$.

The pricing of CDO tranches boils down to computing the values of all ‘tranchelets’:

$$C_t(0,K)\triangleq\mathbb{E}\big[\min(L_t,K)\big],\qquad K\in[0,1]$$
(23)

For t ≥ 0, if we know the density function $f_t(\cdot)$ of the portfolio loss $L_t$:

$$\mathbb{P}\big(L_t\in dx\big)=f_t(x)\,dx$$
(24)

then, the expectation (23) is given by

$$C_t(0,K)=\int_0^{K}\big(1-F_t(x)\big)\,dx$$
(25)

where $F_t(x)=\int_{-\infty}^{x}f_t(z)\,dz$ is the cumulative distribution function of $L_t$. With a given copula, such as the one-factor Gaussian copula, it is easy to compute the density function $f_t(\cdot)$ using techniques such as the FFT (see Gregory and Laurent 2002) or the convolution recursion (see Andersen, Sidenius, and Basu 2003).
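A sketch of the conditional-independence recursion, in the spirit of Andersen, Sidenius, and Basu (2003) but simplified to a homogeneous portfolio with illustrative parameters, and integrating over the factor with a plain trapezoid rule rather than FFT:

```python
from statistics import NormalDist

N = NormalDist()

def loss_distribution(n, p, rho, n_grid=201):
    """P(k defaults), k = 0..n, for a homogeneous one-factor Gaussian copula portfolio:
    names are independent conditional on Y, so the count distribution is built by
    recursion and then integrated over the factor (trapezoid rule on [-8, 8])."""
    c = N.inv_cdf(p)
    h = 16.0 / (n_grid - 1)
    dist = [0.0] * (n + 1)
    for j in range(n_grid):
        y = -8.0 + j * h
        w = h * N.pdf(y) * (0.5 if j in (0, n_grid - 1) else 1.0)   # trapezoid weight
        py = N.cdf((c - rho**0.5 * y) / (1.0 - rho)**0.5)           # conditional default prob
        # Add names one at a time; for heterogeneous pools py would vary per name
        cond = [1.0] + [0.0] * n
        for _ in range(n):
            for k in range(n, 0, -1):
                cond[k] = cond[k] * (1.0 - py) + cond[k - 1] * py
            cond[0] *= 1.0 - py
        for k in range(n + 1):
            dist[k] += w * cond[k]
    return dist

dist = loss_distribution(n=50, p=0.05, rho=0.3)
mean_frac = sum(k * q for k, q in enumerate(dist)) / 50
print(round(sum(dist), 4), round(mean_frac, 4))   # mass ~1, mean default fraction ~p
```

With recovery rates, the count distribution maps directly to a loss distribution, from which the tranchelet values (23) follow via (25).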

## 7.4 Compound correlation

As mentioned earlier, the one-factor copula has been used by dealers to quote the standardized CDO tranches traded in the market. Since the prices of the various tranches are driven by supply and demand, a single correlation parameter is not sufficient to reproduce market prices. Inverting the pricing formula of the one-factor Gaussian copula, one finds, for each tranche, the implied correlation that matches its market price. This implied correlation is referred to as 'Compound Correlation'.

Definition 7.2 (Compound Correlation). For a given CDO tranche with attachment points $(K_1,K_2)$ and quoted premium $S^{K_1,K_2}$, let $G^{K_1,K_2}(S,\rho)$ denote the model price using the one-factor Gaussian copula with parameter ρ. We call compound correlation the value of the parameter ρ such that

$$G^{K_1,K_2}\big(S^{K_1,K_2},\rho\big)=0$$
(26)

If a compound correlation exists, i.e. if the mapping (26) is invertible, this offers a way to compare different tranches on a relative-value basis. Unfortunately, it turns out that the model price is not a monotonic function of the correlation parameter. Therefore, it is not guaranteed that we can always find a solution. Moreover, in some instances we can find more than one value of correlation satisfying (26). This usually happens with mezzanine tranches, which have little correlation sensitivity. This behaviour is well documented (see McGinty et al. 2004) and has motivated the base correlation approach that we describe next.
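Extracting an implied correlation is a one-dimensional root search. The sketch below uses the large-homogeneous-pool approximation and matches an expected equity-tranche loss rather than a full two-leg price; the marginal default probability, detachment point, and target correlation are illustrative. For an equity tranche the objective is monotone in ρ, so bisection succeeds; it is precisely this monotonicity that can fail for mezzanine tranches:

```python
from statistics import NormalDist

N = NormalDist()

def equity_expected_loss(rho, p=0.05, K=0.03, n_grid=401):
    """E[min(L, K)] in the large-homogeneous-pool (Vasicek) one-factor model,
    where the conditional loss is L(Y) = Phi((Phi^{-1}(p) - sqrt(rho) Y) / sqrt(1 - rho))."""
    c = N.inv_cdf(p)
    h = 16.0 / (n_grid - 1)
    total = 0.0
    for j in range(n_grid):
        y = -8.0 + j * h
        w = h * N.pdf(y) * (0.5 if j in (0, n_grid - 1) else 1.0)   # trapezoid weight
        loss = N.cdf((c - rho**0.5 * y) / (1.0 - rho)**0.5)
        total += w * min(loss, K)
    return total

# Synthetic "market" value generated at rho = 0.30, then recovered by bisection
target = equity_expected_loss(0.30)
lo, hi = 0.01, 0.99
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if equity_expected_loss(mid) > target:   # equity tranche loss decreases in rho
        lo = mid
    else:
        hi = mid
implied_rho = 0.5 * (lo + hi)
print(round(implied_rho, 4))
```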

Solving for compound correlations in the previous example, we get the following results.

$Display mathematics$

## 7.5 Base correlation

One can view each CDO tranche with attachment points $(K_1,K_2)$ as the difference between two equity tranches: $(0,K_2)$ and $(0,K_1)$. This can easily be checked from the definition of the payoff:

$$M_t^{K_1,K_2}=\min(L_t,K_2)-\min(L_t,K_1)=M_t^{0,K_2}-M_t^{0,K_1}$$

Therefore, to price any CDO tranche, it suffices to have the whole continuum of equity tranches $(0,K)$, for K ∈ [0, 1]. Each one of these equity tranches can be valued with a different one-factor Gaussian copula correlation ρ(0, K). The function ρ(0, K) : [0, 1] → [0, 1] is called the 'Base Correlation' curve.

Definition 7.3 (Base Correlation). The base correlation curve is a function ρ(0, K) : [0, 1] → [0, 1], which parameterizes the prices of all equity tranches (0, K). In other words, the price of the (0, K)‐tranche is given by the one‐factor Gaussian copula model with parameter ρ (0, K).

Furthermore, the value of any tranche with attachment points $(K_1,K_2)$ and quoted premium $S^{K_1,K_2}$ is given by

$$G^{K_1,K_2}\big(S^{K_1,K_2}\big)=G^{0,K_2}\big(S^{K_1,K_2},\rho(0,K_2)\big)-G^{0,K_1}\big(S^{K_1,K_2},\rho(0,K_1)\big)$$
(27)

Using the standard tranches quoted in the market, one proceeds with a bootstrapping algorithm to find the base correlation curve that reproduces the market quotes. The popularity of this method lies in the fact that the function

$$\rho\;\longmapsto\;G^{0,K}(S,\rho)$$

is monotonic. Hence, we can always invert the relationship (27) for each attachment point.
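Given two base-correlation points, pricing a mezzanine tranche reduces to differencing two equity tranchelets. A sketch, again in the large-homogeneous-pool approximation, with hypothetical curve values ρ(0, 3%) = 20% and ρ(0, 6%) = 28%:

```python
from statistics import NormalDist

N = NormalDist()

def equity_el(rho, K, p=0.05, n_grid=401):
    """E[min(L, K)] in the large-homogeneous-pool one-factor Gaussian model (sketch)."""
    c = N.inv_cdf(p)
    h = 16.0 / (n_grid - 1)
    total = 0.0
    for j in range(n_grid):
        y = -8.0 + j * h
        w = h * N.pdf(y) * (0.5 if j in (0, n_grid - 1) else 1.0)   # trapezoid weight
        loss = N.cdf((c - rho**0.5 * y) / (1.0 - rho)**0.5)
        total += w * min(loss, K)
    return total

# Hypothetical base correlation curve points at the 3% and 6% detachments
rho_3, rho_6 = 0.20, 0.28

# Expected loss of the (3-6%) tranche: each equity tranchelet is valued
# at its own base correlation, then the two are differenced
mezz_el = equity_el(rho_6, 0.06) - equity_el(rho_3, 0.03)
print(round(mezz_el / 0.03, 4))   # expected loss as a fraction of tranche notional
```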

Mathematically, base correlation is just another way of parameterizing the density function $f_T(\cdot)$ of the portfolio loss $L_T$. Indeed, given a base correlation curve $(\rho(0,K))_{0\le K\le 1}$, one can compute the value of all 'tranchelets' $(C_T(0,K))_{0\le K\le 1}$:

$$C_T(0,K)=\mathbb{E}^{\rho(0,K)}\big[\min(L_T,K)\big]$$

Assuming that, for T ≥ 0, the function $K\mapsto C_T(0,K)$ is $C^2$, we can recover the density function as:

$$f_T(x)=-\,\frac{\partial^2 C_T(0,K)}{\partial K^2}\bigg|_{K=x}$$
(28)

This follows directly from equation (25). It is similar to the Breeden and Litzenberger (1978) formula in options theory, where the implied density of the forward stock price is obtained from the continuum of call prices at different strikes.
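The density recovery can be checked in the large-homogeneous-pool (Vasicek) model, where both the tranchelet value and the loss density are computable in closed form; the parameters and the finite-difference step below are illustrative numerical choices:

```python
from statistics import NormalDist

N = NormalDist()
p, rho = 0.05, 0.3
c = N.inv_cdf(p)

def tranchelet(K, n_grid=801):
    """C_T(0, K) = E[min(L, K)] in the large-pool one-factor Gaussian model (sketch)."""
    h = 16.0 / (n_grid - 1)
    total = 0.0
    for j in range(n_grid):
        y = -8.0 + j * h
        w = h * N.pdf(y) * (0.5 if j in (0, n_grid - 1) else 1.0)   # trapezoid weight
        loss = N.cdf((c - rho**0.5 * y) / (1.0 - rho)**0.5)
        total += w * min(loss, K)
    return total

# Density from the second derivative of K -> C_T(0, K), as in (28)
x, h = 0.05, 0.01
fd_density = -(tranchelet(x + h) - 2 * tranchelet(x) + tranchelet(x - h)) / h**2

# Closed-form large-pool (Vasicek) loss density at x, for comparison
z = N.inv_cdf(x)
exact = ((1 - rho) / rho) ** 0.5 * N.pdf(((1 - rho) ** 0.5 * z - c) / rho**0.5) / N.pdf(z)

print(round(fd_density, 2), round(exact, 2))   # the two estimates agree closely
```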

Solving for base correlations in the previous example, we get the following results.

$Display mathematics$

## 7.6 Marshall‐Olkin skew

As mentioned earlier, because of the multi‐modality of the Marshall‐Olkin loss distribution, it is possible to use each mode of the distribution to match various parts of the capital structure. Figure (8.1), for example, suggests that the idiosyncratic hump can be used to match the equity tranche (0–3%), the Beta hump can be used to match the mezzanine tranches (3–6%, 6–9%, and 9–12%), and the World driver can be used to match the senior tranche (12–22%). Additional tweaking of the calibration can also be done with sector drivers.

Figure (8.6) shows the results of the calibration using a MO model with one common Beta driver and the World driver.


Figure 8.6 Base correlation skew.

Using the Beta driver, we can accurately match most of the equity and mezzanine tranches. The senior tranches are more sensitive to extreme events and require additional common factors for a better market fit. Here our intent is solely to show that the multi-modality feature of the MO copula generates a correlation skew curve which mirrors the one observed in the market. A precise study of the market calibration is outside the scope of this chapter.

# 8 Conclusion

We have presented in this chapter the Marshall-Olkin copula in the context of default correlation modelling. We have proposed a calibration procedure that fits this rich correlation structure to an intuitively sound market dynamic, and we have shown that MO offers some desirable features that make it a viable alternative to the Gaussian copula. The comparison between MO and the Gaussian copula is similar in many ways to the evolution from Black-Scholes to term-structure models in fixed income markets. Black-Scholes has been the model of choice for many traders because of its simplicity: it converts one volatility number into a price. However, there is no guarantee that an exogenous BS swaption matrix is arbitrage free, or at least self-consistent. On the other hand, a calibrated HJM model, built upon a defined set of yield curve deformations or drivers, is self-consistent by construction. The Gaussian copula can be viewed as the Black-Scholes of default correlation, while the Marshall-Olkin approach corresponds to an HJM framework: once the market factors are calibrated, all combinations of sub-baskets can be priced consistently within this calibrated term structure of default inter-dependence.

## References

Andersen, L., Sidenius, J., and Basu, S. (2003). 'All your hedges in one basket'. Risk (Nov.), 67–70.

Barlow, R., and Proschan, F. (1981). Statistical Theory of Reliability and Life Testing. Silver Spring, MD.

Breeden, D. T., and Litzenberger, R. H. (1978). 'Prices of state-contingent claims implicit in options prices'. Journal of Business, 51/4.

Brémaud, P. (1980). Point Processes and Queues: Martingale Dynamics. New York: Springer-Verlag.

Duffie, D. (1998). 'First-to-default valuation'. Working paper, Graduate School of Business, Stanford University, CA.

—— and Garleanu, N. (2001). 'Risk and valuation of collateralized debt obligations'. Financial Analysts Journal, 57/1: 41–59.

—— and Pan, J. (2001). 'Analytical value-at-risk with jumps and credit risk'. Finance and Stochastics, 5: 155–80.

—— and Singleton, K. (1999a). 'Modeling term structures of defaultable bonds'. Review of Financial Studies, 12/4: 687–720.

—— —— (1999b). 'Simulating correlated defaults'. Working paper, Graduate School of Business, Stanford University, CA.

Elliott, R. J., Jeanblanc, M., and Yor, M. (2000). 'On models of default risk'. Mathematical Finance, 10/2: 179–95.

Embrechts, P., Lindskog, F., and McNeil, A. (2003). 'Modelling dependence with copulas and applications to risk management'. In S. T. Rachev (ed.), Handbook of Heavy Tailed Distributions in Finance. Amsterdam: Elsevier/North-Holland.

Frey, R., and McNeil, A. (2003). 'Dependent defaults in models of portfolio credit risk'. Journal of Risk, 6/1: 59–92.

Gregory, J., and Laurent, J. P. (2002). 'Basket default swaps, CDOs and factor copulas'. Working paper, BNP Paribas and University of Lyon.

Joe, H. (1997). Multivariate Models and Dependence Concepts. London: Chapman & Hall.

Kevorkian, J., and Cole, J. D. (1996). Multiple Scale and Singular Perturbation Methods. New York: Springer-Verlag.

Lando, D. (1998). 'On Cox processes and credit risky securities'. Review of Derivatives Research, 2/2–3: 99–120.

Li, D. X. (2000). 'On default correlation: a copula function approach'. Journal of Fixed Income, 9: 43–54.

Lindskog, F., and McNeil, A. (2003). 'Common Poisson shock models: applications to insurance and credit risk modelling'. ASTIN Bulletin, 33/2: 209–38.

Marshall, A. W., and Olkin, I. (1967). 'A multivariate exponential distribution'. Journal of the American Statistical Association.

Mashal, R., and Naldi, M. (2002a). 'Pricing multiname credit derivatives: heavy tailed hybrid approach'. Working paper, Lehman Brothers.

—— —— (2002b). 'Extreme events and default baskets'. Risk (June), 119–22.

McGinty, L., Beinstein, E., Ahluwalia, R., and Watts, M. (2004). 'Introducing base correlations'. Credit Derivatives Strategy, JP Morgan.

Nagpal, K., and Bahar, R. (2001). 'Measuring default correlation'. Risk (Mar.), 129–32.

Nelsen, R. (1999). An Introduction to Copulas. New York: Springer-Verlag.

Panjer, H. (1981). 'Recursive evaluation of a family of compound distributions'. ASTIN Bulletin, 12: 22–6.

Rogge, E., and Schonbucher, P. J. (2003). 'Modelling dynamic portfolio credit risk'. Working paper.

Schonbucher, P. J. (1998). 'The term structure of defaultable bond prices'. Review of Derivatives Research, 2/2–3: 161–92.

—— (2000). 'Factor models for portfolio credit risk'. Working paper, Bonn University.

Servigny, A., and Renault, O. (2002). 'Default correlation: empirical evidence'. Working paper, Standard & Poor's.

Vasicek, O. (1997). 'The loan loss distribution'. Working paper, KMV Corporation.

Wong, D. (2000). 'Copula from the limit of a multivariate binary model'. Working paper, Bank of America Corporation.

## Notes:

(1) If $i\in S_j$ then $p_{i,S_j}>0$; otherwise $p_{i,S_j}=0$.

(2) The numerical values in this example are chosen arbitrarily to exhibit the shape of the distributions and compare the various copulas. For empirical studies of default correlation, we refer the reader to the article by Nagpal and Bahar (2001) and the paper by Servigny and Renault (2002).