Chapter 14 Appendix: Characteristics of Copulas

Copulas were introduced in Section 4.2. This appendix describes selected characteristics of copulas that are useful in our study of risk retention.

14.1 Copula Derivatives

We wish to calculate partial derivatives of the copula distribution function. Following custom, we first consider the bivariate case.

14.1.1 Bivariate Copulas

Let \(C\) be a bivariate copula distribution function and \(c\) be the corresponding density. Taking partial derivatives, we have \[ C_1(v,w) = \partial_v ~C(v,w) = \partial_v \int^{v}_0\int^{w}_0 c(z_1,z_2)~dz_1dz_2 =\int^{w}_0 c(v,z_2)~dz_2 \] and similarly for \(C_2\). Because the copula is symmetric in its arguments (the usual assumption), we have \[ C_2(v,w) = \partial_w ~ C(v,w) =\int^{v}_0 c(z_1,w)~dz_1 =\int^{v}_0 c(w,z_1)~dz_1= C_1(w,v). \] We also need the mixed partial derivatives \(C_{11}(v,w) = \partial_v~ C_1(v,w)\) and \(C_{12}(v,w)\) \(= \partial_w ~C_1(v,w)\) \(=c(v,w)\). To illustrate, for the case of independence, we have \(C(v,w) = v \cdot w\), \(C_1(v,w) = w\), \(C_2(v,w) = v\), \(C_{11}(v,w) = 0\), and \(C_{12}(v,w) = 1\).
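As a quick numerical sanity check of these definitions, the following sketch (ours, using the R package copula) confirms that \(C_1(v,w) = w\) and \(C_{12}(v,w) = 1\) for the independence copula.

```r
# Sanity check of the derivative definitions for the independence copula,
# C(v, w) = v * w, using finite differences on pCopula.
library(copula)

v <- 0.3; w <- 0.7; eps <- 1e-6
C <- function(u) pCopula(u, indepCopula(2))

# C_1(v, w) should equal w:
c((C(c(v + eps, w)) - C(c(v - eps, w))) / (2 * eps), w)
# C_12(v, w) = c(v, w) should equal 1:
dCopula(c(v, w), indepCopula(2))
```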

14.1.1.1 Frank’s Copula

To see how this works in a simple case, consider Frank’s copula. The copula distribution function is \[ C(v,w) = -\frac{1}{\theta} \ln \left( \frac{1-e^{-\theta} - (1-e^{-\theta v})(1-e^{-\theta w})}{1-e^{-\theta}} \right) . \] Here, \(\theta\) is the dependence parameter. With this, we have \[ \begin{array}{ll} C_1(v,w) & = -\frac{1}{\theta} ~\partial_v ~ \ln \left( 1-e^{-\theta} - (1-e^{-\theta v})(1-e^{-\theta w}) \right) \\ & = \frac{ e^{-\theta v}(1-e^{-\theta w})} {1-e^{-\theta} - (1-e^{-\theta v})(1-e^{-\theta w})} \\ \end{array} \tag{14.1} \] and \[ \begin{array}{ll} C_{11}(v,w) &= \partial_v ~ C_{1}(v,w) = (1-e^{-\theta w}) ~ \partial_v ~ \frac{ e^{-\theta v}} {1-e^{-\theta} - (1-e^{-\theta v})(1-e^{-\theta w})} \\ & = - \frac{\theta (1-e^{-\theta w}) e^{-\theta v}(e^{-\theta w}-e^{-\theta})} {[1-e^{-\theta} - (1-e^{-\theta v})(1-e^{-\theta w})]^2} . \\ \end{array} \] In the same way, the copula density is \[ \begin{array}{ll} C_{12}(v,w) &= c(v,w) = \partial_w C_{1}(v,w) \\ & = \frac{\theta (1-e^{-\theta}) e^{-\theta (v+w)}} {[1-e^{-\theta} - (1-e^{-\theta v})(1-e^{-\theta w})]^2} . \\ \end{array} \] See Joe (2014, p. 165) for the distribution and density functions; see also Schepsmeier and Stöber (2012, 2014).
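These formulas are straightforward to code. The following sketch (function names are ours) implements \(C_1\), \(C_{11}\), and the density, checking them against the R package copula and finite differences.

```r
# Frank copula quantities from equation (14.1) and the displays above.
library(copula)

frank_C1 <- function(v, w, theta) {   # first partial derivative C_1
  D <- 1 - exp(-theta) - (1 - exp(-theta * v)) * (1 - exp(-theta * w))
  exp(-theta * v) * (1 - exp(-theta * w)) / D
}
frank_C11 <- function(v, w, theta) {  # second partial derivative C_11
  D <- 1 - exp(-theta) - (1 - exp(-theta * v)) * (1 - exp(-theta * w))
  -theta * (1 - exp(-theta * w)) * exp(-theta * v) *
    (exp(-theta * w) - exp(-theta)) / D^2
}
frank_c <- function(v, w, theta) {    # copula density C_12 = c
  D <- 1 - exp(-theta) - (1 - exp(-theta * v)) * (1 - exp(-theta * w))
  theta * (1 - exp(-theta)) * exp(-theta * (v + w)) / D^2
}

v <- 0.3; w <- 0.7; theta <- 2; eps <- 1e-6
# Density against dCopula:
c(frank_c(v, w, theta), dCopula(c(v, w), frankCopula(theta)))
# C_1 against a finite difference of pCopula:
c(frank_C1(v, w, theta),
  (pCopula(c(v + eps, w), frankCopula(theta)) -
   pCopula(c(v - eps, w), frankCopula(theta))) / (2 * eps))
# C_11 against a finite difference of C_1:
c(frank_C11(v, w, theta),
  (frank_C1(v + eps, w, theta) - frank_C1(v - eps, w, theta)) / (2 * eps))
```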

14.1.1.2 Gaussian Copula

We will use the normal, or Gaussian, copula, so let us collect results for it in the simple bivariate case. The distribution function is \[ C(v_1,v_2) = \Phi_2 \left(\Phi^{-1}(v_1), \Phi^{-1}(v_2)\right)= \Phi_2\left(z_1, z_2\right) , \] where \(z_j= \Phi^{-1}(v_j), j=1,2,\) are normal scores and \(\Phi_2\) is the bivariate normal distribution function with correlation parameter \(\rho\).

Taking partial derivatives, we have \[ \begin{array}{rl} C_2(v_1,v_2) &= \Phi \left(\frac{z_1 - \rho z_2}{\sqrt{1-\rho^2}}\right), \\ C_{12}(v_1,v_2) &= c(v_1,v_2) =\frac{1}{\sqrt{1-\rho^2}}\frac{1}{\phi(z_1)} \phi \left(\frac{z_1 - \rho z_2}{\sqrt{1-\rho^2}}\right), \ \ \text{and}\\ C_{22}(v_1,v_2) &=\frac{-\rho}{\sqrt{1-\rho^2}}\frac{1}{\phi(z_2)} \phi \left(\frac{z_1 - \rho z_2}{\sqrt{1-\rho^2}}\right) . \\ \end{array} \] The density \(C_{12}(\cdot,\cdot)\) and distribution function \(C(\cdot,\cdot)\) are available from the R package copula (dCopula and pCopula, respectively). The first partial derivative \(C_2(\cdot,\cdot)\) is available from the package VineCopula (the function BiCopHfunc2). We provide code for the function \(C_{22}(\cdot,\cdot)\).
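A minimal sketch of how such a function might be written (the name C22_gauss and the finite-difference check are ours) follows.

```r
# C_22 for the bivariate Gaussian copula, from the display above.
C22_gauss <- function(v1, v2, rho) {
  z1 <- qnorm(v1); z2 <- qnorm(v2)
  (-rho / sqrt(1 - rho^2)) *
    dnorm((z1 - rho * z2) / sqrt(1 - rho^2)) / dnorm(z2)
}

# Quick check against a finite difference of C_2 (the h-function):
C2 <- function(v1, v2, rho)
  pnorm((qnorm(v1) - rho * qnorm(v2)) / sqrt(1 - rho^2))
eps <- 1e-6
c(C22_gauss(0.3, 0.7, 0.4),
  (C2(0.3, 0.7 + eps, 0.4) - C2(0.3, 0.7 - eps, 0.4)) / (2 * eps))
```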

14.1.2 Multivariate Copulas

Now consider \(p \ge 2\) risks, \(X_1, \ldots, X_p,\) with copula \(C\) that is indexed by a matrix of association parameters \(\boldsymbol \Sigma\). We wish to calculate partial derivatives of the copula distribution function. For Gaussian copulas, and later for their elliptical extensions, it is natural to relate these derivatives to conditional copulas as follows. \[ \begin{array}{ll} &\partial_{v_{p-k+1} \cdots v_{p}} C(v_1, \ldots, v_p) \\ &= \frac{\partial^k} {\partial v_{p-k+1} \cdots \partial v_{p}} C(v_1, \ldots, v_p) \\ \ \ \ &= \int^{v_1}_0 \cdots \int^{v_{p-k}}_0 c\left(x_1, \ldots, x_{p-k}, v_{p-k+1}, \ldots, v_p\right) dx_1 \cdots dx_{p-k} \\ &= \int^{v_1}_0 \cdots \int^{v_{p-k}}_0 c\left(x_1, \ldots, x_{p-k} | v_{p-k+1}, \ldots, v_p\right) c\left(v_{p-k+1}, \ldots, v_p\right) dx_1 \cdots dx_{p-k} \\ &= C \left(v_1, \ldots, v_{p-k} | v_{p-k+1}, \ldots, v_p\right) c\left(v_{p-k+1}, \ldots, v_p\right) . \end{array} \] To illustrate, for \(k=1\), we have \[ \partial_{v_p} ~C(v_1, \ldots, v_p) = C(v_1, \ldots, v_{p-1}|v_p) , \] because the marginal distribution of \(v_p\) is uniform, so that \(c(v_p) = 1\). It is convenient to partition the association matrix as \[ \boldsymbol \Sigma = \left( \begin{array}{cc} \boldsymbol \Sigma_{1:p-k,1:p-k} & \boldsymbol \Sigma_{1:p-k,p-k+1:p}\\ \boldsymbol \Sigma_{1:p-k,p-k+1:p}^{\prime} & \boldsymbol \Sigma_{p-k+1:p,p-k+1:p} \end{array} \right), \] so that \(\boldsymbol \Sigma_{1:p-k,1:p-k}\) is the submatrix for the first \(p-k\) elements and similarly for the other entries.

14.1.3 Gaussian Copula

Although properties of the multivariate Gaussian copula have been developed extensively, it is worthwhile to collect the facts needed for this book. This section is largely drawn from Joe (2014).

Consider a \(p\)-dimensional multivariate normal distribution with variance-covariance matrix \(\boldsymbol \Sigma\). Because we will use this as a basis for defining copulas, take the means to be zero and the variances to be one, so that the diagonal elements of \(\boldsymbol \Sigma\) equal 1. Let \(\Phi_p( \cdot; \boldsymbol \Sigma)\) be the corresponding distribution function. With this, the Gaussian copula can be expressed as \[ C\left(v_1, \ldots, v_p\right) = \Phi_p\left(z_1, \ldots, z_p; \boldsymbol \Sigma \right) . \] In this expression, we use the normal scores defined as \(z_j = \Phi^{-1}(v_j), j=1, \ldots, p.\)

For the Gaussian copula, we can utilize classic multivariate normal distribution theory to develop conditional distributions. Specifically, we have \[ \begin{array}{ll} C \left(v_1, \ldots, v_{p-k} | v_{p-k+1}, \dots, v_p\right) \\ = \Pr \left( \Phi(N_1) \le v_1, \ldots, \Phi(N_{p-k}) \le v_{p-k} | \right.\\ ~~~~~~~~~~~~~ \left. \Phi(N_{p-k+1}) = v_{p-k+1}, \ldots, \Phi(N_{p}) = v_{p} \right) \\ = \Pr \left( N_1 \le \Phi^{-1}(v_1), \ldots, N_{p-k} \le \Phi^{-1}(v_{p-k}) | \right. \\ ~~~~~~~~~~~~~ \left.N_{p-k+1} = \Phi^{-1}(v_{p-k+1}), \ldots, N_{p} = \Phi^{-1}(v_{p}) \right) \\ = \Phi_{p-k}\left( z_1 - \mu_{1 \cdot 2, 1}, \ldots, z_{p-k} - \mu_{1 \cdot 2, p-k} ; \boldsymbol \Sigma_{11 \cdot 2} \right) , \end{array} \] where \((N_1, \ldots, N_p)\) is a multivariate normal vector with mean zero and correlation matrix \(\boldsymbol \Sigma\). Here, \(\mu_{1 \cdot 2, j}\) is the \(j\)th component of \[ \boldsymbol \mu_{1 \cdot 2} = \boldsymbol \Sigma_{1:p-k,p-k+1:p} \boldsymbol \Sigma_{p-k+1:p,p-k+1:p}^{-1} \left( \begin{array}{c} z_{p-k+1}\\ \vdots \\ z_{p} \end{array} \right) . \] Further, \(\Phi_{p-k}(\cdot; \boldsymbol \Sigma_{11 \cdot 2})\) is a \(p-k\) dimensional multivariate normal distribution function with mean zero and variance-covariance matrix \[ \boldsymbol \Sigma_{11 \cdot 2} = \boldsymbol \Sigma_{1:p-k,1:p-k} - \boldsymbol \Sigma_{1:p-k,p-k+1:p} \boldsymbol \Sigma_{p-k+1:p,p-k+1:p}^{-1} \boldsymbol \Sigma_{1:p-k,p-k+1:p}^{\prime} . \] Because its diagonal elements equal one, \(\boldsymbol \Sigma\) is a correlation matrix. Following standard notation, the subscripts in \(\boldsymbol \Sigma_{ij}\) refer to the element in the \(i\)th row and \(j\)th column of the matrix \(\boldsymbol \Sigma\).

14.1.4 Derivatives for the Conditional Distributions

For the conditional distribution, we need the case where \(k=1\) \[ \begin{array}{ll} C_p \left(v_1, \ldots, v_p \right) &= \partial_{v_p} ~ C(v_1, \ldots, v_p) =C \left(v_1, \ldots, v_{p-1} | v_p\right) \\ &= \Phi_{p-1}\left(z_1 - \mu_{\{1, \ldots, p-1\} \cdot p, 1}, \ldots, z_{p-1} - \mu_{\{1, \ldots, p-1\} \cdot p, p-1} ; \boldsymbol \Sigma_{\{1, \ldots, p-1\} \cdot p} \right) , \end{array} \] where \[ \boldsymbol \mu_{\{1, \ldots, p-1\} \cdot p} = \boldsymbol \Sigma_{1:p-1,p:p} ~z_{p} \] and \[ \boldsymbol \Sigma_{\{1, \ldots, p-1\} \cdot p} = \boldsymbol \Sigma_{1:p-1,1:p-1} - \boldsymbol \Sigma_{1:p-1,p:p} \boldsymbol \Sigma_{1:p-1,p:p}^{\prime} . \] In the same way, for \(k=2\), we have \[ \begin{array}{ll} C_{p-1,p} \left(v_1, \ldots, v_p \right) = \frac{\partial^2}{\partial v_{p-1} \partial v_p} C(v_1, \ldots, v_p) \\ \ \ \ = C \left(v_1, \ldots, v_{p-2} | v_{p-1}, v_p\right) c(v_{p-1}, v_p) \\ \ \ \ = \Phi_{p-2}\left( z_1 - \mu_{\{1, \ldots, p-2\} \cdot \{p-1,p\}, 1}, \ldots, z_{p-2} - \right. \\ ~~~~~~~~~~~~~~~~~\left. \mu_{\{1, \ldots, p-2\} \cdot \{p-1,p\}, p-2} ; \boldsymbol \Sigma_{\{1, \ldots, p-2\} \cdot \{p-1,p\}} \right) c(v_{p-1}, v_p), \end{array} \] where \[ \boldsymbol \mu_{\{1, \ldots, p-2\} \cdot \{p-1,p\}} = \boldsymbol \Sigma_{1:p-2,p-1:p} \boldsymbol \Sigma_{p-1:p,p-1:p}^{-1} \left( \begin{array}{c} z_{p-1}\\ z_{p} \end{array} \right) \] and \[ \boldsymbol \Sigma_{\{1, \ldots, p-2\} \cdot \{p-1,p\}} = \boldsymbol \Sigma_{1:p-2,1:p-2} - \boldsymbol \Sigma_{1:p-2,p-1:p} \boldsymbol \Sigma_{p-1:p,p-1:p}^{-1} \boldsymbol \Sigma_{1:p-2,p-1:p}^{\prime} . \]
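To make these formulas concrete, here is a sketch (ours, using the mvtnorm package) that computes \(C_p\) for a trivariate Gaussian copula and checks it against a finite difference of the copula distribution function; the deterministic Miwa integration rule avoids contaminating the finite difference with Monte Carlo noise.

```r
# The k = 1 derivative C_p for a trivariate Gaussian copula, computed
# from the partitioned-matrix formulas above.
library(mvtnorm)

Sigma <- matrix(c(1, .5, .3,
                  .5, 1, .2,
                  .3, .2, 1), 3, 3)
v <- c(0.3, 0.6, 0.8); z <- qnorm(v)

mu    <- Sigma[1:2, 3] * z[3]                            # mu_{{1,...,p-1}.p}
Scond <- Sigma[1:2, 1:2] - Sigma[1:2, 3] %*% t(Sigma[1:2, 3])
Cp <- pmvnorm(upper = z[1:2] - mu, sigma = Scond, algorithm = Miwa())

# Finite-difference check on the copula distribution function:
Cdf <- function(v) pmvnorm(upper = qnorm(v), corr = Sigma, algorithm = Miwa())
eps <- 1e-4
fd  <- (Cdf(c(v[1:2], v[3] + eps)) - Cdf(c(v[1:2], v[3] - eps))) / (2 * eps)
c(Cp, fd)  # should agree closely
```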

14.1.5 Gaussian Dependency Sensitivity

For derivatives with respect to \(\rho\), we start with the distribution function and note a result that was well known even in the mid-1950s (see Plackett 1954), \[ \frac{\partial }{\partial \rho} \Phi_2(z_1,z_2) = \phi_2(z_1,z_2). \] For Gaussian copulas, this immediately yields \[ \frac{\partial }{\partial \rho} C(v_1,v_2) =\phi_2(z_1,z_2). \] For other details in the bivariate case, we use the work of Schepsmeier and Stöber (2012, 2014). For example, they calculate the partial derivative \[ \frac{\partial C_2(v_1,v_2)}{\partial \rho} = \phi \left(\frac{\Phi^{-1}(v_1) - \rho \Phi^{-1}(v_2)}{\sqrt{1 - \rho^2}}\right) \cdot \frac{\rho \Phi^{-1}(v_1) -\Phi^{-1}(v_2)}{(1 - \rho^2)^{3/2}} . \]
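As a base-R check of this derivative (function names ours), we can compare the formula with a finite difference of the closed-form \(C_2\) from Section 14.1.1.2.

```r
# The Schepsmeier-Stoeber derivative of C_2 with respect to rho,
# checked against a finite difference of the h-function.
C2 <- function(v1, v2, rho) {
  z1 <- qnorm(v1); z2 <- qnorm(v2)
  pnorm((z1 - rho * z2) / sqrt(1 - rho^2))
}
dC2_drho <- function(v1, v2, rho) {
  z1 <- qnorm(v1); z2 <- qnorm(v2)
  dnorm((z1 - rho * z2) / sqrt(1 - rho^2)) *
    (rho * z1 - z2) / (1 - rho^2)^(3/2)
}

v1 <- 0.3; v2 <- 0.7; rho <- 0.4; eps <- 1e-6
c(dC2_drho(v1, v2, rho),
  (C2(v1, v2, rho + eps) - C2(v1, v2, rho - eps)) / (2 * eps))  # should agree
```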

For a general multivariate approach to derivatives with respect to dependence parameters, we cite a result due to Plackett (1954). Partition the correlation matrix as \[ \boldsymbol \Sigma= \left( \begin{array}{cc} \boldsymbol \Sigma_{1:2,1:2} & \boldsymbol \Sigma_{1:2,3:p}\\ \boldsymbol \Sigma_{1:2,3:p}^{\prime} & \boldsymbol \Sigma_{3:p,3:p} \end{array} \right) \ \ \ \ \ \ \ \boldsymbol \Sigma_{1:2,1:2} = \left( \begin{array}{cc} 1 & \boldsymbol \Sigma_{12}\\ \boldsymbol \Sigma_{12} & 1 \end{array} \right) , \] so that \(\boldsymbol \Sigma_{1:2,1:2}\) is the submatrix for the first two elements and \(\boldsymbol \Sigma_{12}\) is the corresponding correlation coefficient. Then, from Plackett (1954), we have \[ \begin{array}{cc} \frac{\partial} {\partial \boldsymbol \Sigma_{12}} \Phi_p(x_1, \ldots, x_p; \boldsymbol \Sigma) = \phi_2\left(x_1 , x_2 ; \boldsymbol \Sigma_{1:2,1:2}\right) \Phi_{p-2}(\mathbf{x}^*; \boldsymbol \Sigma_{\{3:p,3:p\}\cdot \{1:2\}}) , \end{array} \] where \[ \mathbf{x}^* = \left( \begin{array}{c} x_3 \\ \vdots \\ x_p \end{array} \right) - \boldsymbol \Sigma_{1:2,3:p}^{\prime} \boldsymbol \Sigma_{1:2,1:2}^{-1} \left( \begin{array}{c} x_1 \\ x_2 \end{array} \right) \ \ \ \text{and} \ \ \ \boldsymbol \Sigma_{\{3:p,3:p\}\cdot \{1:2\}} = \boldsymbol \Sigma_{3:p,3:p} - \boldsymbol \Sigma_{1:2,3:p}^{\prime} \boldsymbol \Sigma_{1:2,1:2}^{-1} \boldsymbol \Sigma_{1:2,3:p} . \] See also Gassmann (2003) for a more recent commentary on this interesting result.
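The identity is straightforward to verify numerically. Below is a sketch for \(p=3\) (ours, using mvtnorm); here \(\Phi_{p-2}\) is univariate, and the deterministic TVPACK algorithm makes the finite difference of \(\Phi_3\) reliable.

```r
# Numerical check of Plackett's identity for p = 3.
library(mvtnorm)

Sigma <- matrix(c(1, .5, .3,
                  .5, 1, .2,
                  .3, .2, 1), 3, 3)
x <- c(0.5, -0.2, 1.0)

# Right-hand side: phi_2(x1, x2; Sigma_{1:2,1:2}) * Phi_1(x*)
S11 <- Sigma[1:2, 1:2]
S12 <- Sigma[1:2, 3, drop = FALSE]
xstar <- x[3] - drop(t(S12) %*% solve(S11) %*% x[1:2])
Scond <- drop(Sigma[3, 3] - t(S12) %*% solve(S11) %*% S12)
rhs <- dmvnorm(x[1:2], sigma = S11) * pnorm(xstar / sqrt(Scond))

# Left-hand side: finite difference of Phi_3 in Sigma_{12}
eps <- 1e-4
Sp <- Sm <- Sigma
Sp[1, 2] <- Sp[2, 1] <- Sigma[1, 2] + eps
Sm[1, 2] <- Sm[2, 1] <- Sigma[1, 2] - eps
lhs <- (pmvnorm(upper = x, corr = Sp, algorithm = TVPACK()) -
        pmvnorm(upper = x, corr = Sm, algorithm = TVPACK())) / (2 * eps)
c(lhs, rhs)  # should agree
```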

14.2 Elliptical Dependence Sensitivity

14.2.1 Elliptical Distributions

Assume that a random \(p\)-dimensional vector \(\mathbf{Y}_p\) has a multivariate elliptical distribution with density function \[\begin{equation} \begin{array}{c} f_{\mathbf{Y}_p}(\mathbf{y}; \boldsymbol \Sigma, p)= \frac{c_p}{\sqrt{\mathrm{det}(\boldsymbol \Sigma)}} ~f_p\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) . \end{array} \tag{14.2} \end{equation}\] The function \(f_p(\cdot)\) should satisfy \(\int_0^{\infty} y^{p/2 -1} f_p(y) dy < \infty\) and is known as the density generator function. As in Landsman and Valdez (2003), special cases of interest in insurance work include: \[ \small{ \begin{array}{l|ccc}\hline \text{Multivariate} & \text{generator}& \text{derivative } & NRatio\\ \text{Distribution} & f_p(y)& f_p^{\prime}(y)& -\frac{f_p^{\prime}(y)}{f_p(y)}\\ \hline \text{Normal distribution} & e^{-y}& -e^{-y} & 1\\ t-\text{distribution with } & (1+2y/r)^{-(p+r)/2} &-\frac{p+r}{r}(1+2y/r)^{-(p+r+2)/2}&\frac{p+r}{2y+r}\\ ~~~ r \text{ degrees of freedom} \\ \text{Cauchy} & (1+2y)^{-(p+1)/2} &-(p+1) (1+2y)^{-(p+3)/2}&\frac{p+1}{1+2y}\\ \text{Logistic} & \frac{e^{-y}}{(1+e^{-y})^2} & -\frac{e^{-y}(1-e^{-y})}{(1+e^{-y})^3} & \frac{1-e^{-y}}{1+e^{-y}}\\ \text{Exponential power} & \exp(-ry^s) & -rs\exp(-ry^s) y^{s-1} &rs ~y^{s-1}\\ \hline \end{array} } \] For dependence sensitivities, we first look to derivatives of the elliptical density. From equation (14.2), we have \[ {\small \begin{array}{ll} \frac{1}{c_p}&\partial_{\sigma} f_{\mathbf{Y}_p}(\mathbf{y}; \boldsymbol \Sigma, p) \\ &= \partial_{\sigma}\left[\left[\mathrm{det}(\boldsymbol \Sigma)\right]^{-1/2} \right] ~f_p\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) + \left[\mathrm{det}(\boldsymbol \Sigma)\right]^{-1/2} ~\partial_{\sigma} f_p\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) \\ &= \left[ -\frac{1}{2}\left[\mathrm{det}(\boldsymbol \Sigma)\right]^{-3/2} \partial_{\sigma} \mathrm{det}(\boldsymbol \Sigma) \right] ~f_p\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) + \left[\mathrm{det}(\boldsymbol \Sigma)\right]^{-1/2} ~f_p^{\prime}\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) \frac{1}{2}\mathbf{y}^{\prime} \partial_{\sigma} \boldsymbol \Sigma^{-1} \mathbf{y}\\ &= \left[ -\frac{1}{2}\left[\mathrm{det}(\boldsymbol \Sigma)\right]^{-3/2} \mathrm{det}(\boldsymbol \Sigma) ~\text{tr}( \boldsymbol \Sigma^{-1} \partial_{\sigma}\boldsymbol \Sigma) \right] ~f_p\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) \\ & ~~~ \ -\frac{1}{2} \left[\mathrm{det}(\boldsymbol \Sigma)\right]^{-1/2} ~f_p^{\prime}\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) \mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} (\partial_{\sigma} \boldsymbol \Sigma) \boldsymbol \Sigma^{-1} \mathbf{y}\\ &= \frac{1}{2}\left[\mathrm{det}(\boldsymbol \Sigma)\right]^{-1/2} f_p\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) \left\{ -\frac{f_p^{\prime}\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right)}{f_p\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right)} \mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} (\partial_{\sigma} \boldsymbol \Sigma) \boldsymbol \Sigma^{-1} \mathbf{y} -\text{tr}( \boldsymbol \Sigma^{-1} \partial_{\sigma}\boldsymbol \Sigma) \right\} .\\ \end{array} } \] Summarizing, we have \[\begin{equation} \begin{array}{ll} \partial_{\sigma} 
f_{\mathbf{Y}_p}(\mathbf{y}; \boldsymbol \Sigma, p) &= \frac{1}{2}f_{\mathbf{Y}_p}(\mathbf{y}; \boldsymbol \Sigma, p) \left\{ NRatio\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) \mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} (\partial_{\sigma} \boldsymbol \Sigma) \boldsymbol \Sigma^{-1} \mathbf{y} \right. \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left. -\text{tr}( \boldsymbol \Sigma^{-1} \partial_{\sigma}\boldsymbol \Sigma) \right\} ,\\ \end{array} \tag{14.3} \end{equation}\] where \(NRatio(y)= -\frac{f_p^{\prime}(y)}{f_p(y)}\).
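To illustrate equation (14.3), the following sketch (ours) uses the multivariate \(t\) generator from the table above, for which \(NRatio(y) = \frac{p+r}{2y+r}\), and compares the analytic sensitivity with a finite difference of dmvt from the R package mvtnorm.

```r
# Equation (14.3) for the bivariate t distribution, checked against a
# finite difference of dmvt in the off-diagonal element of Sigma.
library(mvtnorm)

p <- 2; r <- 5; rho <- 0.4
Sigma  <- matrix(c(1, rho, rho, 1), p, p)
dSigma <- matrix(c(0, 1, 1, 0), p, p)     # d Sigma / d rho
y <- c(0.8, -0.3)

Sinv   <- solve(Sigma)
u      <- 0.5 * drop(t(y) %*% Sinv %*% y) # argument of the generator
NRatio <- (p + r) / (2 * u + r)           # t-generator ratio from the table
W      <- drop(t(y) %*% Sinv %*% dSigma %*% Sinv %*% y)
lhs <- 0.5 * dmvt(y, sigma = Sigma, df = r, log = FALSE) *
       (NRatio * W - sum(diag(Sinv %*% dSigma)))

eps <- 1e-6
rhs <- (dmvt(y, sigma = Sigma + eps * dSigma, df = r, log = FALSE) -
        dmvt(y, sigma = Sigma - eps * dSigma, df = r, log = FALSE)) / (2 * eps)
c(lhs, rhs)  # should match
```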

14.2.2 Elliptical Copula Dependence Sensitivity

Because copulas are concerned with relationships, we may restrict our considerations to the case where the diagonal elements of \(\boldsymbol \Sigma\) are 1. For elliptical copulas, the marginal distributions are identical, so that \(F_{Y_j} = F_Y\) with density \(f_Y(y) = c_1 f_1(y^2/2)\). With this, the copula distribution function is \(C\left(v_1, \ldots, v_p\right) = F_{\mathbf{Y}_p}[F_Y^{-1}(v_1), \ldots, F_Y^{-1}(v_p)]\) and the corresponding probability density function is \[ \begin{array}{ll} c\left(v_1, \ldots, v_p\right) &= f_{{\bf Y}_p}[F_Y^{-1}(v_1), \ldots, F_Y^{-1}(v_p); \boldsymbol \Sigma, p] \prod_{j=1}^p \frac{1}{f_Y[F_Y^{-1}(v_j)]} \\ &= f_{{\bf Y}_p}[y_1, \ldots, y_p; \boldsymbol \Sigma, p] \prod_{j=1}^p \frac{1}{f_Y(y_j)} ,\\ \end{array} \] where \(y_j = F_Y^{-1}(v_j)\). Here, \(F_Y\) is the distribution function associated with the copula, for example, the normal distribution function for the normal copula or a \(t\)-distribution function (with a specified number of degrees of freedom) for the \(t\)-copula.
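The following sketch (ours) illustrates this construction for a bivariate \(t\)-copula, comparing \(f_{\mathbf{Y}_p}/\prod_j f_Y(y_j)\) with dCopula from the copula package.

```r
# The elliptical-copula density c(v) = f_{Y_p}(y) / prod f_Y(y_j)
# for the t-copula with p = 2.
library(mvtnorm)  # dmvt
library(copula)   # tCopula, dCopula

rho <- 0.5; df <- 4
Sigma <- matrix(c(1, rho, rho, 1), 2, 2)
v <- c(0.3, 0.7)
y <- qt(v, df = df)                       # y_j = F_Y^{-1}(v_j), t marginals

num <- dmvt(y, sigma = Sigma, df = df, log = FALSE)  # f_{Y_p}(y)
den <- prod(dt(y, df = df))                          # prod of f_Y(y_j)
c(num / den, dCopula(v, tCopula(rho, dim = 2, df = df)))  # should agree
```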

With equation (14.3), we have \[\begin{equation} {\small \begin{array}{ll} \partial_{\sigma} &c\left(v_1, \ldots, v_p\right) \\ &= \left\{ \partial_{\sigma} f_{{\bf Y}_p}(y_1, \ldots, y_p; \boldsymbol \Sigma, p) \right\} \prod_{j=1}^p \frac{1}{f_Y(y_j)} \\ &= \left\{ \frac{1}{2} f_{{\bf Y}_p}(y_1, \ldots, y_p; \boldsymbol \Sigma, p) \left[ NRatio\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) \mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} (\partial_{\sigma} \boldsymbol \Sigma) \boldsymbol \Sigma^{-1} \mathbf{y} \right. \right. \\ & \left. \left. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -\text{tr}( \boldsymbol \Sigma^{-1} \partial_{\sigma}\boldsymbol \Sigma) \right] \right\} \prod_{j=1}^p \frac{1}{f_Y(y_j)} \\ &= \frac{1}{2} c\left(v_1, \ldots, v_p\right) \left[ NRatio\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) \mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} (\partial_{\sigma} \boldsymbol \Sigma) \boldsymbol \Sigma^{-1} \mathbf{y} -\text{tr}( \boldsymbol \Sigma^{-1} \partial_{\sigma}\boldsymbol \Sigma)\right] .\\ \end{array} } \tag{14.4} \end{equation}\] This gives an expression for the elliptical copula dependence sensitivity.
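As a quick numerical check of equation (14.4), consider the bivariate Gaussian case, where \(NRatio \equiv 1\); the following sketch (ours, using the copula package) compares the formula with a finite difference of dCopula in \(\rho\).

```r
# Equation (14.4) for a bivariate Gaussian copula (NRatio = 1),
# checked against a finite difference of dCopula in rho.
library(copula)

rho <- 0.4; v <- c(0.3, 0.7)
Sigma  <- matrix(c(1, rho, rho, 1), 2, 2)
dSigma <- matrix(c(0, 1, 1, 0), 2, 2)
y <- qnorm(v)

Sinv <- solve(Sigma)
W    <- drop(t(y) %*% Sinv %*% dSigma %*% Sinv %*% y)
lhs  <- 0.5 * dCopula(v, normalCopula(rho)) * (W - sum(diag(Sinv %*% dSigma)))

eps <- 1e-6
rhs <- (dCopula(v, normalCopula(rho + eps)) -
        dCopula(v, normalCopula(rho - eps))) / (2 * eps)
c(lhs, rhs)  # should match
```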

14.2.3 Portfolio Expectation Dependence Sensitivity using Elliptical Copulas

We are interested in quantities of the form \[ \begin{array}{rl} \mathrm{E}[h(X_1, \ldots, X_p)] &=\mathrm{E}\{h[F_1^{-1}(V_1), \ldots, F_p^{-1}(V_p)]\} \\ &= \int \cdots \int h[F_1^{-1}(v_1), \ldots, F_p^{-1}(v_p)] ~c(v_1, \ldots, v_p) ~dv_1 \cdots dv_p .\\ \end{array} \] Here, the first equality is because \(X_j = F^{-1}_j(V_j)\) in distribution (this is true for continuous, discrete, and hybrid/mixed distributions). Taking a derivative and using equation (14.4) yields the portfolio expectation dependence sensitivity \[ \small{ \begin{array}{rl} \partial_{\sigma} & \mathrm{E}[h(X_1, \ldots, X_p)] \\ &= \int \cdots \int h[F_1^{-1}(v_1), \ldots, F_p^{-1}(v_p)] ~\partial_{\sigma}c(v_1, \ldots, v_p) ~dv_1 \cdots dv_p \\ &= \frac{1}{2} \int \cdots \int h[F_1^{-1}(v_1), \ldots, F_p^{-1}(v_p)] \left[ ~NRatio\left(\frac{1}{2}\mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y} \right) \mathbf{y}^{\prime} \boldsymbol \Sigma^{-1} (\partial_{\sigma} \boldsymbol \Sigma) \boldsymbol \Sigma^{-1} \mathbf{y} \right.\\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \left. -\text{tr}( \boldsymbol \Sigma^{-1} \partial_{\sigma}\boldsymbol \Sigma) \right] c\left(v_1, \ldots, v_p\right) ~dv_1 \cdots dv_p \\ &= \frac{1}{2} ~ \mathrm{E}\left\{ h[F_1^{-1}(V_1), \ldots, F_p^{-1}(V_p)] ~NRatio\left[\frac{1}{2}\mathbf{y}(\mathbf{V})^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y}(\mathbf{V}) \right] \mathbf{y}(\mathbf{V})^{\prime} \boldsymbol \Sigma^{-1} (\partial_{\sigma} \boldsymbol \Sigma) \boldsymbol \Sigma^{-1} \mathbf{y}(\mathbf{V}) \right\} \\ &~~~~~~~~~~ -\frac{1}{2} ~ \text{tr}( \boldsymbol \Sigma^{-1} \partial_{\sigma}\boldsymbol \Sigma)~\mathrm{E}\left\{h[F_1^{-1}(V_1), \ldots, F_p^{-1}(V_p)]\right\} ,\\ \end{array} } \] where \({\bf y}(\mathbf{V}) = [F_Y^{-1}(V_1), \ldots, F_Y^{-1}(V_p)]^{\prime}\). The variables \(V_1, \ldots, V_p\) have uniform marginal distributions and the same copula as \(X_1, \ldots, X_p\).

Using matrix notation, we summarize this as \[ \small{ \begin{array}{rl} \partial_{\sigma} \mathrm{E}[h(\mathbf{X})] &=\frac{1}{2}~ \mathrm{E}\left\{ h[\mathbf{q}(\mathbf{V})] ~NRatio\left[\frac{1}{2}\mathbf{y}(\mathbf{V})^{\prime} \boldsymbol \Sigma^{-1} \mathbf{y}(\mathbf{V}) \right] \mathbf{y}(\mathbf{V})^{\prime} \boldsymbol \Sigma^{-1} (\partial_{\sigma} \boldsymbol \Sigma) \boldsymbol \Sigma^{-1} \mathbf{y}(\mathbf{V}) \right\} \\ &~~~~~~~~~ -\frac{1}{2}~ \text{tr}( \boldsymbol \Sigma^{-1} \partial_{\sigma}\boldsymbol \Sigma)~\mathrm{E}\{h[\mathbf{q}(\mathbf{V})]\} ,\\ \end{array} } \] where \(\mathbf{X} = (X_1, \ldots, X_p)\) and \(\mathbf{q}(\mathbf{V})^{\prime} = [F_1^{-1}(V_1), \ldots, F_p^{-1}(V_p)]\). To promote interpretation, we relabel \({\bf y}(\mathbf{V})\) as \({\bf ns}(\mathbf{V})\), which in the Gaussian case is a vector of “normal scores,” and define the weights \(W_{\sigma}(\mathbf{V})\) \(= {\bf ns}(\mathbf{V})^{\prime} ~\boldsymbol \Sigma^{-1} (\partial_{\sigma} \boldsymbol \Sigma) \boldsymbol \Sigma^{-1} ~{\bf ns}(\mathbf{V})\). With this notation, we have \[ \begin{array}{rl} \partial_{\sigma} \mathrm{E}[h(\mathbf{X})] &=\frac{1}{2}~ \mathrm{E}\left\{ h[\mathbf{q}(\mathbf{V})] ~NRatio\left[\frac{1}{2}\mathbf{ns}(\mathbf{V})^{\prime} \boldsymbol \Sigma^{-1} \mathbf{ns}(\mathbf{V}) \right] W_{\sigma}(\mathbf{V}) \right\} \\ &~~~~~~~~~ -\frac{1}{2}~ \text{tr}( \boldsymbol \Sigma^{-1} \partial_{\sigma}\boldsymbol \Sigma)~\mathrm{E}\{h[\mathbf{q}(\mathbf{V})]\} . \end{array} \] Now, choosing \(h(\mathbf{X})=1\) so that the left-hand side is zero, we have \[ \text{tr}( \boldsymbol \Sigma^{-1} \partial_{\sigma}\boldsymbol \Sigma)=\mathrm{E}\left\{ ~NRatio\left[\frac{1}{2}\mathbf{ns}(\mathbf{V})^{\prime} \boldsymbol \Sigma^{-1} \mathbf{ns}(\mathbf{V}) \right] W_{\sigma}(\mathbf{V}) \right\} . \] Thus, \[\begin{equation} \begin{array}{rl} \partial_{\sigma} \mathrm{E}[h(\mathbf{X})] &=\frac{1}{2}~ \mathrm{Cov}\left\{ h[\mathbf{q}(\mathbf{V})] , ~NRatio\left[\frac{1}{2}\mathbf{ns}(\mathbf{V})^{\prime} \boldsymbol \Sigma^{-1} \mathbf{ns}(\mathbf{V}) \right] W_{\sigma}(\mathbf{V}) \right\} . \\ \end{array} \tag{14.5} \end{equation}\] For example, with a Gaussian copula, for which \(NRatio \equiv 1\), we can write equation (14.5) as \[\begin{equation} \begin{array}{rl} \partial_{\sigma} \mathrm{E}[h(\mathbf{X})] &=\frac{1}{2}~ \mathrm{Cov}\left\{ h[\mathbf{q}(\mathbf{V})] , ~W_{\sigma}(\mathbf{V}) \right\} , \\ \end{array} \tag{14.6} \end{equation}\] where \({\bf ns}(\mathbf{V})^{\prime} = [\Phi^{-1}(V_1), \ldots, \Phi^{-1}(V_p)]\).
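To see equation (14.6) in action, the following Monte Carlo sketch (our example) uses a bivariate Gaussian copula with standard normal marginals and \(h(x_1,x_2)=x_1 x_2\), so that \(\mathrm{E}[h(\mathbf{X})] = \rho\) and the estimated sensitivity should be near 1.

```r
# Monte Carlo evaluation of equation (14.6) for a Gaussian copula.
library(copula)
set.seed(2025)

rho <- 0.4; n <- 1e5
Sigma  <- matrix(c(1, rho, rho, 1), 2, 2)
dSigma <- matrix(c(0, 1, 1, 0), 2, 2)

V  <- rCopula(n, normalCopula(rho))   # uniform scale
ns <- qnorm(V)                        # normal scores; also q(V) here
h  <- ns[, 1] * ns[, 2]               # h(X) = X1 * X2, so E[h] = rho

Sinv <- solve(Sigma)
A <- Sinv %*% dSigma %*% Sinv
W <- rowSums((ns %*% A) * ns)         # W_sigma(V) = ns' A ns, row by row
0.5 * cov(h, W)                       # should be near 1
```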

As another special case, if the marginal distributions \(F_1, \ldots, F_p\) are continuous, then \(F_j(X_j)\) has a uniform distribution and we may write \[\begin{equation} \begin{array}{rl} \partial_{\sigma} \mathrm{E}[h(\mathbf{X})] &= \frac{1}{2} ~ \mathrm{Cov}\left\{ h(\mathbf{X}) , ~NRatio\left[\frac{1}{2}{\bf ns}(\mathbf{X})^{\prime} \boldsymbol \Sigma^{-1} {\bf ns}(\mathbf{X}) \right] W_{\sigma}(\mathbf{X}) \right\} , \end{array} \tag{14.7} \end{equation}\] where \({\bf ns}(\mathbf{X}) = (F_Y^{-1}[F_1(X_1)], \ldots, F_Y^{-1}[F_p(X_p)])^{\prime}\) and \(W_{\sigma}({\bf X})\) \(= {\bf ns}({\bf X})^{\prime} ~\boldsymbol \Sigma^{-1} (\partial_{\sigma} \boldsymbol \Sigma) \boldsymbol \Sigma^{-1} ~{\bf ns}({\bf X})\).