{"id":5933,"date":"2016-12-27T13:05:25","date_gmt":"2016-12-27T19:05:25","guid":{"rendered":"http:\/\/www.ssc.wisc.edu\/~jfrees\/?page_id=5933"},"modified":"2017-01-06T07:07:05","modified_gmt":"2017-01-06T13:07:05","slug":"3-3-methods-for-creating-new-distributions","status":"publish","type":"page","link":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/loss-data-analytics\/chapter-3-modeling-loss-severity\/3-3-methods-for-creating-new-distributions\/","title":{"rendered":"3.3 Methods For Creating New Distributions"},"content":{"rendered":"<div class=\"scbb-content-box scbb-content-box-gray\">In this section we\n<ol>\n<li> understand connections among the distributions;<\/li>\n<li>give insights into when a distribution is preferred when compared to<br \/>\n    alternatives;<\/li>\n<li>provide foundations for creating new distributions.<\/li>\n<\/ol>\n<p><\/p><\/div>\n<h1>3.3.1 Functions of Random Variables and their Distributions<\/h1>\n<p>In Section 3.2 we discussed some elementary known distributions. In this section we discuss means of creating new parametric probability distributions from existing ones. Let $X$ be a continuous random variable with a known probability density function $f_{X}(x)$ and distribution function $F_{X}(x)$. Consider the transformation $Y = g\\left( X \\right)$, where $g(X)$ is a one-to-one transformation defining a new random variable $Y$. We can use the\u00a0distribution function technique, the\u00a0change-of-variable technique\u00a0or the\u00a0moment-generating function technique to find the probability density function of the variable of interest $Y$. In this section we apply the following techniques for creating new families of distributions: (a) multiplication by a constant (b) raising to a power, (c) exponentiation and (d) mixing.<\/p>\n<h2>Multiplication by a Constant<\/h2>\n<p>If claim data show change over time then such transformation can be useful to adjust for inflation. 
If the level of inflation is positive, then claim costs are rising; if it is negative, then costs are falling. To adjust for inflation we multiply the cost $X$ by $1 + r$, where $r$ is the inflation rate (negative inflation is deflation). To account for the impact of currency on claim costs, we can also use such a transformation to apply a currency conversion from a base to a counter currency.<\/p>\n<p>Consider the transformation $Y = cX$, where $c > 0$. Then the distribution function of $Y$ is given by<br \/>\n$$F_{Y}\left( y \right) = \Pr\left( Y \leq y \right) = \Pr\left( cX \leq y \right) = \Pr\left( X \leq \frac{y}{c} \right) = F_{X}\left( \frac{y}{c} \right).$$<br \/>\nHence, the probability density function of interest $f_{Y}(y)$ can be written as<br \/>\n$$f_{Y}\left( y \right) = \frac{1}{c}f_{X}\left( \frac{y}{c} \right).$$<br \/>\nSuppose that $X$ belongs to a certain set of parametric distributions and define a rescaled version $Y = cX$, $c > 0$. If $Y$ is in the same set of distributions, then the distribution is said to be a scale distribution. When a member of a scale distribution is multiplied by a constant $c$ ($c > 0$), the scale parameter for this scale distribution meets two conditions:<\/p>\n<ol>\n<li>The parameter is changed by multiplying by $c$;<\/li>\n<li>All other parameters remain unchanged.<\/li>\n<\/ol>\n<p><strong>Example 3.7 (SOA)<\/strong> The aggregate losses of Eiffel Auto Insurance are denoted in Euro currency and follow a Lognormal distribution with $\mu = 8$ and $\sigma = 2$. Given that 1 euro $=$ 1.3 dollars, find the lognormal parameters that describe the distribution of Eiffel&#8217;s losses in dollars.<br \/>\n<a id=\"displayText37\" href=\"javascript:toggle('toggleText37','displayText37');\"><i>Solution<\/i><\/a><\/p>\n<div id=\"toggleText37\" style=\"display: none\">\n<hr \/>\n<p>Let $X$ and $Y$ denote the aggregate losses of Eiffel Auto Insurance in euro currency and dollars, respectively. 
Then, $Y = 1.3X$.<br \/>\n$$F_{Y}\left( y \right) = \Pr\left( Y \leq y \right) = \Pr\left( 1.3X \leq y \right) = \Pr\left( X \leq \frac{y}{1.3} \right) = F_{X}\left( \frac{y}{1.3} \right).$$<\/p>\n<p>$X$ follows a lognormal distribution with parameters $\mu = 8$ and $\sigma = 2$. The probability density function of $X$ is given by<br \/>\n$$f_{X}\left( x \right) = \frac{1}{x \sigma \sqrt{2\pi}}\exp \left\{- \frac{1}{2}\left( \frac{\ln x - \mu}{\sigma} \right)^{2}\right\} \ \ \ \text{for } x > 0.$$<br \/>\nThen, the probability density function of interest $f_{Y}(y)$ is<br \/>\n$$f_{Y}\left( y \right) = \frac{1}{1.3}f_{X}\left( \frac{y}{1.3} \right) \\\\<br \/>\n= \frac{1}{1.3}\frac{1.3}{y \sigma \sqrt{2\pi}}\exp \left\{- \frac{1}{2}\left( \frac{\ln\left( y\/1.3 \right) - \mu}{\sigma} \right)^{2}\right\} \\\\<br \/>\n= \frac{1}{y \sigma\sqrt{2\pi}}\exp \left\{- \frac{1}{2}\left( \frac{\ln y - \left( \ln 1.3 + \mu \right)}{\sigma} \right)^{2}\right\}.$$<br \/>\nThen $Y$ follows a lognormal distribution with parameters $\ln 1.3 + \mu = 8.26$ and $\sigma = 2.00$. 
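The change of scale can also be verified by simulation. The following sketch is a minimal numerical check; the sample size and seed are arbitrary choices, not part of the example. It simulates euro losses and confirms that the logarithm of the dollar losses behaves like a normal variable with mean $\ln 1.3 + 8$ and unchanged standard deviation 2.

```python
import math
import random
import statistics

# Numerical check of Example 3.7 (sketch; sample size and seed are arbitrary).
# If X ~ Lognormal(mu = 8, sigma = 2) in euros and Y = 1.3 X in dollars,
# then ln Y should be approximately Normal(ln 1.3 + 8, 2),
# i.e. Y ~ Lognormal(8.26, 2).
random.seed(12345)
mu, sigma, c = 8.0, 2.0, 1.3

x = [random.lognormvariate(mu, sigma) for _ in range(200_000)]
log_y = [math.log(c * xi) for xi in x]

print(round(statistics.mean(log_y), 2))   # close to ln(1.3) + 8 = 8.26
print(round(statistics.stdev(log_y), 2))  # close to 2.00
```

Only the location parameter of $\ln Y$ shifts by $\ln 1.3$; the spread is untouched, which is exactly the scale-distribution property derived above.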
If we let $\\mu = ln(m)$ then it can be easily seen that $m$=$e^{\\mu}$ is the scale parameter which was multiplied by 1.3 while $\\sigma$ is the shape parameter that remained unchanged.<\/p>\n<hr \/>\n<\/div>\n<p><strong>Example 3.8<\/strong> Demonstrate that the gamma distribution is a scale distribution.<br \/>\n<a id=\"displayText38\" href=\"javascript:toggle('toggleText38','displayText38');\"><i>Solution<\/i><\/a><\/p>\n<div id=\"toggleText38\" style=\"display: none\">\n<hr \/>\n<p>Let $X\\sim Ga(\\alpha,\\theta)$ and $Y = cX$, then<br \/>\n$$f_{Y}\\left( y \\right) = \\frac{1}{c}f_{X}\\left( \\frac{y}{c} \\right) = \\frac{\\left( \\frac{y}{\\text{c\u03b8}} \\right)^{\\alpha}}{y\\Gamma\\left( \\alpha \\right)}\\exp &#8211; \\left( \\frac{y}{\\text{c\u03b8}} \\right)  .$$<br \/>\nWe can see that $Y\\sim Ga(\\alpha,c\\theta)$ indicating that gamma is a scale distribution and $\\theta$ is a scale parameter.<\/p>\n<hr \/>\n<\/div>\n<h2>Raising to a Power<\/h2>\n<p>In the previous section we have talked about the flexibility of the Weibull distribution in fitting reliability data. Looking to the origins of the Weibull distribution, we recognize that the Weibull is a power transformation of the exponential distribution. 
This is an application of another type of transformation, which involves raising the random variable to a power.<\/p>\n<p>Consider the transformation $Y = X^{\tau}$, where $\tau > 0$. Then the distribution function of $Y$ is given by<br \/>\n$$F_{Y}\left( y \right) = \Pr\left( Y \leq y \right) = \Pr\left( X^{\tau} \leq y \right) = \Pr\left( X \leq y^{1\/ \tau} \right) = F_{X}\left( y^{1\/ \tau} \right).$$<\/p>\n<p>Hence, the probability density function of interest $f_{Y}(y)$ can be written as<br \/>\n$$f_{Y}(y) = \frac{1}{\tau} y^{1\/ \tau - 1} f_{X}\left( y^{1\/ \tau} \right).$$<br \/>\nOn the other hand, if $\tau \lt 0$, then the distribution function of $Y$ is given by<br \/>\n$$F_{Y}\left( y \right) = \Pr\left( Y \leq y \right) = \Pr\left( X^{\tau} \leq y \right) = \Pr\left( X \geq y^{1\/ \tau} \right) = 1 - F_{X}\left( y^{1\/ \tau} \right), $$<br \/>\nand<br \/>\n$$f_{Y}(y) = \left| \frac{1}{\tau} \right| y^{1\/ \tau - 1} f_{X}\left( y^{1\/ \tau} \right).$$<\/p>\n<p><strong>Example 3.9<\/strong> We assume that $X$ follows the exponential distribution with mean $\theta$ and consider the transformed variable $Y = X^{\tau}$. 
Show that $Y$ follows the Weibull distribution when $\tau$ is positive, and determine the parameters of the Weibull distribution.<br \/>\n<a id=\"displayText39\" href=\"javascript:toggle('toggleText39','displayText39');\"><i>Solution<\/i><\/a><\/p>\n<div id=\"toggleText39\" style=\"display: none\">\n<hr \/>\n<p>$$f_{X}(x) = \frac{1}{\theta}e^{- \frac{x}{\theta}}, \quad x > 0.$$<br \/>\n$$f_{Y}\left( y \right) = \frac{1}{\tau} y^{\frac{1}{\tau} - 1} f_{X}\left( y^{\frac{1}{\tau}} \right) \\\\<br \/>\n= \frac{1}{\tau\theta}y^{\frac{1}{\tau} - 1}e^{- \frac{y^{\frac{1}{\tau}}}{\theta}} = \frac{\alpha}{\beta}\left( \frac{y}{\beta} \right)^{\alpha - 1}e^{- \left( \frac{y}{\beta} \right)^{\alpha}},$$<br \/>\nwhere $\alpha = \frac{1}{\tau}$ and $\beta = \theta^{\tau}$. Then, $Y$ follows the Weibull distribution with shape parameter $\alpha$ and scale parameter $\beta$.<\/p>\n<hr \/>\n<\/div>\n<h2>Exponentiation<\/h2>\n<p>The normal distribution is a very popular model for a wide range of applications and, when the sample size is large, it can serve as an approximate distribution for other models. If the random variable $X$ has a normal distribution with mean $\mu$ and variance $\sigma^{2}$, then $Y = e^{X}$ has a lognormal distribution with parameters $\mu$ and $\sigma^{2}$. The lognormal random variable has a lower bound of zero, is positively skewed and has a long right tail. A lognormal distribution is commonly used to describe distributions of financial assets such as stock prices. It is also used in fitting claim amounts for automobile as well as health insurance. 
This is an example of another type of transformation, which involves exponentiation.<\/p>\n<p>Consider the transformation $Y = e^{X}$. Then the distribution function of $Y$ is given by<br \/>\n$$F_{Y}\left( y \right) = \Pr\left( Y \leq y \right) = \Pr\left( e^{X} \leq y \right) = \Pr\left( X \leq \ln y \right) = F_{X}\left( \ln y \right).$$<br \/>\nHence, the probability density function of interest $f_{Y}(y)$ can be written as<br \/>\n$$f_{Y}(y) = \frac{1}{y}f_{X}\left( \ln y \right).$$<\/p>\n<p><strong>Example 3.10 (SOA)<\/strong> $X$ has a uniform distribution on the interval $(0,\ c)$ and $Y = e^{X}$. Find the distribution of $Y$.<br \/>\n<a id=\"displayText310\" href=\"javascript:toggle('toggleText310','displayText310');\"><i>Solution<\/i><\/a><\/p>\n<div id=\"toggleText310\" style=\"display: none\">\n<hr \/>\n<p>$$F_{Y}\left( y \right) = \Pr\left( Y \leq y \right) = \Pr\left( e^{X} \leq y \right) = \Pr\left( X \leq \ln y \right) = F_{X}\left( \ln y \right).$$<br \/>\nThen,<br \/>\n$$f_{Y}\left( y \right) = \frac{1}{y}f_{X}\left(\ln y \right) = \frac{1}{cy}. $$<br \/>\nSince $0 \lt x \lt c$, it follows that $1 \lt y \lt e^{c}$.<\/p>\n<hr \/>\n<\/div>\n<h1>3.3.2 Finite Mixtures<\/h1>\n<p>Mixture distributions represent a useful way of modelling data that are drawn from a heterogeneous population. This parent population can be thought of as divided into multiple subpopulations with distinct distributions.<\/p>\n<h2>Two-point mixture<\/h2>\n<p>If the underlying phenomenon is diverse and can actually be described as two phenomena representing two subpopulations with different modes, we can construct the two-point mixture random variable $X$. 
Given random variables $X_{1}$ and $X_{2}$, with probability density functions $f_{X_{1}}\left( x \right)$ and $f_{X_{2}}\left( x \right)$ respectively, the probability density function of $X$ is the weighted average of the component probability density functions $f_{X_{1}}\left( x \right)$ and $f_{X_{2}}\left( x \right)$. The probability density function and distribution function of $X$ are given by<br \/>\n$$f_{X}\left( x \right) = af_{X_{1}}\left( x \right) + \left( 1 - a \right)f_{X_{2}}\left( x \right),$$<br \/>\nand<br \/>\n$$F_{X}\left( x \right) = aF_{X_{1}}\left( x \right) + \left( 1 - a \right)F_{X_{2}}\left( x \right),$$<\/p>\n<p>for $0 \lt a \lt 1$, where the mixing parameters $a$ and $(1 - a)$ represent the proportions of data points that fall under each of the two subpopulations respectively. This weighted average can be applied to a number of other distribution-related quantities. The <em>k<\/em>-th moment and moment generating function of $X$ are given by<br \/>\n$$E\left( X^{k} \right) = aE\left( X_{1}^{k} \right) + \left( 1 - a \right)E\left( X_{2}^{k} \right),$$<br \/>\nand<br \/>\n$$M_{X}\left( t \right) = aM_{X_{1}}\left( t \right) + \left( 1 - a \right)M_{X_{2}}\left( t \right),$$ respectively.<\/p>\n<p><strong>Example 3.11 (SOA)<\/strong> The distribution of the random variable $X$ is an equally weighted mixture of two Poisson distributions with parameters $\lambda_{1}$ and $\lambda_{2}$ respectively. The mean and variance of $X$ are 4 and 13, respectively. 
Determine $\\Pr\\left( X > 2 \\right)$.<br \/>\n<a id=\"displayText311\" href=\"javascript:toggle('toggleText311','displayText311');\"><i>Solution<\/i><\/a><\/p>\n<div id=\"toggleText311\" style=\"display: none\">\n<hr \/>\n<p>$$E\\left( X \\right) = 0.5\\lambda_{1} + 0.5\\lambda_{2} = 4$$<\/p>\n<p>$$E\\left( X^{2} \\right) = 0.5\\left( \\lambda_{1} + \\lambda_{1}^{2} \\right) + 0.5\\left( \\lambda_{2} + \\lambda_{2}^{2} \\right) = 13 + 16$$<\/p>\n<p>Simplifying the two equations we get $\\lambda_{1} + \\lambda_{2} = 8$ and $\\lambda_{1}^{2} + \\lambda_{2}^{2} = 50$. Then, the parameters of the two Poisson distributions are 1 and 7.<br \/>\n$$\\Pr\\left( X > 2 \\right) = 0.5\\Pr\\left( X_{1} > 2 \\right) + 0.5\\Pr\\left( X_{2} > 2 \\right) = 0.05$$<\/p>\n<hr \/>\n<\/div>\n<h2><em>k<\/em>-point mixture<\/h2>\n<p>In case of finite mixture distributions, the random variable of interest $X$ has a probability $p_{i}$ of being drawn from homogeneous subpopulation $i$, where $i = 1,2,\\ldots,k$ and $k$ is the initially specified number of subpopulations in our mixture. The mixing parameter $p_{i}$ represents the proportion of observations from subpopulation $i$. Consider the random variable $X$ generated from $k$ distinct subpopulations, where subpopulation $i$ is modeled by the continuous distribution $f_{X_{i}}\\left( x \\right)$. The probability distribution of $X$ is given by<br \/>\n$$f_{X}\\left( x \\right) = \\sum_{i = 1}^{k}{p_{i}f_{X_{i}}\\left( x \\right)},$$<br \/>\nwhere $0 \\lt p_{i} \\lt 1$ and $\\sum_{i = 1}^{k} p_{i} = 1$.<\/p>\n<p>This model is often referred to as a <em>finite mixture<\/em> or a $k$ point mixture. 
The distribution function, $r$-th moment, and moment generating function of the $k$-point mixture are given by<\/p>\n<p>$$F_{X}\left( x \right) = \sum_{i = 1}^{k}{p_{i}F_{X_{i}}\left( x \right)},$$<br \/>\n$$E\left( X^{r} \right) = \sum_{i = 1}^{k}{p_{i}E\left( X_{i}^{r} \right)}, \text{and}$$<br \/>\n$$M_{X}\left( t \right) = \sum_{i = 1}^{k}{p_{i}M_{X_{i}}\left( t \right)},$$ respectively.<\/p>\n<p><strong>Example 3.12 (SOA)<\/strong> $Y_{1}$ is a mixture of $X_{1}$ and $X_{2}$ with mixing weights $a$ and $(1 - a)$. $Y_{2}$ is a mixture of $X_{3}$ and $X_{4}$ with mixing weights $b$ and $(1 - b)$. $Z$ is a mixture of $Y_{1}$ and $Y_{2}$ with mixing weights $c$ and $(1 - c)$.<\/p>\n<p>Show that $Z$ is a mixture of $X_{1}$, $X_{2}$, $X_{3}$ and $X_{4}$, and find the mixing weights.<br \/>\n<a id=\"displayText312\" href=\"javascript:toggle('toggleText312','displayText312');\"><i>Solution<\/i><\/a><\/p>\n<div id=\"toggleText312\" style=\"display: none\">\n<hr \/>\n<p>$$f_{Y_{1}}\left( x \right) = af_{X_{1}}\left( x \right) + \left( 1 - a \right)f_{X_{2}}\left( x \right)$$<\/p>\n<p>$$f_{Y_{2}}\left( x \right) = bf_{X_{3}}\left( x \right) + \left( 1 - b \right)f_{X_{4}}\left( x \right)$$<\/p>\n<p>$$f_{Z}\left( x \right) = cf_{Y_{1}}\left( x \right) + \left( 1 - c \right)f_{Y_{2}}\left( x \right)$$<\/p>\n<p>$$f_{Z}\left( x \right) = c\left\lbrack af_{X_{1}}\left( x \right) + \left( 1 - a \right)f_{X_{2}}\left( x \right) \right\rbrack + \left( 1 - c \right)\left\lbrack bf_{X_{3}}\left( x \right) + \left( 1 - b \right)f_{X_{4}}\left( x \right) \right\rbrack$$<\/p>\n<p>$= caf_{X_{1}}\left( x \right) + c\left( 1 - a \right)f_{X_{2}}\left( x \right) + \left( 1 - c \right)bf_{X_{3}}\left( x \right) + (1 - c)\left( 1 - b \right)f_{X_{4}}\left( x \right)$.<\/p>\n<p>Then, $Z$ is a mixture of $X_{1}$, $X_{2}$, $X_{3}$ and 
$X_{4}$, with mixing weights $ca$, $c\left( 1 - a \right)$, $\left( 1 - c \right)b$ and $(1 - c)\left( 1 - b \right)$.<\/p>\n<hr \/>\n<\/div>\n<h1>3.3.3 Continuous Mixtures<\/h1>\n<p>A mixture with a very large number of subpopulations ($k$ goes to infinity) is often referred to as a continuous mixture. In a continuous mixture, subpopulations are not distinguished by a discrete mixing parameter but by a continuous variable $\theta$, whose density $g\left( \theta \right)$ plays the role of the mixing weights $p_{i}$ in the finite mixture. Consider the random variable $X$ with a distribution depending on a parameter $\theta$, where $\theta$ itself is a continuous random variable. This description yields the following model for $X$<br \/>\n$$f_{X}\left( x \right) = \int_{0}^{\infty}{f_{X}\left( x\left| \theta \right.\  \right)g\left( \theta \right)} d \theta ,$$ where $f_{X}\left( x\left| \theta \right.\  \right)$ is the conditional distribution of $X$ at a particular value of $\theta$ and $g\left( \theta \right)$ is the distribution placed on the unknown parameter $\theta$, known as the prior distribution of $\theta$ (the prior information or expert opinion to be used in the analysis).<\/p>\n<p>The distribution function, $k$-th moment, and moment generating function of the continuous mixture are given by<br \/>\n$$F_{X}\left( x \right) = \int_{-\infty}^{\infty}{F_{X}\left( x\left| \theta \right.\  \right)g\left( \theta \right)} d \theta,$$<br \/>\n$$E\left( X^{k} \right) = \int_{-\infty}^{\infty}{E\left( X^{k}\left| \theta \right.\  \right)g\left( \theta \right)}d \theta,$$<br \/>\n$$M_{X}\left( t \right) = E\left( e^{tX} \right) = \int_{-\infty}^{\infty}{E\left( e^{tX}\left| \theta \right.\  \right)g\left( \theta \right)}d \theta, $$ respectively.<\/p>\n<p>The $k$-th moments of the mixture distribution can be rewritten as<br \/>\n$$E\left( X^{k} \right) = 
\\int_{-\\infty}^{\\infty}{E\\left( X^{k}\\left| \\theta \\right.\\  \\right)g\\left( \\theta \\right)}d\\theta = E\\left\\lbrack E\\left( X^{k}\\left| \\theta \\right.\\  \\right) \\right\\rbrack .$$<\/p>\n<p>In particular the mean and variance of $X$ are given by<br \/>\n$$E\\left( X \\right) = E\\left\\lbrack E\\left( X\\left| \\theta \\right.\\  \\right) \\right\\rbrack$$<br \/>\nand<br \/>\n$$Var\\left( X \\right) = E\\left\\lbrack Var\\left( X\\left| \\theta \\right.\\  \\right) \\right\\rbrack + Var\\left\\lbrack E\\left( X\\left| \\theta \\right.\\  \\right) \\right\\rbrack .$$<\/p>\n<p><strong>Example 3.13 (SOA)<\/strong> $X$ has a binomial distribution with a mean of $100q$ and a variance of $100q\\left( 1 &#8211; q \\right)$ and $q$ has a beta distribution with parameters $a = 3$ and $b = 2$. Find the unconditional mean and variance of $X$.<br \/>\n<a id=\"displayText313\" href=\"javascript:toggle('toggleText313','displayText313');\"><i>Solution<\/i><\/a><\/p>\n<div id=\"toggleText313\" style=\"display: none\">\n<hr \/>\n<p>$E\\left( q \\right) = \\frac{a}{a + b} = \\frac{3}{5}$ and<br \/>\n$E\\left( q^{2} \\right) = \\frac{a\\left( a + 1 \\right)}{\\left( a + b \\right)\\left( a + b + 1 \\right)} = \\frac{2}{5}$.<\/p>\n<p>$E\\left( X \\right) = E\\left\\lbrack E\\left( X\\left| q \\right.\\  \\right) \\right\\rbrack = E\\left( 100q \\right) = 100E\\left( q \\right) = 60$,<\/p>\n<p>$$Var\\left( X \\right) = E\\left\\lbrack Var\\left( X\\left| q \\right.\\  \\right) \\right\\rbrack + Var\\left\\lbrack E\\left( X\\left| q \\right.\\  \\right) \\right\\rbrack = E\\left\\lbrack 100q\\left( 1 &#8211; q \\right) \\right\\rbrack + Var\\left( 100q \\right)$$<\/p>\n<p>$= 100E\\left( q \\right) &#8211; 100E\\left( q^{2} \\right) + 100^{2}V\\left( q \\right) = 420$.<\/p>\n<hr \/>\n<\/div>\n<p><strong>Exercise 3.14 (SOA)<\/strong> Claim sizes, $X$, are uniform on for each policyholder. varies by policyholder according to an exponential distribution with mean 5. 
Find the unconditional distribution, mean and variance of $X$.<br \/>\n<a id=\"displayText314\" href=\"javascript:toggle('toggleText314','displayText314');\"><i>Solution<\/i><\/a><\/p>\n<div id=\"toggleText314\" style=\"display: none\">\n<hr \/>\n<p>The conditional distribution of $X$ is $f_{X}\left( \left. \ x \right|\theta \right) = \frac{1}{10}$ for $\theta \lt x \lt \theta + 10$.<\/p>\n<p>The prior distribution of $\theta$ is $g\left( \theta \right) = \frac{1}{5}e^{- \frac{\theta}{5}}$ for $0 \lt \theta \lt \infty$.<\/p>\n<p>The conditional mean and variance of $X$ are given by<br \/>\n$$E\left( \left. \ X \right|\theta \right) = \frac{\theta + \theta + 10}{2} = \theta + 5$$<br \/>\nand<br \/>\n$$Var\left( \left. \ X \right|\theta \right) = \frac{\left\lbrack \left( \theta + 10 \right) - \theta \right\rbrack^{2}}{12} = \frac{100}{12}, $$<br \/>\nrespectively.<\/p>\n<p>Hence, the unconditional mean and variance of $X$ are given by<br \/>\n$$E\left( X \right) = E\left\lbrack E\left( X\left| \theta \right.\  \right) \right\rbrack = E\left( \theta + 5 \right) = E\left( \theta \right) + 5 = 5 + 5 = 10,$$ and<br \/>\n$$Var\left( X \right) = E\left\lbrack Var\left( X\left| \theta \right.\  \right) \right\rbrack + Var\left\lbrack E\left( X\left| \theta \right.\  \right) \right\rbrack \\\\<br \/>\n= E\left( \frac{100}{12} \right) + Var\left( \theta + 5 \right) = 8.33 + Var\left( \theta \right) = 8.33 + 25 = 33.33, $$<br \/>\nsince $\theta$ is exponential with mean 5 so that $Var\left( \theta \right) = 5^{2} = 25$. The unconditional distribution of $X$ is<br \/>\n$$f_{X}\left( x \right) = \int{f_{X}\left( \left. \ x \right|\theta \right)g\left( \theta \right) d\theta} .$$<\/p>\n<figure class=\"wp-caption aligncenter\" style=\"max-width: 300px;\"><a href=\"http:\/\/www.ssc.wisc.edu\/~jfrees\/wp-content\/uploads\/2016\/12\/Fig3Exer.png\"><img decoding=\"async\" loading=\"lazy\" src=\"http:\/\/www.ssc.wisc.edu\/~jfrees\/wp-content\/uploads\/2016\/12\/Fig3Exer-300x189.png\" alt=\"fig3exer\" width=\"300\" height=\"189\" class=\"alignnone size-medium wp-image-5985\" srcset=\"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-content\/uploads\/2016\/12\/Fig3Exer-300x189.png 300w, https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-content\/uploads\/2016\/12\/Fig3Exer.png 544w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/a><figcaption class=\"wp-caption-text\"><br \/><\/figcaption><\/figure>\n<p>$$f_{X}\left( x \right) = \left\{ \begin{matrix}<br \/>\n\int_{0}^{x}{\frac{1}{50}e^{- \frac{\theta}{5}}d\theta} = \frac{1}{10}\left( 1 - e^{- \frac{x}{5}} \right) &#038; 0 \leq x \leq 10, \\\\<br \/>\n\int_{x - 10}^{x}{\frac{1}{50}e^{- \frac{\theta}{5}}d\theta} = \frac{1}{10}\left( e^{- \frac{\left( x - 10 \right)}{5}} - e^{- \frac{x}{5}} \right) &#038; 10 \lt x \lt \infty. 
\\\\<br \/>\n\\end{matrix} \\right.\\ $$<\/p>\n<hr \/>\n<\/div>\n<p><div class=\"alignleft\"><a href=\"https:\/\/users.ssc.wisc.edu\/~ewfrees\/loss-data-analytics\/chapter-3-modeling-loss-severity\/3-2-2-continuous-distributions-for-modeling-loss-severity\/\" title=\"3.2 Continuous Distributions for Modeling Loss Severity\">&#9668 Previous page<\/a><\/div><div class=\"alignright\"><a href=\"https:\/\/users.ssc.wisc.edu\/~ewfrees\/loss-data-analytics\/chapter-3-modeling-loss-severity\/coverage-modifications\/\" title=\"3.4 Coverage Modifications\">Next page &#9658<\/a><\/div><\/p>\n","protected":false},"excerpt":{"rendered":"<p>3.3.1 Functions of Random Variables and their Distributions In Section 3.2 we discussed some elementary known distributions. In this section we discuss means of creating new parametric probability distributions from existing ones. Let $X$ be &hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":5910,"menu_order":3,"comment_status":"closed","ping_status":"closed","template":"","meta":{"jetpack_post_was_ever_published":false},"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/P8cLPd-1xH","acf":[],"_links":{"self":[{"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/pages\/5933"}],"collection":[{"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/comments?post=5933"}],"version-history":[{"count":19,"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/pages\/5933\/revisions"}],"predecessor-version":[{"id":6136,"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/pages\/5933\/revisions\/6136"}],"up":[{"embeddable":true,"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\
/wp-json\/wp\/v2\/pages\/5910"}],"wp:attachment":[{"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/media?parent=5933"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}