Introducing the Lognormal Distribution

This post introduces the lognormal distribution and discusses some of its basic properties. The lognormal distribution arises from the normal distribution through exponentiation, and the basic properties discussed here are derived from those of the normal distribution. The normal distribution is applicable in many situations, but its density curve is a symmetric bell-shaped curve and is thus not appropriate for phenomena that are skewed to the right. In such situations, the lognormal distribution can be a good alternative to the normal distribution.

________________________________________________________________________

Defining the distribution

In this post, the notation \log refers to the natural log function, i.e., logarithm to the base e=2.718281828459 \cdots.

A random variable Y is said to follow a lognormal distribution if \log(Y) follows a normal distribution. A lognormal distribution has two parameters \mu and \sigma, which are the mean and standard deviation of the normal random variable \log(Y). To be more precise, the definition is restated as follows:

    A random variable Y is said to follow a lognormal distribution with parameters \mu and \sigma if \log(Y) follows a normal distribution with mean \mu and standard deviation \sigma.

Many useful probability distributions are transformations of other known distributions. The above definition shows that a normal distribution is the transformation of a lognormal distribution under the natural logarithm: start with a lognormal distribution, and taking the natural log of it gives a normal distribution. The other direction is actually more informative, i.e., a lognormal distribution is the transformation of a normal distribution by the exponential function: start with a normal random variable X, and exponentiating it gives a lognormal random variable, i.e., Y=e^{X} is lognormal. The following summarizes these two transformations.

    Y \text{ is lognormal}  \longrightarrow X=\log(Y) \text{ is normal}

    X \text{ is normal} \longrightarrow  Y=e^{X} \text{ is lognormal}

Since the exponential function gives positive values, the lognormal distribution always takes on positive real values. The following diagram shows the probability density functions of the standard normal distribution and the corresponding lognormal distribution. Recall that the standard normal distribution is the normal distribution with mean 0 and standard deviation 1.

    Figure 1 – normal and lognormal density curves

In Figure 1, the standard normal density curve is a symmetric bell-shaped curve, with mean and median located at x = 0. The standard lognormal density curve (called standard since it is derived from the standard normal distribution) lies entirely over the positive x-axis. This is because the exponential function always gives positive values regardless of the sign of the argument. The lognormal density curve in Figure 1 is not symmetric; it is a uni-modal curve skewed to the right. All the standard normal values on the negative x-axis, when exponentiated, fall in the interval (0, 1). Thus in Figure 1, the lower half of the lognormal probabilities lie in the interval from x = 0 to x = 1 (i.e., the median of this lognormal distribution is x = 1). The other half of the lognormal probabilities lie in the interval (1, \infty). Such a lopsided assignment of probabilities shows that the lognormal distribution is positively skewed (skewed to the right).

In the above paragraph, the lower half of the normal distribution on (-\infty,0) is matched with the lognormal distribution on the interval (0, 1). Such interval matching can tell us a great deal about the lognormal distribution. For example, about 75% of the standard normal distributional values lie below x = 0.67. Thus in Figure 1, about 75% of the lognormal probabilities lie in the interval (0, 1.95), where e^{0.67} \approx 1.95. Another example: what is the probability that the lognormal distribution in Figure 1 lies between 1 and 3.5? The matching normal interval is (0, 1.25), where \log(3.5) \approx 1.25. The normal probability in this interval is 0.8944 – 0.5 = 0.3944. Thus if a value is randomly generated from the standard lognormal distribution, there is a 39.44% chance that it is between 1 and 3.5.

The interval matching idea is very useful for computing lognormal probabilities (e.g. cumulative distribution function) and for finding lognormal percentiles. This idea is discussed further below to make it work for any lognormal distribution, not just the standard lognormal distribution.
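The interval matching idea is also easy to carry out with software. The following is a minimal sketch in Python using scipy.stats.norm; it reproduces the 39.44% example above (the small difference comes from using the exact z-value rather than the rounded 1.25).

    from scipy.stats import norm
    import numpy as np

    # P(1 <= Y <= 3.5) for the standard lognormal Y = e^X, X standard normal.
    # Match the lognormal interval (1, 3.5) with the normal interval (log 1, log 3.5).
    prob = norm.cdf(np.log(3.5)) - norm.cdf(np.log(1))
    print(prob)  # approximately 0.3949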

Though the lognormal distribution is a skewed distribution, some lognormal distributions are less skewed than others. Lognormal distributions with a larger value of the parameter \sigma tend to be more skewed. The following diagram of three lognormal density curves demonstrates this point. Note that the curve with the small \sigma of 0.25 resembles a bell curve.

    Figure 2 – three lognormal density curves
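For readers who want to reproduce curves like those in Figure 2, here is a short matplotlib sketch. Only the \sigma = 0.25 curve is named in the text; the other two \sigma values (and \mu = 0 throughout) are assumptions for illustration.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import lognorm

    y = np.linspace(0.01, 5, 500)
    mu = 0.0  # assumed location parameter for all three curves
    for sigma in (0.25, 0.5, 1.0):  # 0.5 and 1.0 are assumed values
        # In scipy's parametrization, s is the sigma parameter and scale = e^mu.
        plt.plot(y, lognorm.pdf(y, s=sigma, scale=np.exp(mu)), label=f"sigma = {sigma}")
    plt.legend()
    plt.title("Lognormal density curves")
    plt.show()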

________________________________________________________________________

How to compute lognormal probabilities and percentiles

Let Y be a random variable that follows a lognormal distribution with parameters \mu and \sigma. Then the related normal random variable is X=\log(Y), which has mean \mu and standard deviation \sigma. If we raise e to X, we get back the lognormal Y.

Continuing the interval matching idea, the lognormal interval Y \le y will match with the normal interval X \le \log(y). Both intervals receive the same probability in their respective distributions. The following states this more clearly.

    \displaystyle P\biggl(Y \le y\biggr)=P\biggl(X=\log(Y) \le \log(y)\biggr) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)

On the other hand, the normal interval X \le x will match with the lognormal interval Y \le e^x. The same probability is assigned to both intervals in their respective distributions. This idea is stated as follows:

    \displaystyle P\biggl(X \le x\biggr)=P\biggl(Y=e^X \le e^x\biggr) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)

The idea of (1) gives the cumulative distribution function of the lognormal distribution (argument y), which is evaluated as the CDF of the corresponding normal distribution at \log(y). One obvious application of (2) is that it gives an easy way to find percentiles for the lognormal distribution. It is relatively easy to find the corresponding percentile of the normal distribution. Then the lognormal percentile is e raised to the corresponding percentile of the normal distribution. For example, the median of the normal distribution is at the mean \mu. Then the median of the lognormal distribution is e^{\mu}.

The calculations in both (1) and (2) involve finding normal probabilities, which can be obtained using software or a table of probability values of the standard normal distribution. To use the table approach, each normal CDF is converted to the standard normal CDF as follows:

    \displaystyle P\biggl(X \le x \biggr)=P\biggl(\frac{X-\mu}{\sigma} \le \frac{x-\mu}{\sigma} \biggr)=\Phi(z) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)

where z is the z-score which is the ratio (x-\mu)/\sigma and \Phi(z) is the cumulative distribution function of the standard normal distribution, which can be looked up from a table based on the z-score. In light of this, (1) can be expressed as follows:

    \displaystyle P\biggl(Y \le y\biggr)=\Phi\biggl(\frac{\log(y)-\mu}{\sigma} \biggr) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4)
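Formula (4) translates directly into code. The following is a minimal sketch in Python (the helper name lognormal_cdf is mine, not a standard function):

    from scipy.stats import norm
    import numpy as np

    def lognormal_cdf(y, mu, sigma):
        # P(Y <= y) per formula (4): the standard normal CDF at (log(y) - mu) / sigma
        return norm.cdf((np.log(y) - mu) / sigma)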

Here is a quick example to demonstrate how this works.

Example 1
If Y is lognormally distributed with parameters \mu=2.5 and \sigma=1.5,

  • what is the probability P(1.9 \le Y \le 31.34)?
  • what is the 95th percentile of this lognormal distribution?

The first answer is P(1.9 \le Y \le 31.34)=P(Y \le 31.34)-P(Y \le 1.9), which is calculated as follows:

    \displaystyle P(Y \le 31.34)=\Phi \biggl( \frac{\log(31.34)-2.5}{1.5}\biggr)=\Phi(0.63)=0.7357

    \displaystyle P(Y \le 1.9)=\Phi \biggl( \frac{\log(1.9)-2.5}{1.5}\biggr)=\Phi(-1.24)=1-0.8925=0.1075

    \displaystyle \begin{aligned} P(1.9 \le Y \le 31.34)&=P(Y \le 31.34)-P(Y \le 1.9) \\&=0.7357-0.1075 \\&=0.6282  \end{aligned}

The z-score for the 95th percentile for the standard normal distribution is z = 1.645. Then the 95th percentile for the normal distribution with mean 2.5 and standard deviation 1.5 is x = 2.5 + 1.645 (1.5) = 4.9675. Then apply the exponential function to obtain e^{4.9675} \approx 143.67, which is the desired lognormal 95th percentile. \square
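These figures can be checked directly with scipy.stats.lognorm; in scipy's parametrization, s plays the role of \sigma and scale is e^{\mu}. Small differences from the values above come from rounding the z-scores in the table approach.

    from scipy.stats import lognorm
    import numpy as np

    Y = lognorm(s=1.5, scale=np.exp(2.5))  # parameters mu = 2.5 and sigma = 1.5
    print(Y.cdf(31.34) - Y.cdf(1.9))  # approximately 0.628
    print(Y.ppf(0.95))                # approximately 143.6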

As (2) and Example 1 suggest, to find a lognormal percentile, first find the percentile for the corresponding normal distribution. If x_p is the 100pth percentile of the normal distribution, then e^{x_p} is the 100pth percentile of the lognormal distribution. Usually, we can first find the 100pth percentile for the standard normal distribution z_p. Then the normal percentile we need is x=\mu + z_p \cdot \sigma. The lognormal percentile is then:

    \displaystyle e^{\displaystyle \mu + z_p \cdot \sigma}=\text{lognormal } 100p \text{th percentile} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (5)

The above discussion shows that the explicit form of the lognormal density curve is not needed in computing lognormal probabilities and percentiles. For the sake of completeness, the following shows the probability density functions of both the normal distribution and the lognormal distribution.

    Normal PDF
    \displaystyle f_X(x)=\frac{1}{\sigma \sqrt{2 \pi}} \ \ \text{\Large e}^{\displaystyle -\frac{(x-\mu)^2}{2 \sigma^2}} \ \ \ \ \ \ \ \ \ \ \ -\infty<x<\infty \ \ \ \ \ \ \ \ \ \ (6)

    Lognormal PDF
    \displaystyle f_Y(y)=\frac{1}{y \ \sigma \sqrt{2 \pi}} \ \ \text{\Large e}^{\displaystyle -\frac{(\log(y)-\mu)^2}{2 \sigma^2}} \ \ \ \ \ 0<y<\infty \ \ \ \ \ \ \ \ \ \ \ \ \ (7)

The cumulative distribution function for the lognormal distribution is then

    \displaystyle F_Y(y)=\int_0^y \frac{1}{t \ \sigma \sqrt{2 \pi}} \ \ \text{\Large e}^{\displaystyle -\frac{(\log(t)-\mu)^2}{2 \sigma^2}} \ dt \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (8)

Of course, we do not have to use (8) since the lognormal CDF can be obtained based on the corresponding normal CDF.

One application of the lognormal PDF in (7) is to use it to find the mode (by taking its derivative and finding the critical value). The mode of the lognormal distribution with parameters \mu and \sigma is \displaystyle e^{\mu - \sigma^2}.
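As a quick sketch of that derivation, it is easier to differentiate \log f_Y(y), which has the same critical value as f_Y(y):

    \displaystyle \frac{d}{dy} \log f_Y(y)=\frac{d}{dy}\biggl[-\log(y)-\log(\sigma \sqrt{2 \pi})-\frac{(\log(y)-\mu)^2}{2 \sigma^2}\biggr]=-\frac{1}{y}-\frac{\log(y)-\mu}{\sigma^2 \ y}

Setting the derivative to zero gives \log(y)-\mu=-\sigma^2, so that the critical value is y=e^{\mu - \sigma^2}.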

________________________________________________________________________

How to find lognormal moments

To find the mean and higher moments of the lognormal distribution, we once again rely on basic information about normal distribution. For any random variable T (normal or otherwise), its moment generating function is defined by M_T(t)=E(e^{t \ T}). The following is the moment generating function of the normal distribution with mean \mu and standard deviation \sigma.

    \displaystyle M_X(t)=\text{\Large e}^{\displaystyle \biggl[ \mu t + (1/2) \sigma^2 t^2 \biggr]} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (9)

As before, let Y be a random variable that follows a lognormal distribution with parameters \mu and \sigma. Then Y=e^X where X is normal with mean \mu and standard deviation \sigma. Then E(Y)=E(e^X) is simply the normal moment generating function evaluated at 1. In fact, the kth moment of Y, E(Y^k)=E(e^{k X}), is simply the normal mgf evaluated at k. Because the mgf of the normal distribution is defined at any real number, all moments for the lognormal distribution exist. The following gives the moments explicitly.

    E(Y)=\text{\Large e}^{\displaystyle \biggl[ \mu+(1/2) \sigma^2 \biggr]}

    E(Y^k)=\text{\Large e}^{\displaystyle \biggl[ k \mu+(k^2/2) \sigma^2 \biggr]} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (10)

In particular, the variance and standard deviation are:

    \displaystyle \begin{aligned}Var(Y)&=\text{\Large e}^{\displaystyle \biggl[ 2 \mu+2 \sigma^2 \biggr]}-\text{\Large e}^{\displaystyle \biggl[ 2 \mu+\sigma^2 \biggr]} \\&=\biggl(\text{\Large e}^{\displaystyle \sigma^2}-1\biggr) \text{\Large e}^{\displaystyle \biggl[ 2 \mu+\sigma^2 \biggr]} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (11) \end{aligned}

    \displaystyle \sigma_Y=\sqrt{\text{\Large e}^{\displaystyle \sigma^2}-1} \ \ \text{\Large e}^{\displaystyle \biggl[ \mu+\frac{1}{2} \sigma^2 \biggr]}=\sqrt{\text{\Large e}^{\displaystyle \sigma^2}-1} \ E(Y) \ \ \ \ \ \ \ \ \ \ \ \ (12)

The formulas (11) and (12) give the variance and standard deviation if the parameters \mu and \sigma are known. They do not need to be committed to memory, since they can always be generated from knowing the moments in (10). As indicated before, the lognormal median is e^{\mu}, which is always less than the mean, which is e raised to \mu+(1/2) \sigma^2. So the mean is greater than the median by a factor of e raised to (1/2) \sigma^2. The mean being greater than the median is another sign that the lognormal distribution is skewed right.
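As a quick sanity check, the formulas in (10) and (11) agree with what scipy reports. The following sketch uses \mu=2.5 and \sigma=1.5, chosen arbitrarily for illustration.

    from scipy.stats import lognorm
    import numpy as np

    mu, sigma = 2.5, 1.5  # arbitrary parameters for illustration
    Y = lognorm(s=sigma, scale=np.exp(mu))  # scipy: s is sigma, scale is e^mu
    print(Y.mean(), np.exp(mu + 0.5 * sigma**2))                       # formula (10), k = 1
    print(Y.var(), (np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2))   # formula (11)
    print(Y.median(), np.exp(mu))                                      # the median is e^mu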

Example 2
Suppose Y follows a lognormal distribution with mean 12.18 and variance 255.02. Determine the probability that Y is greater than its mean.

With the given information, we have:

    E(Y)=\text{\Large e}^{\displaystyle \biggl[ \mu+(1/2) \sigma^2 \biggr]}=12.18

    \displaystyle \begin{aligned}Var(Y)&=\biggl(\text{\Large e}^{\displaystyle \sigma^2}-1\biggr) \text{\Large e}^{\displaystyle \biggl[ 2 \mu+\sigma^2 \biggr]} \\&=\biggl(\text{\Large e}^{\displaystyle \sigma^2}-1\biggr) \biggl(\text{\Large e}^{\displaystyle \biggl[ \mu+(1/2) \sigma^2 \biggr]}\biggr)^2  \\&=  \biggl(\text{\Large e}^{\displaystyle \sigma^2}-1\biggr) 12.18^2=255.02    \end{aligned}

From the last equation, we can solve for \sigma. The following shows the derivation.

    \displaystyle \begin{aligned} &\biggl(\text{\Large e}^{\displaystyle \sigma^2}-1\biggr)=\frac{255.02}{12.18^2}   \\&\text{\Large e}^{\displaystyle \sigma^2}=2.719014994  \\&\sigma^2=\log(2.719014994)=1.00026968 \end{aligned}

Thus we can take \sigma=1. Then plug \sigma=1 into E(Y) to get \mu=2. The desired probability is:

    \displaystyle \begin{aligned} P(Y>12.18)&=1-\Phi \biggl(\frac{\log(12.18)-2}{1}  \biggr) \\&=1-\Phi(0.499795262)  \\&=1-\Phi(0.5)=1-0.6915=0.3085  \ \square    \end{aligned}
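The same computation is easy to carry out in code. The following sketch solves for \sigma^2 and \mu from the given mean and variance, then evaluates the probability.

    from scipy.stats import norm
    import numpy as np

    mean, var = 12.18, 255.02
    sigma2 = np.log(var / mean**2 + 1)  # from (e^{sigma^2} - 1) * mean^2 = var
    sigma = np.sqrt(sigma2)             # approximately 1
    mu = np.log(mean) - 0.5 * sigma2    # from e^{mu + (1/2) sigma^2} = mean
    prob = 1 - norm.cdf((np.log(mean) - mu) / sigma)
    print(mu, sigma, prob)              # approximately 2, 1 and 0.3085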

________________________________________________________________________

Linear transformations

For any random variable W, a linear transformation of W is the random variable aW+b where a and b are real constants. It is well known that if X follows a normal distribution, any linear transformation of X also follows a normal distribution. Does this apply to the lognormal distribution? A linear transformation of a lognormal distribution may not have distributional values over the entire positive x-axis. For example, if Y is lognormal, then Y+1 is technically not lognormal since the values of Y+1 lie in (1, \infty) and not (0, \infty). Instead, we focus on the transformations cY where c>0 is a constant. We have the following fact.

    If Y has a lognormal distribution with parameters \mu and \sigma, then cY has a lognormal distribution with parameters \mu+\log(c) and \sigma.

The effect of the constant adjustment of the lognormal distribution is on the \mu parameter, which is adjusted by adding the natural log of the constant c. Note that the adjustment on \mu is addition and not multiplication. The \sigma parameter is unchanged.

One application of the transformation cY is that of inflation. For example, suppose Y represents claim amounts in a given calendar year arising from a group of insurance policies. If the insurance company expects that the claims amounts in the next year will increase by 10%, then 1.1Y is the random variable that models next year’s claim amounts. If Y is assumed to be lognormal, then the effect of the 10% inflation is on the \mu parameter as indicated above.

To see why the inflation on Y works as described, let’s look at the cumulative distribution function of T=cY.

    \displaystyle \begin{aligned} P(T \le y)&=P(Y \le \frac{y}{c})=F_Y(y/c) \\&=\int_0^{y/c} \frac{1}{t \ \sigma \sqrt{2 \pi}} \ \ \text{\Large e}^{\displaystyle -\frac{(\log(t)-\mu)^2}{2 \sigma^2}} \ dt  \end{aligned}

Taking the derivative of the last expression above, we obtain the probability density function f_T(y).

    \displaystyle \begin{aligned} f_T(y)&=\frac{1}{y/c \ \sigma \sqrt{2 \pi}} \ \ \text{\Large e}^{\displaystyle -\frac{(\log(y/c)-\mu)^2}{2 \sigma^2}} (1/c) \\&=\frac{1}{y \ \sigma \sqrt{2 \pi}} \ \ \text{\Large e}^{\displaystyle -\frac{\biggl(\log(y)-(\mu+\log(c))\biggr)^2}{2 \sigma^2}}  \end{aligned}

Comparing with the density function (7), the last line is the lognormal density function with parameters \mu + \log(c) and \sigma.
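A quick Monte Carlo sketch of this fact: scale lognormal samples by c and compare against the lognormal distribution with parameters \mu + \log(c) and \sigma. The parameter values below are arbitrary, with c = 1.1 echoing the 10% inflation example.

    import numpy as np
    from scipy.stats import lognorm

    rng = np.random.default_rng(0)
    mu, sigma, c = 2.5, 0.5, 1.1  # arbitrary illustrative parameters
    samples = c * rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

    claimed = lognorm(s=sigma, scale=np.exp(mu + np.log(c)))
    print(samples.mean(), claimed.mean())        # the means should agree closely
    print(np.median(samples), claimed.median())  # the medians should agree closely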

________________________________________________________________________

Distributional quantities involving higher moments

As the formula (10) shows, all moments exist for the lognormal distribution. As a result, any distributional quantity that is defined using moments can be described explicitly in terms of the parameters \mu and \sigma. We highlight three such distributional quantities: coefficient of variation, coefficient of skewness and kurtosis. The following shows their definitions. The calculation is done by plugging in the moments obtained from (10).

    \displaystyle CV=\frac{\sigma_Y}{\mu_Y} \ \ \ \ \ \text{(Coefficient of variation)}

    \displaystyle \begin{aligned} \gamma_1&=\frac{E[ (Y-\mu_Y)^3 ]}{\sigma_Y^3}  \\&=\frac{E(Y^3)-3 \mu_Y \sigma_Y^2-\mu_Y^3}{(\sigma_Y^2)^{\frac{3}{2}}}  \ \ \ \ \ \text{(Coefficient of skewness)}    \end{aligned}

    \displaystyle \begin{aligned} \beta_2&=\frac{E[ (Y-\mu_Y)^4 ]}{\biggl(E[ (Y-\mu_Y)^2]\biggr)^2}  \\&=\frac{E(Y^4)-4 \ \mu_Y \ E(Y^3) + 6 \ \mu_Y^2 \ E(Y^2) - 3 \ \mu_Y^4}{(\sigma_Y^2)^{2}}  \ \ \ \ \ \text{(Kurtosis)}    \end{aligned}

The above definitions are made for any random variable Y. The notations \mu_Y and \sigma_Y are the mean and standard deviation of Y, respectively. The coefficient of variation is the ratio of the standard deviation to the mean. It is a standardized measure of dispersion of a probability distribution or frequency distribution. The coefficient of skewness is defined as the ratio of the third central moment about the mean to the cube of the standard deviation. The second line in the above definition is an equivalent form in terms of the mean, variance and the third raw moment, which may be easier to calculate in some circumstances. Kurtosis is defined as the ratio of the fourth central moment about the mean to the square of the second central moment about the mean. The second line in its definition gives an equivalent form in terms of the mean, variance and the second, third and fourth raw moments.

The above general definitions of CV, \gamma_1 and \beta_2 can be obtained for the lognormal distribution. The mean and variance and higher raw moments can be obtained by using (10). Then it is a matter of plugging in the relevant items into the above definitions. The following example shows how this is done.

Example 3
Determine the CV, \gamma_1 and \beta_2 of the lognormal distribution in Example 2.

The calculation in Example 2 shows that the lognormal parameters are \mu=2 and \sigma=1. Now use formula (10) to get the ingredients.

    \mu_Y=e^{2.5}

    Var(Y)=(e-1) e^5 \ \ \ \ \ \ \ \ \sigma_Y=\sqrt{e-1} \ e^{2.5}

    E(Y^2)=e^6

    E(Y^3)=e^{10.5}

    E(Y^4)=e^{16}

Right away, CV = \sqrt{e-1} \approx 1.31. The following shows the calculation for skewness and kurtosis.

    \displaystyle \begin{aligned} \gamma_1&=\frac{E(Y^3)-3 \mu_Y \sigma_Y^2-\mu_Y^3}{(\sigma_Y^2)^{\frac{3}{2}}}  \\&=\frac{e^{10.5}-3 \ e^{2.5} \ (e-1) \ e^5-(e^{2.5})^3}{[(e-1) \ e^5]^{\frac{3}{2}}}\\&=6.1849     \end{aligned}

    \displaystyle \begin{aligned} \beta_2&=\frac{E(Y^4)-4 \ \mu_Y \ E(Y^3) + 6 \ \mu_Y^2 \ E(Y^2) - 3 \ \mu_Y^4}{(\sigma_Y^2)^{2}}  \\&=\frac{e^{16}-4 \ e^{2.5} \ e^{10.5} + 6 \ (e^{2.5})^2 \ e^{6} - 3 \ (e^{2.5})^4}{[(e-1) \ e^5]^{2}}\\&=113.9364    \end{aligned}
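These values can be confirmed with scipy. Note that scipy's stats method reports excess kurtosis (kurtosis minus 3), so 3 must be added back to compare with \beta_2.

    from scipy.stats import lognorm
    import numpy as np

    # parameters mu = 2 and sigma = 1 from Example 2
    mean, var, skew, ex_kurt = lognorm.stats(s=1, scale=np.exp(2), moments='mvsk')
    print(skew)         # approximately 6.1849
    print(ex_kurt + 3)  # approximately 113.9364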

________________________________________________________________________

Is there a moment generating function for the lognormal distribution?

Because the normal distribution has a moment generating function, all moments exist for the lognormal distribution (see formula (10) above). Does the moment generating function exist for the lognormal distribution? Whenever the mgf exists for a distribution, its moments can be derived from the mgf. What about the converse? That is, when all moments exist for a given distribution, does it mean that its moment generating function would always exist? The answer is no. It turns out that the lognormal distribution is a counterexample. We conclude this post by showing this fact.

Let Y be the standard lognormal distribution, i.e., X=\log(Y) has the standard normal distribution. We show that the expectation E(e^{tY}) diverges to infinity for any t>0.

    \displaystyle \begin{aligned} E(e^{t \ Y})&=E(e^{t \ e^X}) \\&=\frac{1}{\sqrt{2 \pi}} \ \int_{-\infty}^\infty e^{t \ e^x} \ e^{-0.5 x^2} \ dx \\&=\frac{1}{\sqrt{2 \pi}} \ \int_{-\infty}^\infty e^{t \ e^x - 0.5x^2} \ dx \\&> \frac{1}{\sqrt{2 \pi}} \ \int_{0}^\infty e^{t \ e^x - 0.5x^2} \ dx \\&> \frac{1}{\sqrt{2 \pi}} \ \int_{0}^\infty e^{\biggl(t \ ( \displaystyle 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}) - 0.5x^2 \biggr)} \ dx=\infty \end{aligned}

The last integral in the above derivation diverges to infinity. Note that the Taylor series expansion of e^x is \displaystyle 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots. In the last step, e^x is replaced by the smaller quantity \displaystyle 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}, which is valid on the interval of integration since t>0. The exponent in the last integral is then a third degree polynomial with a positive coefficient for the x^3 term. This polynomial tends to infinity as x goes to infinity, so the last integral diverges. Since the last integral diverges, the mgf E(e^{t \ Y}) is infinite as well.
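To make the divergence concrete, here is a small numeric sketch: even for a small t > 0, the log of the integrand, t e^x - 0.5x^2, eventually grows without bound, so the Gaussian decay cannot save the integral.

    import numpy as np

    t = 0.01  # any positive t, however small
    for x in [5, 10, 20, 40]:
        # log of the integrand (up to the constant 1/sqrt(2*pi)):
        # the term t * e^x eventually dominates the penalty 0.5 * x^2
        print(x, t * np.exp(x) - 0.5 * x**2)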

________________________________________________________________________

Practice problems

Practice problems to reinforce these calculations are found in the companion blog.

________________________________________________________________________
\copyright \ \text{2015 by Dan Ma}
