Defining the Poisson distribution

The Poisson distribution is a family of discrete distributions with positive probabilities on the non-negative integers 0,1,2,\cdots. Each distribution in this family is indexed by a positive parameter \lambda>0. One way to define this distribution is to state its probability function in terms of \lambda and then derive the various distributional quantities such as the mean and the variance. Along with other mathematical facts, it can be shown that both the mean and the variance are \lambda. In this post, we take a different tack. We look at two viewpoints that give rise to the Poisson distribution. Taking this approach makes it easier to appreciate some of the possible applications of the Poisson distribution. The first viewpoint is that the Poisson distribution is the limiting case of the binomial distribution. The second viewpoint is through the Poisson process, a stochastic process that, under certain conditions, counts the number of events occurring in a given time (or physical) interval and the time points at which these events occur.

________________________________________________________________________

Poisson as a limiting case of binomial

A binomial distribution where the number of trials n is large and the probability of success p is small such that np is moderate in size can be approximated using the Poisson distribution with mean \lambda=np. This fact follows from Theorem 1, which indicates that the Poisson distribution is the limiting case of the binomial distribution.

Theorem 1
Let \lambda be a fixed positive constant. Then for each integer x=0,1,2,\cdots, the following is true:

    \displaystyle \lim_{n \rightarrow \infty} \binom{n}{x} \ p^x \  (1-p)^{n-x}=\lim_{n \rightarrow \infty} \frac{n!}{x! \ (n-x)!} \ p^x \  (1-p)^{n-x}=\frac{e^{-\lambda} \ \lambda^x}{x!}

where \displaystyle p=\frac{\lambda}{n}.

Proof of Theorem 1
We start with a binomial distribution with n trials and with \displaystyle p=\frac{\lambda}{n} being the probability of success, where n>\lambda. Let X_n be the count of the number of successes in these n Bernoulli trials. The following is the probability that X_n=k.

    \displaystyle \begin{aligned} P(X_n=k)&=\binom{n}{k} \biggl(\frac{\lambda}{n}\biggr)^k \biggr(1-\frac{\lambda}{n}\biggr)^{n-k} \\&=\frac{n!}{k! (n-k)!} \biggl(\frac{\lambda}{n}\biggr)^k \biggr(1-\frac{\lambda}{n}\biggr)^{n-k} \\&=\frac{n(n-1)(n-2) \cdots (n-k+1)}{n^k} \biggl(\frac{\lambda^k}{k!}\biggr) \biggr(1-\frac{\lambda}{n}\biggr)^{n} \biggr(1-\frac{\lambda}{n}\biggr)^{-k} \\&=\biggl(\frac{\lambda^k}{k!}\biggr) \ \biggl[ \frac{n(n-1)(n-2) \cdots (n-k+1)}{n^k} \ \biggr(1-\frac{\lambda}{n}\biggr)^{n} \ \biggr(1-\frac{\lambda}{n}\biggr)^{-k} \biggr] \end{aligned}

In the last step, the terms that contain n are grouped inside the square brackets. Let’s see what happens to each of them as n approaches infinity.

    \displaystyle \lim \limits_{n \rightarrow \infty} \ \frac{n(n-1)(n-2) \cdots (n-k+1)}{n^k}=1

    \displaystyle \lim \limits_{n \rightarrow \infty} \biggr(1-\frac{\lambda}{n}\biggr)^{n}=e^{-\lambda}

    \displaystyle \lim \limits_{n \rightarrow \infty} \biggr(1-\frac{\lambda}{n}\biggr)^{-k}=1

The first result holds because the numerator is a polynomial in n with leading term n^k; dividing by n^k and taking the limit gives 1. The second result holds because the following limit is one of the definitions of the exponential function e^x.

    \displaystyle \lim \limits_{n \rightarrow \infty} \biggr(1+\frac{x}{n}\biggr)^{n}=e^{x}

The third result is true since the exponent -k is a constant. Thus the following is the limit of the probability P(X_n=k) as n \rightarrow \infty.

    \displaystyle \begin{aligned} \lim \limits_{n \rightarrow \infty} P(X_n=k)&= \biggl(\frac{\lambda^k}{k!}\biggr) \ \lim \limits_{n \rightarrow \infty} \biggl[ \frac{n(n-1)(n-2) \cdots (n-k+1)}{n^k} \ \biggr(1-\frac{\lambda}{n}\biggr)^{n} \ \biggr(1-\frac{\lambda}{n}\biggr)^{-k} \biggr] \\&=\biggl(\frac{\lambda^k}{k!}\biggr) \cdot 1 \cdot e^{-\lambda} \cdot 1 \\&=\frac{e^{-\lambda} \lambda^k}{k!} \end{aligned}

The above derivation completes the proof. \blacksquare

In a given binomial distribution, whenever the number of trials n is large and the probability p of success in each trial is small (i.e. each Bernoulli trial rarely results in a success), Theorem 1 tells us that we can use the Poisson distribution with parameter \lambda=np to estimate the binomial distribution.

Example 1
The probability of being dealt a full house in a hand of poker is approximately 0.001441. Out of 5000 hands of poker that are dealt at a certain casino, what is the probability that there will be at most 4 full houses?

Let X be the number of full houses in these 5000 poker hands. The exact distribution for X is the binomial distribution with n= 5000 and p= 0.001441. This example deals with a large number of trials in which each trial rarely results in a success, so the Poisson approximation is applicable. Let \lambda= 5000(0.001441) = 7.205. Then P(X \le 4) can be approximated using a Poisson random variable Y with parameter \lambda. The following is the probability function of Y:

    \displaystyle P(Y=y)=e^{-7.205} \ \frac{7.205^y}{y!}

The following is the approximation of P(X \le 4):

    \displaystyle \begin{aligned} P(X \le 4)&\approx P(Y \le 4) \\&=P(Y=0)+P(Y=1)+P(Y=2)+P(Y=3)+P(Y=4) \\&= e^{-7.205} \biggl[ 1+7.205+\frac{7.205^2}{2!}+\frac{7.205^3}{3!}+\frac{7.205^4}{4!}\biggr] \\&=0.155098087  \end{aligned}

The following is a side-by-side comparison between the binomial distribution and its Poisson approximation. For all practical purposes, they are indistinguishable from one another.

    \displaystyle \begin{bmatrix} \text{Count of}&\text{ }&\text{ }&\text{Binomial } &\text{ }&\text{ }&\text{Poisson } \\\text{Full Houses}&\text{ }&\text{ }&P(X \le x) &\text{ }&\text{ }&P(Y \le x) \\\text{ }&\text{ }&\text{ }&n=5000 &\text{ }&\text{ }&\lambda=7.205 \\\text{ }&\text{ }&\text{ }&p=0.001441 &\text{ }&\text{ }&\text{ } \\\text{ }&\text{ }&\text{ } &\text{ }&\text{ } \\ 0&\text{ }&\text{ }&0.000739012&\text{ }&\text{ }&0.000742862 \\ 1&\text{ }&\text{ }&0.006071278&\text{ }&\text{ }&0.006095184 \\ 2&\text{ }&\text{ }&0.025304641&\text{ }&\text{ }&0.025376925 \\ 3&\text{ }&\text{ }&0.071544923&\text{ }&\text{ }&0.071685238 \\ 4&\text{ }&\text{ }&0.154905379&\text{ }&\text{ }&0.155098087  \\ 5&\text{ }&\text{ }&0.275104906&\text{ }&\text{ }&0.275296003 \\ 6&\text{ }&\text{ }&0.419508250&\text{ }&\text{ }&0.419633667 \\ 7&\text{ }&\text{ }&0.568176421 &\text{ }&\text{ }&0.568198363   \\ 8&\text{ }&\text{ }&0.702076190 &\text{ }&\text{ }&0.701999442 \\ 9&\text{ }&\text{ }&0.809253326&\text{ }&\text{ }&0.809114639 \\ 10&\text{ }&\text{ }&0.886446690&\text{ }&\text{ }&0.886291139 \\ 11&\text{ }&\text{ }&0.936980038&\text{ }&\text{ }&0.936841746 \\ 12&\text{ }&\text{ }&0.967298041&\text{ }&\text{ }&0.967193173 \\ 13&\text{ }&\text{ }&0.984085073&\text{ }&\text{ }&0.984014868 \\ 14&\text{ }&\text{ }&0.992714372&\text{ }&\text{ }&0.992672033 \\ 15&\text{ }&\text{ }&0.996853671&\text{ }&\text{ }&0.996830358 \end{bmatrix}

The above table is calculated using the functions BINOM.DIST and POISSON.DIST in Excel. The following shows how it is done. The parameter TRUE indicates that the result is a cumulative distribution. When it is set to FALSE, the formula gives the probability function.

    P(X \le x)=\text{BINOM.DIST(x, 5000, 0.001441, TRUE)}

    P(Y \le x)=\text{POISSON.DIST(x, 7.205, TRUE)}
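The same table can also be produced outside of Excel. The following is a minimal sketch (not part of the original post) that reproduces the two cumulative columns, assuming Python with the scipy.stats module is available.

    from scipy.stats import binom, poisson

    n, p = 5000, 0.001441
    lam = n * p  # 7.205

    for x in range(16):
        b = binom.cdf(x, n, p)      # P(X <= x), exact binomial
        q = poisson.cdf(x, lam)     # P(Y <= x), Poisson approximation
        print(f"{x:2d}  {b:.9f}  {q:.9f}")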

________________________________________________________________________

The Poisson distribution

The limit in Theorem 1 is a probability function and the resulting distribution is called the Poisson distribution. We now give the formal definition. A random variable X that takes on one of the numbers 0,1,2,\cdots is said to be a Poisson random variable with parameter \lambda>0 if

    \displaystyle P(X=x)=\frac{e^{-\lambda} \ \lambda^x}{x!} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x=0,1,2,\cdots

It can be shown that the above function is indeed a probability function, i.e., the probabilities sum to 1. Any random variable that has a probability function of the above form is said to follow (or to have) a Poisson distribution. Furthermore, it can be shown that E(X)=var(X)=\lambda, i.e., the Poisson parameter is both the mean and variance. Thus the Poisson distribution may be a good fit if the observed data indicate that the sample mean and the sample variance are nearly identical.

The following is the moment generating function of the Poisson distribution with parameter \lambda.

    \displaystyle M(t)=E(e^{tX})=e^{\lambda \ (e^t-1)}

One consequence of the Poisson moment generating function is that the sum of independent Poisson random variables is again Poisson, with parameter equal to the sum of the individual parameters.
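To see this, suppose X and Y are independent Poisson random variables with parameters \lambda_1 and \lambda_2. Using independence and the moment generating function above,

    \displaystyle M_{X+Y}(t)=E(e^{tX}) \ E(e^{tY})=e^{\lambda_1 \ (e^t-1)} \ e^{\lambda_2 \ (e^t-1)}=e^{(\lambda_1+\lambda_2) \ (e^t-1)}

which is the moment generating function of a Poisson distribution with parameter \lambda_1+\lambda_2.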

________________________________________________________________________

The Poisson process

Another way, and the more important way, to look at the Poisson distribution is through the viewpoint of the Poisson process. Consider an experiment in which events of interest occur at random in a time interval. The goal is to record the time of occurrence of each random event and, for the purpose at hand, to count the number of random events occurring in a fixed time interval. Starting at time 0, note the time at which the first event occurs, then the time at which the second event occurs, and so on. From these measurements we can derive the lengths of time between occurrences of any two consecutive random events; such waiting times are continuous random variables. In this post, we focus on the discrete random variable that counts the random events in a fixed time interval.

A good example of a Poisson process is the well-known radioactivity experiment conducted by Rutherford and Geiger in 1910. In this experiment, \alpha-particles were emitted from a polonium source and the number of \alpha-particles was counted during intervals of 7.5 seconds (2608 such intervals were observed). A Poisson process is a random process that satisfies several criteria. We will show that in a Poisson process, the number of random occurrences in a fixed time interval follows a Poisson distribution. First, we discuss the criteria to which a Poisson process must conform.

One of the criteria is that in a very short time interval, the chance of having more than one random event is essentially zero. So either one random event will occur or none will occur in a very short time interval. Considering the occurrence of a random event as a success, there is either a success or a failure in a very short time interval. So a very short time interval in a Poisson process can be regarded as a Bernoulli trial.

The second criterion is that the experiment remains constant over time. Specifically, this means that the probability of a random event occurring in a given subinterval is proportional to the length of that subinterval and does not depend on where the subinterval lies within the original interval. For example, in the 1910 radioactivity study, \alpha-particles were emitted at the rate of \lambda= 3.87 per 7.5 seconds. So the probability of one \alpha-particle being emitted from the radioactive source in a one-second interval is 3.87/7.5 = 0.516. Then the probability of observing one \alpha-particle in a half-second interval is 0.516/2 = 0.258. For a quarter-second interval, the probability is 0.258/2 = 0.129. So if we observe half as long, it will be half as likely to observe the occurrence of a random event. On the other hand, it does not matter where the quarter-second subinterval is located, whether at the beginning or toward the end of the original interval of 7.5 seconds.

The third criterion is that non-overlapping subintervals are mutually independent in the sense that what happens in one subinterval (i.e. the occurrence or non-occurrence of a random event) will have no influence on the occurrence of a random event in another subinterval. To summarize, the following are the three criteria of a Poisson process:

    Suppose that on average \lambda random events occur in a time interval of length 1.

    1. The probability of having more than one random event occurring in a very short time interval is essentially zero.
    2. For a very short subinterval of length \frac{1}{n} where n is a sufficiently large integer, the probability of a random event occurring in this subinterval is \frac{\lambda}{n}.
    3. The numbers of random events occurring in non-overlapping time intervals are independent.

Consider a Poisson process in which the average rate is \lambda random events per unit time interval. Let Y be the number of random events occurring in the unit time interval. In the 1910 radioactivity study, the unit time interval is 7.5 seconds and Y is the count of the number of \alpha-particles emitted in 7.5 seconds. It follows that Y has a Poisson distribution with parameter \lambda. To see this, subdivide the unit interval into n non-overlapping subintervals of equal length where n is a sufficiently large integer. Let X_{n,j} be the number of random events in the jth subinterval (1 \le j \le n). Based on the three assumptions, X_{n,1},X_{n,2},\cdots,X_{n,n} are independent Bernoulli random variables, where the probability of success for each X_{n,j} is \frac{\lambda}{n}. Then X_n=X_{n,1}+X_{n,2}+\cdots+X_{n,n} has a binomial distribution with parameters n and p=\frac{\lambda}{n}. Theorem 1 tells us that the limiting case of the binomial distributions for X_n is the Poisson distribution with parameter \lambda. This Poisson distribution should agree with the distribution for Y. The Poisson approximation is also discussed in more detail in the previous post Poisson as a Limiting Case of Binomial Distribution.
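The subdivision argument can be illustrated by simulation. The following is a minimal sketch (not part of the original post, assuming Python with numpy and scipy.stats): each unit interval is subdivided into n Bernoulli trials with success probability \lambda / n, and the resulting counts are compared with the Poisson probabilities.

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(0)
    lam, n, num_intervals = 3.87, 10_000, 100_000

    # Each unit interval is subdivided into n Bernoulli trials with success
    # probability lam/n; the count X_n is then a binomial(n, lam/n) draw.
    counts = rng.binomial(n, lam / n, size=num_intervals)

    for k in range(5):
        # empirical frequency of k events versus the Poisson probability
        print(k, (counts == k).mean(), poisson.pmf(k, lam))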

We now examine the 1910 radioactivity study a little more closely.

Example 2
The basic idea of the 1910 radioactivity study conducted by Rutherford and Geiger is that a polonium source was placed a short distance from an observation point. The number of \alpha-particles emitted from the source was counted in 7.5-second intervals, 2608 times in all. The following table shows the results.

    \displaystyle \begin{bmatrix}   \text{Number of alpha particles}&\text{ }&\text{Observed}   \\ \text{recorded per 7.5 seconds }&\text{ }&\text{counts}    \\ \text{ }&\text{ }&\text{ }   \\ 0&\text{ }&57    \\ 1&\text{ }&203    \\ 2&\text{ }&383    \\ 3&\text{ }&525    \\ 4&\text{ }&532    \\ 5&\text{ }&408   \\ 6&\text{ }&273   \\ 7&\text{ }&139   \\ 8&\text{ }&45   \\ 9&\text{ }&27   \\ 10&\text{ }&10  \\ 11+&\text{ }&6  \\ \text{ }&\text{ }&\text{ }  \\ \text{Total }&\text{ }&2608    \end{bmatrix}

What is the average number of particles observed per 7.5 seconds? The total number of \alpha-particles in these 2608 periods is

    0 \times 57+1 \times 203+2 \times 383+ 3 \times 525 + \cdots=10097.

The mean count per period is \lambda=\frac{10097}{2608}=3.87. Consider the Poisson distribution with parameter 3.87. The following is its probability function.

    \displaystyle P(X=x)=\frac{e^{-3.87} \ 3.87^x}{x!} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x=0,1,2,\cdots

Out of the 2608 periods, the expected number of periods with x particles emitted is 2608 \times P(X=x). The following is a side-by-side comparison between the observed counts and the expected counts.

    \displaystyle \begin{bmatrix}   \text{Number of alpha particles}&\text{ }&\text{Observed}&\text{ }&\text{Expected}    \\ \text{recorded per 7.5 seconds }&\text{ }&\text{counts}&\text{ }&\text{counts}    \\ \text{ }&\text{ }&\text{ }&\text{ }&2608 \times P(X=x)  \\ \text{ }&\text{ }&\text{ }&\text{ }&\text{ }   \\ 0&\text{ }&57&\text{ }&54.40    \\ 1&\text{ }&203&\text{ }&210.52  \\ 2&\text{ }&383&\text{ }&407.36    \\ 3&\text{ }&525&\text{ }&525.50    \\ 4&\text{ }&532&\text{ }&508.42    \\ 5&\text{ }&408&\text{ }&393.52   \\ 6&\text{ }&273&\text{ }&253.82   \\ 7&\text{ }&139&\text{ }&140.32   \\ 8&\text{ }&45&\text{ }&67.88   \\ 9&\text{ }&27&\text{ }&29.19   \\ 10&\text{ }&10&\text{ }&11.30  \\ 11+&\text{ }&6&\text{ }&5.78  \\ \text{ }&\text{ }&\text{ }&\text{ }&\text{ }  \\ \text{Total }&\text{ }&2608&\text{ }&2608    \end{bmatrix}

The expected counts are quite close to the observed counts, showing that the Poisson distribution is a very good fit to the observed data from the 1910 study.
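The expected-count column can be reproduced with a few lines of code. The following is a sketch (not part of the original post, assuming Python with scipy.stats), with the counts of 11 or more grouped together as in the table.

    from scipy.stats import poisson

    lam, periods = 3.87, 2608

    expected = [periods * poisson.pmf(x, lam) for x in range(11)]
    expected.append(periods * poisson.sf(10, lam))  # the "11+" group: P(X >= 11)

    for x, e in enumerate(expected):
        label = "11+" if x == 11 else str(x)
        print(label, round(e, 2))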

________________________________________________________________________

More comments about the Poisson process

We have described the Poisson process as the distribution of random events in a time interval. The same idea can be used to describe random events occurring along a spatial interval, i.e. intervals in terms of distance or volume or other spatial measurements (see Examples 5 and 6 below).

Another point to make is that sometimes it may be necessary to consider an interval other than the unit length. Instead of counting the random events occurring in an interval of length 1, we may want to count the random events in an interval of length t. As before, let \lambda be the rate of occurrences in a unit interval. Then the rate of occurrences over an interval of length t is \lambda t. The same argument shows that the number of occurrences of the random events of interest in an interval of length t has a Poisson distribution with parameter \lambda t. The following is its probability function.

    \displaystyle P(X_t=x)=\frac{e^{-\lambda t} \ (\lambda t)^x}{x!} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x=0,1,2,\cdots

where X_t is the count of the random events in an interval of length t.

For example, in the 1910 radioactivity study, the unit length is 7.5 seconds. The number of \alpha-particles observed in half a unit interval (3.75 seconds) follows a Poisson distribution with parameter 0.5 \lambda= 0.5(3.87) = 1.935, with the following probability function:

    \displaystyle P(X_{0.5}=x)=\frac{e^{-1.935} \ (1.935)^x}{x!}  \ \ \ \ \ \ \ \ \ \ \ \ \ x=0,1,2,\cdots
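A quick computational illustration of the t-scaling (a sketch, not part of the original post, assuming Python with scipy.stats):

    from scipy.stats import poisson

    lam, t = 3.87, 0.5        # rate per 7.5-second interval; half a unit interval
    mu = lam * t              # 1.935

    print(poisson.pmf(0, mu))  # P(X_0.5 = 0)
    print(poisson.pmf(1, mu))  # P(X_0.5 = 1)
    print(poisson.cdf(2, mu))  # P(X_0.5 <= 2)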

________________________________________________________________________

More examples

Example 3
A radioactive source is metered for 5 hours. During this period, 9638 \alpha-particles are counted. What is the probability that during the next minute, between 30 and 34 particles (both inclusive) will be counted?

The average number of \alpha-particles counted per minute is \lambda=\frac{9638}{300}=32.12. Let X be the number of \alpha-particles counted per minute. Then X has a Poisson distribution with parameter \lambda=32.12. The following calculates P(30 \le X \le 34).

    \displaystyle \begin{aligned} P(30 \le X \le 34)&=e^{-32.12} \biggl[ \frac{32.12^{30}}{30!}+\frac{32.12^{31}}{31!}+\frac{32.12^{32}}{32!}+\frac{32.12^{33}}{33!}+\frac{32.12^{34}}{34!}   \biggr] \\&=0.341118569  \end{aligned}

Alternatively, the POISSON.DIST function in Excel can be used as follows:

    \displaystyle \begin{aligned} P(30 \le X \le 34)&=P(X \le 34)-P(X \le 29) \\&=\text{POISSON.DIST(34,32.12,TRUE)} \\& \ \ \ \ \ \ -\text{POISSON.DIST(29,32.12,TRUE)} \\&=0.671501917-0.330383348 \\&=0.341118569  \end{aligned}

Example 4
The side effect of dry mouth is known to be experienced, on the average, by 5 out of 10,000 individuals taking a certain medication. About 20,000 patients are expected to take this medication next year. What is the probability that between 12 and 16 (both inclusive) patients will experience the side effect of dry mouth? What is the exact probability model that can also be used to work this problem?

The exact model is a binomial distribution. The number of trials is n= 20000 and the probability of success in each trial is p= 0.0005 (experiencing the side effect). Here, we use the Poisson distribution to estimate the binomial. The average number of patients experiencing the side effect is \lambda=20000(0.0005)=10. Let X be the number of patients experiencing the side effect. The following calculates the Poisson probability for P(12 \le X \le 16) in two different ways.

    \displaystyle \begin{aligned} P(12 \le X \le 16)&=e^{-10} \biggl[ \frac{10^{12}}{12!}+\frac{10^{13}}{13!}+\frac{10^{14}}{14!}+\frac{10^{15}}{15!}+\frac{10^{16}}{16!}   \biggr] \\&=0.276182244  \end{aligned}
    \displaystyle \begin{aligned} P(12 \le X \le 16)&=P(X \le 16)-P(X \le 11) \\&=\text{POISSON.DIST(16,10,TRUE)} \\& \ \ \ \ \ \ -\text{POISSON.DIST(11,10,TRUE)} \\&=0.97295839-0.696776146 \\&=0.276182244  \end{aligned}
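Since the exact model is binomial, we can also check how close the Poisson estimate is to the exact answer. The following is a sketch (not part of the original post, assuming Python with scipy.stats) comparing the two.

    from scipy.stats import binom, poisson

    n, p = 20_000, 0.0005
    lam = n * p  # 10

    exact  = binom.cdf(16, n, p) - binom.cdf(11, n, p)    # P(12 <= X <= 16), exact binomial
    approx = poisson.cdf(16, lam) - poisson.cdf(11, lam)  # Poisson estimate

    print(exact, approx)  # both are approximately 0.276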

Example 5
In a 10-mile stretch of a highway, car troubles (e.g. tire punctures, dead batteries, and mechanical breakdowns) occur at a rate of 1.5 per hour. A tow truck driver can respond to such car troubles and offer roadside assistance, which can include towing and minor repair. Assume that the number of such incidents per hour follows a Poisson distribution. At the beginning of the hour, three tow trucks (and their drivers) are available to respond to any car troubles in this stretch of highway. What is the probability that in the next hour all three tow truck drivers will be busy helping motorists with car troubles in this stretch of highway?

Let X be the number of car troubles that occur in this 10-mile stretch of highway in the one-hour period in question. If in this one hour there are 3 or more car troubles (X \ge 3), then all three tow truck drivers will be busy.

    \displaystyle \begin{aligned} P(X \ge 3)&=1-P(X \le 2) \\&=1-e^{-1.5} \biggl[ 1+1.5+\frac{1.5^{2}}{2!}   \biggr] \\&=1-0.808846831\\&=0.191153169  \end{aligned}

Example 6
Continuing Example 5. Considering that there is only a 19% chance that all 3 tow truck drivers will be busy, there is a good chance that the resources are underutilized. What if one of the drivers is assigned to another stretch of highway?

With only two tow trucks available for this 10-mile stretch of highway, the following is the probability that both tow truck drivers will be busy:

    \displaystyle \begin{aligned} P(X \ge 2)&=1-P(X \le 1) \\&=1-e^{-1.5} \biggl[ 1+1.5   \biggr] \\&=1-0.5578254\\&=0.4421746  \end{aligned}

Assigning one driver to another area seems to be a better way to make good use of the available resources. With only two tow truck drivers available, there is a much reduced chance (56% rather than 81%) that at least one of the drivers will be idle, and a much increased chance (44% rather than 19%) that all available drivers will be busy.

________________________________________________________________________

Remarks

The Poisson distribution is one of the most important of all probability models and has been shown to be an excellent model for a wide array of phenomena such as

  • the number of \alpha-particles emitted from radioactive source in a given amount of time,
  • the number of vehicles passing a particular location on a busy highway,
  • the number of traffic accidents in a stretch of highway in a given period of time,
  • the number of phone calls arriving at a particular point in a telephone network in a fixed time period,
  • the number of insurance losses/claims in a given period of time,
  • the number of customers arriving at a ticket window,
  • the number of earthquakes occurring in a fixed period of time,
  • the number of mutations on a strand of DNA,
  • the number of hurricanes in a year that originate in the Atlantic Ocean.

Why is the Poisson distribution so widely applicable in these and many other seemingly different and diverse phenomena? What is the commonality that ties them together? The commonality is that all these phenomena are basically a series of independent Bernoulli trials. If a phenomenon follows a binomial model where n is large and p is small, then it has a strong mathematical connection to the Poisson model through Theorem 1 above (i.e. it has a Poisson approximation). On the other hand, if the random phenomenon satisfies the criteria of a Poisson process, then the phenomenon is approximately a binomial model, which in the limiting case is Poisson.

In both viewpoints discussed in this post, the Poisson distribution can be regarded as a binomial distribution taken at a very granular level. This connection with the binomial distribution points to a vast array of problems that can be solved using the Poisson distribution.

________________________________________________________________________

Exercises

Practice problems for the Poisson concepts discussed above can be found in the companion blog. Working on these exercises is strongly encouraged (you don’t know it until you can do it).

________________________________________________________________________
\copyright \ \text{2015 by Dan Ma}

A natural look at the negative binomial survival function

The negative binomial distribution is a discrete distribution with two parameters r and p, where r>0 and 0<p<1. It has positive probabilities at the non-negative integers 0,1,2,\cdots, so it can potentially be used as a model for the random count of a phenomenon of interest. In some cases, the negative binomial distribution has a natural interpretation, and in fact this natural interpretation is how the distribution is best introduced. With the parameter r being a positive integer, the negative binomial distribution can be naturally interpreted as the discrete waiting time until the rth success. This interpretation does not apply to the general case where r is a positive real number that is not necessarily an integer. However, once the natural case of a positive integer r is understood, it is easy to make the leap to the general case of any positive real number r. In this post, we focus on the “natural” version of the negative binomial distribution, where r is a positive integer. In this case, there is a natural way to look at the cumulative distribution function (cdf) and the survival function. The discussion in this post complements the companion posts on the negative binomial distribution.

________________________________________________________________________

The probability functions

A Bernoulli trial is a random experiment in which there are two distinct outcomes. For convenience, they are called success (S) and failure (F). Consider performing a sequence of independent Bernoulli trials such that each trial has the same probability of success p. Fix a positive integer r. In some cases, we would like to count the number of trials required to produce the rth success. Let’s call this count X_r. In other cases, we may want instead to count the number of failures before getting the rth success. Let’s call this count Y_r. According to the discussion in the companion posts, the probability functions of X_r and Y_r are:

    \displaystyle P(X_r=x)= \binom{x-1}{r-1} p^r (1-p)^{x-r} \ \ \ \ \ \ \ x=r,r+1,r+2,\cdots \ \ \ \ \ \ \ (1)

    \displaystyle P(Y_r=y)=\binom{y+r-1}{y} p^r (1-p)^y \ \ \ \ \ \ \ y=0,1,2,\cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)

When r=1, the resulting distribution is called the geometric distribution.

    \displaystyle P(X_1=x)= p (1-p)^{x-1} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x=1,2,3,\cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1a)

    \displaystyle P(Y_1=y)= p (1-p)^y \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y=0,1,2,\cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2a)

Because the parameter r is a positive integer, all the above probability functions have a combinatorial explanation. For the probability function in (1), there are x trials. The last Bernoulli trial is the rth success. Then the preceding x-1 trials have exactly r-1 successes. Hence we have the binomial coefficient \binom{x-1}{r-1} in (1). For the probability function in (2), there are y+r trials (r successes and y failures). Again, the last Bernoulli trial is the rth success. So in the preceding y+r-1 trials, there are exactly r-1 successes and exactly y failures. Hence we have the binomial coefficient \binom{y+r-1}{y} in (2). Let’s look at some examples.
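Before the examples, here is a minimal numerical check of formula (2), not part of the original post and assuming Python with scipy.stats is available. Note that scipy’s nbinom distribution uses the failure-count parameterization of Y_r, so X_r corresponds to Y_r + r; the values r=4 and p=0.25 below are purely illustrative.

    from math import comb
    from scipy.stats import nbinom

    r, p = 4, 0.25  # illustrative values

    for y in range(3):
        direct = comb(y + r - 1, y) * p**r * (1 - p)**y  # formula (2) evaluated directly
        library = nbinom.pmf(y, r, p)                    # scipy's failure-count form
        print(y, direct, library)                        # the two agree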

Example 1
Four different prizes are randomly put into boxes of cereal. One of the prizes is a free ticket to the local zoo. Suppose that a family of four (called Family A) decides to buy this cereal until obtaining four free tickets to the zoo. What is the probability that the family will have to buy 10 boxes of cereal to obtain the four free tickets to the zoo? What is the probability that the family will have to buy 16 boxes of cereal to obtain the four free tickets to the zoo?

In this example, the success is a box of cereal with a free ticket to the zoo. So getting a ticket to the zoo is considered a success. Any one of the other three prizes is considered undesirable or a failure. Any one of the four prizes is equally likely. The negative binomial distribution in this example has r=4 and p=0.25. The count of the boxes of cereal to be purchased is the random variable X_4 as described in (1) above. The following gives the answers.

    \displaystyle \begin{aligned} P(X_4=10)&=\binom{10-1}{3} \ (0.25)^4 \ (0.75)^{10-4} \\&=\binom{9}{3} \ (0.25)^4 \ (0.75)^{6} \\&=84 \ (0.25)^4 \ (0.75)^{6} \\&=0.0583992  \end{aligned}

    \displaystyle \begin{aligned} P(X_4=16)&=\binom{16-1}{3} \ (0.25)^4 \ (0.75)^{16-4} \\&=\binom{15}{3} \ (0.25)^4 \ (0.75)^{12} \\&=455 \ (0.25)^4 \ (0.75)^{12} \\&=0.056299766  \end{aligned}

Example 2 (Example 1 continued)
Suppose Family A agrees to give any one of the undesirable prizes away to another family (called Family B). What is the probability that Family A will give 5 undesirable prizes to Family B before obtaining the four desirable tickets to the zoo? What is the probability that Family A will give 12 undesirable prizes to Family B before obtaining the four tickets to the zoo?

The negative binomial distribution in this example has r=4 and p=0.25. The interest here is to count the number of failures (undesirable prizes) before getting 4 successes. Thus the random variable of interest is Y_4 as described in (2) above. The following gives the answers.

    \displaystyle \begin{aligned} P(Y_4=5)&=\binom{5+4-1}{5} \ (0.25)^4 \ (0.75)^{5} \\&=\binom{8}{5} \ (0.25)^4 \ (0.75)^{5} \\&=56 \ (0.25)^4 \ (0.75)^{5} \\&=0.0519104  \end{aligned}

    \displaystyle \begin{aligned} P(Y_4=12)&=\binom{12+4-1}{12} \ (0.25)^4 \ (0.75)^{12} \\&=\binom{15}{12} \ (0.25)^4 \ (0.75)^{12} \\&=455 \ (0.25)^4 \ (0.75)^{12} \\&=0.056299766  \end{aligned}

Here are the mean and variance for both examples.

    \displaystyle E(X_4)=4 \ \frac{1}{0.25}=16

    \displaystyle Var(X_4)=Var(Y_4)=4 \ \frac{0.75}{0.25^2}=48

    \displaystyle E(Y_4)=4 \ \frac{0.75}{0.25}=12

Thus Family A is expected to buy 16 boxes of cereal to get the 4 tickets to the zoo and is expected to give 12 prizes to the other family. However, the variance is fairly large. As a result, the actual number of boxes purchased can vary from the mean by a large amount.

_______________________________________________________________________

The survival functions and the cumulative distribution functions

For any random variable T, the cumulative distribution function is P(T \le t), where t ranges over all real numbers or over a relevant set of real numbers. The survival function is P(T > t). The term survival comes from the interpretation that P(T > t) is the probability of a life surviving beyond time t if T is meant to model the lifetime of a biological life or some system. Even when T is not a lifetime random variable, we still use the term survival function for P(T > t).

Examples 1 and 2 ask for the probability that a certain number of trials is needed to get the rth success and the probability of a certain number of failures before getting the rth success. Sometimes it is more informative to know how many trials are required in order to achieve one’s goal. For example, it may be useful to know the mean number of trials or the probability of achieving the goal in x trials or fewer. In some cases, it takes time and resources to perform the Bernoulli trials, so it is helpful to know ahead of time the likelihood of achieving one’s goal given the available resources. In the above examples, it would be helpful for Family A to have a clearer sense of how many boxes of cereal will have to be purchased. Therefore, it is worthwhile to look at the cdf and the survival function of the negative binomial distribution.

Working from first principles, here are the survival functions for the distributions described in (1) and (2).

    \displaystyle \begin{aligned} P(X_r>x)&=\sum \limits_{k=x+1}^\infty P(X_r=k) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x=r,r+1,r+2,\cdots \\&=\sum \limits_{k=x+1}^\infty \binom{k-1}{r-1} p^r (1-p)^{k-r} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3) \end{aligned}

    \displaystyle \begin{aligned} P(Y_r>y)&=\sum \limits_{j=y+1}^\infty P(Y_r=j) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y= 0,1,2,3,\cdots \\&=\sum \limits_{j=y+1}^\infty \binom{j+r-1}{j} p^r (1-p)^{j} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4) \end{aligned}

The cumulative distribution functions can be defined from first principles or by taking the complement, as follows:

    \displaystyle \begin{aligned} P(X_r \le x)&=\sum \limits_{k=r}^x P(X_r=k) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x=r,r+1,r+2,\cdots \\&=1-P(X_r>x)  \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (5)\end{aligned}

    \displaystyle \begin{aligned} P(Y_r \le y)&=\sum \limits_{j=0}^y P(Y_r=j) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y=0,1,2,3,\cdots \\&=1-P(Y_r>y)  \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (6)\end{aligned}

Example 3 (the above examples continued)
What is the probability that Family A will buy more than 16 boxes in order to obtain the 4 free tickets to the zoo? What is the probability that Family A will give away at most 10 undesirable prizes before obtaining the 4 free tickets to the zoo?

Working from the definitions, here are the answers:

    \displaystyle P(X_4>16)=1-P(X_4 \le 16)=0.40498711

    \displaystyle P(Y_4 \le 10)=0.47866004

The mean number of boxes of cereal purchased is 16 as indicated earlier. Since the variance is large, there is still a significant chance (about 40%) that Family A will have to buy more than 16 boxes of cereal before achieving their goal. On the other hand, there is a good chance (about 48%) that Family A will give away at most 10 undesirable prizes.

Note that the calculation for P(X_4>16) is based on P(X_4 \le 16), which requires the calculation of 17 probabilities. The calculation for P(Y_4 \le 10) requires 11 probabilities. Such calculations can of course be done by software. There is a more natural way of looking at the calculations in (3), (4), (5) and (6). This alternative approach gives much better insight into the negative binomial distribution.
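For instance, the software route might look like the following sketch (not part of the original post, assuming Python with scipy.stats), using the failure-count parameterization and the fact that X_4 > 16 is the same event as Y_4 > 12.

    from scipy.stats import nbinom

    r, p = 4, 0.25

    print(nbinom.sf(16 - r, r, p))  # P(X_4 > 16) = P(Y_4 > 12), approximately 0.40498711
    print(nbinom.cdf(10, r, p))     # P(Y_4 <= 10), approximately 0.47866004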

_______________________________________________________________________

A more natural way of interpreting the survival function

We now discuss a better way to look at the survival functions in (3) and (4). Consider the event X_r>x and the event Y_r>y. We will see that the negative binomial survival function can be related to the cdf of a binomial distribution.

For the event X_r>x to occur, the rth success occurs after performing x trials. So it will take x+1 trials or more to get the rth success. This means that in the first x trials, there are at most r-1 successes. The following highlights the equivalent statements.

    \displaystyle \begin{aligned} X_r>x&\equiv \text{the } r \text{th success occurs after performing } x \text{ trials} \\&\equiv \text{it takes at least } x+1 \text{ trials to get the } r \text{th success}  \\&\equiv \text{in the first } x \text{ trials, there are at most } r-1 \text{ successes}  \end{aligned}

The last statement describes a binomial event. Specifically, it concerns the binomial distribution with x trials and probability of success p. Let’s denote the count of successes in this binomial distribution by B_{x,p}. Thus we can relate the survival function in (3) to the cdf of B_{x,p} as follows:

    \displaystyle \begin{aligned} P(X_r>x)&=P(B_{x,p} \le r-1) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x=r,r+1,r+2,\cdots \\&=\sum \limits_{k=0}^{r-1} \ P(B_{x,p}=k)  \\&=\sum \limits_{k=0}^{r-1} \ \binom{x}{k} \ p^k (1-p)^{x-k} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (7)   \end{aligned}

The advantage of (7) is the insight it gives by relating the negative binomial distribution to the binomial distribution. In terms of computation, (7) requires only r binomial probabilities. Thus Example 3 requires the computation of only 4 probabilities (versus 17 previously).

Note that X_r=Y_r+r. Thus the event Y_r>y is the same as the event X_r>y+r. So we can piggyback on the work done in (7). For the sake of clarity, here is a translation for the event Y_r>y.

    \displaystyle \begin{aligned} Y_r>y&\equiv X_r>y+r \\&\equiv \text{the } r \text{th success occurs after performing } y+r \text{ trials} \\&\equiv \text{it takes at least } y+r+1 \text{ trials to get the } r \text{th success}  \\&\equiv \text{in the first } y+r \text{ trials, there are at most } r-1 \text{ successes}  \end{aligned}

As before, let B_{n,p} denote the number of successes in performing n Bernoulli trials with p as the probability of success. Based on the above translation, the following gives the survival function for the negative binomial random variable Y_r.

    \displaystyle \begin{aligned} P(Y_r>y)&=P(X_r>y+r)   \\&=P(B_{y+r,p} \le r-1) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y=0,1,2,\cdots \\&=\sum \limits_{j=0}^{r-1} \ P(B_{y+r,p}=j)  \\&=\sum \limits_{j=0}^{r-1} \ \binom{y+r}{j} \ p^j (1-p)^{y+r-j} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (8)   \end{aligned}

For both negative binomial random variables X_r and Y_r, the survival functions can be computed using (7) and (8), respectively. Rote memorization of the formulas (7) and (8) is not recommended. Instead, focus on the thought process that translates the events X_r>x and Y_r>y into binomial events.
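As a sanity check, identities (7) and (8) can be verified numerically. The following is a sketch (not part of the original post, assuming Python with scipy.stats); the values r=4 and p=0.25 are illustrative.

    from scipy.stats import binom, nbinom

    r, p = 4, 0.25  # illustrative values

    for y in range(20):
        lhs = nbinom.sf(y, r, p)          # P(Y_r > y), negative binomial survival function
        rhs = binom.cdf(r - 1, y + r, p)  # P(B_{y+r,p} <= r-1), right-hand side of (8)
        assert abs(lhs - rhs) < 1e-10     # the two agree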

Example 4 (the above examples continued)
We now rework Example 3 using the ideas presented in (7) and (8).

    \displaystyle \begin{aligned} P(X_4>16)&=\sum \limits_{k=0}^{3} \ \binom{16}{k} \ 0.25^k \ 0.75^{16-k}=0.40498711    \end{aligned}

    \displaystyle \begin{aligned} P(Y_4 \le 10)&=1-P(Y_4>10)   \\&=1-\sum \limits_{j=0}^{3} \ \binom{14}{j} \ 0.25^j \ 0.75^{14-j} \\&=1-0.52133996  \\&=0.47866004   \end{aligned}

Example 5 (the above examples continued)
What is the median number of boxes of cereal purchased by Family A in order to obtain 4 boxes with the prize of free ticket to the zoo? What is the median number of boxes of cereal with undesirable prizes that are purchased by Family A?

We have the following probabilities.

    \displaystyle P(X_4>14)=\sum \limits_{k=0}^{3} \ \binom{14}{k} \ 0.25^k \ 0.75^{14-k}=0.52133996

    \displaystyle P(X_4>15)=\sum \limits_{k=0}^{3} \ \binom{15}{k} \ 0.25^k \ 0.75^{15-k}=0.461286876

    \displaystyle P(X_4 \le 14)=1-0.52133996=0.47866004

    \displaystyle P(X_4 \le 15)=1-0.461286876=0.538713124

This means that the median number of boxes to be purchased is 15. One way to look at it is that x= 15 is the first number such that P(X_4 \le x) is greater than 0.5. Then the median number of boxes with undesirable prizes is 11 (15 less 4).
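The same median can be recovered with software. A sketch (not part of the original post, assuming Python with scipy.stats):

    from scipy.stats import nbinom

    r, p = 4, 0.25

    m = nbinom.ppf(0.5, r, p)  # smallest y with P(Y_4 <= y) >= 0.5
    print(int(m) + r)          # median number of boxes purchased: 15
    print(int(m))              # median number of boxes with undesirable prizes: 11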

_______________________________________________________________________

Comparing with the Gamma distribution

The thought process discussed in (7) and (8) certainly gives a more efficient way to calculate the cumulative distribution function and the survival function of the negative binomial distribution. Even though the negative binomial cdf can be calculated easily by software, the ideas in (7) and (8) provide a formulation that gives more insight into the negative binomial distribution.

The thought process in (7) and (8) is analogous to the relationship between the Gamma distribution and the Poisson distribution. Consider the Gamma distribution where the shape parameter n is a positive integer and the rate parameter \beta can be any positive real number. Then the following is the density function of the Gamma distribution under consideration:

    \displaystyle f(x)=\frac{\beta^n}{(n-1)!} \ x^{n-1} \ e^{-\beta \ x} \ \ \ \ \ x>0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (9)

The Gamma distribution described by the density function in (9) can be interpreted as a waiting time – the waiting time until the nth change in a Poisson process. Thus, if the nth change takes place after time t, there can be at most n-1 arrivals in the time interval [0,t]. Hence the survival function of this Gamma distribution is the same as the cdf of a Poisson distribution. The survival function in (7) is analogous to the following relation.

    \displaystyle P(X>t)=\int_t^\infty \frac{\beta^n}{(n-1)!} \ x^{n-1} \ e^{-\beta x} \ dx=\sum \limits_{j=0}^{n-1} \frac{e^{-\beta t} \ (\beta t)^j}{j!} \ \ \ \ \ \ \ \ \ \ \ (10)

The idea behind (7) and (8) is the waiting time until the rth success, where each success or failure comes from a sequence of Bernoulli trials; the resulting distribution is a discrete waiting-time distribution. The idea behind (10) is the waiting time until the nth change, where the changes come from a Poisson counting process; the resulting distribution is a continuous waiting-time distribution. It is not necessary to memorize these formulas. It is easy to reproduce (7), (8) and (10) from the underlying thought processes.
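Relation (10) can also be checked numerically. The following is a sketch (not part of the original post, assuming Python with scipy.stats); the shape, rate, and time values are hypothetical and chosen only for illustration.

    from scipy.stats import gamma, poisson

    n, beta, t = 5, 2.0, 1.3  # hypothetical shape, rate, and time values

    lhs = gamma.sf(t, n, scale=1/beta)  # P(X > t) for the Gamma density in (9)
    rhs = poisson.cdf(n - 1, beta * t)  # right-hand side of (10)
    print(lhs, rhs)                     # the two agree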

________________________________________________________________________
\copyright \ \text{2015 by Dan Ma}