# The birthday problem

Consider this random experiment. You ask people (one at a time) about their birthdays (month and day only). The process continues until there is a repeat in the series of birthdays, in other words, until two people you’ve surveyed share the same birthday. How many people do you have to ask before finding a repeat? What is the probability that you will have to ask $k$ people before finding a repeat? What is the median number of people you have to ask? In this post, we discuss this random variable and how it relates to the birthday problem.

In the problem at hand, we ignore leap years and assume that each of the 365 days in the year is equally likely to be the birthday of a randomly chosen person. The birthday problem is typically the following question. How many people do we need to choose in order to have a 50% or better chance of having a common birthday among the selected individuals?

The random experiment stated at the beginning can be restated as follows. Suppose that balls are randomly thrown (one at a time) into $n$ cells (e.g. boxes). The random process continues until a ball is thrown into a cell that already has one ball (i.e. until a repeat occurs). Let $X_n$ be the number of balls that are required to obtain a repeat. Some of the problems we discuss include the mean (the average number of balls that must be thrown to get a repeat) and the probability function. We will also show how this random variable is linked to the birthday problem when $n=365$.

___________________________________________________________________________

The Birthday Problem

First, we start with the birthday problem. The key is to derive the probability that in a group of $k$ randomly selected people, there are at least two who share the same birthday. It is easier to work with the complement – the probability that all $k$ people have different birthdays. We call this probability $p_k$.

\displaystyle \begin{aligned} p_k&=\frac{365}{365} \ \frac{364}{365} \ \frac{363}{365} \cdots \frac{365-(k-1)}{365} \\&=\frac{364}{365} \ \frac{363}{365} \cdots \frac{365-(k-1)}{365} \\&=\biggl[1-\frac{1}{365} \biggr] \ \biggl[1-\frac{2}{365} \biggr] \cdots \biggl[1-\frac{k-1}{365} \biggr] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1) \end{aligned}

where $k=2,3,4,\cdots,365$. The reasoning for the first line is that there are 365 possible birthdays for the first person. Each subsequent selection has to avoid all of the previously chosen birthdays, leaving 364 choices for the second person and $365-(k-1)$ choices for the $k$th person.

To answer the birthday problem, just plug in values of $k$ to compute $p_k$ and $1-p_k$ until reaching the smallest $k$ such that $p_k<0.5$ and $1-p_k>0.5$. The calculation should be done using software (Excel for example). The smallest $k$ is 23 since

$p_{23}= 0.492702766$

$1-p_{23}= 0.507297234$

In a random group of 23 people, there is a less than 50% chance of having distinct birthdays, and thus a more than 50% chance of having at least one identical birthday. This may be a surprising result. Without the benefit of formula (1), some people may think that it will take a larger sample to obtain a repeat.
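For readers who want to check the numbers themselves, here is a minimal Python sketch (Python is not used elsewhere in this post; the calculations above were done in Excel) that evaluates formula (1) and searches for the smallest $k$ with $p_k<0.5$.

```python
# A sketch of formula (1), assuming 365 equally likely birthdays (no leap years).
def prob_all_distinct(k, n=365):
    """Probability that k randomly chosen birthdays are all distinct."""
    p = 1.0
    for j in range(1, k):
        p *= 1 - j / n  # person j+1 must avoid the j birthdays already seen
    return p

k = 2
while prob_all_distinct(k) >= 0.5:
    k += 1
print(k, prob_all_distinct(k))  # expected output: 23 and roughly 0.4927
```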

The benefit of (1) extends beyond the birthday problem. Let’s consider the case for $n$, i.e. randomly pick numbers from the set $\left\{1,2,3,\cdots,n-1,n \right\}$ with replacement until a number is chosen twice (until a repeat occurs). Similarly, let $p_{n,k}$ be the probability that in $k$ draws all chosen numbers are distinct. The probability $p_{n,k}$ is obtained by replacing 365 with $n$.

\displaystyle \begin{aligned} p_{n,k}&=\frac{n}{n} \ \frac{n-1}{n} \ \frac{n-2}{n} \cdots \frac{n-(k-1)}{n} \\&=\frac{n-1}{n} \ \frac{n-2}{n} \cdots \frac{n-(k-1)}{n} \\&=\biggl[1-\frac{1}{n} \biggr] \ \biggl[1-\frac{2}{n} \biggr] \cdots \biggl[1-\frac{k-1}{n} \biggr] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2) \end{aligned}

Formula (2) will be useful in the next section.

___________________________________________________________________________

The Random Variable

We now look into the random variable discussed at the beginning, either the one for picking people at random until a repeated birthday or throwing balls into cells until one cell has two balls. To illustrate the idea, let’s look at an example.

Example 1
Roll a fair die until obtaining a repeated face value. Let $X_6$ be the number of rolls to obtain the repeated value. Find the probability function $P[X_6=k]$ where $k=2,3,4,5,6,7$.

Note that $P[X_6=k]$ is the probability that it will take $k$ rolls to get a repeated die value.

$\displaystyle P(X_6=2)=\frac{6}{6} \times \frac{1}{6}=\frac{1}{6}$

$\displaystyle P(X_6=3)=\frac{6}{6} \times \frac{5}{6} \times \frac{2}{6}=\frac{10}{6^2}$

$\displaystyle P(X_6=4)=\frac{6}{6} \times \frac{5}{6} \times \frac{4}{6} \times \frac{3}{6}=\frac{60}{6^3}$

$\displaystyle P(X_6=5)=\frac{6}{6} \times \frac{5}{6} \times \frac{4}{6} \times \frac{3}{6} \times \frac{4}{6}=\frac{240}{6^4}$

$\displaystyle P(X_6=6)=\frac{6}{6} \times \frac{5}{6} \times \frac{4}{6} \times \frac{3}{6} \times \frac{2}{6} \times \frac{5}{6}=\frac{600}{6^5}$

$\displaystyle P(X_6=7)=\frac{6}{6} \times \frac{5}{6} \times \frac{4}{6} \times \frac{3}{6} \times \frac{2}{6} \times \frac{1}{6} \times \frac{6}{6}=\frac{720}{6^6}$

To get a repeat in 2 rolls, there are 6 choices for the first roll and the second roll has only one choice – the value of the first roll. To get a repeat in 3 rolls, there are 6 choices for the first roll, 5 choices for the second roll, and the third roll must be one of the two previous distinct values. The idea is that the first $k-1$ rolls are distinct and the last roll must be one of the previous values. $\square$

The reasoning process leads nicely to the general case. In the general case, let’s consider the occupancy interpretation. In throwing balls into $n$ cells, let $X_n$ be defined as above, i.e. the number of balls that are required to obtain a repeat. The following gives the probability $P[X_n=k]$.

\displaystyle \begin{aligned} P[X_n=k]&=\frac{n}{n} \times \frac{n-1}{n} \times \cdots \times \frac{n-(k-2)}{n} \times \frac{k-1}{n} \\&=\frac{n-1}{n} \times \cdots \times \frac{n-(k-2)}{n} \times \frac{k-1}{n} \\&=\frac{(n-1) \times (n-2) \times \cdots \times (n-(k-2)) \times (k-1)}{n^{k-1}} \\&=\biggl[1-\frac{1}{n} \biggr] \times \biggl[1-\frac{2}{n} \biggr] \times \cdots \times \biggl[1-\frac{k-2}{n}\biggr] \times \frac{k-1}{n} \ \ \ \ \ \ \ (3) \end{aligned}

where $k=2,3,\cdots,n+1$.

The reasoning is similar to Example 1. To get a repeat in throwing $k$ balls, the first $k-1$ balls must go into different cells while the last ball goes into one of the $k-1$ occupied cells. For the first $k-1$ balls to go into different cells, there are $n (n-1) \cdots (n-(k-2))$ ways. There are $k-1$ occupied cells in which the last ball can land. Thus the product of these two quantities is in the numerator of (3).

Once the probability function (3) is obtained, the mean $E[X_n]$ can be derived accordingly. For the case of $n=365$, $E[X_{365}]=24.62$ (calculated by programming the probability function in Excel). On average, about 25 people need to be sampled to obtain a repeated birthday.
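As a check on the above figure, the following Python sketch (an alternative to the Excel calculation, not the original workbook) implements the probability function (3) and uses it to compute $E[X_{365}]$.

```python
# A sketch of formula (3) and the mean E[X_n]; plain Python, no external libraries.
def prob_repeat_at(k, n):
    """P[X_n = k]: the first k-1 balls land in distinct cells, the kth ball repeats."""
    p = (k - 1) / n
    for j in range(1, k - 1):
        p *= 1 - j / n
    return p

def mean_repeat(n):
    return sum(k * prob_repeat_at(k, n) for k in range(2, n + 2))

print(round(mean_repeat(365), 2))  # should be close to 24.62
```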

Another interesting quantity is $P[X_n>k]$. This is the probability that it will take throwing more than $k$ balls to get a repeat. Mathematically this can be obtained by first calculating $P[X_n \le k]$ by summing the individual probabilities via (3). This is a workable approach using software. There is another way that is more informative. For the event $X_n>k$ to happen, the first $k$ throws must be in different cells (no repeat). The event $X_n>k$ is identical to the event that there is no repeat in the first $k$ throws of balls. This is how the random variable $X_n$ is linked to the birthday problem since the probability $P[X_n>k]$ should agree with the probability $p_{n,k}$ in (2).

$\displaystyle P[X_n>k]=\biggl[1-\frac{1}{n} \biggr] \ \biggl[1-\frac{2}{n} \biggr] \cdots \biggl[1-\frac{k-1}{n} \biggr] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4)$

___________________________________________________________________________

Back to The Birthday Problem

Consider the case of $X_{365}$. What is the median of $X_{365}$? That would be the median number of people surveyed to obtain a repeated birthday. The median of $X_{365}$ is the least $k$ such that $P[X_{365} \le k]$ is at least 0.5. Note that $P[X_{365}>k]$ is identical to $p_k$ in (1). The above calculation shows that $p_{23}=0.4927$ and $1-p_{23}=0.5073$. Thus the median of $X_{365}$ is 23. So when carrying out the birthday survey described at the beginning, about half of the time you can expect to survey 23 or fewer people.

The birthday problem is equivalently about finding the median of the random variable $X_{365}$. A little more broadly, the birthday problem is connected to the percentiles of the variable $X_{n}$. In contrast, the mean of $X_{365}$ is $E[X_{365}]=24.62$. The following lists several percentiles for the random variable $X_{365}$.

$\begin{array}{ccccccc} k & \text{ } & P[X_{365}>k] & \text{ } & P[X_{365} \le k] & \text{ } & \text{Percentile} \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 14 & \text{ } & 0.77690 & & 0.22310 & \\ 15 & \text{ } & 0.74710 & & 0.25290 & \text{ } & \text{25th Percentile} \\ 16 & \text{ } & 0.71640 & & 0.28360 & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 22 & \text{ } & 0.52430 & & 0.47570 & \\ 23 & \text{ } & 0.49270 & & 0.50730 & \text{ } & \text{50th Percentile} \\ 24 & \text{ } & 0.46166 & & 0.53834 & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 31 & \text{ } & 0.26955 & & 0.73045 & \\ 32 & \text{ } & 0.24665 & & 0.75335 & \text{ } & \text{75th Percentile} \\ 33 & \text{ } & 0.22503 & & 0.77497 & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 40 & \text{ } & 0.10877 & & 0.89123 & \\ 41 & \text{ } & 0.09685 & & 0.90315 & \text{ } & \text{90th Percentile} \\ 42 & \text{ } & 0.08597 & & 0.91403 & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 46 & \text{ } & 0.05175 & & 0.94825 & \\ 47 & \text{ } & 0.04523 & & 0.95477 & \text{ } & \text{95th Percentile} \\ 48 & \text{ } & 0.03940 & & 0.96060 & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 56 & \text{ } & 0.01167 & & 0.98833 & \\ 57 & \text{ } & 0.00988 & & 0.99012 & \text{ } & \text{99th Percentile} \\ 58 & \text{ } & 0.00834 & & 0.99166 & \\ \end{array}$

In a group of 366 people, it is certain that there will be at least one repeated birthday (again ignoring leap years). This is due to the pigeonhole principle. As the percentiles in the above table show, you do not need to survey anywhere close to 366 people to get a repeat. The median is 23 as discussed. The 75th percentile of $X_{365}$ is 32.

The preceding calculation shows that you do not need a large group to have a repeated birthday. About 50% of the time you will survey 23 or fewer people; about 75% of the time, 32 or fewer people; and about 99% of the time, 57 or fewer people, far fewer than 365 or 366. So with around 50 people in a random group, finding a shared birthday is a near certainty. In a random group of 100 people, a shared birthday is all but guaranteed.

For a further demonstration, we simulated the random variable $X_{365}$ 10,000 times. The range of the simulated values is 2 to 78. Thus the chance of having to survey 100 people is smaller than 1 in 10,000; to see a simulated value of 100, we would have to simulate more than 10,000 values of $X_{365}$. The median of the 10,000 simulated results is 23. The following table summarizes the results.

$\begin{array}{rrrrrr} \text{Interval} & \text{ } & \text{Count} & \text{ } & \text{Proportion} & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ \text{2 to 9} & \text{ } &953 & & 0.09530 & \\ \text{10 to 19} & \text{ } & 2870 & & 0.28700 & \\ \text{20 to 29} & \text{ } & 3055 & & 0.30550 & \\ \text{30 to 39} & \text{ } & 1922 & & 0.19220 & \\ \text{40 to 49} & \text{ } & 886 & & 0.08860 & \\ \text{50 to 59} & \text{ } & 253 & & 0.02530 & \\ \text{60 to 69} & \text{ } & 48 & & 0.00480 & \\ \text{70 to 78} & \text{ } & 13 & & 0.00130 & \\ \end{array}$

Not shown in the table is that 33 of the simulated results equal 2. Thus it is possible for the first two people asked to share a birthday, but the chance of that happening is only 33 out of 10,000 in this particular set of simulations (probability 0.0033); the theoretical probability is 1/365 = 0.002739726. There are 2 instances of 78 among the 10,000 simulated values, an observed proportion of 0.0002; the theoretical probability from (3) is about 0.000037.
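The simulation described above was done in Excel; the following is a rough Python sketch of the same experiment (results will vary from run to run, but the median should be near 23 and values beyond 80 should be very rare).

```python
# Simulate X_365: draw birthdays at random until one repeats; standard library only.
import random
from statistics import median

def sample_repeat(n=365):
    seen, draws = set(), 0
    while True:
        draws += 1
        b = random.randrange(n)
        if b in seen:
            return draws  # a repeat occurred on this draw
        seen.add(b)

results = [sample_repeat() for _ in range(10_000)]
print(min(results), max(results), median(results))
```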

___________________________________________________________________________
$\copyright \ 2017 \text{ by Dan Ma}$

# How long does it take to collect all coupons?

This post discusses the coupon collector problem, a classical problem in probability.

___________________________________________________________________________

The Coupon Collector Problem

The problem is usually stated as a coupon collector trying to collect the entire set of coupons. For example, each time the coupon collector buys a product (e.g. a box of breakfast cereal), he receives a coupon, which is a prize that can be a toy or a baseball card or other interesting item. Suppose that there are $n$ different types of coupons (prizes) and that the coupon collector wishes to collect the entire set. How many units of the product does the coupon collector have to buy in order to collect the entire set?

A simplified discussion of the coupon collector problem is found in another blog post. This post is a more detailed discussion.

This blog post (in another blog) discusses the coupon collector problem from a simulation perspective.

As shown below, if there are 5 different coupons, it takes 12 purchases on average to get all the coupons. If there are 10 different types of coupons, on average it takes 30 purchases. If there are 50 different types of coupons, it takes on average 225 purchases to collect the entire set. The first few coupons are obtained fairly quickly. As more and more coupons are accumulated, it becomes harder to get the remaining ones. For example, in the 50-coupon case, after the first 49 coupons are obtained, it takes on average 50 purchases to get the last coupon.

Suppose that the coupon collector does not want to collect the entire set and only wishes to collect $r$ distinct coupons where $r \le n$. It turns out that this special case only requires a minor tweak to the case of collecting the entire set. Our strategy then is to focus on the main case. The special case will be discussed at the end of the post.

We first consider the main case that the coupon collector wishes to collect the entire set. The problem can be cast as a random sampling from the population $S=\left\{1,2,3,\cdots,n \right\}$. Selecting a number at random from $S$ with replacement is equivalent to the coupon collector receiving a coupon. Let $X_n$ be the minimum number of selections such that each number in $S$ is picked at least once. In this post we discuss the probability function of $X_n$ as well as its mean and variance.

Another interpretation of the problem is that it can be viewed as an occupancy problem, which involves throwing balls into $n$ cells at random. The random quantity of interest is the number of balls that are required to be thrown such that each cell has at least one ball. Clearly this formulation is identical to the coupon interpretation and the random sampling interpretation. The angle of occupancy problem is useful since we can leverage the formulas developed in this previous post. A description of the occupancy problem is given here.

Regardless of the interpretation, the goal is to obtain information on the random variable $X_n$, the minimum number of random selections from $S=\left\{1,2,3,\cdots,n \right\}$ in order to have the complete set of $n$ distinct values represented in the sample.

___________________________________________________________________________

Mean and Variance

The mean and variance of $X_n$ are easier to obtain. So that is where we begin. The key is to break up $X_n$ into a sum as follows:

$X_n=C_{1}+C_{2}+C_{3}+\cdots+C_{i-1}+C_{i}+\cdots+C_{n} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (0)$

where $C_{i}$ is the number of additional selections from $S=\left\{1,2,3,\cdots,n \right\}$ needed to get a number that is distinct from the $i-1$ distinct numbers already chosen. For example, $C_3$ is the number of random selections needed to get a number that is distinct from the two distinct numbers obtained up to that point.

Note that each $C_i$ involves repeated sampling until some criterion is reached, thus resembling a geometric random variable. Indeed they are. As the sampling continues and as more distinct values are obtained, it is not as easy to obtain a new number. After $i-1$ distinct numbers have been obtained, the probability of drawing a new distinct number is $p=\frac{n-(i-1)}{n}$. As geometric random variables, each $C_i$ has the following mean and variance.

$\displaystyle E[C_i]=\frac{1}{p}=\frac{n}{n-(i-1)}$

$\displaystyle Var[C_i]=\frac{1-p}{p^2}=\frac{n(i-1)}{[n-(i-1)]^2}$

where $i=1,2,3,\cdots,n$. Note that the random variables $C_i$ are independent. The value of $C_i$ does not depend on how many trials it takes to draw the previous $i-1$ distinct numbers. The following gives the mean and variance of $X_n$.

$\displaystyle E[X_n]=\sum \limits_{i=1}^n E[C_i]=\sum \limits_{i=1}^n \frac{n}{n-(i-1)} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$

$\displaystyle Var[X_n]=\sum \limits_{i=1}^n Var[C_i]=\sum \limits_{i=1}^n \frac{n(i-1)}{[n-(i-1)]^2} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$

The expectation $E[X_n]$ can be rearranged as follows to give more information.

$\displaystyle E[X_n]=n \ \biggl[\frac{1}{n}+ \cdots + \frac{1}{3}+ \frac{1}{2} + 1\biggr]=n \ H_n \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)$

The quantity $H_n$ is the partial sum of the harmonic series. Note that $H_n \rightarrow \infty$ as $n \rightarrow \infty$. Thus $E[X_n] \rightarrow \infty$ as $n \rightarrow \infty$. The quantity $H_n$ can be interpreted as the average number of purchases required per coupon. The following table lists the expected values for selected values of $n$.

Table 1

$\begin{array}{cccccc} \text{Number of} & \text{ } & \text{Expected Number of Trials} & \text{ } & \text{Expected Total Number} & \\ \text{Coupons} & \text{ } & \text{per Coupon} & \text{ } & \text{of Trials (rounded up)} & \\ n & \text{ } & E[H_n] & \text{ } & E[X_n] & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 1 & \text{ } & 1.0000 & & 1 & \\ 2 & \text{ } & 1.5000 & & 3 & \\ 3 & \text{ } & 1.8333 & & 6 & \\ 4 & \text{ } & 2.0833 & & 9 & \\ 5 & \text{ } & 2.2833 & & 12 & \\ 6 & \text{ } & 2.4500 & & 15 & \\ 7 & \text{ } & 2.5929 & & 19 & \\ 8 & \text{ } & 2.7179 & & 22 & \\ 9 & \text{ } & 2.8290 & & 26 & \\ 10 & \text{ } & 2.9290 & & 30 & \\ 20 & \text{ } & 3.5977 & & 72 & \\ 30 & \text{ } & 3.9950 & & 120 & \\ 40 & \text{ } & 4.2785 & & 172 & \\ 50 & \text{ } & 4.4992 & & 225 & \\ 60 & \text{ } & 4.6799 & & 281 & \\ 70 & \text{ } & 4.8328 & & 339 & \\ 80 & \text{ } & 4.9655 & & 398 & \\ 90 & \text{ } & 5.0826 & & 458 & \\ 100 & \text{ } & 5.1874 & & 519 & \\ \end{array}$

Table 1 gives an estimate of how long it takes, on average, to collect the entire set of coupons for selected coupon set sizes. The third column gives the expected total number of purchases to obtain the entire coupon set. The second column gives an estimate of how many purchases on average are needed to obtain one coupon. For the 50-coupon case, it takes on average about 4.5 purchases to obtain one coupon. However, this average does not tell the whole story. To get the 50th coupon, it takes on average 50 trials. Note that $E[C_{50}]=50$ in the 50-coupon case in formula (1). In a simulation of the 50-coupon problem, it took 54 trials to obtain the 50th coupon. To get the 49th coupon, it takes on average 50/2 = 25 trials.
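The following Python sketch (an alternative to the Excel calculation behind Table 1) implements formulas (1), (2) and (3) and reproduces the table entries for any coupon count $n$.

```python
# Mean and variance of the coupon collector's waiting time X_n; plain Python.
import math

def coupon_mean(n):
    return n * sum(1 / (n - i + 1) for i in range(1, n + 1))  # n * H_n

def coupon_var(n):
    return sum(n * (i - 1) / (n - i + 1) ** 2 for i in range(1, n + 1))

for n in (5, 10, 50):
    # harmonic number H_n, expected total purchases (rounded up), variance
    print(n, round(coupon_mean(n) / n, 4), math.ceil(coupon_mean(n)), round(coupon_var(n), 2))
```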

___________________________________________________________________________

The Occupancy Problem

We now view the coupon collector problem as an occupancy problem in order to leverage a formula from a previous post. Suppose that we randomly throw $k$ balls into $n$ cells. Let $Y_{k,n}$ be the number of empty cells as a result of randomly assigning $k$ balls into $n$ cells. The following gives the probabilities $P(Y_{k,n}=j)$ where $j=0,1,2,\cdots,n-1$.

$\displaystyle P(Y_{k,n}=0)=\sum \limits_{i=0}^{n} (-1)^{i} \binom{n}{i} \biggl(1-\frac{i}{n}\biggr)^{k} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4)$

$\displaystyle P(Y_{k,n}=j)=\binom{n}{j} \sum \limits_{i=0}^{n-j} (-1)^{i} \binom{n-j}{i} \biggl(1-\frac{j+i}{n}\biggr)^{k} \ \ \ \ \ \ \ \ \ \ \ \ \ (5)$

where $j=1,2, \cdots, n-1$.

The notation $\binom{n}{j}$ is the binomial coefficient, which is the number of ways to choose $j$ objects out of $n$ objects where order does not matter. The calculation is defined by $\displaystyle \binom{n}{j}=\frac{n!}{j! (n-j)!}$.

The formula (5) gives the probability of having $j$ empty cells. In throwing $k$ balls into $n$ cells, the probability of having $w$ occupied cells is then $P[Y_{k,n}=n-w]$.
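As a quick reference, the following Python sketch (my own illustration; the original post relies on Excel) implements the occupancy formula (5), with $j=0$ reducing to formula (4).

```python
# P(Y_{k,n} = j): probability of exactly j empty cells after throwing k balls into n cells.
from math import comb

def prob_empty_cells(j, k, n):
    total = sum((-1) ** i * comb(n - j, i) * (1 - (j + i) / n) ** k
                for i in range(n - j + 1))
    return comb(n, j) * total

# e.g. the chance that all 6 faces of a die have appeared after 10 rolls
print(prob_empty_cells(0, 10, 6))
```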

___________________________________________________________________________

The Probability Function

We now discuss the probability function of the random variable $X_n$, namely $P[X_n=k]$ for $k=n,n+1,n+2,\cdots$. The event $X_n=k$ means that all $n$ cells are occupied after throwing $k$ balls with the first $k-1$ balls landing in $n-1$ cells. Putting it in another way, there are zero empty cells after throwing $k$ balls and there is 1 empty cell after throwing the first $k-1$ balls. This can be stated using the notation in the preceding section on the occupancy problem as follows:

$P[X_n=k]=P[Y_{k,n}=0 \text{ and } Y_{k-1,n}=1] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (6)$

Consider the following derivation.

\displaystyle \begin{aligned} P[X_n=k]&=P[Y_{k,n}=0 \text{ and } Y_{k-1,n}=1] \\&=P[Y_{k,n}=0 \ \lvert \ Y_{k-1,n}=1] \times P[Y_{k-1,n}=1] \\&=\frac{1}{n} \times \binom{n}{1} \sum \limits_{i=0}^{n-1} (-1)^i \binom{n-1}{i} \biggl[ 1-\frac{1+i}{n} \biggr]^{k-1} \\&=\sum \limits_{i=0}^{n-1} (-1)^i \binom{n-1}{i} \biggl[ 1-\frac{1+i}{n} \biggr]^{k-1} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (7) \end{aligned}

where $k=n,n+1,n+2,\cdots$.

Rather than memorizing the probability function in (7), a better way is to focus on the thought process that is inherent in (6).

One comment about the calculation for (7). The summation for $P[X_n=k]$ has $n$ terms. A given probability may involve multiple values of $k$, e.g.

$P[X_6>15]=1-P[6 \le X_6 \le 15]$.

Unless the number of values for $k$ is very small, the calculation should be done using software. Microsoft Excel is an excellent way to perform the calculation. The calculations for the examples below are programmed in Excel.
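For those who prefer a scripting language over a spreadsheet, here is a Python sketch of the probability function (7) together with the tail probability $P[X_n>k]$; it is offered as an illustration, not the Excel workbook used for the examples below.

```python
# Formula (7) and the tail probability P[X_n > k]; plain Python.
from math import comb

def prob_complete_at(k, n):
    """P[X_n = k]: the last of the n distinct values appears exactly on trial k."""
    return sum((-1) ** i * comb(n - 1, i) * (1 - (1 + i) / n) ** (k - 1)
               for i in range(n))

def prob_tail(k, n):
    """P[X_n > k] = 1 - P[n <= X_n <= k]."""
    return 1 - sum(prob_complete_at(j, n) for j in range(n, k + 1))

print(prob_tail(15, 6))  # compare with Example 1 below
```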

___________________________________________________________________________

Examples

Example 1
Suppose that a fair die is rolled until all 6 faces have appeared. Find the mean number of rolls and the variance of the number of rolls. What is the probability that it will take at least 12 rolls? What is the probability that it will take more than 15 rolls?

Using the notation developed above, the random variable of interest is $X_6$. Its mean and variance are:

$\displaystyle E[X_6]=6 \ \biggl[ 1 + \frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6} \biggr]=6 \times 2.45 = 14.7 \approx 15$

$\displaystyle Var[X_6]=\sum \limits_{i=1}^6 \frac{6(i-1)}{[6-(i-1)]^2}=38.99$

The following is the probability function for $X_6$.

$\displaystyle P[X_6=k]=\sum \limits_{i=0}^{5} (-1)^i \binom{5}{i} \biggl[ 1-\frac{1+i}{6} \biggr]^{k-1}$

where $k=6,7,8,\cdots$

For each $k$, the quantity $P[X_6=k]$ requires 6 calculations. Performing the calculations in Excel, the desired probabilities are:

$P[X_6 \ge 12]=1-P[6 \le X_6 \le 11]=1-0.356206419=0.643793581$

$P[X_6 > 15]=1-P[6 \le X_6 \le 15]=1-0.644212739=0.355787261$

Even though the average number of trials is 15, there is still a significant probability that it will take more than 15 trials. This is because the variance is quite large. $\square$
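A quick simulation provides a sanity check on Example 1. The sketch below (plain Python, standard library only, my own illustration) rolls a die until all six faces appear; the empirical mean should land near 14.7 and the variance near 39.

```python
# Simulate the number of rolls needed to see all 6 faces of a fair die.
import random
from statistics import mean, variance

def rolls_until_all_faces(sides=6):
    seen, count = set(), 0
    while len(seen) < sides:
        seen.add(random.randint(1, sides))
        count += 1
    return count

trials = [rolls_until_all_faces() for _ in range(100_000)]
print(round(mean(trials), 2), round(variance(trials), 2))
```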

Example 2
An Internet startup is rapidly hiring new employees. What is the expected number of new employees hired until all birth months are represented? Assume that the 12 birth months are equally likely. What is the probability that the company will have to hire more than 25 employees? If the company currently has more than 25 employees with fewer than 12 birth months represented, what is the probability that it will have to hire more than 35 employees to have all 12 birth months represented in the company?

The random variable of interest is $X_{12}$. The following shows the mean and probability function.

$\displaystyle E[X_{12}]=12 \ \biggl[ 1 + \frac{1}{2}+\frac{1}{3}+\cdots +\frac{1}{12} \biggr]=37.23852814 \approx 38$

$\displaystyle P[X_{12}=k]=\sum \limits_{i=0}^{11} (-1)^i \binom{11}{i} \biggl[ 1-\frac{1+i}{12} \biggr]^{k-1}$

where $k=12,13,14,\cdots$

Performing the calculation in Excel, we obtain the following probabilities.

$P[X_{12} > 25]=1-P[12 \le X_{12} \le 25]=1-0.181898592=0.818101408$

$P[X_{12} > 35]=1-P[12 \le X_{12} \le 35]=1-0.531821149=0.468178851$

$\displaystyle P[X_{12} > 35 \ \lvert \ X_{12} > 25]=\frac{P[X_{12} > 35]}{P[X_{12} > 25]}=\frac{0.468178851}{0.818101408}=0.572274838$

___________________________________________________________________________

A Special Case

We now consider the special case that the coupon collector only wishes to collect $r$ distinct coupons where $r \le n$. Of course, $n$ is the total number of distinct coupon types. Let $X_{n,r}$ be the minimum number of purchases such that $r$ distinct coupons have been obtained. In the random sampling interpretation, $X_{n,r}$ would be the minimum sample size such that $r$ distinct elements have been chosen from the sample space $S=\left\{1,2,3,\cdots,n \right\}$. The mean and variance of $X_{n,r}$ follow from the same idea. Each $X_{n,r}$ is the independent sum of geometric random variables as in (0).

$X_{n,r}=C_{1}+C_{2}+C_{3}+\cdots+C_{i-1}+C_{i}+\cdots+C_{r} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (8)$

where $r \le n$.

Thus $E[X_{n,r}]$ and $Var[X_{n,r}]$ would be like (1) and (2) except that the summation is through $r$ instead of $n$.

For the probability function of $X_{n,r}$, we only need to tweak the thought process expressed in (6) slightly. For the event $X_{n,r}=k$ to happen, exactly $r$ cells are occupied after throwing $k$ balls with the first $k-1$ balls landing in $r-1$ cells. In other words, there are exactly $n-r$ empty cells after throwing $k$ balls and there are exactly $n-r+1$ empty cells after throwing $k-1$ balls. The following expresses this condition in terms of the occupancy problem, i.e. similar to (6).

$P[X_{n,r}=k]=P[Y_{k,n}=n-r \text{ and } Y_{k-1,n}=n-r+1] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (9)$

Here’s the important components that need to go into $P[X_{n,r}=k]$ with the first one coming from the occupancy formula (5).

$\displaystyle P[Y_{k-1,n}=n-r+1]=\binom{n}{n-r+1} \sum \limits_{i=0}^{r-1} (-1)^i \binom{r-1}{i} \biggl[ 1-\frac{n-r+1+i}{n} \biggr]^{k-1}$

$\displaystyle P[Y_{k,n}=n-r \ \lvert \ Y_{k-1,n}=n-r+1]=\frac{n-r+1}{n}$

Multiplying the above two probabilities together produces the desired probability $P[X_{n,r}=k]$ for $k=r,r+1,r+2,\cdots$.

$\displaystyle P[X_{n,r}=k]=\binom{n-1}{r-1} \sum \limits_{i=0}^{r-1} (-1)^i \binom{r-1}{i} \biggl[ 1-\frac{n-r+1+i}{n} \biggr]^{k-1} \ \ \ \ \ \ \ \ (10)$

Note that when $n=r$ (collecting the entire set of coupons), formula (10) would be identical to (7). The following example demonstrates the calculation.

Example 3
Consider the 6-coupon case discussed in Example 1. Suppose that the coupon collector is interested in collecting $r=4$ coupons. What is the expected number of purchases to get 4 coupons? What is the probability that it will take more than 6 purchases to get 4 coupons? What is the probability that it will take more than 8 purchases to get 4 coupons? Compare these results with Example 1.

The random variable of interest is $X_{6,4}$. The mean is:

$\displaystyle E[X_{6,4}]=6 \biggl[ \frac{1}{6}+\frac{1}{5}+\frac{1}{4}+\frac{1}{3} \biggr]=5.7 \approx 6$

Note that it is much faster to obtain 4 coupons than the entire set of six. The following gives the probability function for $X_{6,4}$.

$\displaystyle P[X_{6,4}=k]=\binom{5}{3} \sum \limits_{i=0}^{3} (-1)^i \binom{3}{i} \biggl[ 1-\frac{3+i}{6} \biggr]^{k-1}$

where $k=4,5,6,\cdots$

Performing the calculations in Excel gives the following probabilities.

$P[X_{6,4} > 6]=1-P[4 \le X_{6,4} \le 6]=1-0.74845679=0.25154321$

$P[X_{6,4} > 8]=1-P[4 \le X_{6,4} \le 8]=1-0.928712277=0.071287723$

The first probability shows there is still a good chance that it will take more than the mean number of trials to get 4 coupons. The wait is much shorter than in Example 1, since the probability of a wait of more than 8 trials is fairly small. $\square$
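As with the earlier formulas, formula (10) is easy to program. The following Python sketch (my own illustration, not the Excel calculation used above) implements it; the two tail probabilities printed at the end should agree with the values in Example 3.

```python
# Formula (10): P[X_{n,r} = k], the rth distinct coupon arrives exactly on trial k.
from math import comb

def prob_partial_at(k, n, r):
    return comb(n - 1, r - 1) * sum(
        (-1) ** i * comb(r - 1, i) * (1 - (n - r + 1 + i) / n) ** (k - 1)
        for i in range(r))

def prob_partial_tail(k, n, r):
    """P[X_{n,r} > k]."""
    return 1 - sum(prob_partial_at(j, n, r) for j in range(r, k + 1))

print(prob_partial_tail(6, 6, 4), prob_partial_tail(8, 6, 4))
```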

___________________________________________________________________________

Moment Generating Function

One distributional quantity that is easy to obtain is the moment generating function (MGF) for the random variable $X_n$ (the case of collecting the entire set of coupons) as well as for the random variable $X_{n,r}$ (the partial case). Since both of these random variables are the independent sum of geometric random variables, their MGFs would simply be the product of the individual geometric MGFs. The following gives the results.

$\displaystyle M_{X_n}(t)=\prod \limits_{i=1}^n \frac{[n-(i-1)] e^t}{n-(i-1) e^t} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (11)$

$\displaystyle M_{X_{n,r}}(t)=\prod \limits_{i=1}^r \frac{[n-(i-1)] e^t}{n-(i-1) e^t} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (12)$

___________________________________________________________________________
$\copyright \ 2017 \text{ by Dan Ma}$

# The problem of points

There are two celebrated problems in probability that originated with the French professional gambler Chevalier de Méré (1607-1684). The problems were solved jointly by Blaise Pascal (1623-1662) and Pierre de Fermat (1601-1665) in a series of letters. The ideas discussed in these letters are often credited with having started probability theory. In a previous post, we discuss one of the problems posed by Chevalier de Méré to Pascal (the dice problem). In this post, we discuss the second problem – the problem of points.

___________________________________________________________________________

Describing the Problem

Here’s a description of the famous problem of points. Two players play a game of chance with the agreement that each player puts up equal stakes and that the first player who wins a certain number of rounds (or points) will collect the entire stakes. Suppose that the game is interrupted before either player has won. How do the players divide the stakes fairly?

It is clear that the player who is closer to winning should get a larger share of the stakes. Since the player who had won more points is closer to winning, the player with more points should receive a larger share of the stakes. How do we quantify the differential?

To describe the problem in a little more detail, suppose that two players, A and B, play a series of points in a game such that player A wins each point with probability $p$ and player B wins each point with probability $1-p$. The first player to win $T$ points wins the game. Suppose that the game is stopped for some reason. At the time of stopping, player A has won $a$ points and player B has won $b$ points with $a<T$ and $b<T$. How do they divide the stakes? Note that the pot is contributed equally by the two players.

In attacking the problem, Pascal’s idea is that the share of the stakes that is received by a player should be proportional to his/her probability of winning if the game were to continue at the point of stopping. Let’s explore this idea by looking at examples.

___________________________________________________________________________

Looking at the Problem thru Examples

The following examples are based on the following rule. Let’s say two players (A and B) play a series of points with equal probability of winning a point at each round. Each player puts up a stake of 32. The first player who wins four points takes the entire stakes.

Example 1 (Fermat’s Approach)
Suppose that player A has won 2 points and player B has won one point right before the termination of the game. How can the stakes be divided fairly?

In the analysis, we assume that the game continues. Then player A needs 2 more points to win while player B needs 3 more points to win. We would like to calculate the probability that player A wins 2 points before player B wins 3 points. Consider the next 2 + 3 – 1 = 4 rounds (assuming one point per round). If player A wins at least 2 points in the next 4 rounds, player A wins the game. The complement of this probability would be the probability that player B wins the game.

Let S (success) be the event that player A wins a point and let F (failure) be the event that player A loses a point (i.e. player B winning the point). Let’s write out all the outcomes of playing 4 points (this was the approach of Fermat). There are 16 such outcomes.

$\begin{array}{rrrrrr} 1 & SSSS & * & & \text{ } & \\ 2 & SSSF & * & & \text{ } & \\ 3 & SSFS & * & & \text{ } & \\ 4 & SSFF & * & & \text{ } & \\ 5 & SFSS & * & & \text{ } & \\ 6 & SFSF & * & & \text{ } & \\ 7 & SFFS & * & & \text{ } & \\ 8 & SFFF & \text{ } & & \text{ } & \\ 9 & FSSS & * & & \text{ } & \\ 10 & FSSF & * & & \text{ } & \\ 11 & FSFS & * & & \text{ } & \\ 12 & FSFF & \text{ } & & \text{ } & \\ 13 & FFSS & * & & \text{ } & \\ 14 & FFSF & \text{ } & & \text{ } & \\ 15 & FFFS & \text{ } & & \text{ } & \\ 16 & FFFF & \text{ } & & \text{ } & \\ \end{array}$

Player A wins in eleven of the outcomes (the ones with an asterisk). Note that there are at least 2 S’s in each of the outcomes with an asterisk. Thus the probability of player A winning is 11/16 = 0.6875. At the time of stopping the game, player A has a 68.75% chance of winning (if the game were to continue). The share for player A is 0.6875 x 64 = 44.

The example demonstrates the approach taken by Fermat. He essentially converted the original problem of points into an equivalent problem, i.e. finding the probability of player A winning the game if the game were to continue. Then he used combinatorial methods to count the number of cases that result in player A winning. In this example, the additional four points that are to be played are fictitious moves (the moves don’t have to be made) but are useful for finding the solution. The only drawback in Fermat’s approach is that it relies on counting. What if the number of points involved is large?

Example 2 (Pascal’s Approach)
The specifics of the example are the same as in Example 1. Listing out all possible cases in Example 1 makes the solution easy to see. But if the number of points is large, then the counting could become difficult to manage. What we need is an algorithm that is easy to use and easy to implement on a computer.

Pascal essentially had the same thinking as Fermat, i.e. to base the solution on the probability of winning if the game were to continue. Pascal also understood that the original problem of points is equivalent to the problem of playing an additional series of points. In this example, that means playing an additional 2 + 3 – 1 = 4 points. As in Example 1, we find the probability that player A wins 2 or more points in this series of 4 points. Pascal’s way of finding this probability was based on what are now known as Pascal’s triangle and the binomial distribution. We would use the following modern-day notation:

$\displaystyle \sum \limits_{j=2}^4 \ \binom{4}{j} \ \frac{1}{2^4}=6 \times \frac{1}{2^4}+4 \times \frac{1}{2^4}+\frac{1}{2^4}=\frac{11}{16}=0.6875$

Note that the above probability of 0.6875 is the probability of having at least 2 successes in 4 trials (with 0.5 probability of success in each trial). Anyone with a good understanding of the binomial distribution can carry out the calculation (or use software). Of course, this mathematical construct came from Pascal! To the contemporaries of Pascal and Fermat, this concept was definitely not commonplace.

Example 3
Suppose that player A has won 1 point and player B has won no point at the time of termination of the game. How can the prize money be divided fairly?

Based on the discussion of Example 1 and Example 2, player A needs to win at least 3 points to win the game and player B needs to win 4 or more points to win the game. The extended series of points would have 3 + 4 – 1 = 6 points. Player A then needs to win at least 3 points out of 6 points (at least 3 successes out of 6 trials). The following gives the probability of player A winning the extended series of plays.

$\displaystyle \sum \limits_{j=3}^6 \ \binom{6}{j} \ \frac{1}{2^6}=\frac{42}{64}=0.65625$

With the total stakes being 64, the share for player A would be 64 x 42/64 = 42. The share for player B would be 22.

___________________________________________________________________________

General Discussion

We now discuss the ideas that are brought up in the examples. As indicated above, two players, A and B, contribute equally to the stakes and then play a series of points until one of them wins $T$ points. The probability that player A wins a round (one point in each round) is $p$ and the probability that player B wins a point is $1-p$. Suppose that the game is stopped for some reason before either player has won. At the time of stopping, player A has won $a$ points and player B has won $b$ points with $a<T$ and $b<T$. Let $n=T-a$ and $m=T-b$. The key to solving the problem of points is to look at an extended play of $n+m-1$ points.

Here’s the great insight that came from Pascal and Fermat. They looked forward and not backward. They did not base the solution on the number of points that are already won. Instead, they focused on an extended series of points to determine the share of the winning. This additional play of points is “fictitious” but it helps clarify the process. In essence, they turned the original problem of points into a problem about this additional play of $n+m-1$ points.

The original problem is: what is the fair share for player A when the game is stopped prematurely with player A having won $a$ points and player B having won $b$ points? The equivalent problem is: what is the probability of player A winning at least $n$ points of the next $n+m-1$ points where $n=T-a$ and $m=T-b$ (assuming the game were to continue)? Let’s call this probability $P(n,m)$. Let’s examine the setting of this probability. Each point is like a Bernoulli trial – either a success (player A winning it) or a failure (player B winning it). There are $n+m-1$ trials. The probability of success is $p$ in each trial. We wish to find the probability that there are at least $n$ successes. What is being described is a binomial distribution. The probability being sought is:

$\displaystyle P(n,m)=\sum \limits_{j=n}^{n+m-1} \ \binom{n+m-1}{j} \ p^j \ (1-p)^{n+m-1-j} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$

where $n=T-a$, $m=T-b$ and that $a$ is the number of points won by player A and $b$ is the number of points won by player B at the time the game is terminated.

The probability $P(n,m)$ is the probability of player A winning the game if the game were to continue at the point of termination. This probability $P(n,m)$ is the proportion of the stakes that would be awarded to player A. Of course, the proportion that should be awarded to player B would be $1-P(n,m)$. The quantity $P(n,m)$ can be obtained from the given parameters and by using any software that has a function for the binomial distribution.

The problem of points seems to have an easy solution since the answer $P(n,m)$ is so accessible. Anyone who understands the binomial distribution can follow it. It is also easy to compute probabilities for a binomial distribution using a calculator or software. One thing to keep in mind is that the solution looks accessible now because of the tools and concepts that came from trails blazed by Pascal and Fermat (and because of the computing tools that we have). Tools and concepts such as Pascal’s triangle and the binomial distribution were not yet commonplace at the time of Pascal and Fermat.
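Formula (1) is straightforward to evaluate with software. The following Python sketch (my own illustration, not part of Pascal's or the original post's calculations) computes $P(n,m)$ and recovers the shares found in the examples above.

```python
# Formula (1): P(n, m) = probability that player A wins at least n of the next n+m-1 points.
from math import comb

def share(n, m, p=0.5):
    trials = n + m - 1
    return sum(comb(trials, j) * p ** j * (1 - p) ** (trials - j)
               for j in range(n, trials + 1))

print(share(2, 3))  # Examples 1 and 2: 0.6875, so player A's share is 0.6875 * 64 = 44
print(share(3, 4))  # Example 3: 0.65625, so player A's share is 42 out of 64
```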

___________________________________________________________________________

Working Backward

For us, the calculation in $(1)$ is easily done using a calculator or software. Pascal did not calculate $P(n,m)$ directly; instead, he performed the calculation backward by using the following formula.

$P(n,m)=p \times P(n-1,m)+(1-p) \times P(n,m-1) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$

The formula can be derived mathematically, but doing that is not necessary here. The quantity $P(n,m)$ is the probability that player A will win $n$ points before player B wins $m$ points. We can derive the above formula by conditioning on the outcome of the first point. The quantity $P(2,3)$ is calculated in Example 1 and Example 2. It is the average of two similar probabilities with smaller parameters.

$P(2,3)=\frac{1}{2} \times P(1,3)+\frac{1}{2} \times P(2,2)$

Based on the recursive formula in $(2)$, Pascal built up the answer backward, similar to the way a computer program is written. In fact, this recursive approach allows us to solve not just for one scenario, but for all the scenarios in a game of $T$ points.
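A short sketch of the recursion (again in Python, as an illustration rather than a reproduction of Pascal's table) makes the backward build-up concrete.

```python
# Pascal's recursion (2): P(n, m) = p*P(n-1, m) + (1-p)*P(n, m-1).
from functools import lru_cache

@lru_cache(maxsize=None)
def share_recursive(n, m, p=0.5):
    if n == 0:  # player A needs no more points, so A has already won
        return 1.0
    if m == 0:  # player B has already won
        return 0.0
    return p * share_recursive(n - 1, m, p) + (1 - p) * share_recursive(n, m - 1, p)

print(share_recursive(2, 3))  # 0.6875, matching Examples 1 and 2
```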

Example 4
We now revisit the problem in Examples 1 and 2. Recall that the game is to play for 4 points, i.e. the first player winning 4 points collects the entire stakes of 64 (32 is contributed by each player). Each player earns a point with probability 0.5. We now show how to divide the stakes when the game is stopped at every possible stopping point.

The following diagram (Figure 1) shows the table for the share awarded to player A. The table is empty except for the top row and rightmost column (the numbers in red). The number 64 shown in the last column would be the amount awarded to player A because player A has won 4 points. The number 0 in the top row means that player B has won 4 points, so player A gets nothing. Note that the bottom row highlighted in orange shows the number of points that have been won by player A. The bottom row highlighted in blue shows the remaining points that player A needs to win in order to win the entire stakes (these are the fictitious points). Similarly, the columns highlighted in orange and blue on the left show the same information for player B.

Figure 1 – The share of the stakes awarded to player A

Now, we can use the recursive formula in $(2)$ to fill in the table. Basically each cell is the average of the number above it and the number to the right of it. The parameter $n$ in $P(n,m)$ is a number in the blue row. The parameter $m$ in $P(n,m)$ is a number in the blue column. The following shows the results.

Figure 2 – The share of the stakes awarded to player A

For example, when player A has won 2 points and player B has won 1 point, the share for player A is 44 (the average of 32 and 56), the same answer as in Example 1 and Example 2. When player A has won 2 points and player B has won 2 points, both players are in equally competitive positions. Then each player gets 32. When player A has won 2 points and player B has won 3 points, the share for player A is 16 (the average of 0 and 32).

Essentially the formula in $(2)$ is the idea of using smaller steps rather than the entire extended play of $n+m-1$ points. This idea of smaller steps was preferred by Pascal. At the point where player A needs $n$ more points to win and player B needs $m$ more points to win, the idea is to play one more point. Suppose that the players know the awards to the two players after one more round. Then they should split the difference between the future awards. The calculation should begin at the point where each player only needs one more point to win (the cell with 32 in Figure 2). In that cell, we know the awards after one additional round. Taking the average of 0 and 64 gives 32. From that cell, we move down and move left on the table. Keep repeating the process until the entire table is filled in.

There is a way to tweak the table approach to work for unequal probabilities of winning a point. Let’s say the probability of player A winning a point is 0.6. Then the probability of player B winning a point is 0.4. The value of a given cell in the table would be the weighted average of the cell to the right (0.6 weight) and the cell above it (0.4 weight). When we know the results from playing one more round, we assign weight 0.6 to the result of player A winning and weight 0.4 to the result of player B winning. The following table shows the results.

Figure 3 – The share of the stakes awarded to player A (with 0.6 weight)

The direct formula $(1)$ or the table approach using the recursive formula $(2)$ can be easily programmed on a computer. For Pascal, the table approach is a very attractive option since the calculation can be built up from lower parameter values. In the above configuration, simply fill in the right column (the entire stakes going to player A) and the top row (player A getting nothing). Then the remaining cells are obtained by weighted averages as described in formula $(2)$.

We present one more example.

Example 5
Suppose that player B is a casino and player A is a visitor at the casino playing for 12 points. The house edge is 2% so that the probability of player A winning a round is 0.48. If player A desires to leave after player A has won 9 points and the house has won 6 points, what is the proportion of the stakes that should be awarded to player A?

Playing for 12 points, player A needs 3 more points to win and the house needs 6 more points to win. So we need to analyze an extended play of 3 + 6 – 1 = 8 points. For player A to win the extended play, he needs to win at least 3 points.

$\displaystyle P(3,6)=\sum \limits_{j=3}^8 \ \binom{8}{j} \ 0.48^j \ 0.52^{8-j}=0.827631917$

The answer can be obtained by computing each term in the sum (from $j=3$ to $j=8$). Another way is to use the BINOMDIST function in Excel as follows:

= 1-BINOMDIST(2, 8, 0.48, TRUE) = 1-0.172368083 = 0.827631917
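For readers not using Excel, the same number can be obtained with, for example, scipy (an assumption on my part; the original post uses Excel only):

```python
# P(at least 3 successes in 8 trials with success probability 0.48)
from scipy.stats import binom

print(1 - binom.cdf(2, 8, 0.48))  # about 0.8276, matching the Excel result above
```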

Based on the fair division method discussed in this blog post, player A deserves 82.76% of the stakes.

___________________________________________________________________________

Remarks

In their correspondence, Pascal and Fermat came up with a convincing and consistent solution to the problem of points. The earlier solutions to the problem of points were not satisfactory (to all concerned) and were sometimes inconsistent. Dividing the stakes based only on the number of points that have been won may produce extreme results. For example, if player A has won 1 point and player B has won no points, then player A would get the entire stakes. For the game described in Figure 2, in the same scenario, player A gets 42 out of 64 (42 / 64 = 0.65625), which is far from 100%.

For more detailed information on the history of the problem of points, see “A History of Mathematics” by Victor J. Katz.

___________________________________________________________________________
$\copyright \ 2016 \text{ by Dan Ma}$

# When a gambler asked a mathematician for help

When a gambler consistently loses large sums of money, what can he or she do? When one particular gambler, Chevalier de Méré (1607-1684), was losing a big fortune, he called a “mathematical help line”. In fact, his correspondence with Blaise Pascal (1623-1662) earned him a place in the history books. The problems that were presented by de Méré, jointly worked on by Pascal and Pierre de Fermat (1601-1665), are regarded as the beginning of the emerging academic field of probability. Chevalier de Méré was in need of help with two problems – the problem of points and the dice problem that now bears his name. In this post we discuss the dice problem. The problem of points is discussed in the next post.

___________________________________________________________________________

The Dice Problem

There were two dice problems from Chevalier de Méré. The first game involves four rolls of a fair die. In this game, de Méré bet with even odds on rolling at least one six when a fair die is rolled four times. His reasoning was that since the chance of getting a six in one roll of a die is $\frac{1}{6}$ (correct), the chance of getting a six in four rolls of a die would be $4 \times \frac{1}{6}=\frac{2}{3}$ (incorrect). With what he believed to be favorable odds of 67% of winning, he reasoned that betting with even odds would be a profitable proposition. Though his calculation was incorrect, he made a considerable amount of money over many years playing this game.

The second game involves twenty-four rolls of a pair of fair dice. The success in the first game emboldened de Méré to make an even bet on rolling one or more double sixes in twenty-four rolls of a pair of dice. His reasoning was that the chance of getting a double six in one roll of a pair of dice is $\frac{1}{36}$ (correct). Then the chance of getting a double six in twenty-four rolls of a pair of dice would be $24 \times \frac{1}{36}=\frac{2}{3}$ (incorrect). He again reasoned that betting with even odds would be profitable.

But experience showed otherwise. As he lost a lot of money, he realized something was not quite right with the second game. In 1654, he challenged his renowned friend Blaise Pascal to find an explanation. The solution emerged in a series of letters between Pascal and Fermat. Out of this joint effort, a foundation was laid for the idea of probability as an academic subject. One particular idea that emerged was Pascal’s triangle. Another was the binomial distribution. In fact, anyone who understands the binomial distribution can very quickly see the flaw in de Méré’s reasoning.

___________________________________________________________________________

The Simulation

Before we get to the calculation, let’s simulate the games played by de Méré. Using random numbers generated by the Rand() function in Excel, we simulated 100,000 iterations of each of the games. In our 100,000 simulations of the first game – rolling a die four times – there are 51,380 iterations with at least one six. This suggests that de Méré’s position would win more than half of the time, though not with the $\frac{2}{3}$ odds that he believed. But it was profitable for him nonetheless.

In our 100,000 simulations of the second game – rolling a pair of dice 24 times – there are only 49,211 iterations with at least one double six. This seems to support that de Méré’s position is a losing proposition, that he would lose his bets more than half the time.

Of course, de Méré could have done a similar simulation, though on a much smaller scale, by rolling the dice himself (say, 100 times). He could have seen the light much sooner.
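For readers who want to repeat the experiment without Excel, here is a rough Python simulation sketch of both games (my own illustration; the counts quoted above came from the Excel simulation).

```python
# Simulate de Méré's two games; results should fluctuate around 0.5177 and 0.4914.
import random

def game_one():  # at least one six in four rolls of a fair die
    return any(random.randint(1, 6) == 6 for _ in range(4))

def game_two():  # at least one double six in 24 rolls of a pair of fair dice
    for _ in range(24):
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6:
            return True
    return False

N = 100_000
print(sum(game_one() for _ in range(N)) / N)
print(sum(game_two() for _ in range(N)) / N)
```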

___________________________________________________________________________

The Calculation

Let’s see why the first game was profitable for de Méré and why the second game was not.

The First Game
In a roll of a die, there are six possible outcomes: 1, 2, 3, 4, 5, 6. If the die is fair, the probability of getting a six is $\frac{1}{6}$. Likewise, the probability of getting no six in one roll of a fair die is $\frac{5}{6}$.

The probability of getting no six in four rolls is:

$\displaystyle P(\text{no six in four rolls})=\frac{5}{6} \times \frac{5}{6} \times \frac{5}{6} \times \frac{5}{6}=\biggl(\frac{5}{6}\biggr)^4=0.482253$.

Thus in four rolls of a fair die, the probability of getting at least one six is:

\displaystyle \begin{aligned}P(\text{at least one six in four rolls})&=\displaystyle 1 - P(\text{no six in four rolls})\\&=1 - 0.482253\\&=0.517747\end{aligned}

Thus the probability of getting at least one six in four rolls of a fair die is 0.517747. Out of 100 games, de Méré would on average win 52 games. Out of 1000 games, he would on average win 518 games. Suppose each bet is one French franc. Then de Méré would gain 36 francs for each 1000 francs wagered. Thus he had a house edge of about 3.6%.

The Second Game
In a roll of a pair of dice, there are a total of 36 possible outcomes (i.e. the six outcomes of the first die combined with the six outcomes of the second die). Out of these 36 outcomes, only one of them is a double six. So, the probability of getting a double six is $\frac{1}{36}$ in rolling a pair of dice. Likewise, the probability of not getting a double six is $\frac{35}{36}$.

The probability of getting no double six in 24 rolls of a pair of dice is:

\displaystyle \begin{aligned}P(\text{no double six in 24 rolls})= \biggl(\frac{35}{36}\biggr)^{24}=0.5086\end{aligned}

Thus the probability of getting at least one double six in 24 rolls is:

\displaystyle \begin{aligned}P(\text{at least one double six in 24 rolls})&=\displaystyle 1 - P(\text{no double six in 24 rolls})\\&=1 - 0.5086\\&=0.4914\end{aligned}

Thus the probability of getting at least one double six in 24 rolls of a pair of fair dice is 0.4914. On average, de Méré would only win about 49 games out of 100 and his opposing side would win about 51 games out of 100. Out of 1000 games, he would on average win 491 games (the opposing side would win on average 509 games). With each bet as one franc, the opposing side of de Méré would win 18 francs for each 1000 francs wagered (thus the opposing side had a house edge of about 1.8%).

The odds indicated by the simulations discussed above are in line with the calculated results. It would be interesting to know what action de Méré took after learning the answers. Maybe he stopped playing the second game and only played the first game. Maybe he modified the second game so that his odds of winning were at least even (or better).

___________________________________________________________________________
$\copyright \ 2016 \text{ by Dan Ma}$

# Looking for a Match

I recently gave an exam in my statistics course, which turned out to be an excellent example of the matching problem, a classic problem in probability. There were 66 students taking the test. I wrote down the names of the students in the order in which they turned in the exam. The following table shows the positions of the students in alphabetical order and in the order in which they turned in the exam.

For example, the first student in the alphabetical order was the 12th student who turned in the exam. The first student who turned in the exam was the 37th student in the alphabetical order. It turned out that there was a student who had the same position in both orders. Such a student is called a match (see the following table).

This example captures the essence of a classic problem in probability called the matching problem. There are many other colorful descriptions of the matching problem. All of them involve the pairing of two orderings on a set of objects. We have a set of objects or items that are ordered in some natural way. Then we scramble the items randomly (or according to some ordering that is unpredictable in advance). Whenever an item has the same position in both the original order and in the scrambled order, it is called a match.

In our exam example, one version of the matching problem asks: what is the probability that there is at least one student that is a match?

In fact, in matching two orderings, such as the one described here, a match happens more often than not. The probability of having at least one match is roughly 0.63. Specifically, when $n$ is the number of students taking the exam, the probability of finding at least one match approaches $1-e^{-1}=0.632121$ as $n \rightarrow \infty$. The derivation of this probability is based on the inclusion-exclusion principle and is discussed in the blog post called The Matching Problem.

Even though the probability of having at least one match is a function of $n$ (the number of items), the probability converges to $1-e^{-1}$ pretty rapidly. Thus for all practical purposes, we can say that the probability of having at least one match is 0.63 (roughly two-thirds), whenever the number of items involved in the random scrambling is reasonably large (as with the 66 students taking the exam).

Instead of finding the probability of having at least one match, we can also ask for the probability of having exactly $k$ matches, where $k$ is any integer from $0$ to $n$. Let $X_n$ be the number of matches when we match the “exam turning in” order with the alphabetical order for $n$ students. The probability function $P(X_n=k)$ is derived in the blog post called More About the Matching Problem.

The blog post More About the Matching Problem also points out that $P(X_n=k)$ is approximated by the Poisson distribution with parameter $\alpha=1$. Thus we have:

$\displaystyle (1) \ \ \ \ \ P(X_n=k) \approx \frac{e^{-1}}{k!} \ \ \ \ \ \ \ \ \ \ k=0,1,2,\cdots,n$

The following are the first 4 probabilities of $P(X_n=k)$.

$\displaystyle (2) \ \ \ \ \ P(X_n=0) \approx \frac{e^{-1}}{0!}=0.3679$

$\displaystyle (3) \ \ \ \ \ P(X_n=1) \approx \frac{e^{-1}}{1!}=0.3679$

$\displaystyle (4) \ \ \ \ \ P(X_n=2) \approx \frac{e^{-1}}{2!}=0.1839$

$\displaystyle (5) \ \ \ \ \ P(X_n=3) \approx \frac{e^{-1}}{3!}=0.0613$

In the random experiment of matching two orderings on the same set of objects, about 37% of the time there is no match and about 37% of the time there is exactly one match. Having no matches and having exactly one match are the most likely scenarios (together occurring about 74% of the time). Having 2 matches is possible, but only happens about 18% of the time. It is rare to have 3 or more matches.
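As a sanity check on the Poisson approximation, here is a simulation sketch (my own illustration with an assumed setup): it randomly scrambles the turning-in order of 66 students many times, counts the matches in each scramble, and compares the observed frequencies with the Poisson(1) probabilities.

```python
# A simulation sketch (assumed setup, not from the original post): scramble the
# turning-in order of n = 66 students many times, count the students whose
# position matches their alphabetical position, and compare the observed
# frequencies with the Poisson(1) probabilities e^{-1}/k!.
import random
from collections import Counter
from math import exp, factorial

n, trials = 66, 100_000
counts = Counter()
for _ in range(trials):
    scrambled = list(range(n))
    random.shuffle(scrambled)
    matches = sum(1 for i, j in enumerate(scrambled) if i == j)
    counts[matches] += 1

for k in range(4):
    print(f"k={k}: simulated {counts[k] / trials:.4f}, Poisson(1) {exp(-1) / factorial(k):.4f}")
```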

Another interesting observation is that if a match occurs, it is most likely that there is only one match (such as in the example discussed here).

There are many colorful descriptions of the matching problem. The possibilities are limitless. One example is that of $n$ married couples going to a ballroom dancing class. The instructor randomly assigns the ladies to the gentlemen as dancing partners. A match occurs if a wife is assigned as the dancing partner of her husband.

A previous blog post (Tis the Season for Gift Exchange) presents an example involving gift exchange. Each person attending a party brings a gift for gift exchange. The gifts are put in a pile and each person randomly selects a gift from the pile. A match occurs if a person selects his or her own gift.

In another blog post (A lazy professor who lets students do their own grading), a professor randomly returns quizzes to the students for grading. A match occurs if a student is assigned his or her own quiz.

# A lazy professor who lets students do their own grading

After reading the example discussed here, any professor or instructor who is contemplating randomly distributing quizzes back to the students for grading perhaps should reconsider the idea! This is one of several posts discussing the matching problem. Go to the end of this post to find links to these previous posts.

Consider the following problem. Suppose that a certain professor is lazy and lets his students grade their own quizzes. After he collects the quizzes from his $n$ students, he randomly assigns the quizzes back to these $n$ students for grading. If a student is assigned his or her own quiz, we say that it is a match. We have the following questions:

• What is the probability that each of the $n$ students is a match?
• What is the probability that none of the $n$ students is a match?
• What is the probability that exactly $k$ of the $n$ students are matches?
• What is the probability that at least one of the $n$ students is a match?

The above problem is called the matching problem, which is a classic problem in probability. In this post we solve the last question indicated above. Though the answer is in terms of the total number of quizzes $n$, it turns out that the answer is essentially independent of $n$ and is approximately $1-e^{-1} \approx 0.63$ (roughly two-thirds). Thus if the professor assigns the quizzes randomly, it is more likely than not that at least one student receives his or her own quiz.

The last question above is usually stated in other matching situations. One is that there are $n$ couples (say, each couple consists of a husband and a wife) in a class for ballroom dancing. Suppose that the dance instructor randomly matches the men to the ladies. When a husband is assigned his own wife, we say that it is a match. What is the probability that there is at least one couple that is a match?

The key to answering this question is the theorem stated in Feller (page 99 in chapter 4 of [1]). We state the theorem and make use of it in the solution of the last question above. A sketch of the proof will be given at the end. For ideas on the solutions to the first three questions above, see this previous post.

The union of $n$ events
For any $n$ events $E_1,E_2, \cdots,E_n$ that are defined on the same sample space, we have the following formula:

$(1) \ \ \ \ \ P[E_1 \cup E_2 \cup \cdots \cup E_n]=\sum \limits_{m=1}^{n} (-1)^{m+1} \thinspace S_m$ where

$S_1=\sum \limits_{r=1}^{n}P[E_r]$,

$S_2=\sum \limits_{j<k}P[E_j \cap E_k]$, and in general

$S_m=\sum P[E_{i(1)} \cap E_{i(2)} \cap \cdots \cap E_{i(m)}]$

Note that in the general term $S_m$, the sum is taken over all increasing sequences $i(\cdot)$, i.e. $1 \le i(1) < i(2) < \cdots < i(m) \le n$. For $n=2,3$, we have the following familiar formulas:

$\displaystyle P[E_1 \cup E_2]=P[E_1]+P[E_2]-P[E_1 \cap E_2]$

\displaystyle \begin{aligned}P[E_1 \cup E_2 \cup E_3]=& \ \ \ \ P[E_1]+P[E_2]+P[E_3]\\&-P[E_1 \cap E_2]-P[E_1 \cap E_3]-P[E_2 \cap E_3]\\&+P[E_1 \cap E_2 \cap E_3] \end{aligned}

The Matching Problem

Suppose that the $n$ students are labeled $1,2, \cdots, n$. Let $E_i$ be the event that the $i^{th}$ student is assigned his or her own quiz by the professor. Then $P[E_1 \cup E_2 \cup \cdots \cup E_n]$ is the probability that there is at least one correct match.

Note that $P[E_i]=\frac{(n-1)!}{n!}=\frac{1}{n}$. This is the case since we fix the $i^{th}$ student and permute the other $n-1$ students. Likewise, $P[E_i \cap E_j]=\frac{(n-2)!}{n!}$, since we fix the $i^{th}$ and $j^{th}$ students and permute the other $n-2$ students. In general, whenever $i(1),i(2),\cdots,i(m)$ are $m$ distinct integers in increasing order, we have:

$\displaystyle P[E_{i(1)} \cap E_{i(2)} \cdots \cap E_{i(m)}]=\frac{(n-m)!}{n!}$

We now apply the formula (1). First we show that for each $m$ where $1 \le m \le n$, $S_m=\frac{1}{m!}$. Since there are $\binom{n}{m}$ ways to choose which $m$ of the $n$ students are fixed, we have:

\displaystyle \begin{aligned}S_m&=\sum P[E_{i(1)} \cap E_{i(2)} \cdots \cap E_{i(m)}]\\&=\binom{n}{m} \frac{(n-m)!}{n!}\\&=\frac{1}{m!} \end{aligned}
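As a quick numeric sanity check (not part of the original derivation), the following Python snippet confirms that $\binom{n}{m} \frac{(n-m)!}{n!}$ collapses to $\frac{1}{m!}$ regardless of $n$; here $n=66$ as in the exam example.

```python
# A quick numeric check (illustrative only) that binom(n, m) * (n-m)! / n!
# equals 1/m! no matter what n is.
from math import comb, factorial

n = 66
for m in (1, 2, 3, 4, 5):
    s_m = comb(n, m) * factorial(n - m) / factorial(n)
    print(m, s_m, 1 / factorial(m))   # the last two columns agree
```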

Applying the formula for the union of $n$ events, we have:

$\displaystyle P[E_1 \cup E_2 \cup \cdots \cup E_n]=1-\frac{1}{2!}+\frac{1}{3!}-\cdots+(-1)^{n+1}\frac{1}{n!}$

$\displaystyle 1-P[E_1 \cup E_2 \cup \cdots \cup E_n]=1-1+\frac{1}{2!}-\frac{1}{3!}+\cdots+(-1)^{n}\frac{1}{n!}$

Note that the right-hand side of the above equality consists of the first $n+1$ terms in the series expansion of $e^{-1}$. Thus we have:

\displaystyle \begin{aligned}\lim_{n \rightarrow \infty}P[E_1 \cup E_2 \cup \cdots \cup E_n]&=\lim_{n \rightarrow \infty}\biggl(1-\frac{1}{2!}+\frac{1}{3!}-\cdots+(-1)^{n+1}\frac{1}{n!}\biggr)\\&=1-e^{-1}\\&=0.6321205588 \end{aligned}

The above limit converges quite rapidly. Let $P_n=P[E_1 \cup E_2 \cup \cdots \cup E_n]$. The following table lists $P_n$ for the first several values of $n$.

$\displaystyle \begin{pmatrix} n&\text{ }&P_n \\{2}&\text{ }&0.50000 \\{3}&\text{ }&0.66667 \\{4}&\text{ }&0.62500 \\{5}&\text{ }&0.63333 \\{6}&\text{ }&0.63194 \\{7}&\text{ }&0.63214 \\{8}&\text{ }&0.63212\end{pmatrix}$

Sketch of Proof for the Formula
The key idea is that any sample point in the union $E_1 \cup E_2 \cup \cdots \cup E_n$ is counted exactly one time on the right-hand side of (1). Suppose that a sample point is in exactly $t$ of the events $E_1,E_2,\cdots,E_n$. Then the following shows the number of times the sample point is counted in each expression:

$\displaystyle \sum \limits_{m=1}^{n}P[E_m] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \binom{t}{1}=t \text{ times}$

$\displaystyle \sum \limits_{a<b}P[E_a \cap E_b] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \binom{t}{2} \text{ times}$

$\displaystyle \sum \limits_{a<b<c}P[E_a \cap E_b \cap E_c] \ \ \ \ \ \ \ \ \ \ \ \ \ \binom{t}{3} \text{ times}$

and so on.

Thus the sample point in question is counted exactly $H$ times on the right-hand side of the formula, where

$\displaystyle H=\binom{t}{1}-\binom{t}{2}+\binom{t}{3}- \cdots + (-1)^{t+1}\binom{t}{t}$

The following is the derivation that $H=1$.

$\displaystyle 0=(1-1)^t=\sum \limits_{a=0}^{t} \binom{t}{a}(-1)^{a}(1)^{t-a}=\binom{t}{0}-\binom{t}{1}+\binom{t}{2}- \cdots +(-1)^t \binom{t}{t}$

$\displaystyle 1=\binom{t}{0}=\binom{t}{1}-\binom{t}{2}+\binom{t}{3}- \cdots +(-1)^{t+1} \binom{t}{t}=H$
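For readers who prefer a numeric confirmation, this small Python sketch (illustrative only) evaluates $H$ for several values of $t$ and shows that it is always 1.

```python
# A numeric confirmation (illustrative only) that the alternating sum H is 1
# for every t >= 1, so each sample point is counted exactly once.
from math import comb

for t in range(1, 9):
    H = sum((-1) ** (a + 1) * comb(t, a) for a in range(1, t + 1))
    print(t, H)   # H is always 1
```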

Reference

1. Feller, W., An Introduction to Probability Theory and its Applications, Vol. I, 3rd ed., John Wiley & Sons, Inc., New York, 1968

Previous Posts on The Matching Problem
The Matching Problem

Tis the Season for Gift Exchange

# Tis the Season for Gift Exchange

Suppose that there are 10 people in a holiday party with each person bearing a gift. The gifts are put in a pile and each person randomly selects one gift from the pile. In order not to bias the random selection of gifts, suppose that the partygoers select gifts by picking slips of papers with numbers identifying the gifts. What is the probability that there is at least one partygoer who ends up selecting his or her own gift? In this example, selecting one’s gift is called a match. What is the probability that there is at least one match if there are more people in the party, say 50 or 100 people? What is the behavior of this probability if the size of the party increases without bound?

The above example is a description of a classic problem in probability called the matching problem. There are many colorful descriptions of the problem. One such description is that there are $n$ married couples in a ballroom dancing class. The instructor pairs the ladies with the gentlemen at random. A match occurs if a married couple is paired as dancing partners.

Whatever the description, the matching problem involves two lists of items that are paired according to a particular order (the original order). When the items in the first list are paired with the items in the second list according to another random ordering, a match occurs if an item in the first list and an item in the second list are paired both in the original order and in the new order. The matching problem discussed here is: what is the probability that there is at least one match? Seen in this light, both examples described above are equivalent mathematically.

We now continue with the discussion of the random gift selection example. Suppose that there are $n$ people in the party. Let $E_i$ be the event that the $i^{th}$ person selects his or her own gift. The event $E=E_1 \cup E_2 \cup \cdots \cup E_n$ is the event that at least one person selects his or her own gift (i.e. there is at least one match). The probability $P(E)=P(E_1 \cup E_2 \cup \cdots \cup E_n)$ is the solution of the matching problem as described in the beginning. The following is the probability $P(E)=P(E_1 \cup E_2 \cup \cdots \cup E_n)$.

$\displaystyle (1) \ \ \ \ \ \ P[E_1 \cup E_2 \cup \cdots \cup E_n]=1-\frac{1}{2!}+\frac{1}{3!}-\cdots+(-1)^{n+1} \frac{1}{n!}$

The answer in $(1)$ is obtained by using an idea called the inclusion-exclusion principle. We will get to that in just a minute. First let’s look at the results of $(1)$ for a few iterations.

$\displaystyle (2) \ \ \ \ \ \ \begin{bmatrix} \text{n}&\text{ }& P[E_1 \cup E_2 \cup \cdots \cup E_n] \\\text{ }&\text{ }&\text{ } \\ 3&\text{ }&0.666667 \\ 4&\text{ }&0.625000 \\ 5&\text{ }&0.633333 \\ 6&\text{ }&0.631944 \\ 7&\text{ }&0.632143 \\ 8&\text{ }&0.632118 \\ 9&\text{ }&0.632121 \\ 10&\text{ }&0.632121 \\ 11&\text{ }&0.632121 \\ 12&\text{ }&0.632121 \end{bmatrix}$

In a party with random gift exchange, it appears that regardless of the size of the party, there is a very good chance that someone will end up picking his or her own gift! A match will happen about 63% of the time.
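A quick simulation sketch in Python (my own illustration, with an assumed setup of 10 partygoers, not from the original post) agrees with this: the estimated probability of at least one match comes out close to $1-e^{-1} \approx 0.632$.

```python
# A short simulation sketch (assumed 10-person party): estimate the chance
# that at least one partygoer draws his or her own gift.
import random

n, trials = 10, 100_000
at_least_one = 0
for _ in range(trials):
    drawn = list(range(n))
    random.shuffle(drawn)
    if any(person == gift for person, gift in enumerate(drawn)):
        at_least_one += 1

print(at_least_one / trials)   # close to 1 - 1/e = 0.632
```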

It turns out that the answers in $(1)$ are related to the Taylor series expansion of $e^{-1}$. We show that the probability in $(1)$ always converges to $1-e^{-1}=0.632121$. Note that the Taylor series expansion of $e^{-1}$ is:

$\displaystyle (3) \ \ \ \ \ \ e^{-1}=\frac{1}{0!}-\frac{1}{1!}+\frac{1}{2!}-\frac{1}{3!}+\cdots+(-1)^n \frac{1}{n!}+\cdots$

Consequently, the Taylor series expansion of $1-e^{-1}$ is:

\displaystyle \begin{aligned} (4) \ \ \ \ \ \ 1-e^{-1}&=1-\biggl(1-1+\frac{1}{2!}-\frac{1}{3!}+\cdots+(-1)^n \frac{1}{n!}+\cdots \biggr) \\&=1-\frac{1}{2!}+\frac{1}{3!}-\cdots+(-1)^{n+1} \frac{1}{n!}+\cdots \end{aligned}

Note that the sum of the first $n$ terms of the series in $(4)$ is the probability in $(1)$. As the number of partygoers $n$ increases, the probability $P[E_1 \cup E_2 \cup \cdots \cup E_n]$ will be closer and closer to $1-e^{-1}=0.6321$. We have:

$\displaystyle (5) \ \ \ \ \ \ \lim \limits_{n \rightarrow \infty} P[E_1 \cup E_2 \cup \cdots \cup E_n]=1 - e^{-1}$

$\displaystyle (6) \ \ \ \ \ \ \lim \limits_{n \rightarrow \infty} P[(E_1 \cup E_2 \cup \cdots \cup E_n)^c]=e^{-1}$

The equation $(5)$ says that it does not matter how many people are in the random gift exchange, the answer to the matching problem is always $1-e^{-1}$ in the limit. The equation $(6)$ says that the probability of having no matches approaches $e^{-1}$. There is a better than even chance ($0.63$ to be more precise) that there is at least one match. So in a random gift exchange as described at the beginning, it is much easier to see a match than not to see one.

The Inclusion-Exclusion Principle

We now want to give some indication why $(1)$ provides the answers. The inclusion-exclusion principle is a formula for finding the probability of the union of events. For the union of two events and the union of three events, we have:

$\displaystyle (7) \ \ \ \ \ \ P[E_1 \cup E_2]=P[E_1]+P[E_2]-P[E_1 \cap E_2]$

\displaystyle \begin{aligned} (8) \ \ \ \ \ \ P[E_1 \cup E_2 \cup E_3]&=P[E_1]+P[E_2]+P[E_3] \\& \ \ \ \ -P[E_1 \cap E_2]-P[E_1 \cap E_3]-P[E_2 \cap E_3] \\& \ \ \ \ +P[E_1 \cap E_2 \cap E_3] \end{aligned}

To find the probability of the union of $n$ events, we first add up the probabilities of the individual events $E_i$. Because the resulting sum overshoots, we need to subtract the probabilities of the intersections $E_i \cap E_j$. Because the subtractions remove too much, we need to add back the probabilities of the intersections of three individual events $E_i \cap E_j \cap E_k$. The process of inclusion and exclusion continues until we reach the step of adding/removing of the intersection $E_1 \cap \cdots \cap E_n$. The following is the statement of the inclusion-exclusion principle.

$\displaystyle (9) \ \ \ \ \ \ P[E_1 \cup E_2 \cdots \cup E_n]=S_1-S_2+S_3- \ \ \cdots \ \ +(-1)^{n+1}S_n$

In $(9)$, $S_1$ is the sum of all probabilities $P[E_i]$, $S_2$ is the sum of all possible probabilities of the intersection of two events $E_i$ and $E_j$ and $S_3$ is the sum of all possible probabilities of the intersection of three events and so on.

We now apply the inclusion-exclusion principle to derive equation $(1)$. The event $E_i$ is the event that the $i^{th}$ person gets his or her own gift while the other $n-1$ people are free to select gifts. The probability of this event is $\displaystyle P[E_i]=\frac{(n-1)!}{n!}$. There are $\displaystyle \binom{n}{1}=\frac{n!}{1! (n-1)!}$ many ways of fixing 1 gift. So $\displaystyle S_1=\binom{n}{1} \times \frac{(n-1)!}{n!}=1$.

Now consider the intersection of two events. The event $E_i \cap E_j$ is the event that the $i^{th}$ person and the $j^{th}$ person get their own gifts while the other $(n-2)$ people are free to select gifts. The probability of this event is $\displaystyle P[E_i \cap E_j]=\frac{(n-2)!}{n!}$. There are $\displaystyle \binom{n}{2}=\frac{n!}{2! (n-2)!}$ many ways of fixing 2 gifts. So $\displaystyle S_2=\binom{n}{2} \times \frac{(n-2)!}{n!}=\frac{1}{2!}$.

By the same reasoning, we derive that $\displaystyle S_3=\frac{1}{3!}$ and $\displaystyle S_4=\frac{1}{4!}$ and so on. Then plugging $\displaystyle S_i=\frac{1}{i!}$ into $(9)$, we obtain the answers in $(1)$.

The matching problem has been discussed previously in the following posts.

The matching problem