The gambler’s ruin problem

This post discusses the problem of the gambler’s ruin. We start with a simple illustration.

Two gamblers, A and B, are betting on the tosses of a fair coin. At the beginning of the game, player A has 1 coin and player B has 3 coins. So there are 4 coins between them. In each play of the game, a fair coin is tossed. If the result of the coin toss is head, player A collects 1 coin from player B. If the result of the coin toss is tail, player A pays player B 1 coin. The game continues until one of the players has all the coins (or one of the players loses all his/her coins). What is the probability that player A ends up with all the coins?

Intuitively, one might guess that player B is more likely to win all the coins since B begins the game with more coins. In the example, player A has only 1 coin to start, so there is a 50% chance that he/she will lose everything on the first toss. On the other hand, player B has a cushion, since he/she can afford to lose several tosses at the beginning.

___________________________________________________________________________

Simulated Plays of the Game

One way to get a sense of the winning probability of a player is to play the game repeatedly. Rather than playing with real money, why not simulate a series of coin tosses using random numbers generated in a computer? For example, the function =RAND() in Excel generates random numbers between 0 and 1. If the number is less than 0.5, we take it as a tail (T). Otherwise it is a head (H). As we simulate the coin tosses, we add or subtract to each player’s account accordingly. If the coin toss is a T, we subtract 1 from A and add 1 to B. If the coin toss is an H, we add 1 to A and subtract 1 from B. Here’s a simulation of the game.

$\begin{array}{ccccccc} \text{Game} & \text{ } & \text{Coin Tosses} & \text{ } & \text{Coins of Player A} & \text{ } & \text{Coins of Player B} \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ \text{ } & \text{ }& \text{ } & \text{ } & 1 & & 3 \\ 1 & \text{ }& H & \text{ } & 2 & & 2 \\ \text{ } & \text{ }& T & \text{ } & 1 & & 3 \\ \text{ } & \text{ }& H & \text{ } & 2 & & 2 \\ \text{ } & \text{ }& H & \text{ } & 3 & & 1 \\ \text{ } & \text{ }& H & \text{ } & 4 & & 0 \\ \end{array}$

In the first simulation, player A got lucky with 4 heads in 5 tosses. The following shows the next simulation.

$\begin{array}{ccccccc} \text{Game} & \text{ } & \text{Coin Tosses} & \text{ } & \text{Coins of Player A} & \text{ } & \text{Coins of Player B} \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ \text{ } & \text{ }& \text{ } & \text{ } & 1 & & 3 \\ 2 & \text{ }& H & \text{ } & 2 & & 2 \\ \text{ } & \text{ }& H & \text{ } & 3 & & 1 \\ \text{ } & \text{ }& T & \text{ } & 2 & & 2 \\ \text{ } & \text{ }& H & \text{ } & 3 & & 1 \\ \text{ } & \text{ }& H & \text{ } & 4 & & 0 \\ \end{array}$

Player A seems to be on a roll. Here are the results of the next two simulated games.

$\begin{array}{ccccccc} \text{Game} & \text{ } & \text{Coin Tosses} & \text{ } & \text{Coins of Player A} & \text{ } & \text{Coins of Player B} \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ \text{ } & \text{ }& \text{ } & \text{ } & 1 & & 3 \\ 3 & \text{ }& T & \text{ } & 0 & & 4 \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ \text{ } & \text{ }& \text{ } & \text{ } & 1 & & 3 \\ 4 & \text{ }& H & \text{ } & 2 & & 2 \\ \text{ } & \text{ }& T & \text{ } & 1 & & 3 \\ \text{ } & \text{ }& T & \text{ } & 0 & & 4 \\ \end{array}$

Now player A loses both of these games. For good measure, we show one more simulation.

$\begin{array}{ccccccc} \text{Game} & \text{ } & \text{Coin Tosses} & \text{ } & \text{Coins of Player A} & \text{ } & \text{Coins of Player B} \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ \text{ } & \text{ }& \text{ } & \text{ } & 1 & & 3 \\ 5 & \text{ }& H & \text{ } & 2 & & 2 \\ \text{ } & \text{ }& H & \text{ } & 3 & & 1 \\ \text{ } & \text{ }& T & \text{ } & 2 & & 2 \\ \text{ } & \text{ }& H & \text{ } & 3 & & 1 \\ \text{ } & \text{ }& T & \text{ } & 2 & & 2 \\ \text{ } & \text{ }& H & \text{ } & 3 & & 1 \\ \text{ } & \text{ }& T & \text{ } & 2 & & 2 \\ \text{ } & \text{ }& T & \text{ } & 1 & & 3 \\ \text{ } & \text{ }& T & \text{ } & 0 & & 4 \\ \end{array}$

This one is a little more drawn out. Several tails in a row spell bad news for player A. When a coin is tossed long enough, there are bound to be some runs of tails. Since it is a fair coin, there are bound to be runs of heads too, which benefit player A. Wouldn’t the runs of H cancel out the runs of T so that each player wins about half the time? To get an idea, we continue with 95 more simulations (for a total of 100). Player A wins 23 of the simulated games and player B wins 77. So player A wins about 25% of the time. Note that A owns about 25% of the coins at the start of the game.

The 23 versus 77 split in the 100 simulated games may just be luck on player B’s part. We simulate 9 more runs of 100 games each (for a total of 10 runs of 100 games each). The following shows the results.

Ten Simulated Runs of 100 Games Each
$\begin{array}{ccccccc} \text{Simulated} & \text{ } & \text{Player A} & \text{ } & \text{Player B} & \text{ } & \text{Total} \\ \text{Runs} & \text{ } & \text{Winning} & \text{ } & \text{Winning} & \text{ } & \text{ } \\ \text{ } & \text{ } & \text{Frequency} & \text{ } & \text{Frequency} & \text{ } & \text{ } \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 1 & \text{ }& 23 & \text{ } & 77 & & 100 \\ 2 & \text{ }&34 & \text{ } & 66 & & 100 \\ 3 & \text{ }& 19 & \text{ } & 81 & & 100 \\ 4 & \text{ }& 24 & \text{ } & 76 & & 100 \\ 5 & \text{ }& 27 & \text{ } & 73 & & 100 \\ 6 & \text{ }& 22 & \text{ } & 78 & & 100 \\ 7 & \text{ }& 21 & \text{ } & 79 & & 100 \\ 8 & \text{ }& 24 & \text{ } & 76 & & 100 \\ 9 & \text{ }& 28 & \text{ } & 72 & & 100 \\ 10 & \text{ }& 24 & \text{ } & 76 & & 100 \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ \text{Total} & \text{ }& 246 & \text{ } & 754 & & 1000 \\ \end{array}$

The simulated runs fluctuate quite a bit. Player A wins anywhere from 19% to 34% of the time. The results are nowhere near even odds of winning for player A. In the 1,000 simulated games, player A wins 24.6% of the time, roughly identical to player A’s share of the coins at the beginning. If the simulation were carried out many more times (say 10,000 or 100,000 games), the overall winning frequency for player A would be very close to 25%.
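The spreadsheet simulation described above can also be scripted. Below is a minimal Python sketch of the same experiment (the function name `play_game` is our choice, not part of the original spreadsheet); it repeats the 1-coin versus 3-coin game many times and reports player A’s winning frequency:

```python
import random

def play_game(a_coins, b_coins, p_head=0.5, rng=random):
    """Simulate one gambler's ruin game; return True if player A wins all coins."""
    while a_coins > 0 and b_coins > 0:
        if rng.random() < 1 - p_head:   # a value below the threshold counts as a tail (T)
            a_coins -= 1                # A pays 1 coin to B
            b_coins += 1
        else:                           # otherwise a head (H)
            a_coins += 1                # A collects 1 coin from B
            b_coins -= 1
    return b_coins == 0

# Repeat the game many times to estimate player A's winning probability.
rng = random.Random(2017)
games = 10_000
a_wins = sum(play_game(1, 3, rng=rng) for _ in range(games))
print(a_wins / games)  # hovers around 0.25
```

With other seeds the frequency fluctuates, just as in the ten runs shown above, but it stays close to 25%.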

___________________________________________________________________________

Gambler’s Ruin Problem

Here’s a slightly more general statement of the gambler’s ruin problem.

Gambler’s Ruin Problem

Two gamblers, A and B, are betting on the tosses of a fair coin. At the beginning of the game, player A has $n_1$ coins and player B has $n_2$ coins. So there are $n_1+n_2$ coins between them. In each play of the game, a fair coin is tossed. If the result of the coin toss is head, player A collects 1 coin from B. If the result of the coin toss is tail, player A pays B 1 coin. The game continues until one of the players has all the coins. What is the probability that player A ends up with all the coins? What is the probability that player B ends up with all the coins?

Solution: Gambler’s Ruin Problem

The long run probability of player A winning all the coins is $\displaystyle P_A=\frac{n_1}{n_1+n_2}$.
The long run probability of player B winning all the coins is $\displaystyle P_B=\frac{n_2}{n_1+n_2}$.

In other words, the probability of winning for a player is the ratio of the number of coins the player starts with to the total number of coins. Turning it around, the probability of a player losing all his/her coins is the ratio of the number of coins of the other player to the total number of coins. Thus the player with fewer coins at the beginning has a greater chance of losing everything. Think of a casino. Say it has 10,000,000 coins, while you, as a gambler, have 100 coins. Then you would have a 99.999% chance of losing all your coins.
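The casino figure is just the ratio of the casino’s coins to the total; a quick check:

```python
# Gambler's ruin at even odds: the gambler is ruined with probability
# (casino's coins) / (total coins).
casino, gambler = 10_000_000, 100
p_gambler_ruined = casino / (casino + gambler)
print(round(p_gambler_ruined, 5))  # 0.99999
```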

The 99.999% chance of losing everything is based on the assumption that all bets are at even odds. This is certainly not the case in a casino. The casino enjoys a house edge. So the losing probabilities would be worse for a gambler playing in a casino.

___________________________________________________________________________

Deriving the Probabilities

We derive the winning probability for a more general case. The coin that is tossed does not have to be a fair coin. Let $p$ be the probability that the coin toss results in a head. Let $q=1-p$. Suppose that there are $n$ coins initially between player A and player B.

Let $A_i$ be the probability that player A will own all the $n$ coins when player A starts the game with $i$ coins and player B starts with $n-i$ coins. On the other hand, let $B_i$ be the probability that player B will own all the $n$ coins when player A starts the game with $i$ coins and player B starts with $n-i$ coins.

The goal here is to derive expressions for both $A_i$ and $B_i$ for $i=1,2,\cdots,n-1$ and to show that $A_i+B_i=1$. The last point, though it seems like common sense, is not entirely trivial. It basically says that the game must end in a finite number of moves (it cannot go on indefinitely).

Another point to keep in mind is that we can set $A_0=0$ and $A_n=1$ as well as $B_0=1$ and $B_n=0$ (if player A starts with 0 coins, player B already owns everything, and vice versa). In the following derivation, $H$ is the event that the first toss is a head and $T$ is the event that the first toss is a tail.

Let $E$ be the event that player A ends up owning all the coins when player A starts the game with $i$ coins and player B starts with $n-i$ coins, i.e. $A_i=P(E)$. First, condition on the outcome of the first toss of the coin.

\displaystyle \begin{aligned} A_i&=P(E | H) \ P(H)+P(E | T) \ P(T) \\&=P(E | H) \ p+P(E | T) \ q \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1) \end{aligned}

Consider the conditional probabilities $P(E | H)$ and $P(E | T)$. Note that $P(E | H)=A_{i+1}$: if the first toss is a head, then player A has $i+1$ coins and player B has $n-(i+1)$ coins, and from that point the probability that player A owns all the coins is the same as if the game had begun with player A holding $i+1$ coins. Similarly, $P(E | T)=A_{i-1}$. Plugging these into (1), we have:

$A_i=p \ A_{i+1}+q \ A_{i-1} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$

Equation (2) can be manipulated further.

$A_i=p \ A_i+ q \ A_i=p \ A_{i+1}+q \ A_{i-1}$

$p \ A_{i+1}-p \ A_i=q \ A_i - q \ A_{i-1}$

$\displaystyle A_{i+1}-A_{i}=\frac{q}{p} \ (A_i-A_{i-1}),\ \ \ \ i=1,2,3,\cdots,n-1$

Plugging in the values $1,2,3,\cdots,n-1$ for $i$ produces the following $n-1$ equations.

\displaystyle \begin{aligned} &A_{2}-A_{1}=\frac{q}{p} \ (A_1-A_{0})=\frac{q}{p} \ A_1 \\&A_{3}-A_{2}=\frac{q}{p} \ (A_2-A_{1})=\bigg[ \frac{q}{p} \biggr]^2 \ A_1 \\&\ \ \ \cdots \\&A_{i}-A_{i-1}=\frac{q}{p} \ (A_{i-1}-A_{i-2})=\bigg[ \frac{q}{p} \biggr]^{i-1} \ A_1 \\&\ \ \ \cdots \\&A_{n}-A_{n-1}=\frac{q}{p} \ (A_{n-1}-A_{n-2})=\bigg[ \frac{q}{p} \biggr]^{n-1} \ A_1 \end{aligned}

Adding the first $i-1$ equations produces the following equations.

$\displaystyle A_i-A_1=A_1 \ \biggl[\frac{q}{p}+\bigg( \frac{q}{p} \biggr)^{2}+\cdots+\bigg( \frac{q}{p} \biggr)^{i-1} \biggr]$

$\displaystyle A_i=A_1 \ \biggl[1+\frac{q}{p}+\bigg( \frac{q}{p} \biggr)^{2}+\cdots+\bigg( \frac{q}{p} \biggr)^{i-1} \biggr] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)$

There are two cases to consider. Either $\frac{q}{p}=1$ or $\frac{q}{p} \ne 1$. The first case means $p=\frac{1}{2}$ and the second case means $p \ne \frac{1}{2}$. The following is an expression for $A_i$ for the two cases.

$\displaystyle A_i = \left\{ \begin{array}{ll} \displaystyle i \ A_1 &\ \ \ \ \ \ \displaystyle \frac{q}{p}=1 \\ \text{ } & \text{ } \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4) \\ \displaystyle \frac{ \displaystyle 1- \biggl(\frac{q}{p} \biggr)^i}{\displaystyle 1-\frac{q}{p}} A_1 &\ \ \ \ \ \ \displaystyle \frac{q}{p} \ne 1 \end{array} \right.$

The above two-case expression for $A_i$ is applicable for $i=2,3,\cdots,n$. Setting $i=n$ in (4) with $A_n=1$ gives the following expression for $A_1$:

$\displaystyle A_1 = \left\{ \begin{array}{ll} \displaystyle \frac{1}{n} &\ \ \ \ \ \ p=\frac{1}{2} \\ \text{ } & \text{ } \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (5) \\ \displaystyle \frac{\displaystyle 1-\frac{q}{p}}{1-\biggl( \displaystyle\frac{q}{p} \biggr)^n} &\ \ \ \ \ \ p \ne \frac{1}{2} \end{array} \right.$

Plugging $A_1$ from (5) into (4) gives the following:

$\displaystyle A_i = \left\{ \begin{array}{ll} \displaystyle \frac{i}{n} &\ \ \ \ \ \ p=\frac{1}{2} \\ \text{ } & \text{ } \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (6) \\ \displaystyle \frac{1-\biggl( \displaystyle \frac{q}{p} \biggr)^i}{1-\biggl( \displaystyle \frac{q}{p} \biggr)^n} &\ \ \ \ \ \ p \ne \frac{1}{2} \end{array} \right.$

The expression for $A_i$ in (6) is applicable for $i=1,2,\cdots,n$. For the case $p=\frac{1}{2}$, the probability of player A owning all the coins is the ratio of player A’s initial number of coins to the total number of coins, identical to what is stated in the previous section. Recall that $B_i$ is the probability that player B will end up owning all the coins when initially player A has $i$ coins and player B has $n-i$ coins. By symmetry, the following gives the expression for $B_i$.

$\displaystyle B_i = \left\{ \begin{array}{ll} \displaystyle \frac{n-i}{n} &\ \ \ \ \ \ q=\frac{1}{2} \\ \text{ } & \text{ } \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (7) \\ \displaystyle \frac{1- \biggl(\displaystyle \frac{p}{q} \biggr)^{n-i}}{\displaystyle 1- \biggl(\frac{p}{q} \biggr)^n} &\ \ \ \ \ \ q \ne \frac{1}{2} \end{array} \right.$

For the case $p=\frac{1}{2}$, which is equivalent to $q=\frac{1}{2}$, it is clear that $A_i+B_i=1$. For the case $p \ne \frac{1}{2}$, the following shows that $A_i+B_i=1$.

\displaystyle \begin{aligned} A_i+B_i&=\frac{1-\biggl( \displaystyle \frac{q}{p} \biggr)^i}{1-\biggl( \displaystyle \frac{q}{p} \biggr)^n}+\frac{1- \biggl(\displaystyle \frac{p}{q} \biggr)^{n-i}}{\displaystyle 1- \biggl(\frac{p}{q} \biggr)^n} \\&\text{ } \\&=\frac{p^n-p^n\biggl( \displaystyle \frac{q}{p} \biggr)^i}{p^n-q^n}+\frac{q^n- q^n \biggl(\displaystyle \frac{p}{q} \biggr)^{n-i}}{\displaystyle q^n- p^n}\\&\text{ } \\&=\frac{p^n-p^{n-i} \ q^i}{p^n-q^n}-\frac{q^n- q^i \ p^{n-i}}{\displaystyle p^n- q^n} \\&\text{ } \\&=\frac{p^n-q^n}{p^n-q^n}=1 \end{aligned}

The fact that $A_i+B_i=1$ shows that the gambling game discussed here must end after a finite number of plays and cannot continue indefinitely.
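Formulas (6) and (7) are easy to check numerically. The sketch below (function names are our own) evaluates both and verifies $A_i+B_i=1$ for fair and biased coins:

```python
def prob_A_wins(i, n, p):
    """A_i from formula (6): player A starts with i of n coins; a head has probability p."""
    q = 1 - p
    if p == 0.5:
        return i / n
    r = q / p
    return (1 - r ** i) / (1 - r ** n)

def prob_B_wins(i, n, p):
    """B_i from formula (7), obtained from A_i by symmetry (swap players and coin sides)."""
    return prob_A_wins(n - i, n, 1 - p)

# A_i + B_i = 1 whether or not the coin is fair
for p in (0.4, 0.5, 0.55):
    for i in range(1, 10):
        assert abs(prob_A_wins(i, 10, p) + prob_B_wins(i, 10, p) - 1) < 1e-12

print(prob_A_wins(1, 4, 0.5))  # 0.25, the 1-coin versus 3-coin opening example
```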

___________________________________________________________________________

Further Discussion

The formulas derived here are informative for anyone who gambles. More calculations and discussion can be found in a companion blog on statistics.

___________________________________________________________________________
$\copyright$ 2017 – Dan Ma

The birthday problem

Consider this random experiment. You ask people (one at a time) for their birthdays (month and day only). The process continues until there is a repeat in the series of birthdays, in other words, until two people you’ve surveyed share the same birthday. How many people do you have to ask before finding a repeat? What is the probability that you will have to ask $k$ people before finding a repeat? What is the median number of people you have to ask? In this post, we discuss this random variable and how it relates to the birthday problem.

In the problem at hand, we ignore leap years and assume that each of the 365 days in the year is equally likely to be the birthday of a randomly chosen person. The birthday problem typically asks: how many people do we need to choose in order to have a 50% or better chance of a common birthday among the selected individuals?

The random experiment stated at the beginning can be restated as follows. Suppose that balls are randomly thrown (one at a time) into $n$ cells (e.g. boxes). The random process continues until a ball is thrown into a cell that already has one ball (i.e. until a repeat occurs). Let $X_n$ be the number of balls that are required to obtain a repeat. Some of the problems we discuss include the mean (the average number of balls that are to be thrown to get a repeat) and the probability function. We will also show how this random variable is linked to the birthday problem when $n=365$.

___________________________________________________________________________

The Birthday Problem

First, we start with the birthday problem. The key is to derive the probability that in a group of $k$ randomly selected people, there are at least two who share the same birthday. It is easier to compute the complement: the probability that all birthdays among the group of $k$ people are different. We call this probability $p_k$.

\displaystyle \begin{aligned} p_k&=\frac{365}{365} \ \frac{364}{365} \ \frac{363}{365} \cdots \frac{365-(k-1)}{365} \\&=\frac{364}{365} \ \frac{363}{365} \cdots \frac{365-(k-1)}{365} \\&=\biggl[1-\frac{1}{365} \biggr] \ \biggl[1-\frac{2}{365} \biggr] \cdots \biggl[1-\frac{k-1}{365} \biggr] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1) \end{aligned}

where $k=2,3,4,\cdots,365$. The reasoning for the first line is that there are 365 possible birthdays for the first person selected. Each subsequent selection must avoid all the previous birthdays, leaving 364 choices for the second person and $365-(k-1)$ choices for the $k$th person.

To answer the birthday problem, just plug values of $k$ into (1) to compute $p_k$ and $1-p_k$ until reaching the smallest $k$ such that $p_k<0.5$ and $1-p_k>0.5$. The calculation should be done using software (Excel, for example). The smallest such $k$ is 23 since

$p_{23}= 0.492702766$

$1-p_{23}= 0.507297234$

In a random group of 23 people, there is a less than 50% chance that all birthdays are distinct, and thus a more than 50% chance that at least two birthdays coincide. This may be a surprising result. Without the benefit of formula (1), some people may think that it would take a much larger group to obtain a repeat.
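Formula (1) is straightforward to program. Here is a small Python sketch (in place of the Excel calculation mentioned above; the function name is our choice) that searches for the smallest qualifying $k$:

```python
def p_distinct(k, n=365):
    """Formula (1): probability that k randomly chosen birthdays are all distinct."""
    prob = 1.0
    for j in range(1, k):
        prob *= 1 - j / n
    return prob

# Find the smallest k with p_k < 0.5
k = 2
while p_distinct(k) >= 0.5:
    k += 1
print(k, round(p_distinct(k), 9))  # 23 0.492702766
```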

The benefit of (1) extends beyond the birthday problem. Consider the general case: randomly pick numbers from the set $\left\{1,2,3,\cdots,n-1,n \right\}$ with replacement until a number is chosen twice (until a repeat occurs). Similarly, let $p_{n,k}$ be the probability that in $k$ draws all chosen numbers are distinct. The probability $p_{n,k}$ is obtained from (1) by replacing 365 with $n$.

\displaystyle \begin{aligned} p_{n,k}&=\frac{n}{n} \ \frac{n-1}{n} \ \frac{n-2}{n} \cdots \frac{n-(k-1)}{n} \\&=\frac{n-1}{n} \ \frac{n-2}{n} \cdots \frac{n-(k-1)}{n} \\&=\biggl[1-\frac{1}{n} \biggr] \ \biggl[1-\frac{2}{n} \biggr] \cdots \biggl[1-\frac{k-1}{n} \biggr] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2) \end{aligned}

Formula (2) will be useful in the next section.

___________________________________________________________________________

The Random Variable

We now look into the random variable discussed at the beginning: the number of people surveyed until a repeated birthday or, equivalently, the number of balls thrown until some cell has two balls. To illustrate the idea, let’s look at an example.

Example 1
Roll a fair die until obtaining a repeated face value. Let $X_6$ be the number of rolls to obtain the repeated value. Find the probability function $P[X_6=k]$ where $k=2,3,4,5,6,7$.

Note that $P[X_6=k]$ is the probability that it will take $k$ rolls to get a repeated die value.

$\displaystyle P(X_6=2)=\frac{6}{6} \times \frac{1}{6}=\frac{1}{6}$

$\displaystyle P(X_6=3)=\frac{6}{6} \times \frac{5}{6} \times \frac{2}{6}=\frac{10}{6^2}$

$\displaystyle P(X_6=4)=\frac{6}{6} \times \frac{5}{6} \times \frac{4}{6} \times \frac{3}{6}=\frac{60}{6^3}$

$\displaystyle P(X_6=5)=\frac{6}{6} \times \frac{5}{6} \times \frac{4}{6} \times \frac{3}{6} \times \frac{4}{6}=\frac{240}{6^4}$

$\displaystyle P(X_6=6)=\frac{6}{6} \times \frac{5}{6} \times \frac{4}{6} \times \frac{3}{6} \times \frac{2}{6} \times \frac{5}{6}=\frac{600}{6^5}$

$\displaystyle P(X_6=7)=\frac{6}{6} \times \frac{5}{6} \times \frac{4}{6} \times \frac{3}{6} \times \frac{2}{6} \times \frac{1}{6} \times \frac{6}{6}=\frac{720}{6^6}$

To get a repeat in 2 rolls, there are 6 choices for the first roll and the second roll has only one choice: the value of the first roll. To get a repeat in 3 rolls, there are 6 choices for the first roll, 5 choices for the second roll, and the third roll must be one of the 2 previous distinct values. In general, the first $k-1$ rolls must be distinct and the last roll must be one of the previous values. $\square$
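The six probabilities in Example 1 can be verified with exact rational arithmetic; a brief sketch (the function name is ours):

```python
from fractions import Fraction

def pmf_die(k):
    """P[X_6 = k]: first k-1 rolls are distinct, roll k repeats one of them."""
    prob = Fraction(k - 1, 6)          # last roll must hit one of the k-1 seen values
    for j in range(1, k - 1):
        prob *= Fraction(6 - j, 6)     # roll j+1 avoids the j values seen so far
    return prob

assert pmf_die(2) == Fraction(1, 6)
assert pmf_die(4) == Fraction(60, 6 ** 3)
assert sum(pmf_die(k) for k in range(2, 8)) == 1   # the six probabilities sum to exactly 1
```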

The reasoning process leads nicely to the general case. In the general case, let’s consider the occupancy interpretation. In throwing balls into $n$ cells, let $X_n$ be defined as above, i.e. the number of balls that are required to obtain a repeat. The following gives the probability $P[X_n=k]$.

\displaystyle \begin{aligned} P[X_n=k]&=\frac{n}{n} \times \frac{n-1}{n} \times \cdots \times \frac{n-(k-2)}{n} \times \frac{k-1}{n} \\&=\frac{n-1}{n} \times \cdots \times \frac{n-(k-2)}{n} \times \frac{k-1}{n} \\&=\frac{(n-1) \times (n-2) \times \cdots \times (n-(k-2)) \times (k-1)}{n^{k-1}} \\&=\biggl[1-\frac{1}{n} \biggr] \times \biggl[1-\frac{2}{n} \biggr] \times \cdots \times \biggl[1-\frac{k-2}{n}\biggr] \times \frac{k-1}{n} \ \ \ \ \ \ \ (3) \end{aligned}

where $k=2,3,\cdots,n+1$.

The reasoning is similar to Example 1. To get a repeat on the $k$th throw, the first $k-1$ balls must go into different cells while the last ball must go into one of the $k-1$ occupied cells. There are $n (n-1) \cdots (n-(k-2))$ ways for the first $k-1$ balls to go into different cells, and $k-1$ occupied cells for the last ball to land in. The product of these two quantities is the numerator of (3).

Once the probability function (3) is obtained, the mean $E[X_n]$ can be computed accordingly. For the case $n=365$, $E[X_{365}]=24.62$ (calculated by programming the probability function in Excel). On average, about 25 people must be sampled to obtain a repeated birthday.
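Formula (3) and the mean can be checked with a short script (a sketch of our own, not the Excel program mentioned above):

```python
def pmf(n, k):
    """Formula (3): P[X_n = k]."""
    prob = (k - 1) / n                 # last ball lands in one of the k-1 occupied cells
    for j in range(1, k - 1):
        prob *= 1 - j / n              # earlier balls land in distinct cells
    return prob

support = range(2, 367)                # k = 2, 3, ..., n + 1 with n = 365
total = sum(pmf(365, k) for k in support)
mean = sum(k * pmf(365, k) for k in support)
print(round(total, 9), round(mean, 2))  # 1.0 24.62
```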

Another interesting quantity is $P[X_n>k]$, the probability that more than $k$ balls must be thrown to get a repeat. Mathematically this can be obtained by first calculating $P[X_n \le k]$, summing the individual probabilities via (3); this is a workable approach using software. There is another way that is more informative. For the event $X_n>k$ to happen, the first $k$ throws must land in different cells (no repeat). Thus the event $X_n>k$ is identical to the event that there is no repeat in the first $k$ throws. This is how the random variable $X_n$ is linked to the birthday problem: the probability $P[X_n>k]$ agrees with the probability $p_{n,k}$ in (2).

$\displaystyle P[X_n>k]=\biggl[1-\frac{1}{n} \biggr] \ \biggl[1-\frac{2}{n} \biggr] \cdots \biggl[1-\frac{k-1}{n} \biggr] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4)$

___________________________________________________________________________

Back to The Birthday Problem

Consider the case of $X_{365}$. What is the median of $X_{365}$? That is the median number of people surveyed to obtain a pair of identical birthdays. The median of $X_{365}$ is the least $k$ such that $P[X_{365} \le k]$ is at least 0.5. Note that $P[X_{365}>k]$ is identical to $p_k$ in (1). The above calculation shows that $p_{23}=0.4927$ and $1-p_{23}=0.5073$. Thus the median of $X_{365}$ is 23: when sampling birthdays at random, about half the time you can expect to survey 23 or fewer people.

The birthday problem is thus equivalent to finding the median of the random variable $X_{365}$. A little more broadly, the birthday problem is connected to the percentiles of the variable $X_{n}$. In contrast, the mean of $X_{365}$ is $E[X_{365}]=24.62$. The following lists several percentiles of the random variable $X_{365}$.

$\begin{array}{ccccccc} k & \text{ } & P[X_{365}>k] & \text{ } & P[X_{365} \le k] & \text{ } & \text{Percentile} \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 14 & \text{ } & 0.77690 & & 0.22310 & \\ 15 & \text{ } & 0.74710 & & 0.25290 & \text{ } & \text{25th Percentile} \\ 16 & \text{ } & 0.71640 & & 0.28360 & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 22 & \text{ } & 0.52430 & & 0.47570 & \\ 23 & \text{ } & 0.49270 & & 0.50730 & \text{ } & \text{50th Percentile} \\ 24 & \text{ } & 0.46166 & & 0.53834 & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 31 & \text{ } & 0.26955 & & 0.73045 & \\ 32 & \text{ } & 0.24665 & & 0.75335 & \text{ } & \text{75th Percentile} \\ 33 & \text{ } & 0.22503 & & 0.77497 & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 40 & \text{ } & 0.10877 & & 0.89123 & \\ 41 & \text{ } & 0.09685 & & 0.90315 & \text{ } & \text{90th Percentile} \\ 42 & \text{ } & 0.08597 & & 0.91403 & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 46 & \text{ } & 0.05175 & & 0.94825 & \\ 47 & \text{ } & 0.04523 & & 0.95477 & \text{ } & \text{95th Percentile} \\ 48 & \text{ } & 0.03940 & & 0.96060 & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 56 & \text{ } & 0.01167 & & 0.98833 & \\ 57 & \text{ } & 0.00988 & & 0.99012 & \text{ } & \text{99th Percentile} \\ 58 & \text{ } & 0.00834 & & 0.99166 & \\ \end{array}$

It is clear that in a group of 366 people, there must be at least one repeated birthday (again ignoring leap years); this is the pigeonhole principle. As the percentiles in the above table show, you do not need to survey anywhere close to 366 people to get a repeat. The median is 23 as discussed. The 75th percentile of $X_{365}$ is 32.

The preceding calculation shows that you do not need a large group to find a repeated birthday. About 50% of the time, you will survey 23 or fewer people; about 75% of the time, 32 or fewer; about 99% of the time, 57 or fewer, far fewer than 365 or 366. So in a random group of around 50 people, there is a near certainty of finding a shared birthday. In a random group of 100 people, a shared birthday is all but guaranteed.

For a further demonstration, we simulated the random variable $X_{365}$ 10,000 times. The range of the simulated values is 2 to 78. Thus the chance of needing to survey 100 people is smaller than 1 in 10,000; to observe a simulated value of 100, we would have to simulate many more than 10,000 values of $X_{365}$. The median of the 10,000 simulated results is 23. The following table summarizes the results.

$\begin{array}{rrrrrr} \text{Interval} & \text{ } & \text{Count} & \text{ } & \text{Proportion} & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ \text{2 to 9} & \text{ } &953 & & 0.09530 & \\ \text{10 to 19} & \text{ } & 2870 & & 0.28700 & \\ \text{20 to 29} & \text{ } & 3055 & & 0.30550 & \\ \text{30 to 39} & \text{ } & 1922 & & 0.19220 & \\ \text{40 to 49} & \text{ } & 886 & & 0.08860 & \\ \text{50 to 59} & \text{ } & 253 & & 0.02530 & \\ \text{60 to 69} & \text{ } & 48 & & 0.00480 & \\ \text{70 to 78} & \text{ } & 13 & & 0.00130 & \\ \end{array}$

Not shown in the table is that 33 of the simulated results equal 2. Thus it is possible to ask two people and find that they share a birthday, but the odds of that happening are 33 out of 10,000 according to this particular set of simulations (probability 0.0033); the theoretical probability is 1/365 = 0.002739726. There are 2 instances of 78 among the 10,000 simulated values, i.e. odds of 2 out of 10,000 (probability 0.0002); the theoretical probability from (3) is 0.000037.
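A simulation along these lines can be sketched as follows (the seed and function name are our choices, so the exact counts will differ from the table above):

```python
import random

def surveys_until_repeat(n=365, rng=random):
    """Draw one value of X_n: people asked until a birthday repeats."""
    seen = set()
    while True:
        day = rng.randrange(n)
        if day in seen:
            return len(seen) + 1       # the repeat occurred on this ask
        seen.add(day)

rng = random.Random(365)
sims = sorted(surveys_until_repeat(rng=rng) for _ in range(10_000))
median = sims[len(sims) // 2]
mean = sum(sims) / len(sims)
print(median, round(mean, 1))  # median near 23, mean near 24.6
```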

___________________________________________________________________________
$\copyright \ 2017 \text{ by Dan Ma}$

How long does it take to collect all coupons?

This post discusses the coupon collector problem, a classical problem in probability.

___________________________________________________________________________

The Coupon Collector Problem

The problem is usually stated as a coupon collector trying to collect the entire set of coupons. For example, each time the coupon collector buys a product (e.g. a box of breakfast cereal), he receives a coupon, which is a prize that can be a toy or a baseball card or other interesting item. Suppose that there are $n$ different types of coupons (prizes) and that the coupon collector wishes to collect the entire set. How many units of the product does the coupon collector have to buy in order to collect the entire set?

A simplified discussion of the coupon collector problem is found in another blog post. This post is a more detailed discussion.

This blog post in another blog discusses the coupon collector problem from a simulation perspective.

As shown below, if there are 5 different coupons, it takes about 12 purchases on average to get all coupons. If there are 10 different types of coupons, on average it takes about 30 purchases. If there are 50 different types of coupons, it takes on average about 225 purchases to collect the entire set. The first few coupons are obtained fairly quickly. As more and more coupons are accumulated, it becomes harder to get the remaining ones. For example, in the 50-coupon case, after the first 49 coupons are obtained, it takes on average 50 purchases to get the last coupon.

Suppose that the coupon collector does not want to collect the entire set and only wishes to collect $r$ distinct coupons where $r \le n$. It turns out that this special case only requires a minor tweak to the case of collecting the entire set. Our strategy then is to focus on the main case. The special case will be discussed at the end of the post.

We first consider the main case that the coupon collector wishes to collect the entire set. The problem can be cast as a random sampling from the population $S=\left\{1,2,3,\cdots,n \right\}$. Selecting a number at random from $S$ with replacement is equivalent to the coupon collector receiving a coupon. Let $X_n$ be the minimum number of selections such that each number in $S$ is picked at least once. In this post we discuss the probability function of $X_n$ as well as its mean and variance.

Another interpretation is that the problem can be viewed as an occupancy problem, which involves throwing balls into $n$ cells at random. The random quantity of interest is the number of balls required so that each cell has at least one ball. Clearly this formulation is identical to the coupon interpretation and the random sampling interpretation. The occupancy angle is useful since we can leverage the formulas developed in this previous post. A description of the occupancy problem is given here.

Regardless of the interpretation, the goal is to obtain information about the random variable $X_n$, the minimum number of random selections from $S=\left\{1,2,3,\cdots,n \right\}$ needed to have all $n$ distinct values represented in the sample.

___________________________________________________________________________

Mean and Variance

The mean and variance of $X_n$ are easier to obtain. So that is where we begin. The key is to break up $X_n$ into a sum as follows:

$X_n=C_{1}+C_{2}+C_{3}+\cdots+C_{i-1}+C_{i}+\cdots+C_{n} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (0)$

where $C_{i}$ is the number of additional selections from $S=\left\{1,2,3,\cdots,n \right\}$ needed to get a number that is distinct from the $i-1$ distinct numbers chosen previously. For example, $C_3$ is the number of random selections needed to get a number distinct from the two distinct numbers obtained up to that point.

Note that each $C_i$ involves repeated sampling until some criterion is reached, thus resembling a geometric random variable. Indeed, each $C_i$ is a geometric random variable. As the sampling continues and more distinct values are accumulated, it becomes harder to obtain a new number. After $i-1$ distinct numbers have been obtained, the probability of drawing a new distinct number is $p=\frac{n-(i-1)}{n}$. As geometric random variables, the $C_i$ have the following mean and variance.

$\displaystyle E[C_i]=\frac{1}{p}=\frac{n}{n-(i-1)}$

$\displaystyle Var[C_i]=\frac{1-p}{p^2}=\frac{n(i-1)}{[n-(i-1)]^2}$

where $i=1,2,3,\cdots,n$. Note that the random variables $C_i$ are independent. The value of $C_i$ does not depend on how many trials it takes to draw the previous $i-1$ distinct numbers. The following gives the mean and variance of $X_n$.

$\displaystyle E[X_n]=\sum \limits_{i=1}^n E[C_i]=\sum \limits_{i=1}^n \frac{n}{n-(i-1)} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$

$\displaystyle Var[X_n]=\sum \limits_{i=1}^n Var[C_i]=\sum \limits_{i=1}^n \frac{n(i-1)}{[n-(i-1)]^2} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$
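Formulas (1) and (2) translate directly into code. A minimal sketch (function names are my own):

```python
def coupon_mean(n):
    """E[X_n] from formula (1): sum of the geometric means n/(n-(i-1))."""
    return sum(n / (n - (i - 1)) for i in range(1, n + 1))

def coupon_variance(n):
    """Var[X_n] from formula (2): sum of the geometric variances."""
    return sum(n * (i - 1) / (n - (i - 1)) ** 2 for i in range(1, n + 1))

# For n = 6: mean 14.7 and variance about 38.99 (used in Example 1 below).
print(coupon_mean(6), coupon_variance(6))
```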

The expectation $E[X_n]$ can be rearranged as follows to give more information.

$\displaystyle E[X_n]=n \ \biggl[\frac{1}{n}+ \cdots + \frac{1}{3}+ \frac{1}{2} + 1\biggr]=n \ H_n \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)$

The quantity $H_n$ is the partial sum of the harmonic series. Note that $H_n \rightarrow \infty$ as $n \rightarrow \infty$. Thus $E[X_n] \rightarrow \infty$ as $n \rightarrow \infty$. The quantity $H_n$ can be interpreted as the average number of purchases required per coupon. The following table lists the expected values for selected values of $n$.

Table 1

$\begin{array}{cccccc} \text{Number of} & \text{ } & \text{Expected Number of Trials} & \text{ } & \text{Expected Total Number} & \\ \text{Coupons} & \text{ } & \text{per Coupon} & \text{ } & \text{of Trials (rounded up)} & \\ n & \text{ } & H_n & \text{ } & E[X_n] & \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & \\ 1 & \text{ } & 1.0000 & & 1 & \\ 2 & \text{ } & 1.5000 & & 3 & \\ 3 & \text{ } & 1.8333 & & 6 & \\ 4 & \text{ } & 2.0833 & & 9 & \\ 5 & \text{ } & 2.2833 & & 12 & \\ 6 & \text{ } & 2.4500 & & 15 & \\ 7 & \text{ } & 2.5929 & & 19 & \\ 8 & \text{ } & 2.7179 & & 22 & \\ 9 & \text{ } & 2.8290 & & 26 & \\ 10 & \text{ } & 2.9290 & & 30 & \\ 20 & \text{ } & 3.5977 & & 72 & \\ 30 & \text{ } & 3.9950 & & 120 & \\ 40 & \text{ } & 4.2785 & & 172 & \\ 50 & \text{ } & 4.4992 & & 225 & \\ 60 & \text{ } & 4.6799 & & 281 & \\ 70 & \text{ } & 4.8328 & & 339 & \\ 80 & \text{ } & 4.9655 & & 398 & \\ 90 & \text{ } & 5.0826 & & 458 & \\ 100 & \text{ } & 5.1874 & & 519 & \\ \end{array}$

Table 1 gives an estimate of how long it takes, on average, to collect the entire set of coupons for selected coupon sizes. The third column gives the expected total number of purchases to obtain the entire coupon set. The second column gives the average number of purchases per coupon. For the 50-coupon case, it takes on average about 4.5 purchases per coupon. However, the average does not tell the whole story. To get the 50th coupon, it takes on average 50 trials; note that $E[C_{50}]=50$ in the 50-coupon case in formula (1). Similarly, to get the 49th coupon, it takes on average 50/2 = 25 trials. In one simulation of the 50-coupon problem, it took 54 trials to obtain the 50th coupon.

___________________________________________________________________________

The Occupancy Problem

We now view the coupon collector problem as an occupancy problem in order to leverage a formula from a previous post. Suppose that we randomly throw $k$ balls into $n$ cells. Let $Y_{k,n}$ be the number of empty cells as a result of randomly assigning $k$ balls into $n$ cells. The following gives the probabilities $P(Y_{k,n}=j)$ where $j=0,1,2,\cdots,n-1$.

$\displaystyle P(Y_{k,n}=0)=\sum \limits_{i=0}^{n} (-1)^{i} \binom{n}{i} \biggl(1-\frac{i}{n}\biggr)^{k} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4)$

$\displaystyle P(Y_{k,n}=j)=\binom{n}{j} \sum \limits_{i=0}^{n-j} (-1)^{i} \binom{n-j}{i} \biggl(1-\frac{j+i}{n}\biggr)^{k} \ \ \ \ \ \ \ \ \ \ \ \ \ (5)$

where $j=1,2, \cdots, n-1$.

The notation $\binom{n}{j}$ is the binomial coefficient, which is the number of ways to choose $j$ objects out of $n$ objects where order does not matter. The calculation is defined by $\displaystyle \binom{n}{j}=\frac{n!}{j! (n-j)!}$.

The formula (5) gives the probability of having $j$ empty cells. In throwing $k$ balls into $n$ cells, the probability of having $w$ occupied cells is then $P[Y_{k,n}=n-w]$.
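Formula (5), with (4) as the $j=0$ case, can be coded directly. A sketch using Python's `math.comb` for the binomial coefficient:

```python
from math import comb

def p_empty_cells(k, n, j):
    """P(Y_{k,n} = j): probability of exactly j empty cells after
    randomly throwing k balls into n cells (formulas (4) and (5))."""
    return comb(n, j) * sum(
        (-1) ** i * comb(n - j, i) * (1 - (j + i) / n) ** k
        for i in range(n - j + 1)
    )

# Sanity checks: with one ball, exactly n - 1 cells are empty,
# and the probabilities over j = 0, ..., n-1 sum to 1.
print(p_empty_cells(1, 4, 3))
print(sum(p_empty_cells(10, 4, j) for j in range(4)))
```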

___________________________________________________________________________

The Probability Function

We now discuss the probability function of the random variable $X_n$, namely $P[X_n=k]$ for $k=n,n+1,n+2,\cdots$. The event $X_n=k$ means that all $n$ cells are occupied after throwing $k$ balls, with the first $k-1$ balls landing in only $n-1$ cells. Put another way, there are zero empty cells after throwing $k$ balls and there is 1 empty cell after throwing the first $k-1$ balls. This can be stated using the notation of the preceding section on the occupancy problem as follows:

$P[X_n=k]=P[Y_{k,n}=0 \text{ and } Y_{k-1,n}=1] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (6)$

Consider the following derivation.

\displaystyle \begin{aligned} P[X_n=k]&=P[Y_{k,n}=0 \text{ and } Y_{k-1,n}=1] \\&=P[Y_{k,n}=0 \ \lvert \ Y_{k-1,n}=1] \times P[Y_{k-1,n}=1] \\&=\frac{1}{n} \times \binom{n}{1} \sum \limits_{i=0}^{n-1} (-1)^i \binom{n-1}{i} \biggl[ 1-\frac{1+i}{n} \biggr]^{k-1} \\&=\sum \limits_{i=0}^{n-1} (-1)^i \binom{n-1}{i} \biggl[ 1-\frac{1+i}{n} \biggr]^{k-1} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (7) \end{aligned}

where $k=n,n+1,n+2,\cdots$.

Rather than memorizing the probability function in (7), a better way is to focus on the thought process that is inherent in (6).

One comment about the calculation in (7): the summation for $P[X_n=k]$ has $n$ terms. Moreover, a desired probability may involve multiple values of $k$, e.g.

$P[X_6>15]=1-P[6 \le X_6 \le 15]$.

Unless the number of values for $k$ is very small, the calculation should be done using software. Microsoft Excel is an excellent way to perform the calculation. The calculations for the examples below are programmed in Excel.

___________________________________________________________________________

Examples

Example 1
Suppose that a fair die is rolled until all 6 faces have appeared. Find the mean number of rolls and the variance of the number of rolls. What is the probability that it will take at least 12 rolls? What is the probability that it will take more than 15 rolls?

Using the notation developed above, the random variable of interest is $X_6$. Its mean and variance are:

$\displaystyle E[X_6]=6 \ \biggl[ 1 + \frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6} \biggr]=6 \times 2.45 = 14.7 \approx 15$

$\displaystyle Var[X_6]=\sum \limits_{i=1}^6 \frac{6(i-1)}{[6-(i-1)]^2}=38.99$

The following is the probability function for $X_6$.

$\displaystyle P[X_6=k]=\sum \limits_{i=0}^{5} (-1)^i \binom{5}{i} \biggl[ 1-\frac{1+i}{6} \biggr]^{k-1}$

where $k=6,7,8,\cdots$

For each $k$, computing $P[X_6=k]$ involves a sum of 6 terms. Performing the calculations in Excel, the desired probabilities are:

$P[X_6 \ge 12]=1-P[6 \le X_6 \le 11]=1-0.356206419=0.643793581$

$P[X_6 > 15]=1-P[6 \le X_6 \le 15]=1-0.644212739=0.355787261$

Even though the average number of trials is 15, there is still a significant probability that it will take more than 15 trials. This is because the variance is quite large. $\square$
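The Excel calculation in this example can be replicated in a few lines of code. A sketch using formula (7) (function names are my own):

```python
from math import comb

def p_x6(k):
    """P[X_6 = k] from the probability function above (formula (7))."""
    return sum((-1) ** i * comb(5, i) * (1 - (1 + i) / 6) ** (k - 1)
               for i in range(6))

# Complements of the cumulative sums, as in the example.
p_at_least_12 = 1 - sum(p_x6(k) for k in range(6, 12))
p_more_than_15 = 1 - sum(p_x6(k) for k in range(6, 16))
print(p_at_least_12, p_more_than_15)
```

Both numbers agree with the Excel values in the example.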

Example 2
An Internet startup is rapidly hiring new employees. What is the expected number of employees hired until all birth months are represented? Assume that the 12 birth months are equally likely. What is the probability that the company will have to hire more than 25 employees? If the company currently has more than 25 employees with fewer than 12 birth months represented, what is the probability that it will have to hire more than 35 employees to have all 12 birth months represented in the company?

The random variable of interest is $X_{12}$. The following shows the mean and probability function.

$\displaystyle E[X_{12}]=12 \ \biggl[ 1 + \frac{1}{2}+\frac{1}{3}+\cdots +\frac{1}{12} \biggr]=37.23852814 \approx 38$

$\displaystyle P[X_{12}=k]=\sum \limits_{i=0}^{11} (-1)^i \binom{11}{i} \biggl[ 1-\frac{1+i}{12} \biggr]^{k-1}$

where $k=12,13,14,\cdots$

Performing the calculation in Excel, we obtain the following probabilities.

$P[X_{12} > 25]=1-P[12 \le X_{12} \le 25]=1-0.181898592=0.818101408$

$P[X_{12} > 35]=1-P[12 \le X_{12} \le 35]=1-0.531821149=0.468178851$

$\displaystyle P[X_{12} > 35 \ \lvert \ X_{12} > 25]=\frac{P[X_{12} > 35]}{P[X_{12} > 25]}=\frac{0.468178851}{0.818101408}=0.572274838$
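The same approach verifies these numbers; the conditional probability is simply the ratio because the event $X_{12}>35$ is contained in the event $X_{12}>25$. A sketch:

```python
from math import comb

def p_x12(k):
    """P[X_12 = k] from the probability function above."""
    return sum((-1) ** i * comb(11, i) * (1 - (1 + i) / 12) ** (k - 1)
               for i in range(12))

p_more_than_25 = 1 - sum(p_x12(k) for k in range(12, 26))
p_more_than_35 = 1 - sum(p_x12(k) for k in range(12, 36))
print(p_more_than_25, p_more_than_35, p_more_than_35 / p_more_than_25)
```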

___________________________________________________________________________

A Special Case

We now consider the special case that the coupon collector only wishes to collect $r$ distinct coupons where $r \le n$. Of course, $n$ is the total number of distinct coupon types. Let $X_{n,r}$ be the minimum number of purchases such that $r$ distinct coupons have been obtained. In the random sampling interpretation, $X_{n,r}$ would be the minimum sample size such that $r$ distinct elements have been chosen from the sample space $S=\left\{1,2,3,\cdots,n \right\}$. The mean and variance of $X_{n,r}$ follow from the same idea. Each $X_{n,r}$ is the independent sum of geometric random variables as in (0).

$X_{n,r}=C_{1}+C_{2}+C_{3}+\cdots+C_{i-1}+C_{i}+\cdots+C_{r} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (8)$

where $r \le n$.

Thus $E[X_{n,r}]$ and $Var[X_{n,r}]$ would be like (1) and (2) except that the summation is through $r$ instead of $n$.

For the probability function of $X_{n,r}$, we only need to tweak the thought process expressed in (6) slightly. For the event $X_{n,r}=k$ to happen, exactly $r$ cells are occupied after throwing $k$ balls with the first $k-1$ balls landing in $r-1$ cells. In other words, there are exactly $n-r$ empty cells after throwing $k$ balls and there are exactly $n-r+1$ empty cells after throwing $k-1$ balls. The following expresses this condition in terms of the occupancy problem, i.e. similar to (6).

$P[X_{n,r}=k]=P[Y_{k,n}=n-r \text{ and } Y_{k-1,n}=n-r+1] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (9)$

Here are the important components that go into $P[X_{n,r}=k]$, with the first one coming from the occupancy formula (5).

$\displaystyle P[Y_{k-1,n}=n-r+1]=\binom{n}{n-r+1} \sum \limits_{i=0}^{r-1} (-1)^i \binom{r-1}{i} \biggl[ 1-\frac{n-r+1+i}{n} \biggr]^{k-1}$

$\displaystyle P[Y_{k,n}=n-r \ \lvert \ Y_{k-1,n}=n-r+1]=\frac{n-r+1}{n}$

Multiplying the above two probabilities together produces the desired probability $P[X_{n,r}=k]$ for $k=r,r+1,r+2,\cdots$.

$\displaystyle P[X_{n,r}=k]=\binom{n-1}{r-1} \sum \limits_{i=0}^{r-1} (-1)^i \binom{r-1}{i} \biggl[ 1-\frac{n-r+1+i}{n} \biggr]^{k-1} \ \ \ \ \ \ \ \ (10)$

Note that when $n=r$ (collecting the entire set of coupons), formula (10) would be identical to (7). The following example demonstrates the calculation.

Example 3
Consider the 6-coupon case discussed in Example 1. Suppose that the coupon collector is interested in collecting $r=4$ coupons. What is the expected number of purchases to get 4 coupons? What is the probability that it will take more than 6 purchases to get 4 coupons? What is the probability that it will take more than 8 purchases to get 4 coupons? Compare these results with Example 1.

The random variable of interest is $X_{6,4}$. The mean is:

$\displaystyle E[X_{6,4}]=6 \biggl[ \frac{1}{6}+\frac{1}{5}+\frac{1}{4}+\frac{1}{3} \biggr]=5.7 \approx 6$

Note that it is much faster to obtain 4 coupons than the entire set of six. The following gives the probability function for $X_{6,4}$.

$\displaystyle P[X_{6,4}=k]=\binom{5}{3} \sum \limits_{i=0}^{3} (-1)^i \binom{3}{i} \biggl[ 1-\frac{3+i}{6} \biggr]^{k-1}$

where $k=4,5,6,\cdots$

Performing the calculations in Excel gives the following probabilities.

$P[X_{6,4} > 6]=1-P[4 \le X_{6,4} \le 6]=1-0.74845679=0.25154321$

$P[X_{6,4} > 8]=1-P[4 \le X_{6,4} \le 8]=1-0.928712277=0.071287723$

The first probability shows there is still a good chance that it will take more than the mean number of trials to get 4 coupons. Even so, the wait is much shorter than in Example 1, since the probability of needing more than 8 purchases is fairly small. $\square$
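Formula (10) is just as easy to code. The following sketch reproduces the probabilities in this example:

```python
from math import comb

def p_xnr(n, r, k):
    """P[X_{n,r} = k] from formula (10)."""
    return comb(n - 1, r - 1) * sum(
        (-1) ** i * comb(r - 1, i) * (1 - (n - r + 1 + i) / n) ** (k - 1)
        for i in range(r)
    )

p_more_than_6 = 1 - sum(p_xnr(6, 4, k) for k in range(4, 7))
p_more_than_8 = 1 - sum(p_xnr(6, 4, k) for k in range(4, 9))
print(p_more_than_6, p_more_than_8)
```

As a spot check, $P[X_{6,4}=4]$ should equal $\frac{6 \cdot 5 \cdot 4 \cdot 3}{6^4}$, the chance that the first four rolls are all distinct.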

___________________________________________________________________________

Moment Generating Function

One distributional quantity that is easy to obtain is the moment generating function (MGF) for the random variable $X_n$ (the case of collecting the entire set of coupons) as well as for the random variable $X_{n,r}$ (the partial case). Since both of these random variables are the independent sum of geometric random variables, their MGFs would simply be the product of the individual geometric MGFs. The following gives the results.

$\displaystyle M_{X_n}(t)=\prod \limits_{i=1}^n \frac{[n-(i-1)] e^t}{n-(i-1) e^t} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (11)$

$\displaystyle M_{X_{n,r}}(t)=\prod \limits_{i=1}^r \frac{[n-(i-1)] e^t}{n-(i-1) e^t} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (12)$
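As a quick numerical check (my own sketch, not from the post), differentiating (11) at $t=0$ should recover the mean $E[X_n]=nH_n$; a central difference approximates the derivative well enough.

```python
import math

def mgf(n, t):
    """M_{X_n}(t) from formula (11): product of the geometric MGFs."""
    result = 1.0
    for i in range(1, n + 1):
        result *= (n - (i - 1)) * math.exp(t) / (n - (i - 1) * math.exp(t))
    return result

# Central-difference estimate of M'(0), which should be close to E[X_6] = 14.7.
h = 1e-6
mean_estimate = (mgf(6, h) - mgf(6, -h)) / (2 * h)
print(mean_estimate)
```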

___________________________________________________________________________
$\copyright \ 2017 \text{ by Dan Ma}$

Looking for a Match

I recently gave an exam in my statistics course, which turned out to be an excellent example for the matching problem, a classic problem in probability. There were 66 students taking the test. I wrote down the names of the students in the order of turning in the exam. The following table shows the positions of the students in alphabetical order and in the order they turned in the exam.

For example, the first student in the alphabetical order was the 12th student who turned in the exam. The first student who turned in the exam was the 37th student in the alphabetical order. It turned out that there was a student who had the same position in both orders. Such a student is called a match (see the following table).

This example captures the essence of a classic problem in probability called the matching problem. There are many other colorful descriptions of the matching problem. All of them involve the pairing of two orderings on a set of objects. We have a set of objects or items that are ordered in some natural way. Then we scramble the items randomly (or according to some ordering that is unpredictable in advance). Whenever an item has the same position in both the original order and in the scrambled order, it is called a match.

In our exam example, one version of the matching problem asks: what is the probability that there is at least one student that is a match?

In fact, in matching two orderings, such as the one described here, a match happens more often than not. The probability of having at least one match is roughly 0.63. Specifically, when $n$ is the number of students taking the exam, the probability of finding at least one match approaches $1-e^{-1}=0.632121$ as $n \rightarrow \infty$. The derivation of this probability is based on the inclusion-exclusion principle and is discussed in the blog post called The Matching Problem.

Even though the probability of having at least one match is a function of $n$ (the number of items), the probability converges to $1-e^{-1}$ pretty rapidly. Thus for all practical purposes, we can say that the probability of having at least one match is about 0.63 (roughly two-thirds) whenever the number of items involved in the random scrambling is reasonably large (as with the 66 students taking the exam).

Instead of finding the probability of having at least one match, we can also ask for the probability of having exactly $k$ matches, where $k$ is any integer from $0$ to $n$. Let $X_n$ be the number of matches when we match the “exam turning in” order with the alphabetical order for $n$ students. The probability function $P(X_n=k)$ is derived in the blog post called More About the Matching Problem.

The blog post More About the Matching Problem also points out that $P(X_n=k)$ is approximated by the Poisson distribution with parameter $\alpha=1$. Thus we have:

$\displaystyle (1) \ \ \ \ \ P(X_n=k) \approx \frac{e^{-1}}{k!} \ \ \ \ \ \ \ \ \ \ k=0,1,2,\cdots,n$

The following are the first 4 probabilities of $P(X_n=k)$.

$\displaystyle (2) \ \ \ \ \ P(X_n=0) \approx \frac{e^{-1}}{0!}=0.3679$

$\displaystyle (3) \ \ \ \ \ P(X_n=1) \approx \frac{e^{-1}}{1!}=0.3679$

$\displaystyle (4) \ \ \ \ \ P(X_n=2) \approx \frac{e^{-1}}{2!}=0.1839$

$\displaystyle (5) \ \ \ \ \ P(X_n=3) \approx \frac{e^{-1}}{3!}=0.0613$

In the random experiment of matching two orderings on the same set of objects, about 37% of the time there is no match and about 37% of the time there is exactly one match. Having no matches and having exactly one match are the most likely scenarios (together occurring about 74% of the time). Having 2 matches is possible, but only happens about 18% of the time. It is rare to have 3 or more matches.

Another interesting observation is that if a match occurs, it is most likely that there is only one match (such as in the example discussed here).
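The Poisson approximation is easy to see empirically. The sketch below (names are my own) shuffles $n=66$ items repeatedly and tabulates the number of fixed points:

```python
import math
import random

def count_matches(n, rng):
    """Shuffle positions 0..n-1 and count fixed points (matches)."""
    perm = list(range(n))
    rng.shuffle(perm)
    return sum(1 for pos, val in enumerate(perm) if pos == val)

def match_frequencies(n, trials=100000, seed=66):
    """Empirical distribution of the number of matches."""
    rng = random.Random(seed)
    freq = {}
    for _ in range(trials):
        m = count_matches(n, rng)
        freq[m] = freq.get(m, 0) + 1
    return {m: c / trials for m, c in sorted(freq.items())}

freq = match_frequencies(66)
# Compare the empirical frequencies with the Poisson(1) values e^{-1}/k!.
for k in range(4):
    print(k, freq.get(k, 0.0), math.exp(-1) / math.factorial(k))
```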

There are many colorful descriptions of the matching problem. The possibilities are limitless. One example is that of $n$ married couples going to a ballroom dancing class. The instructor randomly assigns the ladies to the gentlemen as dancing partners. A match occurs if a wife is assigned as the dancing partner of her husband.

A previous blog post (Tis the Season for Gift Exchange) presents an example involving gift exchange. Each person attending a party brings a gift for gift exchange. The gifts are put in a pile and each person randomly selects a gift from the pile. A match occurs if a person selects his or her own gift.

In another blog post (A lazy professor who lets students do their own grading), a professor randomly returns quizzes to the students for grading. A match occurs if a student is assigned his or her own quiz.

A lazy professor who lets students do their own grading

After reading the example discussed here, any professor or instructor who is contemplating randomly distributing quizzes back to the students for grading perhaps should reconsider the idea! This is one of several posts discussing the matching problem. Go to the end of this post to find links to these previous posts.

Consider the following problem. Suppose that a certain professor is lazy and lets his students grade their own quizzes. After he collects the quizzes from his $n$ students, he randomly assigns the quizzes back to these $n$ students for grading. If a student is assigned his or her own quiz, we say that it is a match. We have the following questions:

• What is the probability that each of the $n$ students is a match?
• What is the probability that none of the $n$ students is a match?
• What is the probability that exactly $k$ of the $n$ students are matches?
• What is the probability that at least one of the $n$ students is a match?

The above problem is called the matching problem, which is a classic problem in probability. In this post we solve the last question indicated above. Though the answer depends on the total number of quizzes $n$, it converges rapidly and is approximately $1-e^{-1} \approx 0.63$ (roughly two-thirds) for all but the smallest $n$. Thus if the professor assigns the quizzes randomly, more likely than not at least one student will be assigned his or her own quiz.

The last question above is usually stated in other matching situations. One is that there are $n$ couples (say, each couple consists of a husband and a wife) in a class for ballroom dancing. Suppose that the dance instructor randomly matches the men to the ladies. When a husband is assigned his own wife, we say that it is a match. What is the probability that there is at least one couple that is a match?

The key to answering this question is the theorem stated in Feller (page 99 in chapter 4 of [1]). We state the theorem and make use of it in the solution of the last question above. A sketch of the proof will be given at the end. For ideas on the solutions to the first three questions above, see this previous post.

The union of $n$ events
For any $n$ events $E_1,E_2, \cdots,E_n$ that are defined on the same sample space, we have the following formula:

$(1) \ \ \ \ \ P[E_1 \cup E_2 \cup \cdots \cup E_n]=\sum \limits_{m=1}^{n} (-1)^{m+1} \thinspace S_m$ where

$S_1=\sum \limits_{r=1}^{n}P[E_r]$,

$S_2=\sum \limits_{j<k}P[E_j \cap E_k]$, and in general

$S_m=\sum P[E_{i(1)} \cap E_{i(2)} \cap \cdots \cap E_{i(m)}]$

Note that in the general term $S_m$, the sum is taken over all increasing sequences $i(\cdot)$, i.e. $1 \le i(1) < i(2) < \cdots < i(m) \le n$. For $n=2,3$, we have the following familiar formulas:

$\displaystyle P[E_1 \cup E_2]=P[E_1]+P[E_2]-P[E_1 \cap E_2]$

\displaystyle \begin{aligned}P[E_1 \cup E_2 \cup E_3]=& \ \ \ \ P[E_1]+P[E_2]+P[E_3]\\&-P[E_1 \cap E_2]-P[E_1 \cap E_3]-P[E_2 \cap E_3]\\&+P[E_1 \cap E_2 \cap E_3] \end{aligned}

The Matching Problem

Suppose that the $n$ students are labeled $1,2, \cdots, n$. Let $E_i$ be the event that the $i^{th}$ student is assigned his or her own quiz by the professor. Then $P[E_1 \cup E_2 \cup \cdots \cup E_n]$ is the probability that there is at least one correct match.

Note that $P[E_i]=\frac{(n-1)!}{n!}=\frac{1}{n}$. This is the case since we fix the $i^{th}$ student's quiz and permute the quizzes of the other $n-1$ students. Likewise, $P[E_i \cap E_j]=\frac{(n-2)!}{n!}$, since we fix the $i^{th}$ and $j^{th}$ students and permute the quizzes of the other $n-2$ students. In general, whenever $i(1),i(2),\cdots,i(m)$ are $m$ distinct increasing integers, we have:

$\displaystyle P[E_{i(1)} \cap E_{i(2)} \cdots \cap E_{i(m)}]=\frac{(n-m)!}{n!}$

We now apply the formula (1). First we show that for each $m$ where $1 \le m \le n$, $S_m=\frac{1}{m!}$. Since there are $\binom{n}{m}$ many ways to have $m$ matches out of $n$ students, we have:

\displaystyle \begin{aligned}S_m&=\sum P[E_{i(1)} \cap E_{i(2)} \cdots \cap E_{i(m)}]\\&=\binom{n}{m} \frac{(n-m)!}{n!}\\&=\frac{1}{m!} \end{aligned}

Applying the formula for the union of $n$ events, we have:

$\displaystyle P[E_1 \cup E_2 \cup \cdots \cup E_n]=1-\frac{1}{2!}+\frac{1}{3!}-\cdots+(-1)^{n+1}\frac{1}{n!}$

$\displaystyle 1-P[E_1 \cup E_2 \cup \cdots \cup E_n]=1-1+\frac{1}{2!}-\frac{1}{3!}+\cdots+(-1)^{n}\frac{1}{n!}$

Note that the right-hand side of the above equality consists of the first $n+1$ terms in the series expansion of $e^{-1}$. Thus we have:

\displaystyle \begin{aligned}\lim_{n \rightarrow \infty}P[E_1 \cup E_2 \cup \cdots \cup E_n]&=\lim_{n \rightarrow \infty}\biggl(1-\frac{1}{2!}+\frac{1}{3!}-\cdots+(-1)^{n+1}\frac{1}{n!}\biggr)\\&=1-e^{-1}\\&=0.6321205588 \end{aligned}

The above limit converges quite rapidly. Let $P_n=P[E_1 \cup E_2 \cup \cdots \cup E_n]$. The following table lists out the first several terms of this limit.

$\displaystyle \begin{pmatrix} n&\text{ }&P_n \\{2}&\text{ }&0.50000 \\{3}&\text{ }&0.66667 \\{4}&\text{ }&0.62500 \\{5}&\text{ }&0.63333 \\{6}&\text{ }&0.63194 \\{7}&\text{ }&0.63214 \\{8}&\text{ }&0.63212\end{pmatrix}$
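The partial sums $P_n$ in the table come straight from the alternating series; a minimal sketch:

```python
from math import factorial

def p_match(n):
    """P_n = P[E_1 U ... U E_n] = 1 - 1/2! + 1/3! - ... + (-1)^(n+1)/n!."""
    return sum((-1) ** (m + 1) / factorial(m) for m in range(1, n + 1))

for n in (2, 3, 4, 8):
    print(n, round(p_match(n), 5))
```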

Sketch of Proof for the Formula
The key idea is that any sample point in the union $E_1 \cup E_2 \cup \cdots \cup E_n$ is counted exactly once on the right hand side of (1). Suppose that a sample point is in exactly $t$ of the events $E_1,E_2,\cdots,E_n$. Then the following shows the number of times the sample point is counted in each expression:

$\displaystyle \sum \limits_{m=1}^{n}P[E_m] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ t \text{ times}$

$\displaystyle \sum \limits_{a<b}P[E_a \cap E_b] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \binom{t}{2} \text{ times}$

$\displaystyle \sum \limits_{a<b<c}P[E_a \cap E_b \cap E_c] \ \ \ \ \ \ \ \ \ \ \binom{t}{3} \text{ times}$

and in general, the sum over the $m$-fold intersections counts the sample point $\binom{t}{m}$ times.

Thus the sample point in question will be counted exactly $H$ times in the right hand side of the formula.

$\displaystyle H=\binom{t}{1}-\binom{t}{2}+\binom{t}{3}- \cdots + (-1)^{t+1}\binom{t}{t}$

The following is the derivation that $H=1$.

$\displaystyle 0=(1-1)^t=\sum \limits_{a=0}^{t} \binom{t}{a}(-1)^{a}(1)^{t-a}=\binom{t}{0}-\binom{t}{1}+\binom{t}{2}- \cdots +(-1)^t \binom{t}{t}$

$\displaystyle 1=\binom{t}{0}=\binom{t}{1}-\binom{t}{2}+\binom{t}{3}- \cdots +(-1)^{t+1} \binom{t}{t}=H$
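This identity is easy to confirm programmatically for small $t$:

```python
from math import comb

def alternating_sum(t):
    """H = C(t,1) - C(t,2) + C(t,3) - ... + (-1)^(t+1) C(t,t)."""
    return sum((-1) ** (a + 1) * comb(t, a) for a in range(1, t + 1))

print([alternating_sum(t) for t in range(1, 8)])  # → [1, 1, 1, 1, 1, 1, 1]
```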

Reference

1. Feller, W., An Introduction to Probability Theory and Its Applications, Vol. I, 3rd ed., John Wiley & Sons, Inc., New York, 1968.

Previous Posts on The Matching Problem
The Matching Problem

Tis the Season for Gift Exchange

Tis the Season for Gift Exchange

Suppose that there are 10 people in a holiday party with each person bearing a gift. The gifts are put in a pile and each person randomly selects one gift from the pile. In order not to bias the random selection of gifts, suppose that the partygoers select gifts by picking slips of papers with numbers identifying the gifts. What is the probability that there is at least one partygoer who ends up selecting his or her own gift? In this example, selecting one’s gift is called a match. What is the probability that there is at least one match if there are more people in the party, say 50 or 100 people? What is the behavior of this probability if the size of the party increases without bound?

The above example is a description of a classic problem in probability called the matching problem. There are many colorful descriptions of the problem. One such description is that there are $n$ married couples in a ballroom dancing class. The instructor pairs the ladies with the gentlemen at random. A match occurs if a married couple is paired as dancing partners.

Whatever the description, the matching problem involves two lists of items that are paired according to a particular order (the original order). When the items in the first list are paired with the items in the second list according to another random ordering, a match occurs if an item in the first list and an item in the second list are paired both in the original order and in the new order. The matching problem discussed here is: what is the probability that there is at least one match? Seen in this light, both examples described above are equivalent mathematically.

We now continue with the discussion of the random gift selection example. Suppose that there are $n$ people in the party. Let $E_i$ be the event that the $i^{th}$ person selects his or her own gift. The event $E=E_1 \cup E_2 \cup \cdots \cup E_n$ is the event that at least one person selects his or her own gift (i.e. there is at least one match). The probability $P(E)=P(E_1 \cup E_2 \cup \cdots \cup E_n)$ is the solution of the matching problem as described in the beginning. The following is the probability $P(E)=P(E_1 \cup E_2 \cup \cdots \cup E_n)$.

$\displaystyle (1) \ \ \ \ \ \ P[E_1 \cup E_2 \cup \cdots \cup E_n]=1-\frac{1}{2!}+\frac{1}{3!}-\cdots+(-1)^{n+1} \frac{1}{n!}$

The answer in $(1)$ is obtained by using an idea called the inclusion-exclusion principle. We will get to that in just a minute. First let’s look at the results of $(1)$ for a few iterations.

$\displaystyle (2) \ \ \ \ \ \ \begin{bmatrix} \text{n}&\text{ }& P[E_1 \cup E_2 \cup \cdots \cup E_n] \\\text{ }&\text{ }&\text{ } \\ 3&\text{ }&0.666667 \\ 4&\text{ }&0.625000 \\ 5&\text{ }&0.633333 \\ 6&\text{ }&0.631944 \\ 7&\text{ }&0.632143 \\ 8&\text{ }&0.632118 \\ 9&\text{ }&0.632121 \\ 10&\text{ }&0.632121 \\ 11&\text{ }&0.632121 \\ 12&\text{ }&0.632121 \end{bmatrix}$

In a party with random gift exchange, it appears that regardless of the size of the party, there is a very good chance that someone will end up picking his or her own gift! A match will happen about 63% of the time.

It turns out that the answers in $(1)$ are related to the Taylor’s series expansion of $e^{-1}$. We show that the probability in $(1)$ will always converge to $1-e^{-1}=0.632121$. Note that the Taylor’s series expansion of $e^{-1}$ is:

$\displaystyle (3) \ \ \ \ \ \ e^{-1}=\frac{1}{0!}-\frac{1}{1!}+\frac{1}{2!}-\frac{1}{3!}+\cdots+(-1)^n \frac{1}{n!}+\cdots$

Consequently, the Taylor’s series expansion of $1-e^{-1}$ is:

\displaystyle \begin{aligned} (4) \ \ \ \ \ \ 1-e^{-1}&=1-\biggl(1-1+\frac{1}{2!}-\frac{1}{3!}+\cdots+(-1)^n \frac{1}{n!}+\cdots \biggr) \\&=1-\frac{1}{2!}+\frac{1}{3!}-\cdots+(-1)^{n+1} \frac{1}{n!}+\cdots \end{aligned}

Note that the sum of the first $n$ terms of the series in $(4)$ is the probability in $(1)$. As the number of partygoers $n$ increases, the probability $P[E_1 \cup E_2 \cup \cdots \cup E_n]$ will be closer and closer to $1-e^{-1}=0.6321$. We have:

$\displaystyle (5) \ \ \ \ \ \ \lim \limits_{n \rightarrow \infty} P[E_1 \cup E_2 \cup \cdots \cup E_n]=1 - e^{-1}$

$\displaystyle (6) \ \ \ \ \ \ \lim \limits_{n \rightarrow \infty} P[(E_1 \cup E_2 \cup \cdots \cup E_n)^c]=e^{-1}$

The equation $(5)$ says that it does not matter how many people are in the random gift exchange, the answer to the matching problem is always $1-e^{-1}$ in the limit. The equation $(6)$ says that the probability of having no matches approaches $e^{-1}$. There is a better than even chance ($0.63$ to be more precise) that there is at least one match. So in a random gift exchange as described at the beginning, it is much easier to see a match than not to see one.

The Inclusion-Exclusion Principle

We now want to give some indication why $(1)$ provides the answers. The inclusion-exclusion principle is a formula for finding the probability of the union of events. For the union of two events and the union of three events, we have:

$\displaystyle (7) \ \ \ \ \ \ P[E_1 \cup E_2]=P[E_1]+P[E_2]-P[E_1 \cap E_2]$

\displaystyle \begin{aligned} (8) \ \ \ \ \ \ P[E_1 \cup E_2 \cup E_3]&=P[E_1]+P[E_2]+P[E_3] \\& \ \ \ \ -P[E_1 \cap E_2]-P[E_1 \cap E_3]-P[E_2 \cap E_3] \\& \ \ \ \ +P[E_1 \cap E_2 \cap E_3] \end{aligned}

To find the probability of the union of $n$ events, we first add up the probabilities of the individual events $E_i$. Because the resulting sum overshoots, we need to subtract the probabilities of the intersections $E_i \cap E_j$. Because the subtractions remove too much, we need to add back the probabilities of the intersections of three individual events $E_i \cap E_j \cap E_k$. The process of inclusion and exclusion continues until we reach the step of adding/removing of the intersection $E_1 \cap \cdots \cap E_n$. The following is the statement of the inclusion-exclusion principle.

$\displaystyle (9) \ \ \ \ \ \ P[E_1 \cup E_2 \cdots \cup E_n]=S_1-S_2+S_3- \ \ \cdots \ \ +(-1)^{n+1}S_n$

In $(9)$, $S_1$ is the sum of all probabilities $P[E_i]$, $S_2$ is the sum of all possible probabilities of the intersection of two events $E_i$ and $E_j$ and $S_3$ is the sum of all possible probabilities of the intersection of three events and so on.

We now apply the inclusion-exclusion principle to derive equation $(1)$. The event $E_i$ is the event that the $i^{th}$ person gets his or her own gift while the other $n-1$ people are free to select gifts. The probability of this event is $\displaystyle P[E_i]=\frac{(n-1)!}{n!}$. There are $\displaystyle \binom{n}{1}=\frac{n!}{1! (n-1)!}$ many ways of fixing 1 gift. So $\displaystyle S_1=\binom{n}{1} \times \frac{(n-1)!}{n!}=1$.

Now consider the intersection of two events. The event $E_i \cap E_j$ is the event that the $i^{th}$ person and the $j^{th}$ person get their own gifts while the other $(n-2)$ people are free to select gifts. The probability of this event is $\displaystyle P[E_i \cap E_j]=\frac{(n-2)!}{n!}$. There are $\displaystyle \binom{n}{2}=\frac{n!}{2! (n-2)!}$ many ways of fixing 2 gifts. So $\displaystyle S_2=\binom{n}{2} \times \frac{(n-2)!}{n!}=\frac{1}{2!}$.

By the same reasoning, we derive that $\displaystyle S_3=\frac{1}{3!}$ and $\displaystyle S_4=\frac{1}{4!}$ and so on. Then plugging $\displaystyle S_i=\frac{1}{i!}$ into $(9)$, we obtain the answers in $(1)$.
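The computation of $S_1, S_2, \cdots$ above can be checked numerically. The following is a minimal sketch in Python (the function names are my own) that compares the inclusion-exclusion formula $\sum_{i=1}^n (-1)^{i+1}/i!$ against a brute-force count over all $n!$ permutations for small $n$:

```python
from itertools import permutations
from math import factorial

def p_at_least_one_match(n):
    # Inclusion-exclusion: S_i = C(n, i) * (n - i)! / n! = 1 / i!,
    # so P(at least one match) = sum_{i=1}^{n} (-1)^(i+1) / i!
    return sum((-1) ** (i + 1) / factorial(i) for i in range(1, n + 1))

def p_at_least_one_match_brute(n):
    # Count the permutations of {0, ..., n-1} with at least one fixed point.
    hits = sum(any(p[i] == i for i in range(n)) for p in permutations(range(n)))
    return hits / factorial(n)
```

For $n$ up to 7 or so the brute-force count is feasible, and the two functions agree to machine precision.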

The matching problem

We started the discussion on the matching problem in a previous post (The matching problem). We would like to add a little bit more information. The matching problem has many colorful descriptions. Here are some examples:

• In a dance class with $n$ married couples, suppose that the dance instructor assigns partners randomly. We have a match if a married couple is paired as dancing partners.
• There are $n$ letters with $n$ matching envelopes. Suppose that the letters are randomly stuffed into the envelopes. We have a match if a letter is placed into the correct envelope.

In this post we use the generic example of a deck of shuffled cards. Suppose a deck of $n$ cards is numbered $1$ through $n$. After the deck is thoroughly shuffled, if the card numbered $i$ is the $i^{th}$ card in the shuffled deck, we say that it is a match. It is clear that the above two examples and the shuffled deck example are equivalent mathematically.

How many matches will there be in a shuffled deck? What is the probability that there will be at least one match? What is the probability that there will be exactly $k$ matches where $k=0,1, \cdots, n$? On average, how many matches will there be? The number of matches is a random variable. We derive the probability density function of this random variable and its moments, and we comment on its relation to the Poisson distribution.

Consider the set $S=\lbrace{1,2, \cdots, n}\rbrace$. The matching problem corresponds to the random selection of all the points in $S$ without replacement. The random selection of points of the entire set $S$ results in ordered samples $(Y_1,Y_2, \cdots, Y_n)$ where each $Y_i \in S$ and $Y_i \neq Y_j$ for $i \neq j$. These ordered samples are precisely the permutations of the set $S$ and there are $n!$ many such permutations.

A match occurs when some element $i \in S$ is selected in the $i^{th}$ pick. In the card example, the $i^{th}$ card is a match when the card that is numbered $i$ is in the $i^{th}$ position in the shuffled deck ($Y_i=i$). The matching problem is a counting problem. We demonstrate how to count the $n!$ permutations that result in no matches, at least one match and exactly $k$ matches using the inclusion-exclusion principle.

For each $i \in S$, we define the following indicator variable $I_i$:

$\displaystyle I_i=\left\{\begin{matrix}1&\thinspace Y_i=i \ \ \ (i^{th} \text{ card is a match})\\{0}&\thinspace Y_i \neq i \ \ \ (i^{th} \text{ card is not a match})\end{matrix}\right.$

Furthermore, let $X_n=I_1+I_2+ \cdots +I_n$. The random variable $X_n$ is the number of matches when a deck of $n$ cards (numbered $1$ through $n$) are shuffled. We derive the probability function of $X_n$ as well as its first and second moments.
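Before deriving the density of $X_n$, it may help to see it empirically. The sketch below (the function name `simulate_matches` is my own) shuffles a deck of $n$ cards repeatedly and tallies the number of positions where the card numbered $i$ lands in the $i^{th}$ position:

```python
import random

def simulate_matches(n, trials=100_000, seed=0):
    """Estimate P(X_n = k) by repeatedly shuffling a deck of n cards
    (numbered 0 through n-1) and counting the fixed points."""
    rng = random.Random(seed)
    deck = list(range(n))
    counts = [0] * (n + 1)
    for _ in range(trials):
        rng.shuffle(deck)
        matches = sum(1 for i, card in enumerate(deck) if card == i)
        counts[matches] += 1
    return [c / trials for c in counts]
```

Running `simulate_matches(10)` gives an empirical mean number of matches very close to 1, which agrees with the exact moments derived later in the post.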

The Inclusion-Exclusion Principle
For any events $A_1,A_2, \cdots, A_n$, the following is the cardinality of the event $A_1 \cup A_2 \cup \cdots \cup A_n$:

$\displaystyle \lvert A_1 \cup A_2 \cup \cdots \cup A_n \lvert=\sum \limits_{m=1}^n (-1)^{m+1} S_m$ where the counts $S_m$ are defined by the following:

$\displaystyle S_1=\sum \limits_{i=1}^n \lvert A_i \lvert$

$\displaystyle S_2=\sum \limits_{i<j} \lvert A_i \cap A_j \lvert$

$\displaystyle S_m=\sum \limits_{i_1< \cdots <i_m} \lvert A_{i_1} \cap \cdots \cap A_{i_m} \lvert$

Remark
In the inclusion-exclusion formula, we sum the cardinalities of the sets, subtract the cardinalities of intersections of pairs, add the cardinalities of the intersection of triples and so on. When $n=2$, we have the familiar formula:

$\displaystyle \lvert A_1 \cup A_2 \lvert=\lvert A_1 \lvert +\lvert A_2 \lvert-\lvert A_1 \cap A_2\lvert$

The Probability Density Function
For each $i$ in $S=\lbrace{1,2, \cdots, n}\rbrace$, let $A_i$ be the event that the $i^{th}$ card is a match. The event $A_1 \cup \cdots \cup A_n$ is the event that there is at least one match in the deck of $n$ shuffled cards. We use the inclusion-exclusion principle to derive the count for this event. The union and its complement “no matches in the cards” will become the basis for deriving the density function of $X_n$.

For the event $A_i$, the $i^{th}$ card is fixed and the other $n-1$ cards are free to permute. Thus $\vert A_i \vert=(n-1)!$. As a result, $\displaystyle S_1=\binom{n}{1} (n-1)!=n!$.

For the event $A_i \cap A_j$, the $i^{th}$ card and the $j^{th}$ card are fixed and the other $n-2$ cards are free to permute. Thus we have $\vert A_i \cap A_j \vert=(n-2)!$ and $\displaystyle S_2=\binom{n}{2} (n-2)!=\frac{n!}{2!}$

In the general case, we have $\displaystyle S_i=\binom{n}{i} (n-i)!=\frac{n!}{i!}$

Applying the inclusion-exclusion principle, we have:

$\displaystyle \lvert A_1 \cup A_2 \cup \cdots \cup A_n \lvert=\sum \limits_{i=1}^n (-1)^{i+1} S_i$

\displaystyle \begin{aligned}\lvert (A_1 \cup A_2 \cup \cdots \cup A_n)^c \lvert&=n!-\sum \limits_{i=1}^n (-1)^{i+1} S_i\\&=n!-\sum \limits_{i=1}^n (-1)^{i+1} \frac{n!}{i!}\\&=n!+\sum \limits_{i=1}^n (-1)^{i} \frac{n!}{i!}\\&=\sum \limits_{i=0}^n (-1)^{i} \frac{n!}{i!} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)\end{aligned}

Out of the $n!$ permutations, $\displaystyle \lvert (A_1 \cup A_2 \cup \cdots \cup A_n)^c \lvert$ of them have no matches. Thus the probability of having no matches in a shuffled deck of $n$ cards is:

\displaystyle \begin{aligned}P(\text{no matches in n cards})&=\displaystyle \biggl(\sum \limits_{i=0}^n (-1)^{i} \frac{n!}{i!}\biggr) \frac{1}{n!}\\&=\sum \limits_{i=0}^n \frac{(-1)^{i}}{i!}\\&=P(X_n=0)\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)\end{aligned}

To derive $P(X_n=1)$, we consider the event that $1$ of the cards is a match and the other $n-1$ cards are not matches. Using $(1)$ to obtain the count for the event that $n-1$ cards are not matches, we have:

\displaystyle \begin{aligned}P(X_n=1)&=\displaystyle \binom{n}{1}\displaystyle \biggl(\sum \limits_{i=0}^{n-1} (-1)^{i} \frac{(n-1)!}{i!}\biggr) \frac{1}{n!}\\&=\sum \limits_{i=0}^{n-1} \frac{(-1)^{i}}{i!}\end{aligned}

For the general case, we have:

\displaystyle \begin{aligned}P(X_n=k)&=\displaystyle \binom{n}{k}\displaystyle \biggl(\sum \limits_{i=0}^{n-k} (-1)^{i} \frac{(n-k)!}{i!}\biggr) \frac{1}{n!}\\&=\frac{1}{k!}\sum \limits_{i=0}^{n-k} \frac{(-1)^{i}}{i!}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)\end{aligned}

We point out that $P(X_n=n-1)=0$. It is impossible to have exactly $n-1$ matches while the remaining card is not a match. This can also be verified by looking at the probability density function. On the other hand, $\displaystyle P(X_n=n)=\frac{1}{n!}$. There is only one permutation out of $n!$ many where all the cards are matches.
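Formula $(3)$ is straightforward to implement. The sketch below (the function name is my own) evaluates the density and confirms the two observations just made: $P(X_n=n-1)=0$ and $P(X_n=n)=1/n!$.

```python
from math import factorial

def pmf_matches(n, k):
    # Formula (3): P(X_n = k) = (1/k!) * sum_{i=0}^{n-k} (-1)^i / i!
    if not 0 <= k <= n:
        return 0.0
    return sum((-1) ** i / factorial(i) for i in range(n - k + 1)) / factorial(k)
```

For example, with $n=10$ the values $P(X_{10}=k)$ sum to 1, $P(X_{10}=9)$ is exactly 0, and $P(X_{10}=10)=1/10!$.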

Poisson Distribution
We compare the probability density functions $P(X_{10}=k)$, $P(X_{20}=k)$, $P(X_{150}=k)$ and the Poisson density function with parameter $\lambda=1$. The following matrix shows the results (rounded to eight decimal places).

$\displaystyle \begin{pmatrix} \text{k}&P(X_{10}=k)&P(X_{20}=k)&P(X_{150}=k)&\displaystyle \frac{e^{-1}}{k!} \\{\text{ }}&\text{ }&\text{ } \\{0}&0.36787946&0.36787944&0.36787944&0.36787944 \\{1}&0.36787919&0.36787944&0.36787944&0.36787944 \\{2}&0.18394097&0.18393972&0.18393972&0.18393972 \\{3}&0.06130952&0.06131324&0.06131324&0.06131324 \\{4}&0.01533565&0.01532831&0.01532831&0.01532831 \\{5}&0.00305556&0.00306566&0.00306566&0.00306566 \\{6}&0.00052083&0.00051094&0.00051094&0.00051094 \\{7}&0.00006614&0.00007299&0.00007299&0.00007299 \\{8}&0.00001240&0.00000912&0.00000912&0.00000912 \\{9}&0&0.00000101&0.00000101&0.00000101 \\{10}&0.00000028&0.00000010&0.00000010&0.00000010 \\{\text{ }}&\text{ }&\text{ } \\{\text{Total}}&1.000000&0.99999997&0.99999997&0.99999997\end{pmatrix}$

There is remarkable agreement between the density functions arising from the matching problem and the Poisson distribution. The density function $P(X_n=k)$ converges to the Poisson distribution with parameter $\lambda=1$, and the convergence occurs rapidly.

The probability of having exactly $k$ matches in $(3)$ above converges to $\displaystyle \frac{e^{-1}}{k!}$, which is the Poisson distribution with parameter $\lambda=1$.

The probability of no matches among the $n$ cards in $(2)$ above is a partial sum of the Taylor series of $e^{-1}$. Thus when $n$ is large, the probability of no matches in a shuffled deck of $n$ cards is about $0.3678794412$ and the probability of at least one match is approximately $0.6321205588$. As the above example shows, the convergence occurs fairly rapidly.
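The rapid convergence can be verified directly from formula $(3)$. The following sketch (function name mine) measures the largest gap between $P(X_{20}=k)$ and the Poisson probability $e^{-1}/k!$ over all $k$:

```python
from math import exp, factorial

def pmf_matches(n, k):
    # Formula (3) for the probability of exactly k matches among n shuffled cards.
    return sum((-1) ** i / factorial(i) for i in range(n - k + 1)) / factorial(k)

# Maximum gap between P(X_20 = k) and the Poisson(1) probability e^{-1}/k!.
max_gap = max(abs(pmf_matches(20, k) - exp(-1) / factorial(k)) for k in range(21))
```

Already at $n=20$ the gap is far below the eight decimal places shown in the table above.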

The Moments
Since $P(X_n=k)$ converges to the Poisson distribution with parameter $\lambda=1$, we might guess that the mean and variance of $X_n$ approach $\lambda=1$. We now show that in fact $E(X_n)=1$ for all $n$ and $Var(X_n)=1$ for all $n \ge 2$.

Recall that $X_n=I_1+I_2+ \cdots +I_n$ where for each $j=1,2, \cdots, n$, $I_j$ is an indicator variable:

$\displaystyle I_j=\left\{\begin{matrix}1&\thinspace Y_j=j \ \ \ (j^{th} \text{ card is a match})\\{0}&\thinspace Y_j \neq j \ \ \ (j^{th} \text{ card is not a match})\end{matrix}\right.$

We claim that for all $j=1,2, \cdots, n$, $\displaystyle P(I_j=1)=\frac{1}{n}$, and for all $i \neq j$, $\displaystyle P(I_i=1,I_j=1)=\frac{1}{n(n-1)}$. To see this, note that the event $(I_j=1)$ is the event $A_j$ defined in the derivation of the probability density function of $X_n$. We have $\lvert A_j \vert=(n-1)!$. Thus the probability that the $j^{th}$ card is a match is:

$\displaystyle P(I_j=1)=\frac{(n-1)!}{n!}=\frac{1}{n}$

The event $(I_i=1, I_j=1)$ is the event $A_i \cap A_j$ defined in the derivation of the probability density function of $X_n$. We have $\lvert A_i \cap A_j \vert=(n-2)!$. Thus the probability that both the $i^{th}$ card and the $j^{th}$ card are matches is:

$\displaystyle P(I_i=1,I_j=1)=\frac{(n-2)!}{n!}=\frac{1}{n(n-1)}$

It follows that for all $j=1,2, \cdots, n$, $I_j$ has the Bernoulli distribution with probability of success $\displaystyle P(I_j=1)=\frac{1}{n}$.

Knowing that $I_j$ is Bernoulli, we know its mean and variance. For $j=1,2, \cdots, n$, we have:

$\displaystyle E(I_j)=\frac{1}{n}$

$\displaystyle Var(I_j)=\frac{1}{n} \biggl(1-\frac{1}{n}\biggr)=\frac{n-1}{n^2}$

It is now easy to see that $E(X_n)=1$. To find $Var(X_n)$, we need to find the covariance $Cov(I_i,I_j)$. Note that the indicator variables $I_j$ are not independent. In evaluating $Cov(I_i,I_j)$, we need to consider four possibilities: $(I_i=0,I_j=0)$, $(I_i=0,I_j=1)$, $(I_i=1,I_j=0)$, $(I_i=1,I_j=1)$. The only relevant case is the last one. Thus we have:

\displaystyle \begin{aligned}Cov(I_i,I_j)&=E(I_i I_j)-E(I_i) E(I_j)\\&=\biggl(\sum \limits_{x,y=0,1} x \ y P(I_i=x,I_j=y)\biggr)-E(I_i) E(I_j)\\&=P(I_i=1,I_j=1)-E(I_i) E(I_j)\\&=\frac{1}{n(n-1)}-\frac{1}{n^2}\\&=\frac{1}{n^2 (n-1)} \end{aligned}

The following derives the variance:

\displaystyle \begin{aligned}Var(X_n)&=\sum \limits_{i=1}^n Var(I_i)+2 \sum \limits_{i<j} Cov(I_i,I_j)\\&=n \frac{n-1}{n^2}+2 \binom{n}{2} \frac{1}{n^2 (n-1)}\\&=1 \end{aligned}
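The conclusion $E(X_n)=Var(X_n)=1$ can also be checked exactly from the density function. The sketch below (function names mine) computes the mean and variance directly from formula $(3)$:

```python
from math import factorial

def pmf_matches(n, k):
    # Formula (3): P(X_n = k) = (1/k!) * sum_{i=0}^{n-k} (-1)^i / i!
    return sum((-1) ** i / factorial(i) for i in range(n - k + 1)) / factorial(k)

def mean_and_variance(n):
    # Compute E(X_n) and Var(X_n) from the probability density function.
    pdf = [pmf_matches(n, k) for k in range(n + 1)]
    mean = sum(k * p for k, p in enumerate(pdf))
    var = sum((k - mean) ** 2 * p for k, p in enumerate(pdf))
    return mean, var
```

For every $n \ge 2$, both values come out as 1 up to floating-point error, matching the derivation above.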
