# Pairwise independent but not independent

Certainly! Let's take the example of flipping a fair coin three times.

Let $A$ be the event that the first flip is a head, $B$ be the event that the second flip is a head, and $C$ be the event that the number of heads in the three flips is odd.

Now, let's verify the pairwise independence:

$P(A \cap B) = P(A)P(B)$

$P(A \cap C) = P(A)P(C)$

$P(B \cap C) = P(B)P(C)$

The probability that both the first and second flips are heads is $\frac{1}{4}$ (two of the eight equally likely outcomes, HHH and HHT). The probability of getting a head on the first flip is $\frac{1}{2}$, and likewise $\frac{1}{2}$ on the second, so their product is $\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$. Therefore, $P(A \cap B) = P(A)P(B)$.

For $P(A \cap C)$: Given that the first flip is a head, the total number of heads is odd exactly when the second and third flips contribute an even number of heads, i.e. they are both tails or both heads. This occurs with probability $\frac{1}{2} \times \frac{1}{2} + \frac{1}{2} \times \frac{1}{2} = \frac{1}{2}$. So, $P(A \cap C) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$. This is equal to $P(A)P(C)$.

Similarly, for $P(B \cap C)$: Given that the second flip is a head, an odd total requires the first and third flips to be both tails or both heads. The calculation is the same as for $P(A \cap C)$, thus showing $P(B \cap C) = P(B)P(C)$.

However, the three events $A, B,$ and $C$ are not mutually independent. To see why, consider:

$P(A \cap B \cap C)$

Given that the first two flips are heads (events $A$ and $B$), for $C$ to occur (odd number of heads), the third flip must also be heads, giving three heads in total. This has a probability of $\frac{1}{2}$. So, $P(A \cap B \cap C) = \frac{1}{8}$. But $P(A)P(B)P(C) = \frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} = \frac{1}{8}$ as well.

Oops, it seems the three events are actually mutually independent, so this example does not meet the requirement! My apologies. Let me double-check and then give a correct example:

For $C$ to be an odd number of heads in three flips, we can have:

- H T T
- T H T
- T T H
- H H H

So, $P(C) = \frac{4}{8} = \frac{1}{2}$.

Now, given A and B both occur (both first and second flips are heads), the only sequence that satisfies this and C is H H H, which has a probability of $\frac{1}{8}$.

So $P(A \cap B \cap C) = \frac{1}{8}$.

And $P(A)P(B)P(C) = \frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} = \frac{1}{8}$ as well, so the triple condition also holds.

So, $A, B,$ and $C$ are actually mutually independent. I made an error in the initial analysis.
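The conclusion can be checked by brute force. Here is a short Python sketch that enumerates all eight equally likely outcomes of three fair flips and verifies that every pairwise product factorizes, and that the triple product does too (the names `A`, `B`, `C` mirror the event definitions above):

```python
from itertools import product

# Enumerate all 8 equally likely outcomes of three fair coin flips.
# 'H' = heads, 'T' = tails; each outcome has probability 1/8.
outcomes = list(product("HT", repeat=3))

def prob(event):
    """Probability of an event, given as a predicate on an outcome."""
    return sum(1 for w in outcomes if event(w)) / len(outcomes)

A = lambda w: w[0] == "H"            # first flip is heads
B = lambda w: w[1] == "H"            # second flip is heads
C = lambda w: w.count("H") % 2 == 1  # odd number of heads

# Pairwise independence holds...
assert prob(lambda w: A(w) and B(w)) == prob(A) * prob(B)
assert prob(lambda w: A(w) and C(w)) == prob(A) * prob(C)
assert prob(lambda w: B(w) and C(w)) == prob(B) * prob(C)
# ...and the triple product factorizes as well: mutually independent.
assert prob(lambda w: A(w) and B(w) and C(w)) == prob(A) * prob(B) * prob(C)
```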

Here's an actual example that fits the requirement:

Consider drawing two cards from a standard 52-card deck **with replacement**: the first card is drawn, its color noted, and it is returned and the deck reshuffled before the second draw. (Replacement matters here: without it, $P(A \cap B) = \frac{26}{52} \times \frac{25}{51} \neq \frac{1}{4} = P(A)P(B)$, so $A$ and $B$ would not even be pairwise independent.)

Event $A$: Drawing a red card in the first draw. Event $B$: Drawing a red card in the second draw. Event $C$: The two drawn cards have the same color.

Each draw is red with probability $\frac{26}{52} = \frac{1}{2}$, so $P(A) = P(B) = \frac{1}{2}$, and $P(C) = P(\text{both red}) + P(\text{both black}) = \frac{1}{4} + \frac{1}{4} = \frac{1}{2}$.

Now, let's verify pairwise independence:

$P(A \cap B)$ is the probability that both the first and second cards drawn are red. Since the draws are independent, this is $\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$, which equals $P(A)P(B) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$.

$P(A \cap C)$ is the probability that the first card is red and both cards have the same color, i.e. both cards are red. This is again $\frac{1}{4}$, and $P(A)P(C) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$. So $P(A \cap C) = P(A)P(C)$.

$P(B \cap C)$ and $P(B)P(C)$ satisfy the same relationship by the identical computation.

However, mutual independence fails:

$P(A \cap B \cap C) \neq P(A)P(B)P(C)$

$P(A \cap B \cap C)$ is the probability that the first card is red, the second card is red, and both cards are the same color. If both cards are red they automatically have the same color, so this is just $P(\text{both red}) = \frac{1}{4}$.

But $P(A)P(B)P(C) = \frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} = \frac{1}{8}$, which is different from the above.

So, the events $A, B,$ and $C$ are pairwise independent but not mutually independent.
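As a sanity check, here is a small Python enumeration of this example. It models the two draws as independent, i.e. with replacement; the labeling of cards 0-25 as red and 26-51 as black is an arbitrary convention introduced only for this sketch:

```python
# All 52*52 ordered pairs of draws with replacement, equally likely.
omega = [(x, y) for x in range(52) for y in range(52)]

def prob(event):
    """Probability of an event given as a predicate on an outcome pair."""
    return sum(1 for w in omega if event(w)) / len(omega)

red = lambda card: card < 26            # cards 0-25 are red (sketch convention)
A = lambda w: red(w[0])                 # first card red
B = lambda w: red(w[1])                 # second card red
C = lambda w: red(w[0]) == red(w[1])    # same color

# Pairwise independence: each pair factorizes to 1/4.
assert prob(lambda w: A(w) and B(w)) == prob(A) * prob(B) == 0.25
assert prob(lambda w: A(w) and C(w)) == prob(A) * prob(C) == 0.25
assert prob(lambda w: B(w) and C(w)) == prob(B) * prob(C) == 0.25
# Mutual independence fails: 1/4 on the left, 1/8 on the right.
assert prob(lambda w: A(w) and B(w) and C(w)) == 0.25
assert prob(A) * prob(B) * prob(C) == 0.125
```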

To begin, let's clarify what an indicator function is. If $A$ is an event in a probability space, then the indicator function $I_A$ of the event $A$ is defined as:

$I_A(\omega) = \begin{cases} 1 & \text{if } \omega \in A \\ 0 & \text{if } \omega \notin A \end{cases}$

A key property is that $E[I_A] = P(A)$.

We want to prove that the indicator functions of events $A, B,$ and $C$ are pairwise independent, but not mutually independent. We'll use the example of drawing two cards from a deck as mentioned previously.

Let $A$ be the event that a red card is drawn first. Let $B$ be the event that a red card is drawn second. Let $C$ be the event that two cards of the same color are drawn.

**Pairwise Independence** (recall the draws are made with replacement, so each draw is red with probability $\frac{1}{2}$; note also that for $\{0,1\}$-valued random variables $X$ and $Y$, the condition $E[XY] = E[X]E[Y]$ is equivalent to independence):

a. $I_A$ and $I_B$: $E[I_A I_B] = P(A \cap B) = \frac{1}{4}$ and $E[I_A]E[I_B] = P(A)P(B) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$. We can see that $E[I_A I_B] = E[I_A]E[I_B]$.

b. $I_A$ and $I_C$: $E[I_A I_C] = P(A \cap C) = P(\text{both red}) = \frac{1}{4}$ and $E[I_A]E[I_C] = P(A)P(C) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$. Again, $E[I_A I_C] = E[I_A]E[I_C]$.

c. $I_B$ and $I_C$ satisfy the same relationship as in (b).

**Mutual Independence**: Now, let's consider all three: $E[I_A I_B I_C] = P(A \cap B \cap C) = \frac{1}{4}$, while $E[I_A]E[I_B]E[I_C] = P(A)P(B)P(C) = \frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} = \frac{1}{8}$. These two quantities are not equal.

Thus, $I_A, I_B,$ and $I_C$ are pairwise independent but not mutually independent.
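The indicator computations can be replayed numerically. The sketch below builds the indicator functions as 0/1 lists over the with-replacement sample space and takes expectations as plain averages; the `red` helper and the 0-25/26-51 card labeling are conventions of this sketch only:

```python
# Sample space: two draws with replacement from a 52-card deck,
# all 52*52 outcomes equally likely (cards 0-25 red, 26-51 black).
omega = [(x, y) for x in range(52) for y in range(52)]

def indicator(event):
    """Indicator function of an event, as a list of 0/1 values over omega."""
    return [1 if event(w) else 0 for w in omega]

E = lambda xs: sum(xs) / len(xs)  # expectation = average over equally likely outcomes

red = lambda card: card < 26
I_A = indicator(lambda w: red(w[0]))                # first card red
I_B = indicator(lambda w: red(w[1]))                # second card red
I_C = indicator(lambda w: red(w[0]) == red(w[1]))   # same color

# E[I_A] = P(A), and similarly for B and C.
assert E(I_A) == E(I_B) == E(I_C) == 0.5

# Pairwise: each product expectation factorizes.
assert E([a * b for a, b in zip(I_A, I_B)]) == E(I_A) * E(I_B)
assert E([a * c for a, c in zip(I_A, I_C)]) == E(I_A) * E(I_C)
assert E([b * c for b, c in zip(I_B, I_C)]) == E(I_B) * E(I_C)
# Triple: E[I_A I_B I_C] = 1/4, but E[I_A] E[I_B] E[I_C] = 1/8.
assert E([a * b * c for a, b, c in zip(I_A, I_B, I_C)]) != E(I_A) * E(I_B) * E(I_C)
```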

Absolutely! Let's use two coin flips to define events $A$, $B$, and $C$ that are pairwise independent but not mutually independent.

Consider flipping two fair coins, and let the sample space be given by {HH, HT, TH, TT}.

Let $A$ be the event that the first coin is a Head. $A = \{HH, HT\}$

Let $B$ be the event that the second coin is a Head. $B = \{HH, TH\}$

Let $C$ be the event that the two coin flips have the same result (either both Heads or both Tails). $C = \{HH, TT\}$

**Pairwise Independence**:

a. $A$ and $B$: $P(A \cap B) = P(\{HH\}) = \frac{1}{4}$ and $P(A)P(B) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$. So, $P(A \cap B) = P(A)P(B)$.

b. $A$ and $C$: $P(A \cap C) = P(\{HH\}) = \frac{1}{4}$ and $P(A)P(C) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$. So, $P(A \cap C) = P(A)P(C)$.

c. $B$ and $C$: $P(B \cap C) = P(\{HH\}) = \frac{1}{4}$ and $P(B)P(C) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$. So, $P(B \cap C) = P(B)P(C)$.

From the above, it's clear that the events $A$, $B$, and $C$ are pairwise independent.

**Mutual Independence**:

$P(A \cap B \cap C) = P(\{HH\}) = \frac{1}{4}$, while $P(A)P(B)P(C) = \frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} = \frac{1}{8}$. Clearly, $P(A \cap B \cap C) \neq P(A)P(B)P(C)$.

Therefore, while $A$, $B$, and $C$ are pairwise independent, they are not mutually independent.
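Since the sample space here has only four points, the whole argument fits in a few lines of Python; the sets below are exactly the events listed above:

```python
# The four equally likely outcomes of two fair coin flips.
omega = {"HH", "HT", "TH", "TT"}

A = {"HH", "HT"}  # first coin is a head
B = {"HH", "TH"}  # second coin is a head
C = {"HH", "TT"}  # both flips have the same result

P = lambda S: len(S) / len(omega)  # uniform probability of a subset of omega

# Every pair factorizes:
assert P(A & B) == P(A) * P(B) == 0.25
assert P(A & C) == P(A) * P(C) == 0.25
assert P(B & C) == P(B) * P(C) == 0.25
# But the triple intersection is {HH}, so P = 1/4, not 1/8:
assert P(A & B & C) == 0.25
assert P(A) * P(B) * P(C) == 0.125
```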

Yes, events $A_j$ (for $j = 1, 2, \dots, n$) are mutually independent if and only if their indicator functions are mutually independent. Let's break this down:

### $\Rightarrow$ Direction (Events are independent implies their indicator functions are independent):

Suppose the events $A_j$ are mutually independent. This means for any subset of these events, say $A_{j_1}, A_{j_2}, \dots, A_{j_k}$, we have:

$P(A_{j_1} \cap A_{j_2} \cap \dots \cap A_{j_k}) = P(A_{j_1})P(A_{j_2})\dots P(A_{j_k})$

Now, consider their indicator functions $I_{A_{j_1}}, I_{A_{j_2}}, \dots, I_{A_{j_k}}$. Mutual independence of these $\{0,1\}$-valued random variables means that their joint distribution factorizes at every vector of values in $\{0,1\}^k$. Since mutual independence of events is preserved when any of the events are replaced by their complements, the factorization of $P(A_{j_1} \cap \dots \cap A_{j_k})$ extends to every combination of the events and their complements, and this is exactly the required joint-distribution factorization. In particular, taking all values equal to $1$,

$E[I_{A_{j_1}} I_{A_{j_2}} \dots I_{A_{j_k}}] = E[I_{A_{j_1}}]E[I_{A_{j_2}}]\dots E[I_{A_{j_k}}]$

since $E[I_{A_{j_m}}] = P(A_{j_m})$ and $E[I_{A_{j_1}} I_{A_{j_2}} \dots I_{A_{j_k}}] = P(A_{j_1} \cap A_{j_2} \cap \dots \cap A_{j_k})$. Thus, if the events are mutually independent, then their indicator functions are also mutually independent.

### $\Leftarrow$ Direction (Indicator functions are independent implies the events are independent):

If the indicator functions $I_{A_1}, I_{A_2}, \dots, I_{A_n}$ are mutually independent, then for any subset of these indicator functions:

$E[I_{A_{j_1}} I_{A_{j_2}} \dots I_{A_{j_k}}] = E[I_{A_{j_1}}]E[I_{A_{j_2}}]\dots E[I_{A_{j_k}}]$

This translates to:

$P(A_{j_1} \cap A_{j_2} \cap \dots \cap A_{j_k}) = P(A_{j_1})P(A_{j_2})\dots P(A_{j_k})$

Which means the events $A_{j_1}, A_{j_2}, \dots, A_{j_k}$ are mutually independent.

Since the above is true for any subset of the events, it means all the events $A_j$ are mutually independent.

Thus, the statement is true: events $A_j$ are mutually independent if and only if their indicator functions are mutually independent.
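The bridge identity used in both directions, $E[I_{A_{j_1}} \cdots I_{A_{j_k}}] = P(A_{j_1} \cap \dots \cap A_{j_k})$, can be spot-checked on the two-coin example. The helper `check_bridge` below is a name invented for this sketch; it walks every nonempty subset of $\{A, B, C\}$:

```python
from itertools import combinations

# Small sample space: two fair coin flips, four equally likely outcomes.
omega = ["HH", "HT", "TH", "TT"]
events = {"A": {"HH", "HT"}, "B": {"HH", "TH"}, "C": {"HH", "TT"}}

P = lambda S: len(S) / len(omega)                       # uniform probability
indicator = lambda S: [1 if w in S else 0 for w in omega]
E = lambda xs: sum(xs) / len(xs)                        # expectation = average

def check_bridge():
    """E[product of indicators] == P(intersection) for every subset of events."""
    for k in range(1, len(events) + 1):
        for names in combinations(sorted(events), k):
            inter = set(omega)
            prod = [1] * len(omega)
            for name in names:
                inter &= events[name]
                prod = [p * i for p, i in zip(prod, indicator(events[name]))]
            if E(prod) != P(inter):
                return False
    return True

assert check_bridge()
```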