# Here's a problem I thought of that I don't know how to approach

You have a fair coin that you keep on flipping. After every flip, you perform a hypothesis test based on all coin flips thus far, with significance level $\alpha$, where your null hypothesis is that the coin is fair and your alternative hypothesis is that the coin is not fair. In terms of $\alpha$, what is the expected number of flips before the first time that you reject the null hypothesis?
Edit based on comment below: For what values of $\alpha$ is the answer to the question above finite? For those values for which it is infinite, what is the probability that the null hypothesis will ever be rejected, in terms of $\alpha$?
Edit 2: My post was edited to say "You believe that you have a fair coin." The coin is in fact fair, and you know that. You do the hypothesis tests anyway. Otherwise the problem is unapproachable because you don't know the probability that any particular toss will come up a certain way.

sniokd
EDIT: This answer was unclear to the OP at first, so I tried to make it clearer with a new approach. Apparently that raised another legitimate doubt, so I have now combined both answers and tried to clarify them further. (I might still be wrong, but I'll try to express myself better.)
What you are looking for is the expected number of tests before we make a Type I error (rejecting $H_0$ when it is true). On each test, the probability of that is precisely $\alpha$ (that is one way to define the significance level).
So
Let $X_n$ be the indicator of rejecting $H_0$ on the $n$th test.
Now, $E[X_1] = \alpha$ is the expected number of games (a game being a fresh coin that we start testing in the same way) in which $H_0$ is rejected on the first test. $E[X_1 + X_2] = E[X_1] + E[X_2]$ is the expected number of games in which $H_0$ is rejected on either the first or the second test. Note that for most values of $\alpha$ this will be less than 1, so in a single game we do not yet expect to have rejected $H_0$.
When do we expect to have rejected $H_0$? Precisely when the expected number of games in which we reject $H_0$ equals 1. Therefore, we look for $n$ such that
$$E[X_1 + X_2 + \dots + X_n] = E[X_1] + E[X_2] + \dots + E[X_n] = n\,E[X_1] = n\alpha = 1 \implies n = \frac{1}{\alpha}$$
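As a quick sanity check of this first argument (a snippet of my own, not part of the derivation): under the model where each test rejects with expected value $\alpha$, the expected count of rejections first reaches 1 at $n = 1/\alpha$. Using exact rational arithmetic to avoid floating-point edge cases, with the illustrative choice $\alpha = 0.05$:

```python
from fractions import Fraction

# Illustrative significance level (not fixed by the problem): alpha = 1/20 = 0.05.
alpha = Fraction(1, 20)

# Under the model E[X_i] = alpha for every test i, the expected number of
# rejections over the first n games is n * alpha. Find the first n where
# that expected count reaches 1.
n = 1
while n * alpha < 1:
    n += 1

print(n)  # 20, which is 1/alpha
```

The loop just makes explicit the step $n\alpha = 1 \implies n = 1/\alpha$ from the display above.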
The other answer goes like this: let the random variable $T$ count the number of tests up to and including the first rejection. We look for $E[T]$.
Also, using the previous notation, $P(X_n) = \alpha(1-\alpha)^{n-1}$, where $X_n$ now denotes the event that the first rejection happens on the $n$th test. (I'm aware this implies independence between the events $X_n$ and $X_{n-1}$, but since I'm looking for the expected value, by the linearity of expectation it shouldn't be a problem, though I'm aware I'm not being rigorous with the notation.)
$$E[T] = \sum_{n=1}^{\infty} n\,P(X_n) = \sum_{n=1}^{\infty} n\alpha(1-\alpha)^{n-1} = \alpha \sum_{n=1}^{\infty} n(1-\alpha)^{n-1} = \alpha \sum_{n=0}^{\infty} (n+1)(1-\alpha)^{n}$$
$$= \alpha \left( \sum_{n=0}^{\infty} n(1-\alpha)^{n} + \sum_{n=0}^{\infty} (1-\alpha)^{n} \right) = \alpha \left( \frac{1-\alpha}{\alpha^2} + \frac{1}{\alpha} \right)$$
$$E[T] = \frac{1}{\alpha}$$
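Both the series and the conclusion $E[T] = 1/\alpha$ can be checked numerically. This is a sketch of my own, assuming the answer's simplified independence model (each test rejects independently with probability $\alpha$, so $T$ is geometric), not a simulation of the true sequential-testing problem, where consecutive tests share data. With the illustrative choice $\alpha = 0.05$:

```python
import random

def series_expectation(alpha: float, terms: int = 100_000) -> float:
    """Partial sum of E[T] = sum_{n>=1} n * alpha * (1 - alpha)^(n - 1)."""
    return sum(n * alpha * (1 - alpha) ** (n - 1) for n in range(1, terms + 1))

def simulate_mean(alpha: float, trials: int = 100_000, seed: int = 0) -> float:
    """Sample mean of T, where T ~ Geometric(alpha): flip a biased coin
    with success probability alpha until the first success."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        t = 1
        while rng.random() >= alpha:  # no rejection on this test
            t += 1
        total += t
    return total / trials

alpha = 0.05
print(series_expectation(alpha))  # close to 1/alpha = 20
print(simulate_mean(alpha))       # also close to 20, up to sampling noise
```

Both the truncated series and the Monte Carlo estimate agree with the closed form $1/\alpha$, which is just the familiar mean of a geometric distribution.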