Conclusion for confidence interval
If I got, let's say, a 95% confidence interval for the mean and a 95% confidence interval for the variance.
Would it then be wrong to conclude:
The 95% confidence interval for the mean contains the true mean with at least 95% probability?
and
The 95% confidence interval for the variance contains the true variance with at least 95% probability?
What would be a more correct/precise way to express what the confidence intervals stand for?
Quinn Alvarez
Step 1
The 'meaning' of interval estimates is a controversial topic in applied statistics, so there is no universally accepted answer to your important question.
Suppose we have a sample of size $n=31$ from a normal population with unknown mean $\mu$ and unknown variance ${\sigma }^{2}$.
Then $T=\frac{\overline{X}-\mu }{S/\sqrt{n}}\sim \mathsf{T}\left(n-1\right)$, so that $P\left(-2.042\le \frac{\overline{X}-\mu }{S/\sqrt{n}}\le 2.042\right)=0.95$. Here $\overline{X}$ and $S$ are the sample mean and sample standard deviation, respectively.
Manipulating the inequalities inside the event, we get $P\left(\overline{X}-2.042\frac{S}{\sqrt{n}}\le \mu \le \overline{X}+2.042\frac{S}{\sqrt{n}}\right)=0.95$. This is purely a probability statement. Specifically, it is a statement about the behavior of the random variables $\overline{X}$ and $S$: the random interval $\left(\overline{X}-2.042\frac{S}{\sqrt{n}},\ \overline{X}+2.042\frac{S}{\sqrt{n}}\right)$ has a 95% probability of covering (including) the unknown constant $\mu$.
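The probability statement above is about the long-run behavior of the random interval, and that is something we can check by simulation. Here is a sketch (assuming `numpy` and `scipy` are available; the true $\mu$ and $\sigma$ below are hypothetical values chosen just for the demonstration):

```python
import numpy as np
from scipy import stats

# Simulate the long-run coverage of the random t-interval.
# mu and sigma are hypothetical "true" values for the demo.
rng = np.random.default_rng(42)
n = 31
mu, sigma = 10.0, 2.0
t_crit = stats.t.ppf(0.975, df=n - 1)  # ~2.042 for 30 degrees of freedom

n_trials = 10_000
covered = 0
for _ in range(n_trials):
    x = rng.normal(mu, sigma, size=n)
    xbar = x.mean()
    s = x.std(ddof=1)               # sample standard deviation
    half = t_crit * s / np.sqrt(n)  # half-width of the interval
    if xbar - half <= mu <= xbar + half:
        covered += 1

print(covered / n_trials)  # empirical coverage, close to 0.95
```

In repeated sampling, roughly 95% of the intervals constructed this way contain the fixed constant $\mu$, which is exactly what the probability statement claims.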
Step 2
Now suppose we take the sample and obtain $\overline{X}=21.3$ and ${S}^{2}=1.44$, so that $S/\sqrt{n}=1.2/\sqrt{31}\approx 0.2155$. Then the random interval becomes $\left(21.3-2.042\left(0.2155\right),\ 21.3+2.042\left(0.2155\right)\right)$, or (20.860, 21.740).
But now we are dealing with observed quantities. According to the usual frequentist interpretation of probability, this is no longer a probability statement: Either the interval (20.860,21.740) includes $\mu$ or it does not. Accordingly, the interval (20.860,21.740) is called a 95% confidence interval.
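As a numerical check, the observed interval can be reproduced from the summary statistics alone (a sketch assuming `scipy` is available):

```python
import math
from scipy import stats

# Reproduce the observed 95% t-interval from the summary statistics.
n = 31
xbar = 21.3
s = math.sqrt(1.44)                    # sample standard deviation = 1.2
se = s / math.sqrt(n)                  # standard error, ~0.2155
t_crit = stats.t.ppf(0.975, df=n - 1)  # ~2.042
lo, hi = xbar - t_crit * se, xbar + t_crit * se
print(f"({lo:.3f}, {hi:.3f})")  # (20.860, 21.740)
```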
The confidence interval is a statement about the data. Over the long run, we will obtain data such that the manipulation in Step 1 produces an interval that includes the true population mean $\mu$ in 95% of such experiments.
The reason for calling the interval estimate a 'confidence' interval instead of a 'probability' interval has to do with a strict interpretation by frequentist statisticians of the word 'probability'.
Bayesian statisticians treat $\mu$ as a random variable, begin with a 'prior' distribution on $\mu$, combine the data with the prior distribution to get a 'posterior' distribution, and use the posterior distribution to get a probability interval for $\mu$ (some say a credible interval). If the prior distribution is "flat" (containing little information), then the Bayesian and frequentist interval estimates will be numerically very similar. But philosophies as to the "meaning" of the interval estimate differ.
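To illustrate the numerical agreement just mentioned: under the standard noninformative (Jeffreys) prior for the normal model, the marginal posterior of $\mu$ is a shifted, scaled t distribution, and the resulting credible interval matches the frequentist interval above. A sketch (assuming `scipy` is available):

```python
import math
from scipy import stats

# Under the noninformative (Jeffreys) prior p(mu, sigma^2) proportional to
# 1/sigma^2, the marginal posterior of mu is a shifted, scaled t:
#     mu | data ~ xbar + (s / sqrt(n)) * t_{n-1}
n, xbar, s = 31, 21.3, 1.2
posterior = stats.t(df=n - 1, loc=xbar, scale=s / math.sqrt(n))
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)  # 95% credible interval
print(f"({lo:.3f}, {hi:.3f})")  # numerically the same as the 95% CI
```

The numbers coincide, but the interpretations differ: the Bayesian reads this as a probability statement about $\mu$ given the data, the frequentist does not.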
Both frequentists and Bayesians have their critics. Strictly speaking, frequentists are not saying anything about the experiment at hand, only about what 'works' over the long run. A Bayesian is addressing the experiment at hand, but needs to explain how the prior distribution was obtained and what effect it has on the interval estimate.