Suppose we're studying the time it takes for a certain industrial process to complete. A recent study, which measured the time it took to complete 51 processes, gave a mean time of 2.396 minutes with a standard deviation of 1.967 minutes. From past studies it has been observed that the standard deviation of the time it takes for the process to complete is 2.1 minutes. Calculate a 98% confidence interval for the standard deviation of the time it takes to complete the process.
I don't understand why they would give me the value of the standard deviation from past processes when I already have a value for $s$ (namely $1.967$). By the way, I build the confidence interval by using the fact that $(n-1)s^2/\sigma^2$ follows a chi-squared distribution with $n-1$ degrees of freedom, and then finding the lower and upper quantiles of that distribution for which the tail area under the curve is equal to $(1-0.98)/2$. I do not know why that past value is there if the interval can be calculated using only the first value.
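To make the construction concrete, here is a minimal sketch of the interval you describe, using only $n = 51$ and $s = 1.967$ (the past value of 2.1 plays no role in it). The quantile lookups use `scipy.stats.chi2.ppf`:

```python
import numpy as np
from scipy import stats

n = 51            # number of processes measured
s = 1.967         # sample standard deviation (minutes)
conf = 0.98       # confidence level
alpha = 1 - conf

df = n - 1
# Chi-squared quantiles cutting off alpha/2 in each tail
chi2_lower = stats.chi2.ppf(alpha / 2, df)       # left-tail quantile
chi2_upper = stats.chi2.ppf(1 - alpha / 2, df)   # right-tail quantile

# Interval for the variance: (n-1)s^2 / chi2 bounds (note the upper
# chi-squared quantile gives the LOWER variance limit, and vice versa)
var_lo = df * s**2 / chi2_upper
var_hi = df * s**2 / chi2_lower

# Take square roots to get the interval for the standard deviation
sd_lo, sd_hi = np.sqrt(var_lo), np.sqrt(var_hi)
print(f"98% CI for sigma: ({sd_lo:.3f}, {sd_hi:.3f}) minutes")
```

As the code shows, the interval comes out entirely from the sample; nothing in the formula consumes the 2.1 from past studies.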
I remember that in class, the professor said that $s^2$ is the single best estimator of the variance of a population, though I didn't fully understand that. Could I use the pooled variance to calculate a more accurate confidence interval, taking into account both values of the standard deviation that they give me?
And in general, what sense does it make to calculate a confidence interval for a quantity whose value I supposedly already know?