Degrees of freedom and the corrected standard deviation

It is often said that degrees of freedom are the reason the sample standard deviation formula needs a correction. The usual explanation goes: once the sample mean is known, only $n-1$ of the data points are actually free, since the last one can be recovered from the mean and the other $n-1$ values. However, the same thing seems to occur in a population, not just in a sample: knowing the population mean and all but one value also determines the last value. So what is going on here, and how does this justification actually work?
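To make the question concrete, here is a minimal simulation sketch (all names are mine) comparing the two estimators: averaging squared deviations from the *sample* mean and dividing by $n$ underestimates the true variance, while dividing by $n-1$ does not.

```python
# Hypothetical simulation: draw many samples of size n from a population with
# known variance, and compare the average of the /n estimator with the
# average of the /(n-1) (Bessel-corrected) estimator.
import random

random.seed(0)
true_var = 4.0          # population is Normal(0, 2), so variance is 4
n, trials = 5, 200_000

sum_div_n = 0.0
sum_div_n1 = 0.0
for _ in range(trials):
    sample = [random.gauss(0, 2) for _ in range(n)]
    mean = sum(sample) / n
    ss = sum((x - mean) ** 2 for x in sample)   # squared deviations from the SAMPLE mean
    sum_div_n += ss / n          # uncorrected estimator
    sum_div_n1 += ss / (n - 1)   # corrected estimator

avg_div_n = sum_div_n / trials
avg_div_n1 = sum_div_n1 / trials
print(avg_div_n, avg_div_n1)
```

The $/n$ average comes out near $4 \cdot (n-1)/n$, not $4$, while the $/(n-1)$ average is close to $4$; the bias appears precisely because the deviations are measured from the estimated mean rather than the true one.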

For example, in the simple linear regression model, the error variance is usually estimated as the sum of squared residuals divided by $n-2$, and this is justified in the same way. But if this justification applies equally to the population, not just the sample, how does it really work?
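The regression case can be checked the same way; this sketch (again, all names are mine) fits a simple OLS line and compares dividing the residual sum of squares by $n$ versus $n-2$:

```python
# Hypothetical simulation: in simple linear regression with true error
# variance 1, average RSS/(n-2) over many trials vs. average RSS/n.
import random

random.seed(1)
n, trials = 6, 200_000
xs = [float(i) for i in range(n)]       # fixed design points
xbar = sum(xs) / n
sxx = sum((x - xbar) ** 2 for x in xs)

tot_div_n = 0.0
tot_div_n2 = 0.0
for _ in range(trials):
    ys = [2.0 + 0.5 * x + random.gauss(0, 1) for x in xs]  # true sigma^2 = 1
    ybar = sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx  # OLS slope
    a = ybar - b * xbar                                              # OLS intercept
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    tot_div_n += rss / n
    tot_div_n2 += rss / (n - 2)

avg_rss_div_n = tot_div_n / trials
avg_rss_div_n2 = tot_div_n2 / trials
print(avg_rss_div_n, avg_rss_div_n2)
```

Here the $/(n-2)$ average is close to the true error variance $1$, while the $/n$ average is near $(n-2)/n$; two degrees of freedom are consumed because the residuals are measured against the two *estimated* coefficients rather than the true line.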
