Conflicting Results from t-test and F-based stepwise regression in multiple regression.

oliadas73

2022-09-03

I am currently tasked with building a multiple regression model with two predictor variables to consider. That means there are potentially three terms in the model: Predictor A (PA), Predictor B (PB), and PA*PB.
In one instance, I fit a least-squares model containing all three terms and did simple t-tests: I divided the parameter estimates by their standard errors to calculate t-statistics, and determined that only the intercept and PA*PB coefficients were significantly different from zero.
In another instance, I did stepwise regression: I first created a model with only PA, then fit a model with PA and PB, and did an F-test based on the reduction in the error sum of squares (SSE) between the two models. The F-test concluded that PB was a significant predictor to include in the model, and when I repeated the procedure, the PA*PB term was found to reduce SSE significantly as well.
So in summary, the t-test approach tells me that only the cross-product term PA*PB has a significant regression coefficient when all terms are included in the model, but the stepwise approach tells me to include all terms in the model.
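For concreteness, here is roughly what I did, sketched in Python with statsmodels (the DataFrame df and the column names PA, PB, y are stand-ins for my actual data):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("data.csv")  # placeholder for my actual data (PA, PB, y)

# First instance: fit the full LS model and read off the t-tests.
# Each t-statistic is the parameter estimate divided by its standard
# error, which is exactly what the summary table reports.
full = smf.ols("y ~ PA + PB + PA:PB", data=df).fit()
print(full.summary())  # only Intercept and PA:PB come out significant

# Second instance: nested-model F-tests based on the reduction in SSE.
m1 = smf.ols("y ~ PA", data=df).fit()
m2 = smf.ols("y ~ PA + PB", data=df).fit()
print(anova_lm(m1, m2))    # PB significantly reduces SSE
print(anova_lm(m2, full))  # PA:PB significantly reduces SSE as well
```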
Based on these conflicting results, what course of action would you recommend?

Answer & Explanation

Krha77

Beginner · 2022-09-04 · Added 8 answers

1. Removing variables just because they lack marginal significance is bad. If you want to use a significance-based approach, the stepwise method is much better. The glaring problem with dropping a whole batch of variables at once because they're individually insignificant is that they may well be jointly significant (a single joint F-test makes this concrete; see the first sketch after this list). The stepwise approach at least doesn't have this problem.
2. There's usually no good reason to use a significance-based approach at all. If your goal is prediction, the best thing to do is to test each model's out-of-sample performance (according to whatever metrics matter to you) and see which one does best; the second sketch below shows one way. There are also information criteria (Cp, AIC, etc.) that are meant to estimate out-of-sample performance from in-sample fit plus a model-complexity penalty, but again, why use these if you have enough data to test out-of-sample performance directly? (As with most one-size-fits-all advice, this is a bit strong. These criteria, and even stepwise regression, have their place and can be good solutions sometimes. I'm just saying what I think is usually best in a generic situation, if there is such a thing.)
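To make the joint-significance point in (1) concrete, here is a sketch reusing the hypothetical model and column names from the question: drop both individually insignificant terms at once and run a single F-test on the nested pair of models.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("data.csv")  # placeholder, as in the question

# Are PA and PB jointly significant, even though neither is
# individually significant in the full model?
full = smf.ols("y ~ PA + PB + PA:PB", data=df).fit()
reduced = smf.ols("y ~ PA:PB", data=df).fit()  # both main effects dropped
print(anova_lm(reduced, full))  # small p-value => keep them jointly
```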
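And a sketch of the out-of-sample comparison from (2), using scikit-learn's cross-validation (the column names are again assumptions):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("data.csv")  # placeholder, as in the question

# Candidate design matrices for the three nested models.
candidates = {
    "PA only":         df[["PA"]].to_numpy(),
    "PA + PB":         df[["PA", "PB"]].to_numpy(),
    "PA + PB + PA*PB": np.column_stack([df.PA, df.PB, df.PA * df.PB]),
}

# Compare candidates by cross-validated mean squared error; lower is better.
for name, X in candidates.items():
    mse = -cross_val_score(LinearRegression(), X, df["y"],
                           scoring="neg_mean_squared_error", cv=5).mean()
    print(f"{name:>16}: CV MSE = {mse:.3f}")
```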
KesseTher12

Beginner · 2022-09-05 · Added 6 answers

Backward elimination. Your first try with t-tests is somewhat like backward elimination. In backward elimination, you begin with all three explanatory terms and then eliminate the weakest one. (You should have eliminated only the weakest one at the first step, not every insignificant term at once.) Then run the multiple regression with the two remaining terms and see whether either of them should be eliminated, as sketched below.
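A rough sketch of that loop in Python/statsmodels (df, the term names, and the alpha = 0.05 stay-in threshold are my own placeholders):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("data.csv")  # placeholder data with columns PA, PB, y

# Backward elimination: start from the full model and repeatedly drop
# the single weakest term (largest p-value) until everything left is
# significant at the chosen level.
terms = ["PA", "PB", "PA:PB"]
alpha = 0.05  # assumed stay-in threshold
while terms:
    fit = smf.ols("y ~ " + " + ".join(terms), data=df).fit()
    pvals = fit.pvalues.drop("Intercept")  # the intercept always stays
    worst = pvals.idxmax()
    if pvals[worst] <= alpha:
        break              # all remaining terms are significant
    terms.remove(worst)    # eliminate only the weakest term
print("retained terms:", terms)
```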
Forward selection. Your second try with an F-test is somewhat like forward selection. Do all three simple (one-term) regressions and select the strongest predictor. Then do two regressions in which each of the remaining terms is given a chance; if either makes a 'significant' improvement, add the better one. If so, do a third multiple regression with all three terms and see whether adding the last one helps.
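A corresponding forward-selection sketch, built on the same partial F-test idea (the forward_select name, the alpha threshold, and the seeding parameter are all placeholders of mine):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def forward_select(df, candidates, alpha=0.05, selected=None):
    """Greedy forward selection by partial F-test. Any terms passed in
    `selected` are treated as already chosen and are never removed."""
    selected = list(selected or [])
    current = smf.ols("y ~ " + (" + ".join(selected) or "1"), data=df).fit()
    remaining = [t for t in candidates if t not in selected]
    while remaining:
        # Give each remaining term a chance; keep the one whose partial
        # F-test p-value is smallest, provided it clears the threshold.
        trials = []
        for term in remaining:
            fit = smf.ols("y ~ " + " + ".join(selected + [term]),
                          data=df).fit()
            p = anova_lm(current, fit)["Pr(>F)"].iloc[1]
            trials.append((p, term, fit))
        p, term, fit = min(trials, key=lambda t: t[0])
        if p > alpha:
            break
        selected.append(term)
        remaining.remove(term)
        current = fit
    return selected

df = pd.read_csv("data.csv")  # placeholder data with columns PA, PB, y
print(forward_select(df, ["PA", "PB", "PA:PB"]))
```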
Many software packages can run each of these stepwise procedures automatically, with specified criteria for inclusion or exclusion at each step. However, in my experience with something like eight to a dozen candidate predictor variables, forward selection and backward elimination almost never give the same set of variables. At each step there may be close calls, and predictor variables are typically somewhat correlated with one another.
Mandatory inclusion. A common approach is to designate a particular set of variables as mandatory to include, because it seems clear in advance that they ought to have predictive potential. Then start forward selection with those mandatory variables already in the model, and never eliminate them in backward elimination. That can work well if the mandatory variables are chosen wisely.
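With the hypothetical forward_select sketch above, mandatory inclusion is just a matter of seeding the selection:

```python
# Force PA in from the start; forward selection then only decides
# whether PB and PA:PB earn their way into the model.
print(forward_select(df, candidates=["PB", "PA:PB"], selected=["PA"]))
```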
At the end, absent extraneous considerations, if you get two or three different sets of predictor variables from different stepwise procedures, you can check each of them to see which is best. (An 'extraneous consideration' would be if your boss has a strong preference he/she can't explain.)
