While doing the analysis, I ran a simple regression and found that some variables have low p-values

Mara Cook 2022-06-25 Answered
While doing the analysis, I ran a simple regression and found that some variables have low p-values, so I took them to be influential variables for the dependent variable.
I then ran a multiple regression with those influential variables, but their p-values were no longer low and the variables became less significant (less descriptive). Is this case some kind of multicollinearity? I've heard a lot about the opposite result (the more variables added, the more descriptive an insignificant variable becomes), but I'm not sure about this case.
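To make the pattern concrete, here is a minimal simulation (synthetic data and illustrative names, not the original analysis) in which two nearly collinear predictors are each significant alone but not together:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)  # x2 is nearly a copy of x1
y = x1 + rng.normal(size=n)

# Each predictor is strongly significant on its own ...
print(sm.OLS(y, sm.add_constant(x1)).fit().pvalues)
print(sm.OLS(y, sm.add_constant(x2)).fit().pvalues)

# ... but fitted jointly, the standard errors inflate and the
# individual p-values rise, even though the overall fit is good.
X = sm.add_constant(np.column_stack([x1, x2]))
print(sm.OLS(y, X).fit().pvalues)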
Answers (1)

Bornejecbo
Answered 2022-06-26
Without seeing the data, no one can tell you for certain why this happens. Multicollinearity is indeed a possible cause. If these variables are continuous, you can check the VIF (variance inflation factor) for each of them. If the problem is indeed caused by multicollinearity, possible remedies include combining variables, e.g., using principal components instead of the original explanatory variables, or simply dropping the problematic variables.
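A minimal sketch of that VIF check with statsmodels (assuming the explanatory variables sit in a pandas DataFrame called df; the name is illustrative). A common rule of thumb treats VIF values above roughly 5-10 as a sign of problematic collinearity:

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(df)  # df: DataFrame of explanatory variables only

# VIF for each explanatory variable (index 0 is the constant, so skip it)
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=X.columns[1:],
)
print(vifs.sort_values(ascending=False))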

You might be interested in

asked 2022-07-03
What is the difference between multi-task lasso regression and ridge regression? The optimization objective of multi-task lasso regression is
$$\min_w \sum_{l=1}^{L} \frac{1}{N_t} \sum_{i=1}^{N_t} J_l(w, x, y) + \gamma \sum_{l=1}^{L} \lVert w_l \rVert_2$$
while ridge regression is
$$\min_w \sum_{l=1}^{L} \frac{1}{N_t} J_l(w, x, y) + \gamma \lVert w_l \rVert_2$$
so the two objectives look essentially the same. To me, the multi-task lasso problem seems equivalent to solving a global ridge regression. So what is the difference between these two regression methods? Both of them use an $L_2$ function. Or does it mean that in multi-task lasso regression, the shape of $W$ is $(1, n)$?
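For reference (this is not from the question itself, but matches, e.g., scikit-learn's documented objectives): the multi-task lasso penalty is the mixed $\ell_{2,1}$ norm, a sum of unsquared row norms of the coefficient matrix $W$, whereas ridge penalizes the squared $\ell_2$ norm. The unsquared sum is what drives entire rows of $W$ to zero:
$$\gamma \sum_{l=1}^{L} \lVert w_l \rVert_2 \ \ (\text{multi-task lasso, } \ell_{2,1}) \qquad \text{vs.} \qquad \gamma \lVert w \rVert_2^2 \ \ (\text{ridge, squared } \ell_2).$$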
asked 2022-05-24
Is there a method for polynomial regression in two dimensions (fitting a function $f(x, y)$ to a set of data $X$, $Y$, and $Z$)? And is there a way to apply a condition to the 2D regression that requires all fitted functions to pass through the line $x = 0$?
asked 2022-06-21
Given $X$ and $Y$ sample data as something like
$$X = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}, \qquad Y = \begin{bmatrix} 7 \\ 8 \end{bmatrix}$$
In such an arrangement, how do I include $\beta_3$? I would think I would want to add it in as a column that is always 1 in $X$, i.e.
$$X = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 3 & 1 \end{bmatrix}$$
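A minimal numpy sketch of that intercept column (values mirror the example above; with only two rows the system is underdetermined, so lstsq returns the minimum-norm solution):

import numpy as np

X = np.array([[1.0, 2.0],
              [2.0, 3.0]])
Y = np.array([7.0, 8.0])

# Append a column of ones; its coefficient acts as the intercept B3
X_aug = np.column_stack([X, np.ones(len(X))])

beta, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)
print(beta)  # coefficients for x1, x2, and the intercept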
asked 2022-06-10
We have to determine the effect of a predictor variable on an outcome variable using simple linear regression. We have lots of data (about 300 variables) and we may include some other covariates in our regression model. Why would we include other covariates and how do you decide which of those 300 variables we want to include in our regression model?
asked 2022-07-09
In logistic regression, the regression coefficients $(\hat\beta_0, \hat\beta_1)$ are calculated via the general method of maximum likelihood. For a simple logistic regression, the likelihood function is given as
$$\ell(\beta_0, \beta_1) = \prod_{i: y_i = 1} p(x_i) \prod_{i: y_i = 0} \bigl(1 - p(x_i)\bigr).$$
What is the likelihood function for 2 predictors? Or 3 predictors?
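For reference, the standard generalization (e.g., as presented in introductory texts such as ISLR) keeps the likelihood in exactly the same form and only extends $p(x_i)$; with two predictors:
$$p(x_i) = \frac{\exp(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2})}{1 + \exp(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2})}, \qquad \ell(\beta_0, \beta_1, \beta_2) = \prod_{i: y_i = 1} p(x_i) \prod_{i: y_i = 0} \bigl(1 - p(x_i)\bigr).$$
Three or more predictors simply add further $\beta_j x_{ij}$ terms inside the exponent.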
asked 2022-06-24
Let a sample $(x, y) \in \mathbb{R}^{2n}$ be given, where $y$ only attains the values 0 and 1. We can try to model this data set by either linear regression
$$y_i = \alpha_0 + \beta_0 x_i$$
with the coefficients determined by the method of least squares, or by logistic regression
$$\pi_i = \frac{\exp(\alpha_1 + \beta_1 x_i)}{1 + \exp(\alpha_1 + \beta_1 x_i)},$$
where $\pi_i$ denotes the probability that $y_i = 1$ given the value $x_i$ and the coefficients are determined by the maximum-likelihood method. My question is whether the following statement holds true.
Claim: If $\beta_0 > 0$ ($\beta_0 < 0$), then $\beta_1 > 0$ ($\beta_1 < 0$).
I figure this could be due to the sign of the correlation coefficient.
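A quick empirical check of the claim (a simulation sketch under assumed data, not a proof; names are illustrative), comparing the slope signs from the two fits:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=200)
p = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))  # P(y=1) increases with x
y = rng.binomial(1, p)

X = sm.add_constant(x)
b0 = sm.OLS(y, X).fit().params[1]           # linear-regression slope
b1 = sm.Logit(y, X).fit(disp=0).params[1]   # logistic-regression slope
print(np.sign(b0) == np.sign(b1))           # the claim predicts True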