This finding is good news because it means that the independent variables in your model improve the fit!

Generally speaking, if none of your independent variables are statistically significant, the overall F-test is also not statistically significant. Occasionally, though, the tests can produce conflicting results. This disagreement can occur because the F-test of overall significance assesses all of the coefficients jointly, whereas the t-test for each coefficient examines them individually. For example, the overall F-test can find that the coefficients are significant jointly while the t-tests fail to find significance individually.

These conflicting test results can be hard to understand, but think about it this way. The F-test sums the predictive power of all the independent variables and determines that it is unlikely that all of the coefficients equal zero. However, it's possible that each variable isn't predictive enough on its own to be statistically significant. In other words, your sample provides sufficient evidence to conclude that your model is significant, but not enough to conclude that any individual variable is significant.

First of all, thank you very much for all your videos and this website in general! You carry the analysis and interpretation part of my bachelor thesis!

You write: "This disagreement can occur because the F-test of overall significance assesses all of the coefficients jointly whereas the t-test for each coefficient examines them individually. For example, the overall F-test can find that the coefficients are significant jointly while the t-tests can fail to find significance individually."

In my paper I have only one independent variable, not multiple, hence a bivariate regression model. If I understand your statement above correctly, the F-test is important for testing overall significance when there are multiple IVs. Therefore, the correlation coefficient R, the coefficient of determination R-squared, the regression coefficient beta, and the p-value of the corresponding t-test should be sufficient to interpret the results of my study, right?

Thank you very much in advance for your answer!

For your first question, I can infer that your model must have more than one IV and that at least some of them are not significant. If you have one IV (simple regression) that is significant, the overall F-test will also be significant. However, if you add IVs that are not significant, they can "dilute" the significance of the entire model. Additionally, including IVs that are not significant can reduce the precision of your model. You might consider removing those variables. However, if you have theoretical reasons to include them, or if the goal of your study is to test those variables specifically, it is OK to leave the insignificant variables in the model.

For the second question, you can think of it in terms of having enough evidence to conclude that your model as a whole is an improvement over using the mean of the DV to explain the variability of the DV. However, you don't have enough evidence to determine which specific variables are statistically significant.
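The joint-versus-individual conflict described above can be reproduced with a small simulation. The sketch below (assuming Python with numpy and scipy; the data and parameters are made up for illustration) fits two nearly collinear IVs by ordinary least squares: jointly they explain the DV well, so the overall F-test is significant, but the shared signal inflates each slope's standard error so that the individual t-tests typically are not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50

# Two nearly collinear predictors: each carries (almost) the same signal.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)      # near-duplicate of x1
y = x1 + x2 + rng.normal(scale=2.0, size=n)   # DV depends on both

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df_model, df_resid = X.shape[1] - 1, n - X.shape[1]

# Overall F-test: is the model an improvement over the mean-only model?
ss_total = np.sum((y - y.mean()) ** 2)
ss_resid = np.sum(resid ** 2)
F = ((ss_total - ss_resid) / df_model) / (ss_resid / df_resid)
p_F = stats.f.sf(F, df_model, df_resid)

# Individual t-tests for each coefficient
sigma2 = ss_resid / df_resid
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t = beta / se
p_t = 2 * stats.t.sf(np.abs(t), df_resid)

print(f"overall F-test p-value: {p_F:.2g}")                  # jointly significant
print(f"slope t-test p-values: {p_t[1]:.2f}, {p_t[2]:.2f}")  # typically not significant
```

This is the textbook multicollinearity pattern: because x1 and x2 are nearly interchangeable, dropping either one barely hurts the fit, so neither coefficient is individually distinguishable from zero even though the pair clearly predicts the DV.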
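On the simple-regression question above: with a single IV, the overall F-test and the slope's t-test are mathematically equivalent (F equals t squared, and the two p-values coincide), so reporting R, R-squared, beta, and the t-test p-value loses nothing. A quick numerical check, again assuming numpy/scipy and synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)   # one IV, i.e. simple regression

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df_resid = n - 2

# t-test for the single slope coefficient
sigma2 = np.sum(resid ** 2) / df_resid
se_slope = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t = beta[1] / se_slope
p_t = 2 * stats.t.sf(abs(t), df_resid)

# Overall F-test against the mean-only model (1 model df)
ss_total = np.sum((y - y.mean()) ** 2)
ss_resid = np.sum(resid ** 2)
F = (ss_total - ss_resid) / (ss_resid / df_resid)
p_F = stats.f.sf(F, 1, df_resid)

print(np.isclose(F, t ** 2))   # True: F = t^2 with one IV
print(np.isclose(p_F, p_t))    # True: identical p-values
```

Any statistics package will report both numbers for a simple regression, but they carry the same information.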