views:

1745

answers:

3

Could someone explain to the statistically naive what the difference between Multiple R-squared and Adjusted R-squared is? I am doing a single-variate regression analysis as follows:

 v.lm <- lm(epm ~ n_days, data=v)
 print(summary(v.lm))

Results:

Call:
lm(formula = epm ~ n_days, data = v)

Residuals:
    Min      1Q  Median      3Q     Max 
-693.59 -325.79   53.34  302.46  964.95 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  2550.39      92.15  27.677   <2e-16 ***
n_days        -13.12       5.39  -2.433   0.0216 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Residual standard error: 410.1 on 28 degrees of freedom
Multiple R-squared: 0.1746,     Adjusted R-squared: 0.1451 
F-statistic: 5.921 on 1 and 28 DF,  p-value: 0.0216

Apologies for the newbiness of this question. It would probably belong on statsoverflow.com, if it existed.

+13  A: 

The "adjustment" in adjusted R-squared is related to the number of variables and the number of observations.

If you keep adding variables (predictors) to your model, R-squared will improve - that is, the predictors will appear to explain the variance - but some of that improvement may be due to chance alone. So adjusted R-squared tries to correct for this, by taking into account the ratio (N-1)/(N-k-1) where N = number of observations and k = number of variables (predictors).

It's probably not a concern in your case, since you have a single predictor.
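For example, a quick sanity check of that ratio in R (a sketch, not part of the original answer; it assumes N = 30 observations, i.e. the 28 residual degrees of freedom plus the two estimated coefficients, and k = 1 predictor, and uses the values printed in your summary):

R2 <- 0.1746     # Multiple R-squared from the summary
N  <- 30         # observations: 28 residual df + 2 estimated coefficients
k  <- 1          # predictors
adj.R2 <- 1 - (1 - R2) * (N - 1) / (N - k - 1)
adj.R2           # ~0.1451, matching the Adjusted R-squared in the summary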

Some references:

  1. How high, R-squared?
  2. Goodness of fit statistics
  3. Multiple regression
  4. Re: What is "Adjusted R^2" in Multiple Regression
neilfws
+4  A: 

The R-squared is not dependent on the number of variables in the model. The adjusted R-squared is.

The adjusted R-squared adds a penalty for adding variables to the model that are uncorrelated with the variable you're trying to explain. You can use it to test if a variable is relevant to the thing you're trying to explain.

Adjusted R-squared is R-squared with a correction (involving the numbers of observations and predictors) applied to make it dependent on the number of variables in the model.
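A minimal sketch of that idea (not part of the original answer; it reuses the question's v data frame and v.lm fit, and the added noise predictor is purely hypothetical):

set.seed(1)
v$noise <- rnorm(nrow(v))                    # a predictor unrelated to epm
v.lm2 <- lm(epm ~ n_days + noise, data = v)

summary(v.lm)$r.squared                      # R-squared of the original fit
summary(v.lm2)$r.squared                     # never lower after adding a variable
summary(v.lm)$adj.r.squared                  # adjusted R-squared of the original fit
summary(v.lm2)$adj.r.squared                 # penalised for the extra term, so often lower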

Jay
Note: Adding a predictor to a regression will almost always increase r-squared, even if only by a little bit due to random sampling.
Jeromy Anglim
ty Jeromy, I meant to say "go down" instead of "go up". The R-squared will never fall as a result of adding a new variable to the model. The adjusted R-squared can go up or down if a new variable is added. It was a bad example, so I removed it.
Jay
+4  A: 

The Adjusted R-squared is close to, but different from, the value of R2. Instead of being based on the explained sum of squares SSR and the total sum of squares SSY, it is based on the overall variance (a quantity we do not typically calculate), s2T = SSY/(n - 1), and the error variance MSE (from the ANOVA table). It is worked out like this: adjusted R-squared = (s2T - MSE) / s2T.

This approach provides a better basis for judging the improvement in a fit due to adding an explanatory variable, but it does not have the simple summarizing interpretation that R2 has.

If I haven't made a mistake, you should verify the values of adjusted R-squared and R-squared as follows:

s2T <- sum(anova(v.lm)[[2]]) / sum(anova(v.lm)[[1]])  # SSY / (n - 1), the overall variance
MSE <- anova(v.lm)[[3]][2]                            # error variance: residual mean square from the ANOVA table
adj.R2 <- (s2T - MSE) / s2T

On the other hand, R2 is SSR / SSY, where SSR = SSY - SSE:

attach(v)
SSE <- deviance(v.lm)         # residual sum of squares; or SSE <- sum((epm - fitted(v.lm))^2)
SSY <- deviance(lm(epm ~ 1))  # total sum of squares; or SSY <- sum((epm - mean(epm))^2)
SSR <- SSY - SSE              # explained sum of squares; or SSR <- sum((fitted(v.lm) - mean(epm))^2)
R2 <- SSR / SSY
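As a quick cross-check (a sketch not in the original answer, assuming the v.lm fit from the question), both quantities can be compared with what summary() reports:

all.equal(R2, summary(v.lm)$r.squared)
all.equal(adj.R2, summary(v.lm)$adj.r.squared)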
gd047