Simple Linear Regression
> fit <- lm(weight ~ height, data=women) # lm() is the basic function for fitting linear models in R. The format is myfit <- lm(formula, data), where formula takes the form Y ~ X1 + X2 + ... + Xk
> summary(fit)
Call:
lm(formula = weight ~ height, data = women)
Residuals:
    Min      1Q  Median      3Q     Max 
-1.7333 -1.1333 -0.3833  0.7417  3.1167 
Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -87.51667    5.93694  -14.74 1.71e-09 ***
height        3.45000    0.09114   37.85 1.09e-14 ***
---
Signif. codes:? 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.525 on 13 degrees of freedom
Multiple R-squared:  0.991,  Adjusted R-squared:  0.9903 
F-statistic:  1433 on 1 and 13 DF,  p-value: 1.091e-14
> women$weight
[1] 115 117 120 123 126 129 132 135 139 142 146 150 154 159 164
> fitted(fit) # list the fitted (predicted) values of the model
       1        2        3        4        5        6        7        8 
112.5833 116.0333 119.4833 122.9333 126.3833 129.8333 133.2833 136.7333 
       9       10       11       12       13       14       15 
140.1833 143.6333 147.0833 150.5333 153.9833 157.4333 160.8833 
> residuals(fit) # list the residuals of the model
          1           2           3           4           5           6 
 2.41666667  0.96666667  0.51666667  0.06666667 -0.38333333 -0.83333333 
          7           8           9          10          11          12 
-1.28333333 -1.73333333 -1.18333333 -1.63333333 -1.08333333 -0.53333333 
         13          14          15 
 0.01666667  1.56666667  3.11666667 
> plot(women$height, women$weight, xlab="Height(in inches)", ylab="Weight(in pounds)")
> abline(fit) # draws the fitted straight line y = a + b*x
Result: Weight = -87.52 + 3.45*Height
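With the model fitted, predict() can generate predictions for heights that are not in the data. A minimal sketch (the new heights below are illustrative, not part of the original example):

```r
# Refit the simple model, then predict weight at a few new heights
# with 95% prediction intervals
fit <- lm(weight ~ height, data = women)
new_heights <- data.frame(height = c(60, 65, 70))
predict(fit, newdata = new_heights, interval = "prediction")
```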
Polynomial Regression
> fit2<-lm(weight~height + I(height^2), data=women)
> summary(fit2)
Call:
lm(formula = weight ~ height + I(height^2), data = women)
Residuals:
     Min       1Q   Median       3Q      Max 
-0.50941 -0.29611 -0.00941  0.28615  0.59706 
Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 261.87818   25.19677  10.393 2.36e-07 ***
height       -7.34832    0.77769  -9.449 6.58e-07 ***
I(height^2)   0.08306    0.00598  13.891 9.32e-09 ***
---
Signif. codes:? 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.3841 on 12 degrees of freedom
Multiple R-squared:  0.9995,  Adjusted R-squared:  0.9994 
F-statistic: 1.139e+04 on 2 and 12 DF,  p-value: < 2.2e-16
> plot(women$height, women$weight, xlab="Height(in inches)", ylab="Weight(in lbs)")
> lines(women$height, fitted(fit2)) # lines() draws an ordinary connected line through the points given by the x and y vectors
Result: Weight = 261.88 - 7.35*Height + 0.083*Height^2
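Since the straight-line model is nested inside the quadratic one, a nested-model F-test can confirm that the squared term is worth keeping. A short sketch, refitting both models so it is self-contained:

```r
# Compare the linear and quadratic fits; a small p-value
# favors keeping the I(height^2) term
fit  <- lm(weight ~ height, data = women)
fit2 <- lm(weight ~ height + I(height^2), data = women)
anova(fit, fit2)
```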
> fit3 <- lm(weight ~ height + I(height^2) + I(height^3), data=women)
> library(car) # the car package needs to be installed
> scatterplot(weight ~ height, data=women, spread=FALSE, smoother.args=list(lty=2), pch=19, main="Women Age 30-39", xlab="Height(inches)", ylab="Weight(lbs.)")
This enhanced plot provides the scatter plot of height versus weight, the linear fit line, and a smoothed (loess) fit curve, and also displays a box plot for each variable in the corresponding margin. The spread=FALSE option suppresses the spread and asymmetry information around the smoothed curve. The smoother.args=list(lty=2) option draws the loess fit as a dashed line. The pch=19 option sets the points to solid circles (the default is open circles). As you can see, the curve fits the data better than a straight line does.
Examining Bivariate Relationships
> states<- as.data.frame(state.x77[,c("Murder","Population","Illiteracy","Income","Frost")])
> cor(states)
               Murder Population Illiteracy     Income      Frost
Murder      1.0000000  0.3436428  0.7029752 -0.2300776 -0.5388834
Population  0.3436428  1.0000000  0.1076224  0.2082276 -0.3321525
Illiteracy  0.7029752  0.1076224  1.0000000 -0.4370752 -0.6719470
Income     -0.2300776  0.2082276 -0.4370752  1.0000000  0.2262822
Frost      -0.5388834 -0.3321525 -0.6719470  0.2262822  1.0000000
> library(car)
> scatterplotMatrix(states, spread=FALSE, smoother.args=list(lty=2), main="Scatter Plot Matrix")
From the plot you can see that the murder rate is bimodal, and that each of the predictor variables is skewed to some degree. Murder rate rises with population and illiteracy, and falls with income level and days of frost.
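cor() reports correlation magnitudes but says nothing about significance. To test a single pair, base R's cor.test() can be used; for example, for the strong Murder/Illiteracy correlation (0.70) seen above:

```r
states <- as.data.frame(state.x77[, c("Murder", "Population", "Illiteracy", "Income", "Frost")])
# Test whether the Murder/Illiteracy correlation differs from zero
cor.test(states$Murder, states$Illiteracy)
```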
Multiple Linear Regression
> fit <- lm(Murder ~ Population + Illiteracy + Income + Frost, data=states)
> summary(fit)
Call:
lm(formula = Murder ~ Population + Illiteracy + Income + Frost,
? ? data = states)
Residuals:
    Min      1Q  Median      3Q     Max 
-4.7960 -1.6495 -0.0811  1.4815  7.6210 
Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 1.235e+00  3.866e+00   0.319   0.7510    
Population  2.237e-04  9.052e-05   2.471   0.0173 *  
Illiteracy  4.143e+00  8.744e-01   4.738 2.19e-05 ***
Income      6.442e-05  6.837e-04   0.094   0.9253    
Frost       5.813e-04  1.005e-02   0.058   0.9541    
---
Signif. codes:? 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.535 on 45 degrees of freedom
Multiple R-squared:  0.567,  Adjusted R-squared:  0.5285 
F-statistic: 14.73 on 4 and 45 DF,  p-value: 9.133e-08
# With more than one predictor, a regression coefficient is the expected change in the dependent variable for a one-unit increase in that predictor, holding the other predictors constant. For example, the coefficient for Illiteracy is 4.14: a 1% rise in the illiteracy rate is associated with a 4.14% rise in the murder rate, controlling for the other variables, and it is significant at the p < .001 level. The Population coefficient (2.24e-04) is significant at the .05 level, while Income and Frost are not significant.
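The same conclusions can be read off from confidence intervals: predictors whose 95% interval covers zero (Income, Frost) are not significant. A quick check with confint():

```r
# Refit the states model and list 95% confidence intervals
# for each regression coefficient
states <- as.data.frame(state.x77[, c("Murder", "Population", "Illiteracy", "Income", "Frost")])
fit <- lm(Murder ~ Population + Illiteracy + Income + Frost, data = states)
confint(fit)
```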
Multiple Linear Regression with Interactions
> fit <- lm(mpg ~ hp + wt + hp:wt, data=mtcars)
> summary(fit)
Call:
lm(formula = mpg ~ hp + wt + hp:wt, data = mtcars)
Residuals:
    Min      1Q  Median      3Q     Max 
-3.0632 -1.6491 -0.7362  1.4211  4.5513 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 49.80842    3.60516  13.816 5.01e-14 ***
hp          -0.12010    0.02470  -4.863 4.04e-05 ***
wt          -8.21662    1.26971  -6.471 5.20e-07 ***
hp:wt        0.02785    0.00742   3.753 0.000811 ***
---
Signif. codes:? 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.153 on 28 degrees of freedom
Multiple R-squared:  0.8848,  Adjusted R-squared:  0.8724 
F-statistic: 71.66 on 3 and 28 DF,  p-value: 2.981e-13
> # In the Pr(>|t|) column, the interaction between horsepower and car weight is significant, which means the relationship between the response and one predictor depends on the level of the other predictor. Here, the relationship between miles per gallon and horsepower varies with car weight.
# Fitted model: mpg = 49.81 - 0.12*hp - 8.22*wt + 0.03*hp*wt
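Because of the hp:wt term, the slope of mpg on hp is not a single number: it equals the hp coefficient plus the interaction coefficient times wt. A small sketch evaluating that slope at three representative weights:

```r
fit <- lm(mpg ~ hp + wt + hp:wt, data = mtcars)
b <- coef(fit)
# Slope of mpg with respect to hp at a given weight w:
# b["hp"] + b["hp:wt"] * w
hp_slope <- function(w) unname(b["hp"] + b["hp:wt"] * w)
sapply(c(2.2, 3.2, 4.2), hp_slope)
```

The slope shrinks toward zero as wt grows, which is exactly what the effect plot below illustrates.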
> library(effects) # the effects package needs to be installed and loaded
> plot(effect("hp:wt", fit, xlevels=list(wt=c(2.2, 3.2, 4.2))), multiline=TRUE)
This produces the following plot.
> # The plot shows that as car weight increases, the relationship between horsepower and miles per gallon weakens. At wt=4.2 the line is nearly horizontal, indicating that mpg barely changes as hp increases.
That covers the basics of OLS regression in R. See you next time! O(∩_∩)O