Reposted from http://blog.csdn.net/meng4411yu/article/details/8851187
robustfit
Robust regression
Syntax
b=robustfit(X,y)
b=robustfit(X,y,wfun,tune)
b=robustfit(X,y,wfun,tune,const)
[b,stats]=robustfit(...)
Description
b=robustfit(X,y) estimates the linear model y = Xb by robust regression and returns the vector b of regression coefficients. X is an n-by-p matrix of predictor variables and y is an n-by-1 vector of observations. The fit is computed by iteratively reweighted least squares with the bisquare weighting function. By default, robustfit adds a column of ones to X; this column corresponds to the constant term, which is the first element of b. Do not add a column of ones to X yourself. You can change this behavior via the 'const' argument described in the third syntax below. robustfit treats NaNs in X or y as missing values and removes them.
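For readers outside MATLAB, the iteratively reweighted least-squares procedure described above can be sketched in Python/NumPy. This is an illustrative simplification, not robustfit's exact algorithm (robustfit also adjusts residuals for leverage, which is omitted here), and the function name irls_bisquare is made up for this example:

```python
import numpy as np

def irls_bisquare(X, y, tune=4.685, max_iter=50, tol=1e-8):
    """Sketch of iteratively reweighted least squares with bisquare weights.

    Simplified relative to robustfit: no leverage adjustment of the
    residuals, and a plain MAD scale estimate.
    """
    n = X.shape[0]
    Xc = np.column_stack([np.ones(n), X])       # prepend a constant column, like robustfit
    b = np.linalg.lstsq(Xc, y, rcond=None)[0]   # ordinary least-squares start
    for _ in range(max_iter):
        r = y - Xc @ b
        # Robust scale: MAD / 0.6745 (consistent with sigma for normal data).
        s = max(np.median(np.abs(r - np.median(r))) / 0.6745, np.finfo(float).eps)
        u = r / (tune * s)
        w = (np.abs(u) < 1) * (1 - u**2) ** 2   # bisquare weights
        sw = np.sqrt(w)
        b_new = np.linalg.lstsq(Xc * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(b_new - b)) < tol:     # converged: coefficients stopped moving
            return b_new
        b = b_new
    return b
```

On data lying exactly on a line except for one gross outlier, the loop drives the outlier's weight toward zero and recovers the clean fit.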
b=robustfit(X,y,wfun,tune) adds a weight function wfun and a tuning constant tune. The residual vector is divided by tune before the weights are computed; tune is required if wfun is specified as a function handle. The weight function wfun can be any of the weight functions in the following table:
wfun          weight function                          default tune
'andrews'     w = (abs(r)<pi) .* sin(r) ./ r           1.339
'bisquare'    w = (abs(r)<1) .* (1 - r.^2).^2          4.685 (default)
'cauchy'      w = 1 ./ (1 + r.^2)                      2.385
'fair'        w = 1 ./ (1 + abs(r))                    1.400
'huber'       w = 1 ./ max(1, abs(r))                  1.345
'logistic'    w = tanh(r) ./ r                         1.205
'ols'         ordinary least squares (no weighting)    none
'talwar'      w = 1 * (abs(r)<1)                       2.795
'welsch'      w = exp(-(r.^2))                         2.985
If tune is unspecified, its default is the corresponding value in the table. wfun can also be a function that takes a residual vector as input and produces a weight vector as output. The residuals are scaled by the estimated standard error and the tuning parameter before the weights are computed. wfun can be specified using @ (for example, @myfun).
b=robustfit(X,y,wfun,tune,const) adds 'const', which controls whether the model includes a constant term. 'on' (the default) includes the constant term; 'off' omits it.
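What 'const' toggles amounts to whether a column of ones is part of the design matrix. A small NumPy sketch (ordinary least squares, hypothetical data, just to show the two design matrices):

```python
import numpy as np

x = np.arange(1, 6.0)
y = 3 + 2 * x  # exact line with intercept 3 and slope 2

# 'const' on (the default): a column of ones is part of the design matrix,
# and the first coefficient is the constant term.
X_on = np.column_stack([np.ones_like(x), x])
b_on, *_ = np.linalg.lstsq(X_on, y, rcond=None)

# 'const' off: no column of ones, so the fit is forced through the origin
# and only the slope is estimated.
X_off = x[:, None]
b_off, *_ = np.linalg.lstsq(X_off, y, rcond=None)
```

With the constant term, the fit recovers intercept 3 and slope 2 exactly; without it, the slope absorbs part of the intercept.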
[b,stats]=robustfit(...) returns a STATS structure containing the following fields:
'ols_s' sigma estimate (rmse) from least squares fit
'robust_s' robust estimate of sigma
'mad_s' MAD estimate of sigma; used for scaling
residuals during the iterative fitting
's' final estimate of sigma, the larger of robust_s
and a weighted average of ols_s and robust_s
'se' standard error of coefficient estimates
't' ratio of b to stats.se
'p' p-values for stats.t
'covb' estimated covariance matrix for coefficient estimates
'coeffcorr' estimated correlation of coefficient estimates
'w' vector of weights for robust fit
'h' vector of leverage values for least squares fit
'dfe' degrees of freedom for error
'R' R factor in QR decomposition of X matrix
The ROBUSTFIT function estimates the variance-covariance matrix of the coefficient estimates as V = inv(X'*X)*STATS.S^2. The standard errors and correlations are derived from V.
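That formula can be illustrated with a small NumPy sketch (hypothetical data; for simplicity it uses an ordinary least-squares sigma estimate rather than robustfit's robust sigma):

```python
import numpy as np

# Hypothetical data: 10 points, a constant column plus one predictor.
X = np.column_stack([np.ones(10), np.arange(1, 11.0)])
y = 10 - 2 * X[:, 1]
y[3] += 1.0  # small perturbation so the residuals are not all zero

b, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ b
dfe = X.shape[0] - X.shape[1]       # degrees of freedom for error
s2 = (r @ r) / dfe                  # sigma^2 estimate
V = np.linalg.inv(X.T @ X) * s2     # V = inv(X'*X) * s^2
se = np.sqrt(np.diag(V))            # standard errors of the coefficients
t = b / se                          # analogous to stats.t
```

The p-values in stats.p would then come from comparing t against a t-distribution with dfe degrees of freedom.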
MATLAB example:
x = (1:10)';
y = 10 - 2*x + randn(10,1); y(10) = 0;
Ordinary least squares and robust regression give the following estimates:
bls = regress(y,[ones(10,1) x])
bls =
7.2481
-1.3208
brob = robustfit(x,y)
brob =
9.1063
-1.8231
Plot the results:
scatter(x,y,'filled'); grid on; hold on
plot(x,bls(1)+bls(2)*x,'r','LineWidth',2);
plot(x,brob(1)+brob(2)*x,'g','LineWidth',2)
legend('Data','Ordinary Least Squares','Robust Regression')
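The effect shown above (the OLS line dragged toward the outlier, the robust line not) can be reproduced in Python with a compact bisquare IRLS loop. This is an illustrative sketch, not a port of robustfit, and the noise is seeded, so the numbers differ from the MATLAB run above:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.arange(1, 11.0)
y = 10 - 2 * x + rng.standard_normal(10)
y[-1] = 0  # plant a gross outlier, as in the example above

X = np.column_stack([np.ones(10), x])
bls, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares

# Compact bisquare IRLS loop (tune = 4.685, robustfit's default).
b = bls.copy()
for _ in range(50):
    r = y - X @ b
    s = max(np.median(np.abs(r - np.median(r))) / 0.6745, np.finfo(float).eps)
    u = r / (4.685 * s)
    w = (np.abs(u) < 1) * (1 - u**2) ** 2
    sw = np.sqrt(w)[:, None]
    b = np.linalg.lstsq(X * sw, y * sw.ravel(), rcond=None)[0]

print(bls)  # slope pulled toward zero by the outlier
print(b)    # slope much closer to the true -2
```

As in the MATLAB run, the robust slope stays near -2 while the OLS slope is visibly flattened by the single bad point.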

A MATLAB implementation of an example from the web:
The estimator (with K > 0; the original formula appeared as an image and is not recoverable here):
MATLAB implementation:
function wf = robust(x,y,k)
% Find starting values using ordinary least squares.
w = x\y;
r = y - x*w;
scale = 1;
% Optionally, compute the scale using the MAD instead:
% scale = median(abs(r - median(r)))/0.6745;
cvg = 1; % convergence measure
while (cvg > 1e-5)
    r = r/scale;
    wf = w; % save the previous coefficients
    WH = wfun(r,k); % weights: diff(rho(x))/x
    % Do weighted least squares.
    yst = y.*sqrt(WH);
    xst = bsxfun(@times, x, sqrt(WH)); % scale each row of x by sqrt(WH)
    w = xst\yst;
    % Compute the new residuals.
    r = y - x*w;
    % Measure convergence as the largest relative coefficient change.
    cvg = max(abs(w-wf)./abs(wf));
end

function W = wfun(r,k)
W = zeros(length(r),1);
for i = 1:length(r)
    if (r(i) >= -(k-1)) && (r(i) <= k)
        W(i) = 1;
    elseif r(i) < -(k-1)
        W(i) = (k-1)^4/(r(i)^4);
    else
        W(i) = k^4/(r(i)^4);
    end
end
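The piecewise weight function is easy to check in isolation. The following is a direct Python/NumPy transcription of wfun, vectorized over the residual vector:

```python
import numpy as np

def wfun(r, k):
    """Python transcription of the MATLAB wfun above: weight 1 on the
    interval [-(k-1), k], (k-1)^4 / r^4 below it, k^4 / r^4 above it."""
    r = np.asarray(r, dtype=float)
    w = np.ones_like(r)          # default weight 1 inside the interval
    lo = r < -(k - 1)            # residuals below the lower cutoff
    hi = r > k                   # residuals above the upper cutoff
    w[lo] = (k - 1) ** 4 / r[lo] ** 4
    w[hi] = k ** 4 / r[hi] ** 4
    return w
```

Note the asymmetry: negative residuals are cut off at -(k-1) while positive ones are cut off at k, so the estimator tolerates larger positive residuals than negative ones.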

In addition, the blog post at http://blog.csdn.net/abcjennifer/article/details/7449435# (on M-estimators for fitting geometric models) gives a good explanation of M-estimation.