Machine Learning - Week 3 Logistic Regression ex2

The sigmoid function:

function g = sigmoid(z)

g = zeros(size(z));

% element-wise, so z may be a scalar, a vector, or a matrix
g = 1 ./ (1 + exp(-z));

end
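For a quick cross-check outside Octave, the same element-wise function can be sketched in NumPy (a translation for illustration; the name `sigmoid` here is my own, not part of the exercise files):

```python
import numpy as np

def sigmoid(z):
    """Element-wise logistic function; works for scalars and arrays alike."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0))                          # 0.5
print(sigmoid(np.array([-10.0, 0.0, 10.0])))
```

Because `np.exp` broadcasts, no explicit loop or pre-allocation is needed, mirroring the `./` vectorization in the Octave version.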


The cost and gradient function: apply the vectorized formulas directly.

function [J, grad] = costFunction(theta, X, y)

m = length(y);
J = 0;
grad = zeros(size(theta));

% h size: m * 1
h = sigmoid(X * theta);

% y' * log(h) size: 1 * 1
J = -1 / m * (y' * log(h) + (1 - y)' * log(1 - h));

% X' size: (n+1) * m, h - y size: m * 1, grad size: (n+1) * 1
grad = X' * (h - y) / m;

end
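The vectorized cost and gradient translate almost line for line into NumPy. A minimal sketch (the function name and test data are my own, for illustration); note that with theta = 0 every prediction is h = 0.5, so the cost must equal log(2):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_function(theta, X, y):
    """Vectorized logistic-regression cost and gradient (NumPy sketch)."""
    m = y.size
    h = sigmoid(X @ theta)                            # shape (m,)
    J = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    grad = X.T @ (h - y) / m                          # shape (n+1,)
    return J, grad

# Tiny hand-made example: first column is the intercept feature.
X = np.array([[1.0, 2.0], [1.0, -1.0]])
y = np.array([1.0, 0.0])
J, grad = cost_function(np.zeros(2), X, y)
print(J)      # log(2) ~= 0.693, since h = 0.5 everywhere at theta = 0
```

The `y @ np.log(h)` inner product plays the role of `y' * log(h)` in Octave: both collapse the m per-example terms into a single scalar.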


The predict function: note the use of find() to label every example matching the condition in one batch.

function p = predict(theta, X)

m = size(X, 1);
p = zeros(m, 1);

% find() returns the indices of all examples whose predicted
% probability is >= 0.5, so they can all be labeled 1 at once
positiveIdx = find(sigmoid(X * theta) >= 0.5);
p(positiveIdx) = 1;

end
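In NumPy the same batch labeling can be done by casting the boolean comparison directly, with no index lookup step (a sketch; `predict` and the sample data are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(theta, X):
    """Label 1 wherever the predicted probability is at least 0.5."""
    return (sigmoid(X @ theta) >= 0.5).astype(int)

theta = np.array([0.0, 1.0])
X = np.array([[1.0, 2.0], [1.0, -3.0]])
p = predict(theta, X)
print(p)      # [1 0]: sigmoid(2) >= 0.5, sigmoid(-3) < 0.5
```

This is the NumPy analogue of the Octave idiom `p = sigmoid(X * theta) >= 0.5;`, which also works in place of the find()-based version.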


The cost and gradient function with regularization: apply the vectorized formulas together with vector slicing, and note that grad(1) and grad(2:n+1) are computed separately, because the intercept term theta(1) is not regularized.

function [J, grad] = costFunctionReg(theta, X, y, lambda)

m = length(y);
J = 0;
grad = zeros(size(theta));
n1 = length(theta);

% h size: m * 1
h = sigmoid(X * theta);

% regularization term; theta(1), the intercept, is not penalized
reg = lambda / (2 * m) * sum(theta(2:n1) .^ 2);

% y' * log(h) size: 1 * 1
J = -1 / m * (y' * log(h) + (1 - y)' * log(1 - h)) + reg;

% X size: m * (n+1), X(:, 1)' size: 1 * m, h - y size: m * 1
grad(1) = X(:, 1)' * (h - y) / m;

% X(:, 2:n1)' size: n * m, grad(2:n1) size: n * 1
grad(2:n1) = X(:, 2:n1)' * (h - y) / m + lambda / m * theta(2:n1);

end
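A NumPy sketch of the regularized version makes the "skip the intercept" slicing explicit (function name and check data are my own assumptions). Setting lambda = 0 should reproduce the unregularized cost and gradient exactly, which gives a cheap self-test:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_function_reg(theta, X, y, lam):
    """Regularized cost/gradient; theta[0] (the intercept) is not penalized."""
    m = y.size
    h = sigmoid(X @ theta)
    reg = lam / (2 * m) * np.sum(theta[1:] ** 2)
    J = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m + reg
    grad = X.T @ (h - y) / m
    grad[1:] += lam / m * theta[1:]            # only non-intercept weights
    return J, grad

X = np.array([[1.0, 2.0], [1.0, -1.0]])
y = np.array([1.0, 0.0])
theta = np.array([1.0, 2.0])
J0, g0 = cost_function_reg(theta, X, y, 0.0)   # no regularization
J4, g4 = cost_function_reg(theta, X, y, 4.0)
```

With m = 2, lambda = 4 and theta = [1, 2], the cost should grow by exactly lambda/(2m) * theta[1]^2 = 4, the intercept gradient should be unchanged, and grad[1] should grow by lambda/m * theta[1] = 4.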
