Andrew Ng Machine Learning - Logistic Regression
Posted 2018-06-19 by 離殤灬孤狼 · https://blog.csdn.net/wyg1997/article/details/80732422
Exercise download: http://s3.amazonaws.com/spark-public/ml/exercises/on-demand/machine-learning-ex2.zip
First, the notes.
Then the code.
plotData.m (visualizes the data):
function plotData(X, y)
%PLOTDATA Plots the data points X and y into a new figure
% PLOTDATA(x,y) plots the data points with + for the positive examples
% and o for the negative examples. X is assumed to be a Mx2 matrix.
% Create New Figure
figure; hold on;
% ====================== YOUR CODE HERE ======================
% Instructions: Plot the positive and negative examples on a
% 2D plot, using the option 'k+' for the positive
% examples and 'ko' for the negative examples.
%
% Find Indices of Positive and Negative Examples
pos = find(y==1);
neg = find(y == 0);
% Plot Examples
plot(X(pos, 1), X(pos, 2), 'k+','LineWidth', 2, 'MarkerSize', 7);
plot(X(neg, 1), X(neg, 2), 'ko', 'MarkerFaceColor', 'y','MarkerSize', 7);
% =========================================================================
hold off;
end
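A minimal sketch of how the driver script (ex2.m in the exercise zip) feeds this function, assuming the exercise's data file ex2data1.txt (two exam scores per row plus a 0/1 admission label):

% Load the training data and split it into features and labels
data = load('ex2data1.txt');
X = data(:, [1, 2]);   % columns 1-2: the two exam scores
y = data(:, 3);        % column 3: 1 = admitted, 0 = not admitted

% Positives are drawn as black +, negatives as yellow circles
plotData(X, y);
xlabel('Exam 1 score');
ylabel('Exam 2 score');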
Result plot: positive examples as black +, negative examples as yellow circles.
sigmoid.m (the function that computes g(z)):
function g = sigmoid(z)
%SIGMOID Compute sigmoid function
% g = SIGMOID(z) computes the sigmoid of z.
% You need to return the following variables correctly
g = zeros(size(z));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the sigmoid of each value of z (z can be a matrix,
% vector or scalar).
% Element-wise sigmoid; z may be a scalar, vector, or matrix
g = 1 ./ (1 + exp(-z));
% =============================================================
end
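A quick sanity check at the Octave prompt, confirming the expected shape of the curve:

sigmoid(0)            % exactly 0.5 at z = 0
sigmoid([-10 0 10])   % roughly [4.5e-05, 0.5, 1.0]: saturates at the tails
sigmoid(eye(2))       % also works element-wise on matrices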
costFunction.m (computes the cost for the current θ and the gradient along each direction, condensed with matrix operations):
function [J, grad] = costFunction(theta, X, y)
%COSTFUNCTION Compute cost and gradient for logistic regression
% J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
% parameter for logistic regression and the gradient of the cost
% w.r.t. to the parameters.
% Initialize some useful values
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
% You should set J to the cost.
% Compute the partial derivatives and set grad to the partial
% derivatives of the cost w.r.t. each parameter in theta
%
% Note: grad should have the same dimensions as theta
%
J = (-y'*log(sigmoid(X*theta))-(1-y')*log(1-sigmoid(X*theta)))/m;
grad = X'*(sigmoid(X*theta)-y)./m;
% =============================================================
end
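For reference, those two vectorized lines implement the standard logistic-regression cost and its gradient, with $h = \sigma(X\theta)$:

$$J(\theta) = -\frac{1}{m}\left[\, y^{T}\log(h) + (1-y)^{T}\log(1-h) \,\right]$$

$$\nabla_{\theta} J = \frac{1}{m}\, X^{T}(h - y)$$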
predict.m (makes predictions):
function p = predict(theta, X)
%PREDICT Predict whether the label is 0 or 1 using learned logistic
%regression parameters theta
% p = PREDICT(theta, X) computes the predictions for X using a
% threshold at 0.5 (i.e., if sigmoid(theta'*x) >= 0.5, predict 1)
m = size(X, 1); % Number of training examples
% You need to return the following variables correctly
p = zeros(m, 1);
% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
% your learned logistic regression parameters.
% You should set p to a vector of 0's and 1's
%
p = sigmoid(X*theta) >= 0.5;
% =========================================================================
end
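Once fminunc has returned the learned theta, training accuracy can be checked in one line, as the driver script does; this sketch assumes theta, X, and y are already in the workspace:

p = predict(theta, X);
fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);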
Question
Why, in ex2.m, when the advanced optimization algorithm is used, is the code written like this:
options = optimset('GradObj', 'on', 'MaxIter', 400);
[theta, cost] = fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
It looks strange, doesn't it? Nothing like what the notes show. The reason is that in the notes the cost function takes a single parameter, theta, while our costFunction here takes three: theta, X, and y. fminunc, however, expects a handle to a function of one variable, namely the vector being minimized. The anonymous function @(t)(costFunction(t, X, y)) provides exactly that: it captures X and y from the current workspace and leaves t as the only free argument, which fminunc fills in as theta on every iteration. This is what the explanation I found online boiled down to, and it is what finally made the call make sense to me.
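To make the mechanics explicit, here is the same call unpacked into steps. The name objective is purely illustrative, and X, y, and initial_theta are assumed to be set up as in ex2.m:

% Wrap costFunction so the optimizer sees a function of theta alone;
% X and y are captured from the enclosing workspace.
objective = @(t) costFunction(t, X, y);

% 'GradObj' 'on' tells fminunc that the objective also returns the
% gradient as its second output, so no numerical approximation is needed.
options = optimset('GradObj', 'on', 'MaxIter', 400);

% fminunc repeatedly evaluates objective(t) and uses the returned cost
% and gradient to search for the minimizing theta.
[theta, cost] = fminunc(objective, initial_theta, options);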