
Coursera Machine Learning Week 7

Quiz

1.

Suppose you have trained an SVM classifier with a Gaussian kernel, and it learned the following decision boundary on the training set (figure omitted).

You suspect that the SVM is underfitting your dataset. Should you try increasing or decreasing C? Increasing or decreasing σ²? (A quick sketch follows the options below.)

  • It would be reasonable to try decreasing C. It would also be reasonable to try decreasing σ².
  • It would be reasonable to try decreasing C. It would also be reasonable to try increasing σ².
  • It would be reasonable to try increasing C. It would also be reasonable to try increasing σ².
  • (CHECKED) It would be reasonable to try increasing C. It would also be reasonable to try decreasing σ².
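To see the effect concretely, here is a minimal Octave sketch using the ex6 starter functions (svmTrain, gaussianKernel, visualizeBoundary) on ex6data2.mat; the specific C and sigma values are illustrative only:

% Larger C penalizes misclassification more; smaller sigma (hence smaller
% sigma^2) makes the Gaussian kernel narrower. Both reduce underfitting.
load('ex6data2.mat');   % provides X, y
model_underfit = svmTrain(X, y, 1,   @(x1, x2) gaussianKernel(x1, x2, 1));
model_flexible = svmTrain(X, y, 100, @(x1, x2) gaussianKernel(x1, x2, 0.1));
visualizeBoundary(X, y, model_underfit);   % smoother, possibly underfit boundary
figure;
visualizeBoundary(X, y, model_flexible);   % tighter, more flexible boundary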

2.

The formula for the Gaussian kernel is given by similarity(x, l^(1)) = exp(−||x − l^(1)||² / (2σ²)).

The figure below shows a plot of f1 = similarity(x, l^(1)) when σ² = 1 (figure omitted).

Which of the following is a plot of f1 when σ² = 0.25?

Pick the narrower one: shrinking σ² makes the kernel peak sharper.
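A minimal plot sketch (plain Octave, nothing from the starter code needed) shows why: shrinking σ² from 1 to 0.25 narrows the bump around the landmark.

d = linspace(-3, 3, 201);             % 1D distance from x to the landmark l(1)
f_wide   = exp(-d.^2 / (2 * 1.00));   % sigma^2 = 1
f_narrow = exp(-d.^2 / (2 * 0.25));   % sigma^2 = 0.25
plot(d, f_wide, 'b', d, f_narrow, 'r');
legend('sigma^2 = 1', 'sigma^2 = 0.25');
xlabel('x - l^{(1)}'); ylabel('f_1');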

3.

The SVM solves

min_θ  C ∑_{i=1}^m [ y^(i) cost1(θᵀx^(i)) + (1 − y^(i)) cost0(θᵀx^(i)) ] + ∑_{j=1}^n θ_j²
where the functions cost0(z) and cost1(z) look like this (figure omitted):

The first term in the objective is:

C ∑_{i=1}^m [ y^(i) cost1(θᵀx^(i)) + (1 − y^(i)) cost0(θᵀx^(i)) ].
This first term will be zero if two of the following four conditions hold true. Which are the two conditions that would guarantee that this term equals zero? (A numerical check follows the options.)

  • (CHECKED) For every example with y^(i) = 0, we have that θᵀx^(i) ≤ −1.
  • (CHECKED) For every example with y^(i) = 1, we have that θᵀx^(i) ≥ 1.
  • For every example with y^(i) = 0, we have that θᵀx^(i) ≤ 0.
  • For every example with y^(i) = 1, we have that θᵀx^(i) ≥ 0.
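The quiz only shows cost0 and cost1 as plots; a common hinge-style stand-in is cost1(z) = max(0, 1 − z) and cost0(z) = max(0, 1 + z), which is enough to check the margin conditions numerically (the z values below are illustrative):

cost1 = @(z) max(0, 1 - z);   % zero exactly when z >= 1
cost0 = @(z) max(0, 1 + z);   % zero exactly when z <= -1
[cost1(1.5), cost0(-2.0)]     % y=1 with theta'*x = 1.5 and y=0 with theta'*x = -2: both 0
[cost1(0.5), cost0(-0.5)]     % both 0.5: theta'*x >= 0 / <= 0 is not enough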

4.

Suppose you have a dataset with n = 10 features and m = 5000 examples.

After training your logistic regression classifier with gradient descent, you find that it has underfit the training set and does not achieve the desired performance on the training or cross validation sets.

Which of the following might be promising steps to take? Check all that apply.

  • (CHECKED) Try using a neural network with a large number of hidden units.
  • (CHECKED) Create / add new polynomial features (see the sketch after this list).
  • Use a different optimization method since using gradient descent to train logistic regression might result in a local minimum.
  • Reduce the number of examples in the training set.
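For the polynomial-features option, a minimal sketch; X is assumed to be the m x n design matrix from the question, and only a few degree-2 terms are shown:

% Append squares and one pairwise product of the first two features,
% then retrain logistic regression on X_poly to reduce underfitting.
X_poly = [X, X(:,1).^2, X(:,2).^2, X(:,1) .* X(:,2)];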

5.

Which of the following statements are true? Check all that apply.

  • If you are training multi-class SVMs with the one-vs-all method, it is not possible to use a kernel.

  • (CHECKED) Suppose you have 2D input examples (i.e., x^(i) ∈ ℝ²). The decision boundary of the SVM (with the linear kernel) is a straight line.

  • (CHECKED) The maximum value of the Gaussian kernel (i.e., sim(x, l^(1))) is 1. (A numerical check follows this list.)

  • If the data are linearly separable, an SVM using a linear kernel will return the same parameters θ regardless of the chosen value of C (i.e., the resulting value of θ does not depend on C).
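The claim about the kernel's maximum is easy to sanity-check: sim(x, l^(1)) = 1 exactly when x = l^(1), and smaller otherwise (the sigma value below is arbitrary):

sigma = 0.7;
k = @(x, l) exp(-sum((x - l).^2) / (2 * sigma^2));
k([1; 2], [1; 2])   % = 1, the maximum: x equals the landmark
k([1; 2], [3; 0])   % < 1 whenever x differs from l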

ex6

gaussianKernel.m

% Gaussian (RBF) kernel: exp(-||x1 - x2||^2 / (2*sigma^2))
sim = exp( -((x1 - x2)' * (x1 - x2)) / (2 * sigma * sigma) );
% sim = exp( -sum((x1 - x2).^2) / (2*sigma*sigma) );   % equivalent elementwise form
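A quick usage check (ex6.m evaluates the kernel at these values and expects about 0.324652):

x1 = [1; 2; 1];  x2 = [0; 4; -1];  sigma = 2;
sim = gaussianKernel(x1, x2, sigma)   % about 0.324652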

dataset3Params.m

if (0)   % grid search over C and sigma; disabled once the best pair was found
min_err = inf;   % renamed from 'min' to avoid shadowing the builtin min()
v = [0.01 0.03 0.1 0.3 1 3 10 30];
fprintf('find min prediction error\n');
for c = v
  for s = v
    model = svmTrain(X, y, c, @(x1, x2) gaussianKernel(x1, x2, s));
    e = mean(double(svmPredict(model, Xval) ~= yval));
    fprintf('c, s, e: %f, %f, %f\n', c, s, e);
    if (e <= min_err)
      fprintf('## min c, s, e: %f %f %f\n', c, s, e);
      C = c;
      sigma = s;
      min_err = e;
    end
  end
end
fprintf('final C, sigma, min_error: %f %f %f\n', C, sigma, min_err);
% final C, sigma, min_error: 1.000000 0.100000 0.030000
endif
% Hard-code the pair found by the search above so the function still
% returns the tuned values while the search is disabled:
C = 1;
sigma = 0.1;

emailFeatures.m

% Mark every vocabulary word that occurs in the email (x is n x 1, all zeros initially).
for i = word_indices
    x(i) = 1;
end
% Vectorized alternatives:
% x(word_indices) = 1;
% x = arrayfun(@(i) any(word_indices == i), (1:n)');

processEmail.m

    % Linear scan for an exact match of the stemmed token in the vocabulary.
    for i = 1:length(vocabList)
      if (strcmp(str, vocabList{i}))
        word_indices = [word_indices ; i];
      end
    end
    % One-liner (note the vertical concat, matching the column vector above;
    % strmatch is deprecated, find(strcmp(str, vocabList)) also works):
    % word_indices = [word_indices ; strmatch(str, vocabList, 'exact')];
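For reference, this is how ex6_spam.m wires the two functions together on the sample email shipped with the exercise:

file_contents = readFile('emailSample1.txt');
word_indices  = processEmail(file_contents);   % tokenize, stem, map to vocab indices
features      = emailFeatures(word_indices);   % n x 1 binary feature vector
fprintf('Number of non-zero entries: %d\n', sum(features > 0));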

-eof-