
CS231n Assignment #1 Walkthrough (Part 1): KNN

Preface

I recently worked through Stanford's CS231n, an introductory deep learning course. The best way to consolidate and understand the material is to complete the programming assignments, so I am recording my thoughts on and solutions to the assignments here for reference.

For a Chinese translation of the CS231n course notes, I strongly recommend the Zhihu column.

Watching the lectures alongside the notes and then doing the assignments works very well.

I. Preparation

1. Downloading Assignment 1

Link: https://pan.baidu.com/s/11tDkdBRy5ndwkKue_1vMuw  Password: jvgk

2. Installing Anaconda2

The assignment needs Python and a number of scientific computing packages. I recommend installing Anaconda2 directly: it sets everything up quickly so you can start writing code right away. The assignment code targets Python 2.x, so choose Anaconda2-4.2.0-Windows-x86_64.exe; otherwise some functions may be incompatible and cause unnecessary trouble.

3. Downloading the CIFAR-10 dataset

A brief introduction: CIFAR-10 is a color image dataset with only 10 classes, containing 60,000 images of 32*32 pixels, of which 50,000 are training images and 10,000 are test images; all images are labeled.

Download the "CIFAR-10 python version", unzip it, and put the data under cs231n/datasets.

4. Checking that WebSockets work

The assignment is completed in an IPython Notebook, which requires WebSockets support. Open the page below; if it displays "WORK FOR YOU", WebSockets are available and you are all set.

Test link: http://websocketstest.com/

If the test fails, you will need a proxy or VPN; many are available online.

With that, the preparation is complete and we can start on the assignment.

II. Assignment Walkthrough

1. Starting the notebook server

Open cmd in the [your path]/assignment1/ directory, type ipython notebook or jupyter notebook, and press Enter.

For more IPython Notebook tutorials, see here.

Note: be sure to launch it from that directory, or packages will not be found later.

To exit, press Ctrl+C and wait for the server to shut down.

Once successfully connected, you will see the notebook interface in your browser.

This assignment is completed in knn.ipynb.

2. Running code

Select a code cell and run it with Ctrl+Enter or Shift+Enter; the difference is that Shift+Enter automatically advances to the next cell. The circle in the top-right corner of the page indicates the kernel state: hollow means idle, solid means running.

Below, we fill in and run each part of the code.

A kNN classifier consists of two stages:

  •  During training, the classifier loads the data and simply memorizes it (no other processing).
  •  During testing, the classifier compares each test image against the training images and picks out the labels of the k most similar ones.
  •  In addition, k is determined by cross-validation.

In this assignment we will implement these steps, understand the basic image classification pipeline, understand cross-validation, and learn to write efficient vectorized code.

3. Code walkthrough

3.1 Basic setup

# Run some setup code for this notebook.
from __future__ import print_function

import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt

# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

This setup code loads the necessary packages and makes figures render inline in the notebook instead of opening new windows.

3.2 Loading the data

# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)

This loads the CIFAR-10 data. The printed shapes are:

Training data shape:  (50000L, 32L, 32L, 3L)
Training labels shape:  (50000L,)
Test data shape:  (10000L, 32L, 32L, 3L)
Test labels shape:  (10000L,)

Since these are 3-channel color images, each one has size 32*32*3.

3.3 Visualizing some training images

# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt_idx = i * num_classes + y + 1
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow(X_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()

This displays 7 training images from each class, arranged in a grid with one column per class.

3.4 Subsampling the data

# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]

num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]

To run the code more efficiently in this exercise, we subsample the data: the first 5,000 training images and the first 500 test images.

3.5 Reshaping the data

# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))

print(X_train.shape, X_test.shape)

Each image is flattened into a row vector of 32*32*3 = 3072 values, and all the images together form a 2-D matrix.

The result:

(5000L,3072L) (500L, 3072L)
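To see what the -1 in reshape does, here is a small self-contained example (the tiny shapes are made up for illustration):

import numpy as np

# A hypothetical batch of 4 tiny "images" of size 2x2x3
X = np.arange(4 * 2 * 2 * 3).reshape(4, 2, 2, 3)

# -1 lets numpy infer the remaining dimension: 2*2*3 = 12
X_rows = np.reshape(X, (X.shape[0], -1))
print(X_rows.shape)  # (4, 12)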

3.6 Creating the classifier

from cs231n.classifiers import KNearestNeighbor

# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)

We import KNearestNeighbor, create a classifier instance, and call its train() function.
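As the comments above say, "training" a kNN classifier is a no-op that just memorizes the data. A minimal sketch of what such a train() method amounts to (illustrative, not the assignment's exact file):

import numpy as np

class KNearestNeighborSketch(object):
    def train(self, X, y):
        # kNN has no parameters to fit: just remember the training data
        self.X_train = X
        self.y_train = y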

3.7 Computing the distance matrix with two loops

Now we compute the matrix of distances between the test images and the training images. With Ntr training images and Nte test images, the result is an Nte*Ntr matrix in which entry (i, j) is the distance from the i-th test image to the j-th training image.

To do this, we implement compute_distances_two_loops() in cs231n/classifiers/k_nearest_neighbor.py using a double loop.

def compute_distances_two_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a nested loop over both the training data and the
    test data.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data.

    Returns:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      is the Euclidean distance between the ith test point and the jth training
      point.
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))  # 500*5000
    for i in xrange(num_test):
        for j in xrange(num_train):
            #################################################################
            # TODO:                                                         #
            # Compute the L2 distance between the ith test point and the    #
            # jth training point, and store the result in dists[i, j]. You  #
            # should not use a loop over dimension.                         #
            #################################################################
            dists[i, j] = np.sqrt(np.sum(np.square(self.X_train[j, :] - X[i, :])))
            #################################################################
            #                       END OF YOUR CODE                        #
            #################################################################
    return dists

This uses the L2 distance; note that i indexes the test set and j indexes the training set.
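The expression np.sqrt(np.sum(np.square(...))) is just the L2 norm of the difference vector; a quick self-contained sanity check on random data (purely illustrative):

import numpy as np

a = np.random.rand(3072)
b = np.random.rand(3072)

d1 = np.sqrt(np.sum(np.square(a - b)))  # the formula used in the loop
d2 = np.linalg.norm(a - b)              # numpy's built-in L2 norm
print(np.isclose(d1, d2))               # True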

3.8 Testing the distance matrix

# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.

# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)

Calling the function we just wrote and printing the shape of the distance matrix gives:

(500L, 5000L)

# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()

This visualizes the distance matrix: each row shows one test example's distances to all the training examples.

In the resulting image, the vertical axis corresponds to the 500 test images and the horizontal axis to the 5000 training images; darker pixels indicate smaller distances, brighter pixels larger ones.

3.9 Predicting labels for the test images

Next we implement predict_labels() in cs231n/classifiers/k_nearest_neighbor.py. Based on the dists matrix computed in sections 3.7 and 3.8, it selects the k training images closest to each test image and votes for the most likely label. If k=1, the label of the single closest training image becomes the test image's label.

def predict_labels(self, dists, k=1):
    """
    Given a matrix of distances between test points and training points,
    predict a label for each test point.

    Inputs:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      gives the distance between the ith test point and the jth training point.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].
    """
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)  # 500*1
    for i in xrange(num_test):
        # A list of length k storing the labels of the k nearest neighbors to
        # the ith test point.
        closest_y = []
        ######################################################################
        # TODO:                                                              #
        # Use the distance matrix to find the k nearest neighbors of the ith #
        # testing point, and use self.y_train to find the labels of these    #
        # neighbors. Store these labels in closest_y.                        #
        # Hint: Look up the function numpy.argsort.                          #
        ######################################################################
        # indices of all training images, sorted by increasing distance
        closest_y = np.argsort(dists[i, :])
        ######################################################################
        # TODO:                                                              #
        # Now that you have found the labels of the k nearest neighbors, you #
        # need to find the most common label in the list closest_y of labels.#
        # Store this label in y_pred[i]. Break ties by choosing the smaller  #
        # label.                                                             #
        ######################################################################
        y_pred[i] = np.argmax(np.bincount(self.y_train[closest_y[:k]]))
        ######################################################################
        #                           END OF YOUR CODE                         #
        ######################################################################
    return y_pred

For each test image, np.argsort() sorts that row of dists in ascending order (closest first) and stores the resulting indices in closest_y. np.bincount() then counts the training labels of the first k entries of closest_y, and np.argmax() returns the label with the most votes, which becomes y_pred[i].

An example:

Suppose the labels of the five nearest neighbors, i.e. self.y_train[closest_y[:5]], are [1,1,1,3,2]. Then np.bincount() returns how many times each index value appears in that array:

         array([0, 3, 1, 1])

Because the largest number in [1,1,1,3,2] is 3, bincount() returns 4 counts, for index values 0 through 3: 0 appears 0 times, 1 appears 3 times, 2 appears once, and 3 appears once. Taking np.argmax() then returns index 1, which is exactly the majority label we want.
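The same two calls can be tried in isolation, using the numbers from the example above:

import numpy as np

votes = np.array([1, 1, 1, 3, 2])  # labels of the 5 nearest neighbors
counts = np.bincount(votes)        # array([0, 3, 1, 1])
print(counts)
print(np.argmax(counts))           # 1, the majority label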

3.10 Running the prediction code

# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))

The result:

Got 137 / 500 correct => accuracy: 0.274000

3.11 Testing k = 5

y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)) 

The result:

Got 139 / 500 correct => accuracy: 0.278000

Compared with the 27.4% from before, accuracy improves slightly.

3.12 Partially vectorized code

def compute_distances_one_loop(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a single loop over the test data.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in xrange(num_test):
        ######################################################################
        # TODO:                                                              #
        # Compute the L2 distance between the ith test point and all         #
        # training points, and store the result in dists[i, :].              #
        ######################################################################
        dists[i, :] = np.sqrt(np.sum(np.square(self.X_train - X[i, :]), axis=1))
        ######################################################################
        #                         END OF YOUR CODE                           #
        ######################################################################
    return dists

The difference from the double loop is that the single loop only iterates over the test images; the traversal of the training images is handled by vectorized code. axis=1 sums along the horizontal direction, i.e. over a row's 3072 pixel values, because for each test image we want its distance to every training image.
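A tiny self-contained illustration of the axis argument (values made up):

import numpy as np

diff = np.array([[1, 2],
                 [3, 4]])
print(np.sum(diff, axis=0))  # [4 6], summed down each column
print(np.sum(diff, axis=1))  # [3 7], summed across each row
                             # (here: over one row's pixel values)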

3.13 Testing the partially vectorized code

 # Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
print(dists_one.shape)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')

This verifies that our partially vectorized code is correct.

3.14 Fully vectorized code

This part is the most essential, the hardest to understand, and also the most efficient. No loops are allowed; the matrix must be computed entirely through numpy's broadcasting mechanism.

First consider the two matrices involved. The test matrix X has shape 500*3072: 500 test images, each row holding one image's pixels. The training matrix X_train holds 5000 images, so its shape is 5000*3072. The task is to compute the L2 distance between every row of X and every row of X_train and store the results in the 500*5000 dists matrix.

Consider the formula for the L2 distance between a test image $x$ and a training image $y$:

$$d(x, y) = \sqrt{\sum_p \left(x_p - y_p\right)^2}$$

Expanding the term inside the sum gives $x_p^2 + y_p^2 - 2\,x_p y_p$.
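In vector form this is the identity $\lVert x - y \rVert^2 = \lVert x \rVert^2 + \lVert y \rVert^2 - 2\,x \cdot y$, which is what lets us trade the loops for one matrix product and two broadcast sums. A quick numeric check (random data, purely illustrative):

import numpy as np

x = np.random.rand(3072)
y = np.random.rand(3072)

lhs = np.sum((x - y) ** 2)                                 # squared L2 distance
rhs = np.sum(x ** 2) + np.sum(y ** 2) - 2 * np.dot(x, y)   # expanded form
print(np.isclose(lhs, rhs))  # True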

Broadcasting means that when two arrays agree in each dimension, or one of them has size 1 in a dimension, the smaller array is automatically expanded to the shape of the larger one. For example:

a = [[1, 2, 3],
     [1, 2, 3],
     [1, 2, 3]]

b = [1, 1, 1]

Matrix a has shape 3*3 and b has shape 1*3. Since a and b agree along the columns and b has size 1 along the rows, a = a + b yields:

a = [[2, 3, 4],
     [2, 3, 4],
     [2, 3, 4]]

This amounts to broadcasting b across the rows, turning it into [[1,1,1],[1,1,1],[1,1,1]].

For more on broadcasting, see here.
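The example above, plus the column-plus-row combination the assignment relies on, as runnable numpy (the small shapes in the second part are illustrative stand-ins for 500 and 5000):

import numpy as np

a = np.array([[1, 2, 3],
              [1, 2, 3],
              [1, 2, 3]])
b = np.array([1, 1, 1])  # broadcast across the rows of a
print(a + b)             # [[2 3 4] [2 3 4] [2 3 4]]

# The same mechanism builds the dists matrix below: a (num_test, 1)
# column plus a (1, num_train) row broadcasts to (num_test, num_train).
col = np.ones((5, 1))
row = np.ones((1, 4))
print((col + row).shape)  # (5, 4)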

def compute_distances_no_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using no explicit loops.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train)) 
    #########################################################################
    # TODO:                                                                 #
    # Compute the L2 distance between all test points and all training      #
    # points without using any explicit loops, and store the result in      #
    # dists.                                                                #
    #                                                                       #
    # You should implement this function using only basic array operations; #
    # in particular you should not use functions from scipy.                #
    #                                                                       #
    # HINT: Try to formulate the l2 distance using matrix multiplication    #
    #       and two broadcast sums.                                         #
    #########################################################################
    dists += np.sum(self.X_train ** 2, axis=1).reshape(1, num_train)  # 1*5000, first broadcast
    dists += np.sum(X ** 2, axis=1).reshape(num_test, 1)              # 500*1, second broadcast
    dists -= 2 * np.dot(X, self.X_train.T)                            # 500*5000
    dists = np.sqrt(dists)
    #########################################################################
    #                         END OF YOUR CODE                              #
    #########################################################################
    return dists

3.15 Testing the vectorized code

# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)

# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')

Running this confirms the two matrices agree: the printed difference is essentially zero, followed by "Good! The distance matrices are the same".
3.16 Comparing the implementations' speed

# Let's compare how fast the implementations are
def time_function(f, *args):
    """
    Call a function f with args and return the time (in seconds) that it took to execute.
    """
    import time
    tic = time.time()
    f(*args)
    toc = time.time()
    return toc - tic

two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)

one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)

no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)

# you should see significantly faster performance with the fully vectorized implementation

We implemented the fully vectorized function in 3.14, the partially vectorized one in 3.12, and the plain double-loop version in 3.7. Timing all three shows, as the comment above promises, that the fully vectorized implementation is dramatically faster than the loop-based ones.
3.17 Cross-validation

num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]

X_train_folds = []
y_train_folds = []
################################################################################
# TODO:                                                                        #
# Split up the training data into folds. After splitting, X_train_folds and    #
# y_train_folds should each be lists of length num_folds, where                #
# y_train_folds[i] is the label vector for the points in X_train_folds[i].     #
# Hint: Look up the numpy array_split function.                                #
################################################################################
# split self.X_train to 5 folds
avg_size = int(X_train.shape[0] / num_folds) # will abandon the rest if not divided evenly.
for i in range(num_folds):
    X_train_folds.append(X_train[i * avg_size : (i+1) * avg_size])
    y_train_folds.append(y_train[i * avg_size : (i+1) * avg_size])

pass
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################

# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}


################################################################################
# TODO:                                                                        #
# Perform k-fold cross validation to find the best value of k. For each        #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,   #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all     #
# values of k in the k_to_accuracies dictionary.                               #
################################################################################
for k in k_choices:
    accuracies = []
    print(k)
    for i in range(num_folds):
        X_train_cv = np.vstack(X_train_folds[0:i] + X_train_folds[i+1:])
        y_train_cv = np.hstack(y_train_folds[0:i] + y_train_folds[i+1:])
        X_valid_cv = X_train_folds[i]
        y_valid_cv = y_train_folds[i]
        
        classifier.train(X_train_cv, y_train_cv)
        dists = classifier.compute_distances_no_loops(X_valid_cv)
        accuracy = float(np.sum(classifier.predict_labels(dists, k) == y_valid_cv)) / y_valid_cv.shape[0]
        accuracies.append(accuracy)
    k_to_accuracies[k] = accuracies
pass
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################

# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))

This code is not hard to follow: it splits the training data into 5 folds, holds each fold out in turn as the validation set, and varies k to search for the best value. np.vstack() stacks matrices vertically; np.hstack() stacks them horizontally. A small demo of the fold bookkeeping is sketched below.
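On toy data, the hold-one-fold-out bookkeeping looks like this (purely illustrative; the assignment applies the same pattern to X_train with np.vstack and to y_train with np.hstack):

import numpy as np

folds = np.array_split(np.arange(10), 5)    # five folds of the indices 0..9
i = 2                                       # hold out fold i for validation
train = np.hstack(folds[:i] + folds[i+1:])  # concatenate the other folds
valid = folds[i]
print(train)  # [0 1 2 3 6 7 8 9]
print(valid)  # [4 5]

Running the cross-validation loop prints: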

1
3
5
8
10
12
15
20
50
100
k = 1, accuracy = 0.263000
k = 1, accuracy = 0.257000
k = 1, accuracy = 0.264000
k = 1, accuracy = 0.278000
k = 1, accuracy = 0.266000
k = 3, accuracy = 0.239000
k = 3, accuracy = 0.249000
k = 3, accuracy = 0.240000
k = 3, accuracy = 0.266000
k = 3, accuracy = 0.254000
k = 5, accuracy = 0.248000
k = 5, accuracy = 0.266000
k = 5, accuracy = 0.280000
k = 5, accuracy = 0.292000
k = 5, accuracy = 0.280000
k = 8, accuracy = 0.262000
k = 8, accuracy = 0.282000
k = 8, accuracy = 0.273000
k = 8, accuracy = 0.290000
k = 8, accuracy = 0.273000
k = 10, accuracy = 0.265000
k = 10, accuracy = 0.296000
k = 10, accuracy = 0.276000
k = 10, accuracy = 0.284000
k = 10, accuracy = 0.280000
k = 12, accuracy = 0.260000
k = 12, accuracy = 0.295000
k = 12, accuracy = 0.279000
k = 12, accuracy = 0.283000
k = 12, accuracy = 0.280000
k = 15, accuracy = 0.252000
k = 15, accuracy = 0.289000
k = 15, accuracy = 0.278000
k = 15, accuracy = 0.282000
k = 15, accuracy = 0.274000
k = 20, accuracy = 0.270000
k = 20, accuracy = 0.279000
k = 20, accuracy = 0.279000
k = 20, accuracy = 0.282000
k = 20, accuracy = 0.285000
k = 50, accuracy = 0.271000
k = 50, accuracy = 0.288000
k = 50, accuracy = 0.278000
k = 50, accuracy = 0.269000
k = 50, accuracy = 0.266000
k = 100, accuracy = 0.256000
k = 100, accuracy = 0.270000
k = 100, accuracy = 0.263000
k = 100, accuracy = 0.256000
k = 100, accuracy = 0.263000

3.18 Visualizing the results

# plot the raw observations
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)

# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()

The horizontal axis is the choice of k; the vertical axis is the cross-validation accuracy.

3.19 Evaluating the best k

# Based on the cross-validation results above, choose the best value for k,   
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
temp = 0
for k in k_choices:
    accuracies = k_to_accuracies[k]    
    if temp < accuracies[np.argmax(accuracies)]:
        temp = accuracies[np.argmax(accuracies)]
        best_k = k
print(best_k)


classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)

# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))

We find the best k and evaluate it on the test set. The result:

10
Got 141 / 500 correct => accuracy: 0.282000

Note that the training, validation, and test sets are three distinct sets. The training set is the data used for training; a portion of it is split off as a validation set for hyperparameter tuning. Never tune on the test set: it should only serve as the final measure of the model's ability.

Summary

As you can see, a kNN model offers no real advantage for image classification: training is trivial (just store the data), while testing is costly in time and compute (comparing against every training image one by one). Even in the best case, the accuracy is under 30%. We use this model to get familiar with the overall image classification pipeline and to train our vectorized thinking.