
Machine Learning Algorithm Principles Summary Series --- Algorithm Basics (11): Clustering K-means

I. Principle in Detail

  1. Categorization:
    Clustering belongs to unsupervised learning:
    the data carry no class labels.

  2. Example:
    (figure: an illustrative clustering example; image not preserved)

  3. The K-means algorithm:
    3.1 A classic clustering algorithm, and one of the ten classic algorithms of data mining.
    3.2 The algorithm takes a parameter k, then partitions the n input data objects into k clusters such that objects within the same cluster are highly similar to one another, while objects in different clusters are not.
    3.3 Core idea:
    Take k points in space as cluster centers and assign each object to the nearest one. Iteratively update the value of each cluster center until the best clustering result is obtained.
    3.4 Algorithm description:

      (1) Choose suitable initial centers for the k clusters;
      (2) In each iteration, compute the distance from every sample to each of the k centers and assign the sample to the cluster whose center is closest;
      (3) Update each cluster's center, for example as the mean of its members;
      (4) If none of the k cluster centers changes value after the updates in (2) and (3), the iteration ends; otherwise, continue iterating.
    

    3.5 Algorithm flow (a NumPy sketch of these steps follows this list):
    (figure: algorithm flowchart; image not preserved)
    Input: k, data[n];
    (1) Choose k initial centers, e.g. c[0]=data[0], ..., c[k-1]=data[k-1];
    (2) Compare each of data[0], ..., data[n-1] with c[0], ..., c[k-1]; if it differs least from c[i], label it i;
    (3) For all points labeled i, recompute c[i] = (sum of all data[j] labeled i) / (number of points labeled i);
    (4) Repeat (2) and (3) until the change in every c[i] is smaller than a given threshold.
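
To make the flow above concrete, here is a minimal NumPy sketch of steps (1)-(4). The function name, the random seeding, and the tolerance-based stopping rule are illustrative assumptions on my part, not code from the original post:

import numpy as np

def kmeans_sketch(data, k, tol=1e-4, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # (1) pick k distinct data points as the initial centers
    centers = data[rng.choice(len(data), size=k, replace=False)]
    labels = np.zeros(len(data), dtype=int)
    for _ in range(max_iter):
        # (2) label each point with the index of its nearest center
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # (3) recompute each center as the mean of its labeled points
        # (an empty cluster keeps its old center)
        new_centers = np.array([data[labels == i].mean(axis=0)
                                if np.any(labels == i) else centers[i]
                                for i in range(k)])
        # (4) stop once the total center movement is below the threshold
        if np.linalg.norm(new_centers - centers) < tol:
            break
        centers = new_centers
    return centers, labels

# e.g. kmeans_sketch(np.array([[1., 1.], [2., 1.], [4., 3.], [5., 4.]]), k=2)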

  4. Example:
    (figures: a step-by-step walkthrough of the iterations on a toy dataset, re-assigning points and moving the centers until they stop changing; images not preserved)
    Advantages: fast and simple.
    Disadvantages: the final result depends on the choice of initial points, the algorithm easily falls into a local optimum, and the value of k must be known in advance (a common mitigation is sketched below).
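Because of that sensitivity to the initial points, a standard remedy (my suggestion here, not something the original post uses) is smarter seeding plus several restarts. scikit-learn's KMeans supports both: init='k-means++' spreads the initial centers out, and n_init repeats the whole fit, keeping the run with the lowest within-cluster sum of squares:

# A minimal sketch using scikit-learn (assumes scikit-learn is installed).
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 1], [2, 1], [4, 3], [5, 4]], dtype=float)
km = KMeans(n_clusters=2, init='k-means++', n_init=10, random_state=0).fit(X)
print(km.labels_)           # cluster index of each point, e.g. [0 0 1 1]
print(km.cluster_centers_)  # e.g. [[1.5 1. ] [4.5 3.5]]
print(km.inertia_)          # within-cluster sum of squares of the best run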

II. Code Implementation

# -*- coding:utf-8 -*-
import numpy as np
import pandas as pd


# The data source is the iris dataset, 150 samples in 3 classes:
# iris-setosa, iris-versicolor, iris-virginica.
def read_data():
    IRIS_TRAIN_URL = 'iris_training.csv'
    names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'species']
    train = pd.read_csv(IRIS_TRAIN_URL, names=names, skiprows=1)
    x_train_ = train.drop('species', axis=1)
    x_train = np.array(x_train_)
    y_train_ = train.species
    y_train = np.array(y_train_).tolist()
    return x_train, y_train


# Function: K Means
# -------------
# K-Means is an algorithm that takes in a dataset and a constant
# k and returns k centroids (which define clusters of data in the
# dataset which are similar to one another).
def k_means(X, k, max_It):
    num_points, num_dim = X.shape
    # The last column of `dataset` stores each point's cluster label.
    dataset = np.zeros((num_points, num_dim + 1))
    dataset[:, :-1] = X

    # Initialize the centroids with k distinct random rows of the dataset
    # (sampling without replacement avoids duplicate initial centroids).
    centroids = dataset[np.random.choice(num_points, size=k, replace=False), :]
    # Assign labels 1..k to the initial centroids
    centroids[:, -1] = range(1, k + 1)

    # Initialize bookkeeping vars.
    iterations = 0
    old_centroids = None

    # Run the main k-means loop
    while not should_stop(old_centroids, centroids, iterations, max_It):
        print("iteration: \n", iterations)
        print("dataset: \n", dataset)
        print("centroids: \n", centroids)
        # Save old centroids for the convergence test.
        old_centroids = np.copy(centroids)
        iterations += 1

        # Assign a label to each datapoint based on the current centroids
        update_labels(dataset, centroids)
        # Recompute the centroids from the new datapoint labels
        centroids = get_centroids(dataset, k)

    # The last column of the returned dataset holds each point's label.
    return dataset


# Function: Should Stop
# -------------
# Returns True or False if k-means is done. K-means terminates either
# because it has run a maximum number of iterations OR the centroids
# stop changing.
def should_stop(old_centroids, centroids, iterations, max_It):
    if iterations > max_It:
        return True
    return np.array_equal(old_centroids, centroids)


# Function: Update Labels
# -------------
# Update the label for each piece of data in the dataset.
def update_labels(dataset, centroids):
    # For each element in the dataset, choose the closest centroid
    # and make that centroid the element's label.
    num_points, num_dim = dataset.shape
    for i in range(0, num_points):
        dataset[i, -1] = get_label_from_closest_centroid(dataset[i, :-1], centroids)


def get_label_from_closest_centroid(dataset_row, centroids):
    # Return the label of the centroid nearest (Euclidean distance) to this row.
    label = centroids[0, -1]
    min_dist = np.linalg.norm(dataset_row - centroids[0, :-1])
    for i in range(1, centroids.shape[0]):
        dist = np.linalg.norm(dataset_row - centroids[i, :-1])
        if dist < min_dist:
            min_dist = dist
            label = centroids[i, -1]
    print("min_dist:", min_dist)
    return label


# Function: Get Centroids
# -------------
# Returns the k centroids, each of dimension n.
def get_centroids(dataset, k):
    # Each centroid is the mean of the points that carry that centroid's
    # label. (If a cluster ever ends up empty, its mean is undefined and
    # it should be re-initialized randomly.)
    result = np.zeros((k, dataset.shape[1]))
    for i in range(1, k + 1):
        one_cluster = dataset[dataset[:, -1] == i, :-1]
        result[i - 1, :-1] = np.mean(one_cluster, axis=0)
        result[i - 1, -1] = i
    return result


# Task 1: reproduce the worked example from the first section
x1 = np.array([1, 1])
x2 = np.array([2, 1])
x3 = np.array([4, 3])
x4 = np.array([5, 4])
testX = np.vstack((x1, x2, x3, x4))
result = k_means(testX, 2, 10)
print("final result:")
print(result)

# Task 2: evaluate the clustering on the iris dataset
x_train, y_train = read_data()
result = k_means(x_train, 3, 100)
print("final result:")
right = 0
for i, row in enumerate(result):
    if int(row[-1] - 1) == y_train[i]:
        right += 1
print('accuracy:' + str((right / len(y_train)) * 100) + '%')
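
A note on the stopping rule in the code above: should_stop compares the old and new centroid arrays for exact equality, which terminates because the assignments, and hence the means, eventually stop changing; a tolerance-based check such as `old_centroids is not None and np.allclose(old_centroids, centroids)` is a common, slightly more forgiving alternative when working in floating point.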

Result of Task 1:
(figure: console output of the toy example; image not preserved)

Result of Task 2:
(figure: console output of the iris run; image not preserved)

On the 150 iris samples, this K-means implementation does not classify especially well. The outcome depends heavily on the initial choice of cluster centers: with a good initialization the grouping is acceptable, with a bad one it is poor. This illustrates the drawbacks of k-means noted earlier: the final result depends on the initial points, the algorithm easily falls into a local optimum, and the value of k must be known in advance. Note also that the accuracy above assumes cluster i lines up with class i, which k-means does not guarantee; a fairer, permutation-aware score is sketched below.
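
A sketch of such a score (best_match_accuracy is a hypothetical helper of mine, and it assumes the number of clusters equals the number of classes): it tries every mapping of cluster ids to class ids and keeps the best agreement.

from itertools import permutations

import numpy as np


def best_match_accuracy(cluster_labels, true_labels):
    # Try every assignment of cluster ids to class ids and return the
    # best fraction of correctly mapped points.
    cluster_labels = np.asarray(cluster_labels, dtype=int)
    true_labels = np.asarray(true_labels, dtype=int)
    cluster_ids = np.unique(cluster_labels)
    best = 0.0
    for perm in permutations(np.unique(true_labels)):
        mapping = dict(zip(cluster_ids, perm))  # cluster id -> class id
        mapped = np.array([mapping[c] for c in cluster_labels])
        best = max(best, float(np.mean(mapped == true_labels)))
    return best

# With the variables from the script above (labels in result[:, -1] are 1..3):
# print('accuracy:', best_match_accuracy(result[:, -1], y_train))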