
China University MOOC, Beijing Institute of Technology, Machine Learning, Week 2 (Part 1): Classification


1. K-Nearest Neighbors (KNeighborsClassifier)

Usage follows the same pattern as the kmeans method: first construct the classifier, then fit it. The difference is that Kmeans clustering is unsupervised learning while KNN is supervised learning, so the data must be split into a training set and a test set.

The code:

X = [[0], [1], [2], [3]]  # samples (each sample must be a feature vector)
Y = [0, 0, 1, 1]  # labels
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)  # use the 3 nearest points for the vote
neigh.fit(X, Y)
print(neigh.predict([[1.1]]))  # -> [0]

K-NN can be viewed like this: you have a pile of data whose classes are already known; when a new data point arrives, compute its distance to every point in the training data, take the K training points closest to it, look at which classes those K points belong to, and assign the new point a class by majority vote.
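The distance-plus-majority-vote procedure described above can be sketched by hand in a few lines (a minimal illustration of the idea, not sklearn's actual implementation):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from x to every training point
    dists = np.linalg.norm(np.asarray(X_train) - np.asarray(x), axis=1)
    # indices of the k nearest training points
    nearest = np.argsort(dists)[:k]
    # majority vote among their labels
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

X_train = [[0], [1], [2], [3]]
y_train = [0, 0, 1, 1]
print(knn_predict(X_train, y_train, [1.1]))  # -> 0
```

This matches what KNeighborsClassifier(n_neighbors=3) does on the same toy data.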

2. Decision Trees (DecisionTreeClassifier)

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score  # cross-validation
clf = DecisionTreeClassifier()  # with default parameters, splits use the Gini index
iris = load_iris()
cross_val_score(clf, iris.data, iris.target, cv=10)  # cv=10 means 10-fold cross-validation

from sklearn import tree
X = [[0, 0], [1, 1]]
Y = [0, 1]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
clf.predict([[1.2, 1.2]])
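The default Gini criterion mentioned above can be swapped for information gain via the `criterion` parameter; a quick sketch comparing both on the iris data (`random_state` is fixed only to make reruns repeatable):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
for criterion in ("gini", "entropy"):
    clf = DecisionTreeClassifier(criterion=criterion, random_state=0)
    scores = cross_val_score(clf, iris.data, iris.target, cv=10)
    print(criterion, scores.mean())  # mean 10-fold accuracy per criterion
```

On iris the two criteria usually score very similarly; the choice matters more on larger, noisier data.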

3. Naive Bayes (naive_bayes.GaussianNB)

For the given data, first learn the joint input-output probability distribution under the assumption that the features are conditionally independent; then, based on this model, apply Bayes' theorem to a given input x to compute its posterior probability for each class.

sklearn implements Gaussian naive Bayes, multinomial naive Bayes, and multivariate Bernoulli naive Bayes.
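A quick side-by-side of the three variants; each expects a different kind of feature. The toy data here is made up purely for illustration:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

# continuous features -> GaussianNB
X_cont = np.array([[-1.0, -1.0], [-2.0, -1.0], [1.0, 1.0], [2.0, 1.0]])
# count features -> MultinomialNB
X_count = np.array([[3, 0], [2, 1], [0, 4], [1, 3]])
# binary features -> BernoulliNB
X_bin = (X_count > 0).astype(int)
y = np.array([0, 0, 1, 1])

for name, clf, X in [("Gaussian", GaussianNB(), X_cont),
                     ("Multinomial", MultinomialNB(), X_count),
                     ("Bernoulli", BernoulliNB(), X_bin)]:
    clf.fit(X, y)
    print(name, clf.predict(X))
```

All three share the same fit/predict interface; only the assumed per-feature distribution differs.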

import numpy as np
from sklearn.naive_bayes import GaussianNB
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
Y = np.array([1, 1, 1, 2, 2, 2])
clf = GaussianNB()
clf.fit(X, Y)
print(clf.predict([[-0.8, -1]]))  # -> [1]

4. Classifying Human Activity State

    import pandas as pd
    import numpy as np

    from sklearn.impute import SimpleImputer              # replaces the old sklearn.preprocessing.Imputer
    from sklearn.model_selection import train_test_split  # replaces the old sklearn.cross_validation
    from sklearn.metrics import classification_report

    from sklearn.neighbors import KNeighborsClassifier  # K-nearest neighbors
    from sklearn.tree import DecisionTreeClassifier     # decision tree
    from sklearn.naive_bayes import GaussianNB          # naive Bayes

    def load_datasets(feature_paths, label_paths):
        feature = np.ndarray(shape=(0, 41))
        label = np.ndarray(shape=(0, 1))
        for file in feature_paths:
            # missing values are marked '?' in the raw files; read them in as NaN
            df = pd.read_table(file, delimiter=',', na_values='?', header=None)
            # impute missing values with the column mean
            imp = SimpleImputer(missing_values=np.nan, strategy='mean')
            imp.fit(df)
            df = imp.transform(df)
            feature = np.concatenate((feature, df))

        for file in label_paths:
            df = pd.read_table(file, header=None)
            label = np.concatenate((label, df))

        label = np.ravel(label)
        return feature, label

    if __name__ == '__main__':
        ''' data paths '''
        featurePaths = ['A.feature', 'B.feature', 'C.feature', 'D.feature', 'E.feature']
        labelPaths = ['A.label', 'B.label', 'C.label', 'D.label', 'E.label']
        ''' load the data '''
        x_train, y_train = load_datasets(featurePaths[:4], labelPaths[:4])
        x_test, y_test = load_datasets(featurePaths[4:], labelPaths[4:])
        # test_size=0.0 is used here only to shuffle the training data
        # (newer sklearn versions may require a positive test_size)
        x_train, x_, y_train, y_ = train_test_split(x_train, y_train, test_size=0.0)

        print('Start training knn')
        knn = KNeighborsClassifier().fit(x_train, y_train)
        print('Training done')
        answer_knn = knn.predict(x_test)
        print('Prediction done')

        print('Start training DT')
        dt = DecisionTreeClassifier().fit(x_train, y_train)
        print('Training done')
        answer_dt = dt.predict(x_test)
        print('Prediction done')

        print('Start training Bayes')
        gnb = GaussianNB().fit(x_train, y_train)
        print('Training done')
        answer_gnb = gnb.predict(x_test)
        print('Prediction done')

        print('\n\nThe classification report for knn:')
        print(classification_report(y_test, answer_knn))
        print('\n\nThe classification report for DT:')
        print(classification_report(y_test, answer_dt))
        print('\n\nThe classification report for Bayes:')
        print(classification_report(y_test, answer_gnb))
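classification_report summarizes precision, recall, and F1 per class, which is what the script above prints for each model. A tiny self-contained example (made-up labels, not the sensor data above) shows the output format:

```python
from sklearn.metrics import classification_report

y_true = [0, 0, 1, 1, 1, 2]
y_pred = [0, 1, 1, 1, 1, 2]
# prints a per-class table of precision / recall / f1-score / support
print(classification_report(y_true, y_pred))
```

Comparing these tables across KNN, the decision tree, and GaussianNB is how the course judges which classifier fits the activity data best.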

5. Support Vector Machines (SVM)

    from sklearn import svm

    X = [[0, 0], [1, 1], [1, 0]]  # training samples
    y = [0, 1, 1]  # training targets
    clf = svm.SVC()
    clf.fit(X, y)  # train the SVC model

    result = clf.predict([[2, 2]])  # predict the target of a test sample
    print(result)

    print(clf.support_vectors_)  # support vectors

    print(clf.support_)  # indices of support vectors

    print(clf.n_support_)  # number of support vectors for each class
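SVC's behavior is controlled mainly by the kernel and the penalty parameter C, which the snippet above leaves at their defaults (rbf kernel, C=1.0). A quick sketch trying several kernels on the same toy data:

```python
from sklearn import svm

X = [[0, 0], [1, 1], [1, 0]]  # training samples
y = [0, 1, 1]  # training targets

for kernel in ("linear", "rbf", "poly"):
    clf = svm.SVC(kernel=kernel, C=1.0)
    clf.fit(X, y)
    print(kernel, clf.predict([[2, 2]]), clf.n_support_)
```

On data this small all kernels tend to agree; the choice of kernel and C matters once the classes are not cleanly separable.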

That's all.

:)
