
sklearn learning notes: SVM

Support Vector Machine:

# -*- coding: utf-8 -*-
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn import datasets
import pandas as pd


def getData_1():
    iris = datasets.load_iris()
    X = iris.data    # feature matrix, 150 x 4; one sample per row, 4 features per sample
    y = iris.target  # class labels, a 150-element vector; each entry is one sample's class
    df1 = pd.DataFrame(X, columns=['SepalLengthCm', 'SepalWidthCm',
                                   'PetalLengthCm', 'PetalWidthCm'])
    df1['target'] = y
    return df1


df = getData_1()
X_train, X_test, y_train, y_test = train_test_split(
    df.iloc[:, 0:3], df['target'], test_size=0.3, random_state=42)
print(X_train, X_test, y_train, y_test)

# Parameters
# ---
# C: penalty parameter of the error term
# gamma: kernel coefficient (float); if gamma is 'auto', 1/n_features is used instead
model = SVC(C=1.0, kernel='rbf', gamma='auto')
model.fit(X_train, y_train)
predict = model.predict(X_test)
print(predict)
print(y_test.values)
print('SVC classification: {:.3f}'.format(model.score(X_test, y_test)))
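As the comment above says, gamma='auto' just means 1/n_features. A minimal sketch that makes this explicit, reusing X_train/X_test/y_train/y_test from the script above (the variable name explicit is mine):

# gamma='auto' is equivalent to gamma = 1/n_features
# (here 1/3, since only the first three iris columns were selected above)
explicit = SVC(C=1.0, kernel='rbf', gamma=1.0 / X_train.shape[1])
explicit.fit(X_train, y_train)
print('explicit gamma score: {:.3f}'.format(explicit.score(X_test, y_test)))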

Result:

[1 0 2 1 1 0 1 2 1 1 2 0 0 0 0 1 2 1 1 2 0 2 0 2 2 2 2 2 0 0 0 0 1 0 0 2 1
 0 0 0 2 1 1 0 0]
[1 0 2 1 1 0 1 2 1 1 2 0 0 0 0 1 2 1 1 2 0 2 0 2 2 2 2 2 0 0 0 0 1 0 0 2 1
 0 0 0 2 1 1 0 0]

SVC classification: 1.000

An astonishing 100% accuracy, much higher than the linear regression and naive Bayes classifiers.
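To check that comparison on the exact same split, here is a rough sketch reusing X_train/X_test/y_train/y_test from the script above. GaussianNB and LogisticRegression are my stand-ins for the naive Bayes and linear baselines, not the code from the earlier notes:

# Fit two baseline classifiers on the same train/test split and print their accuracy
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

for name, clf in [('GaussianNB', GaussianNB()),
                  ('LogisticRegression', LogisticRegression(max_iter=1000))]:
    clf.fit(X_train, y_train)
    print('{}: {:.3f}'.format(name, clf.score(X_test, y_test)))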