
Chapter 8: SVC/LinearSVC (Breast Cancer Detection)

Data Preprocessing

Instead of importing scikit-learn's built-in breast cancer dataset, we use pandas to read a downloaded copy of the dataset.

# Load the data
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()
X = cancer.data
y = cancer.target
print('data shape: {0}; no. positive: {1}; no. negative: {2}'.format(
    X.shape, y[y==1].shape[0], y[y==0].shape[0]))

data shape: (569, 30); no. positive: 357; no. negative: 212

Note: in the built-in dataset, diagnosis is already stored as 0/1 integers.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

data = pd.read_csv(r'D:\machinelearningDatasets\BreastCancerLR\Breast cancer.csv')

X = data.iloc[:, 2:31]
y = data.iloc[:, 1:2]
y.diagnosis.value_counts()


y = y.values.ravel() 
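The ravel() call matters because selecting a single column as a DataFrame yields a 2-D (n, 1) array, while scikit-learn estimators expect a flat (n,) label vector. A minimal sketch with a toy array:

```python
import numpy as np

# A single-column selection produces a 2-D column vector...
y_col = np.array([[0], [1], [1], [0]])
print(y_col.shape)   # (4, 1) -- this shape triggers DataConversionWarning in sklearn

# ...ravel() flattens it to the 1-D shape estimators expect.
y_flat = y_col.ravel()
print(y_flat.shape)  # (4,)
```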

Data standardization:
sklearn.preprocessing.MinMaxScaler
Transforms features by scaling each feature to a given range.

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
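MinMaxScaler rescales each feature column independently to [0, 1] via (x - min) / (max - min). A small illustration on toy data (not the cancer dataset):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Two features with very different ranges.
toy = np.array([[1.0, 10.0],
                [2.0, 20.0],
                [3.0, 40.0]])

# Each column is scaled by its own min and max.
scaled = MinMaxScaler().fit_transform(toy)
print(scaled)
# Column 0: [0, 0.5, 1]; column 1: [0, 1/3, 1]
```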

Split the dataset:

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

Gaussian Kernel (rbf)

from sklearn.svm import SVC

clf = SVC(C=1.0, kernel='rbf')
clf.fit(X_train, y_train)
train_score = clf.score(X_train, y_train)
test_score = clf.score(X_test, y_test)
print('train score: {0}; test score: {1}'.format(train_score, test_score))

train score: 0.9538461538461539; test score: 0.9649122807017544
The fit is very good!

Without standardizing the data:
train score: 1.0; test score: 0.631578947368421
The training score is a perfect 1.0 while the test score is very low, a classic sign of overfitting. Even in this case, tuning gamma can raise the score to about 0.950.
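The claim that tuning gamma recovers a good score even on unscaled features can be checked with a small grid search. This is a sketch, not the author's code; the gamma range is an assumption and the exact score depends on the cross-validation splits:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Raw (unscaled) features on purpose, to mirror the no-standardization case.
X_raw, y_raw = load_breast_cancer(return_X_y=True)

# Very small gamma values compensate for the large raw feature magnitudes.
search = GridSearchCV(SVC(kernel='rbf'),
                      {'gamma': np.logspace(-6, -3, 10)}, cv=5)
search.fit(X_raw, y_raw)
print(search.best_params_, search.best_score_)
```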

Model optimization:

import sys
sys.path.append(r'C:\Users\Qiuyi\Desktop\scikit-learn code\code\common')

from utils import plot_param_curve
from sklearn.model_selection import GridSearchCV

gammas = np.linspace(0, 0.001, 50)
C = [1, 10, 100,1000]
param_grid = {'gamma': gammas, 'C':C}

clf = GridSearchCV(SVC(), param_grid, cv=5, return_train_score=True)
clf.fit(X, y)
print("best param: {0}\nbest score: {1}".format(clf.best_params_,
                                                clf.best_score_))

best param: {'C': 1000, 'gamma': 0.0008979591836734694}
best score: 0.9789103690685413
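Note that the grid search above was fit on all of X and y. For an honest generalization estimate, one would refit with the best parameters on the training split only and score on held-out data. A sketch reusing the best parameters found above (the split's random_state is an assumption, so the exact score will differ):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Refit on the training split only, then score on the held-out test set.
best = SVC(C=1000, kernel='rbf', gamma=0.0008979591836734694)
best.fit(X_train, y_train)
print('held-out test score:', best.score(X_test, y_test))
```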

Plot the learning curve:

import time
from utils import plot_learning_curve
from sklearn.model_selection import ShuffleSplit

cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
title = 'Learning Curves for Gaussian Kernel'

start = time.perf_counter()  # time.clock() was removed in Python 3.8
plt.figure(figsize=(10, 4), dpi=144)
plot_learning_curve(plt, SVC(C=1000, kernel='rbf', gamma=0.0008979591836734694),
                    title, X, y, ylim=(0.5, 1.01), cv=cv)

print('elapsed: {0:.6f}'.format(time.perf_counter()-start))

elapsed: 0.340826

Polynomial Kernel (poly)

A quick test shows it runs noticeably slower than the Gaussian kernel.

from sklearn.svm import SVC

clf = SVC(C=1.0, kernel='poly', degree=2)
clf.fit(X_train, y_train)
train_score = clf.score(X_train, y_train)
test_score = clf.score(X_test, y_test)
print('train score: {0}; test score: {1}'.format(train_score, test_score))

train score: 0.967032967032967; test score: 0.9473684210526315

The fit is quite good!

Plot the learning curves:

import time
from utils import plot_learning_curve
from sklearn.model_selection import ShuffleSplit

cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
title = 'Learning Curves with degree={0}'
degrees = [1, 2]

start = time.perf_counter()  # time.clock() was removed in Python 3.8
plt.figure(figsize=(12, 4), dpi=144)
for i in range(len(degrees)):
    plt.subplot(1, len(degrees), i + 1)
    plot_learning_curve(plt, SVC(C=1.0, kernel='poly', degree=degrees[i]),
                        title.format(degrees[i]), X, y, ylim=(0.8, 1.01), cv=cv, n_jobs=-1)

print('elapsed: {0:.6f}'.format(time.perf_counter()-start))

elapsed: 431.939271

The computational cost is very high!

The degree-1 polynomial kernel scores somewhat higher, but still falls short of the Gaussian kernel.

Polynomial LinearSVC

Differences between LinearSVC() and SVC(kernel='linear'):

  • LinearSVC() minimizes the squared hinge loss,
    while SVC(kernel='linear') minimizes the hinge loss;
  • LinearSVC() handles multiclass problems with one-vs-rest,
    while SVC(kernel='linear') uses one-vs-one;
  • LinearSVC() is implemented on top of liblinear,
    while SVC(kernel='linear') is implemented on top of libsvm;
  • LinearSVC() lets you choose the penalty and loss function,
    while SVC(kernel='linear') uses fixed defaults.

In short: on large-scale, linearly separable problems, LinearSVC is faster.
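The loss flexibility mentioned above is exposed through LinearSVC's loss parameter. A minimal sketch comparing the two losses on the built-in dataset (max_iter is raised here as an assumption, to avoid convergence warnings):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)

# Default loss: squared hinge.
clf_sq = LinearSVC(loss='squared_hinge', max_iter=10000).fit(X, y)
# Plain hinge loss, the objective SVC(kernel='linear') optimizes.
clf_hinge = LinearSVC(loss='hinge', max_iter=10000).fit(X, y)

print(clf_sq.score(X, y), clf_hinge.score(X, y))
```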

from sklearn.svm import LinearSVC
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline

def create_model(degree=2, **kwarg):
    polynomial_features = PolynomialFeatures(degree=degree, include_bias=False)
    scaler = MinMaxScaler()
    linear_svc = LinearSVC(**kwarg)
    pipeline = Pipeline([("polynomial_features", polynomial_features),
                         ("scaler", scaler),
                         ("linear_svc", linear_svc)])
    return pipeline

clf = create_model(penalty='l1', dual=False)
clf.fit(X_train, y_train)
train_score = clf.score(X_train, y_train)
test_score = clf.score(X_test, y_test)
print('train score: {0}; test score: {1}'.format(train_score, test_score))

train score: 0.9824175824175824; test score: 0.9649122807017544

import time
from utils import plot_learning_curve
from sklearn.model_selection import ShuffleSplit

cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
title = 'Learning Curves for LinearSVC with Degree={0}'
degrees = [1, 2]

start = time.perf_counter()  # time.clock() was removed in Python 3.8
plt.figure(figsize=(12, 4), dpi=144)
for i in range(len(degrees)):
    plt.subplot(1, len(degrees), i + 1)
    plot_learning_curve(plt, create_model(penalty='l1', dual=False, degree=degrees[i]),
                        title.format(degrees[i]), X, y, ylim=(0.8, 1.01), cv=cv)

print('elapsed: {0:.6f}'.format(time.perf_counter()-start))


The fit is slightly worse than the Gaussian kernel but better than the polynomial kernel, and crucially, it is much faster!

References:
common\utils
Chapter 3: plot_learning_curve (plotting learning curves)

Notes:

values.ravel() is required, otherwise:

C:\Python36\lib\site-packages\sklearn\utils\validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)

When reading mean_train_score and related entries from GridSearchCV results, return_train_score=True must be set, otherwise an error is raised.
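This can be verified directly: the training-score keys only appear in cv_results_ when the flag is set. A minimal check (the small C grid here is just for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# return_train_score=True adds 'mean_train_score' etc. to cv_results_.
grid = GridSearchCV(SVC(), {'C': [1, 10]}, cv=3, return_train_score=True)
grid.fit(X, y)

print('mean_train_score' in grid.cv_results_)  # True
```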

plot_param_curve()
train_scores_mean = cv_results['mean_train_score']
train_scores_std = cv_results['std_train_score']
test_scores_mean = cv_results['mean_test_score']
test_scores_std = cv_results['std_test_score']