
Using learning curves to check a model for underfitting & overfitting

import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit

def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    # Compute training and cross-validation scores for each training-set size
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()

    # Shade one standard deviation around each mean curve
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1, color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")

    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    return plt

Example (employee attrition prediction with logistic regression):

title: the name of the figure.

cv: defaults to cv=None; if you want to pass one, the accepted forms are:

cv : int, cross-validation generator, or an iterable, optional. Determines the cross-validation splitting strategy. Possible inputs for cv are (see the sketch after this list):
- None, to use the default 3-fold cross-validation,
- an integer, to specify the number of folds,
- an object to be used as a cross-validation generator,
- an iterable yielding train/test splits.
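As a minimal sketch of the three non-None forms (X_demo is a hypothetical toy array, not the attrition data):

import numpy as np
from sklearn.model_selection import KFold

X_demo = np.arange(20).reshape(10, 2)                  # hypothetical toy feature matrix
cv = 5                                                 # integer: number of folds
cv = KFold(n_splits=5, shuffle=True, random_state=0)   # a cross-validation generator object
cv = list(KFold(n_splits=5).split(X_demo))             # an iterable of (train, test) index arrays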

ShuffleSplit: here we set cv to the ShuffleSplit strategy. It draws n_splits random train/test splits (10 in the example below), each time holding out 20% of the samples as the test set. What it returns are the row indices of each train/test split, so you can tell exactly which samples are train and which are test.
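A small sketch of those index arrays (again on a hypothetical toy array, not the attrition data):

import numpy as np
from sklearn.model_selection import ShuffleSplit

X_demo = np.arange(20).reshape(10, 2)
ss = ShuffleSplit(n_splits=3, test_size=0.2, random_state=0)
for train_idx, test_idx in ss.split(X_demo):
    # each iteration yields the row indices of one train/test split;
    # with 10 rows and test_size=0.2, the test set holds 2 rows
    print("train:", train_idx, "test:", test_idx)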

ylim: tuple, shape (ymin, ymax), optional. Defines the minimum and maximum y values to plot; here it is (0.7, 1.01).

n_jobs : integer, optional. Number of jobs to run in parallel (default 1). On Windows, parallel execution requires the call to be placed under an if __name__ == "__main__": guard, as sketched below.
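A hedged sketch of that guard (assuming estimator, title, X, y and cv are defined as in the example below):

if __name__ == "__main__":
    # On Windows, joblib starts worker processes by re-importing this module,
    # so any call with n_jobs > 1 must live under this guard.
    plot_learning_curve(estimator, title, X, y, (0.7, 1.01), cv=cv, n_jobs=4)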

title = 'Learning Curves'
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
estimator = LR     # LR: the logistic regression model built earlier
plot_learning_curve(estimator, title, X, y, (0.7, 1.01), cv=cv, n_jobs=1)
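The snippet above assumes LR, X and y come from the employee-attrition notebook, which is not shown here. As a self-contained sketch you can run directly (synthetic data stands in for the attrition dataset):

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit

# Synthetic stand-in for the attrition features and labels
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

title = 'Learning Curves'
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
estimator = LogisticRegression(max_iter=1000)

plot_learning_curve(estimator, title, X, y, (0.7, 1.01), cv=cv, n_jobs=1)
plt.show()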

[Figure: learning curve plot, training score vs. cross-validation score]