
Learn: Overfitting and Underfitting, and one mitigation: setting max_leaf_nodes in a decision-tree regressor

Overfitting and Underfitting

Take a decision tree as an example.
The dataset is partitioned among the leaves. If the tree is too shallow, say the dataset is split into only 2 groups (a very coarse partition), each group inevitably contains a great many houses. If the tree is very deep, say the dataset is split into 1024 groups (a very fine partition), there are many leaves and very few houses in each leaf.
In short: a tree that is too deep tends to overfit;
a tree that is too shallow tends to underfit.

Overfitting: capturing spurious patterns that won't recur in the future, leading to less accurate predictions.
Underfitting: failing to capture relevant patterns, again leading to less accurate predictions.

In other words, an overfit model has captured "spurious patterns" that will not appear in future data: it fits the training data almost perfectly, but its predictions on the validation set and on new data are very poor.
An underfit model fails to capture the important distinctions and patterns in the data, so it performs badly even on the training data. Since what we care about is accuracy on new data, we measure performance on the validation set and look for the sweet spot between underfitting and overfitting.
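The gap between training error and validation error makes the sweet spot visible. Here is a minimal sketch on synthetic data (hypothetical, not the Melbourne dataset) that compares a shallow, a moderate, and an unlimited-depth tree: the unlimited tree memorizes the training set (near-zero training MAE) yet does worse on validation than the moderate tree, while the one-split tree underfits both.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic noisy data, just to illustrate the effect
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=200)

train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0)

results = {}
for depth in [1, 5, None]:  # shallow / moderate / unlimited depth
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    model.fit(train_X, train_y)
    # record (training MAE, validation MAE) for each depth
    results[depth] = (
        mean_absolute_error(train_y, model.predict(train_X)),
        mean_absolute_error(val_y, model.predict(val_X)),
    )
    print(depth, results[depth])
```

Running this, the unlimited-depth tree's training MAE is essentially zero while its validation MAE stays substantial, and the one-split tree's validation MAE is worse than the moderate tree's, which is the pattern the definitions above describe.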

One mitigation: setting max_leaf_nodes in a decision-tree regressor

For example, in a decision tree we can cap the number of leaves. scikit-learn's DecisionTreeRegressor class accepts a max_leaf_nodes argument.
Try different max_leaf_nodes values, observe the validation error for each, and pick the best one.

import pandas as pd
from sklearn.model_selection import train_test_split

# read csv
melbourne_data = pd.read_csv(r'G:\kaggle\melb_data.csv')
# drop rows containing NaN (pandas DataFrame.dropna)
filtered_melbourne_data = melbourne_data.dropna(axis=0)
# target
y = filtered_melbourne_data.Price
# choosing features
melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'BuildingArea',
                      'YearBuilt', 'Lattitude', 'Longtitude']
X = filtered_melbourne_data[melbourne_features]
# split data into two pieces: training data and validation data
# with random_state fixed at 0, re-running the experiment with the other
# arguments unchanged yields the same split every time (reproducible)
train_X, val_X, train_y, val_y = train_test_split(X, y, test_size=0.33, random_state=0)
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

#compare MAE scores from different values for max_leaf_nodes
def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y ):
    model= DecisionTreeRegressor(max_leaf_nodes= max_leaf_nodes, random_state=1)
    model.fit(train_X, train_y)
    val_prediction_y= model.predict(val_X)
    mae= mean_absolute_error(val_y, val_prediction_y)
    return mae
# compare MAE with differing values of max_leaf_nodes
for max_leaf_nodes in [5, 50, 500, 5000]:
    my_mae = get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y)
    print(f"When max_leaf_nodes is {max_leaf_nodes}, MAE is {my_mae}")
When max_leaf_nodes is 5, MAE is 345357.4675454862
When max_leaf_nodes is 50, MAE is 258991.08393549395
When max_leaf_nodes is 500, MAE is 244241.84628754357
When max_leaf_nodes is 5000, MAE is 253299.98630806847

As the results show, 500 is the best of the tried values for max_leaf_nodes: it yields the lowest validation MAE.
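A natural follow-up is to pick the best value programmatically with min() and then refit on all of the data, so the final model sees every sample before being used. The sketch below is self-contained on hypothetical synthetic data (the post itself uses melb_data.csv, which is not bundled here); the pattern is the point, not the numbers.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical stand-in data; any numeric X / y works the same way
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(500, 2))
y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=1.0, size=500)

train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0)

def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y):
    model = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=1)
    model.fit(train_X, train_y)
    return mean_absolute_error(val_y, model.predict(val_X))

# score every candidate and keep the one with the smallest validation MAE
candidates = [5, 50, 500, 5000]
scores = {n: get_mae(n, train_X, val_X, train_y, val_y) for n in candidates}
best_leaf_size = min(scores, key=scores.get)

# refit on ALL the data (training + validation) with the tuned setting
final_model = DecisionTreeRegressor(max_leaf_nodes=best_leaf_size, random_state=1)
final_model.fit(X, y)
print(best_leaf_size)
```

The dictionary-plus-min() idiom replaces eyeballing the printed MAEs, and refitting on the full dataset is the usual last step once the hyperparameter has been chosen on the validation split.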