
Bobo's Machine Learning Notes: Data Normalization
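Both techniques rescale each feature independently. Min-max normalization maps a feature onto [0, 1], while z-score standardization centers it at mean 0 with standard deviation 1:

$$x_{\text{scaled}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}, \qquad x_{\text{std}} = \frac{x - \mu}{\sigma}$$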

 

Implementing the algorithms:


import numpy as np


def normalizate_max_min(X):
    """
    Min-max normalization: rescales each feature so its values fall in [0, 1].
    :param X: vector or matrix
    :return: the normalized array
    """
    X = np.asarray(X, dtype=float)
    if len(X.shape) == 1:
        normalizate_array = (X - np.min(X)) / (np.max(X) - np.min(X))
    else:
        normalizate_array = np.zeros(X.shape)
        for column in range(X.shape[1]):
            col = X[:, column]
            normalizate_array[:, column] = (col - np.min(col)) / (np.max(col) - np.min(col))

    return normalizate_array


def standardization(X):
    """
    Z-score standardization: each feature ends up with mean 0 and variance 1
    (most values fall roughly within [-1.5, 1.5]).
    :param X: vector or matrix
    :return: the standardized array
    """
    X = np.asarray(X, dtype=float)
    if len(X.shape) == 1:
        return (X - np.mean(X)) / np.std(X)
    else:
        dt = np.zeros(X.shape)
        for i in range(X.shape[1]):
            dt[:, i] = (X[:, i] - np.mean(X[:, i])) / np.std(X[:, i])

        return dt
import matplotlib.pyplot as plt

Y = np.random.randint(1, 10, (20, 5))
nml_data = normalizate_max_min(Y)
std_data = standardization(Y)

plt.scatter(Y[:, 0], Y[:, 1], color='green', label='Original Data')
plt.scatter(nml_data[:, 0], nml_data[:, 1], color='blue', marker='+', label='Min-Max Normalization')
plt.scatter(std_data[:, 0], std_data[:, 1], color='red', label='Standardization')
plt.legend()
plt.show()

The resulting scatter plot shows the original test data alongside its min-max normalized and standardized versions.
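As a quick sanity check on the properties mentioned in the docstrings, a few summary statistics of the transformed data can also be printed (a minimal sketch, reusing Y, nml_data, and std_data from above):

print(nml_data.min(axis=0), nml_data.max(axis=0))  # each column spans [0, 1]
print(std_data.mean(axis=0).round(6))               # per-column mean, ~0
print(std_data.std(axis=0).round(6))                # per-column std, ~1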

The sklearn implementations are described in detail at the link below:

http://cwiki.apachecn.org/pages/viewpage.action?pageId=10814134

The relevant module is called preprocessing; it implements the various scalers.

For example, the two snippets below:

from sklearn.preprocessing import MinMaxScaler, StandardScaler, scale

max_min_scaler = MinMaxScaler()
x_train_data = max_min_scaler.fit_transform(Y)
print(x_train_data)

standard_scaler = StandardScaler()
x_train_data = standard_scaler.fit_transform(Y)
print(x_train_data)
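The scale function imported above is the functional counterpart of StandardScaler, and minmax_scale plays the same role for MinMaxScaler; they are convenient for one-off transforms where no fitted scaler needs to be kept around. A minimal sketch, reusing the same matrix Y:

from sklearn.preprocessing import scale, minmax_scale

print(scale(Y))         # same result as StandardScaler().fit_transform(Y)
print(minmax_scale(Y))  # same result as MinMaxScaler().fit_transform(Y)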

Testing with the KNN algorithm:

from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target)

standardscaler = StandardScaler()
standardscaler.fit(X_train)

X_train_standard = standardscaler.transform(X_train)
X_test_standard = standardscaler.transform(X_test)

knn_clf = KNeighborsClassifier(n_neighbors=3)
knn_clf.fit(X_train_standard, y_train)

print(knn_clf.predict(X_test_standard))
print(knn_clf.score(X_test_standard, y_test))
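For contrast, the same classifier can be trained on the raw, unscaled features; on the iris data the gap is often small because all four features are measured in the same unit, but the comparison shows where scaling fits in the workflow. A minimal sketch, reusing the split from above:

# Baseline: identical KNN model, but trained on the unscaled features.
knn_raw = KNeighborsClassifier(n_neighbors=3)
knn_raw.fit(X_train, y_train)
print(knn_raw.score(X_test, y_test))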