
A Summary of the Normalization Methods in the Spark ML Package

The org.apache.spark.ml.feature package provides four different normalization (feature-scaling) methods:

  • Normalizer
  • StandardScaler
  • MinMaxScaler
  • MaxAbsScaler

They are easy to confuse, so this post summarizes them with the help of the official documentation and transformations of actual data.

0 Data Preparation

import org.apache.spark.ml.linalg.Vectors

val dataFrame = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 0.5, -1.0)),
  (1, Vectors.dense(2.0, 1.0, 1.0)),
  (2, Vectors.dense(4.0, 10.0, 2.0))
)).toDF("id", "features")

dataFrame.show

// Original data
+---+--------------+
| id|      features|
+---+--------------+
|  0|[1.0,0.5,-1.0]|
|  1| [2.0,1.0,1.0]|
|  2|[4.0,10.0,2.0]|
+---+--------------+

1 Normalizer

Normalizer operates on each row: it rescales every row vector to unit norm. The example code below comes from the official Spark documentation, with minor rewrites and added comments.

import org.apache.spark.ml.feature.Normalizer

// Normalize each row vector to unit L1 norm
val normalizer = new Normalizer()
  .setInputCol("features")
  .setOutputCol("normFeatures")
  .setP(1.0)

val l1NormData = normalizer.transform(dataFrame)
println("Normalized using L^1 norm")
l1NormData.show()

// Each row is rescaled so that its L1 norm equals 1; the L1 norm is the sum of the absolute values.
+---+--------------+------------------+
| id|      features|      normFeatures|
+---+--------------+------------------+
|  0|[1.0,0.5,-1.0]|    [0.4,0.2,-0.4]|
|  1| [2.0,1.0,1.0]|   [0.5,0.25,0.25]|
|  2|[4.0,10.0,2.0]|[0.25,0.625,0.125]|
+---+--------------+------------------+

// Normalize each row vector to unit L^inf norm
val lInfNormData = normalizer.transform(dataFrame, normalizer.p -> Double.PositiveInfinity)
println("Normalized using L^inf norm")
lInfNormData.show()

// The L^inf norm of a vector is the maximum absolute value of its entries.
+---+--------------+--------------+
| id|      features|  normFeatures|
+---+--------------+--------------+
|  0|[1.0,0.5,-1.0]|[1.0,0.5,-1.0]|
|  1| [2.0,1.0,1.0]| [1.0,0.5,0.5]|
|  2|[4.0,10.0,2.0]| [0.4,1.0,0.2]|
+---+--------------+--------------+
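The default for setP is 2.0; as a minimal sketch (my addition, not part of the original example), the same transformer can rescale each row to unit L2 (Euclidean) norm:

// Normalize each row vector to unit L2 (Euclidean) norm, the default setting.
val l2NormData = normalizer.transform(dataFrame, normalizer.p -> 2.0)
println("Normalized using L^2 norm")
l2NormData.show()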

2 StandardScaler

StandardScaler operates on each column, i.e. each feature dimension, standardizing features to unit standard deviation, to zero mean, or to both.
It has two main parameters:
- withStd: defaults to true. Scales the data to unit standard deviation.
- withMean: defaults to false. Whether to center the data to zero mean.

StandardScaler must first be fit to the data to obtain the mean and standard deviation of each dimension, which are then used to scale each feature.

import org.apache.spark.ml.feature.StandardScaler

val scaler = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)
  .setWithMean(false)

// Compute summary statistics by fitting the StandardScaler.
val scalerModel = scaler.fit(dataFrame)

// Normalize each feature to have unit standard deviation.
val scaledData = scalerModel.transform(dataFrame)
scaledData.show

// Each column is scaled to unit standard deviation.
+---+--------------+------------------------------------------------------------+
|id |features      |scaledFeatures                                              |
+---+--------------+------------------------------------------------------------+
|0  |[1.0,0.5,-1.0]|[0.6546536707079772,0.09352195295828244,-0.6546536707079771]|
|1  |[2.0,1.0,1.0] |[1.3093073414159544,0.1870439059165649,0.6546536707079771]  |
|2  |[4.0,10.0,2.0]|[2.618614682831909,1.870439059165649,1.3093073414159542]    |
+---+--------------+------------------------------------------------------------+
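The example above keeps withMean at its default value of false; below is a minimal sketch (my addition, not from the official docs example) of also centering each column to zero mean:

// Sketch: center each feature to zero mean in addition to scaling it to unit
// standard deviation. Note that centering does not preserve sparsity, so it is
// best used on dense features.
val centeringScaler = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)
  .setWithMean(true)

val centeredData = centeringScaler.fit(dataFrame).transform(dataFrame)
centeredData.show(false)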

3 MinMaxScaler

MinMaxScaler also operates on each column, i.e. each feature dimension, mapping each feature linearly to a specified range, typically [0, 1].
It likewise has two parameters:
- min: defaults to 0. The lower bound of the target range.
- max: defaults to 1. The upper bound of the target range.

import org.apache.spark.ml.feature.MinMaxScaler

val scaler = new MinMaxScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")

// Compute summary statistics and generate MinMaxScalerModel
val scalerModel = scaler.fit(dataFrame)

// rescale each feature to range [min, max].
val scaledData = scalerModel.transform(dataFrame)
println(s"Features scaled to range: [${scaler.getMin}, ${scaler.getMax}]")
scaledData.select("features", "scaledFeatures").show

// Each feature is mapped linearly: the column minimum maps to 0 and the column maximum maps to 1.
+--------------+-----------------------------------------------------------+
|features      |scaledFeatures                                             |
+--------------+-----------------------------------------------------------+
|[1.0,0.5,-1.0]|[0.0,0.0,0.0]                                              |
|[2.0,1.0,1.0] |[0.3333333333333333,0.05263157894736842,0.6666666666666666]|
|[4.0,10.0,2.0]|[1.0,1.0,1.0]                                              |
+--------------+-----------------------------------------------------------+
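Per the Spark documentation, each feature value e is transformed as Rescaled(e) = (e - E_min) / (E_max - E_min) * (max - min) + min. The target range can be changed with setMin/setMax; here is a minimal sketch (assumed range, not from the original post) mapping features to [-1, 1]:

// Sketch: rescale each feature to a custom range [-1, 1] instead of the default [0, 1].
val rangeScaler = new MinMaxScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setMin(-1.0)
  .setMax(1.0)

val rangeScaledData = rangeScaler.fit(dataFrame).transform(dataFrame)
rangeScaledData.select("features", "scaledFeatures").show(false)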

4 MaxAbsScaler

MaxAbsScaler transforms each feature dimension into the closed interval [-1, 1] by dividing by the maximum absolute value of that dimension. It does not shift the distribution, so it does not destroy the sparsity of the original feature vectors.

import org.apache.spark.ml.feature.MaxAbsScaler

val scaler = new MaxAbsScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")

// Compute summary statistics and generate MaxAbsScalerModel
val scalerModel = scaler.fit(dataFrame)

// rescale each feature to range [-1, 1]
val scaledData = scalerModel.transform(dataFrame)
scaledData.select("features", "scaledFeatures").show()

// The maximum absolute values per dimension are [4, 10, 2].
+--------------+----------------+                                               
|      features|  scaledFeatures|
+--------------+----------------+
|[1.0,0.5,-1.0]|[0.25,0.05,-0.5]|
| [2.0,1.0,1.0]|   [0.5,0.1,0.5]|
|[4.0,10.0,2.0]|   [1.0,1.0,1.0]|
+--------------+----------------+
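Because MaxAbsScaler only divides each dimension by a constant, zero entries stay zero. Here is a minimal sketch (hypothetical sparse data, not from the original post) illustrating this on sparse vectors:

// Sketch: MaxAbsScaler on sparse input; zeros remain zeros, so sparsity is preserved.
val sparseDF = spark.createDataFrame(Seq(
  (0, Vectors.sparse(3, Array(0), Array(2.0))),
  (1, Vectors.sparse(3, Array(1, 2), Array(-4.0, 1.0)))
)).toDF("id", "features")

val sparseScaled = new MaxAbsScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .fit(sparseDF)
  .transform(sparseDF)

sparseScaled.show(false)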

Summary

All four normalization methods are linear transformations. When a feature dimension has a strongly non-linear (e.g. heavily skewed) distribution, they need to be combined with other feature preprocessing methods, as in the sketch below.
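As one illustration of such a combination (my own sketch, using a hypothetical skewed numeric column named amount), a log transform applied with SQLTransformer can compress the long tail before a linear scaler is applied:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{MinMaxScaler, SQLTransformer, VectorAssembler}

// Sketch: compress a long-tailed column with log1p, assemble it into a vector,
// then apply a (linear) MinMaxScaler to the transformed value.
val logTransformer = new SQLTransformer()
  .setStatement("SELECT *, log1p(amount) AS logAmount FROM __THIS__")

val assembler = new VectorAssembler()
  .setInputCols(Array("logAmount"))
  .setOutputCol("features")

val minMaxScaler = new MinMaxScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")

val pipeline = new Pipeline()
  .setStages(Array(logTransformer, assembler, minMaxScaler))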