
Machine Learning: Improving Classification Performance with the AdaBoost Meta-Algorithm


The idea behind a meta-algorithm is to combine other algorithms in some way. The complete listing used in this post (a toy dataset, decision-stump training, AdaBoost training, ensemble classification, and ROC plotting) comes first; the discussion follows the code.

from numpy import *

def loadSimpData():
    datMat = matrix([[ 1. ,  2.1],
        [ 2. ,  1.1],
        [ 1.3,  1. ],
        [ 1. ,  1. ],
        [ 2. ,  1. ]])
    classLabels = [1.0, 1.0, -1.0, -1.0, 1.0]
    return datMat,classLabels

def loadDataSet(fileName):      # general function to parse tab-delimited floats
    numFeat = len(open(fileName).readline().split('\t'))  # get number of fields
    dataMat = []; labelMat = []
    fr = open(fileName)
    for line in fr.readlines():
        lineArr = []
        curLine = line.strip().split('\t')
        for i in range(numFeat - 1):
            lineArr.append(float(curLine[i]))
        dataMat.append(lineArr)
        labelMat.append(float(curLine[-1]))
    return dataMat, labelMat

def stumpClassify(dataMatrix, dimen, threshVal, threshIneq):  # just classify the data
    retArray = ones((shape(dataMatrix)[0], 1))
    if threshIneq == 'lt':
        retArray[dataMatrix[:, dimen] <= threshVal] = -1.0
    else:
        retArray[dataMatrix[:, dimen] > threshVal] = -1.0
    return retArray

def buildStump(dataArr, classLabels, D):
    dataMatrix = mat(dataArr); labelMat = mat(classLabels).T
    m, n = shape(dataMatrix)
    numSteps = 10.0; bestStump = {}; bestClasEst = mat(zeros((m, 1)))
    minError = inf  # init error sum, to +infinity
    for i in range(n):  # loop over all dimensions
        rangeMin = dataMatrix[:, i].min(); rangeMax = dataMatrix[:, i].max()
        stepSize = (rangeMax - rangeMin) / numSteps
        for j in range(-1, int(numSteps) + 1):  # loop over all range in current dimension
            for inequal in ['lt', 'gt']:  # go over less than and greater than
                threshVal = (rangeMin + float(j) * stepSize)
                predictedVals = stumpClassify(dataMatrix, i, threshVal, inequal)  # call stump classify with i, j, lessThan
                errArr = mat(ones((m, 1)))
                errArr[predictedVals == labelMat] = 0
                weightedError = D.T * errArr  # calc total error multiplied by D
                #print("split: dim %d, thresh %.2f, thresh inequal: %s, the weighted error is %.3f" % (i, threshVal, inequal, weightedError))
                if weightedError < minError:
                    minError = weightedError
                    bestClasEst = predictedVals.copy()
                    bestStump['dim'] = i
                    bestStump['thresh'] = threshVal
                    bestStump['ineq'] = inequal
    return bestStump, minError, bestClasEst

def adaBoostTrainDS(dataArr, classLabels, numIt=40):
    weakClassArr = []
    m = shape(dataArr)[0]
    D = mat(ones((m, 1)) / m)  # init D to all equal
    aggClassEst = mat(zeros((m, 1)))
    for i in range(numIt):
        bestStump, error, classEst = buildStump(dataArr, classLabels, D)  # build Stump
        #print("D:", D.T)
        alpha = float(0.5 * log((1.0 - error) / max(error, 1e-16)))  # calc alpha; max(error, eps) guards against error=0
        bestStump['alpha'] = alpha
        weakClassArr.append(bestStump)  # store Stump Params in Array
        #print("classEst: ", classEst.T)
        expon = multiply(-1 * alpha * mat(classLabels).T, classEst)  # exponent for D calc
        D = multiply(D, exp(expon))  # calc new D for next iteration
        D = D / D.sum()
        # calc training error of all classifiers; if this is 0, quit the loop early
        aggClassEst += alpha * classEst
        #print("aggClassEst: ", aggClassEst.T)
        aggErrors = multiply(sign(aggClassEst) != mat(classLabels).T, ones((m, 1)))
        errorRate = aggErrors.sum() / m
        print("total error: ", errorRate)
        if errorRate == 0.0:
            break
    return weakClassArr, aggClassEst

def adaClassify(datToClass, classifierArr):
    dataMatrix = mat(datToClass)  # do stuff similar to last aggClassEst in adaBoostTrainDS
    m = shape(dataMatrix)[0]
    aggClassEst = mat(zeros((m, 1)))
    for i in range(len(classifierArr)):
        classEst = stumpClassify(dataMatrix, classifierArr[i]['dim'],
                                 classifierArr[i]['thresh'],
                                 classifierArr[i]['ineq'])  # call stump classify
        aggClassEst += classifierArr[i]['alpha'] * classEst
        print(aggClassEst)
    return sign(aggClassEst)

def plotROC(predStrengths, classLabels):
    import matplotlib.pyplot as plt
    cur = (1.0, 1.0)  # cursor
    ySum = 0.0  # variable to calculate AUC
    numPosClas = sum(array(classLabels) == 1.0)
    yStep = 1 / float(numPosClas); xStep = 1 / float(len(classLabels) - numPosClas)
    sortedIndicies = predStrengths.argsort()  # get sorted index, it's reverse
    fig = plt.figure()
    fig.clf()
    ax = plt.subplot(111)
    # loop through all the values, drawing a line segment at each point
    for index in sortedIndicies.tolist()[0]:
        if classLabels[index] == 1.0:
            delX = 0; delY = yStep
        else:
            delX = xStep; delY = 0
            ySum += cur[1]
        # draw line from cur to (cur[0]-delX, cur[1]-delY)
        ax.plot([cur[0], cur[0] - delX], [cur[1], cur[1] - delY], c='b')
        cur = (cur[0] - delX, cur[1] - delY)
    ax.plot([0, 1], [0, 1], 'b--')
    plt.xlabel('False positive rate'); plt.ylabel('True positive rate')
    plt.title('ROC curve for AdaBoost horse colic detection system')
    ax.axis([0, 1, 0, 1])
    plt.show()
    print("the Area Under the Curve is: ", ySum * xStep)
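As a quick check of the listing above (a minimal usage sketch; the printed error rates and predictions depend on the training run), the toy dataset from loadSimpData can be used to train the ensemble and classify two new points:

# Minimal usage sketch: train AdaBoost on the toy dataset, then classify two new points.
datMat, classLabels = loadSimpData()
classifierArr, aggClassEst = adaBoostTrainDS(datMat, classLabels, numIt=30)
print(adaClassify([[5.0, 5.0], [0.0, 0.0]], classifierArr))   # expected to print something like [[ 1.] [-1.]]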

AdaBoost is the most popular meta-algorithm and one of the most powerful tools in machine learning.

The combination can mix different algorithms, ensemble the same algorithm under different settings, or assign different parts of a dataset to different classifiers and then combine their outputs.

Advantages: low generalization error, easy to code, works with most classifiers, no parameters to adjust.

Disadvantage: sensitive to outliers.

Works with: numeric values and nominal values.

Bagging is a technique that builds S new datasets by sampling from the original dataset S times. Each new dataset is the same size as the original and is formed by randomly drawing examples from the original with replacement, so the same example can be picked more than once while other examples may never appear.

Once the S datasets are built, a learning algorithm is applied to each one, producing S classifiers. To classify a new instance, all S classifiers are applied and the class that receives the most votes is taken as the final prediction.
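As an illustration of this idea (a minimal sketch, not part of the listing above: baggingTrain and baggingClassify are hypothetical helpers that reuse buildStump and stumpClassify from the code above as an example base learner), bootstrap sampling plus majority voting can be written roughly as:

import random
from numpy import mat, ones, shape, sign, zeros

def baggingTrain(dataArr, classLabels, S=10):
    # Hypothetical helper: build S decision stumps, each trained on a
    # bootstrap sample drawn with replacement from the original dataset.
    dataMatrix = mat(dataArr)
    m = shape(dataMatrix)[0]
    stumps = []
    for _ in range(S):
        idx = [random.randrange(m) for _ in range(m)]   # sampling with replacement
        sampleData = dataMatrix[idx, :]
        sampleLabels = [classLabels[i] for i in idx]
        D = mat(ones((m, 1)) / m)                       # equal weights: plain bagging, no boosting
        stump, _, _ = buildStump(sampleData, sampleLabels, D)
        stumps.append(stump)
    return stumps

def baggingClassify(datToClass, stumps):
    # Hypothetical helper: majority vote -- sum the +1/-1 votes of all S stumps and take the sign.
    dataMatrix = mat(datToClass)
    votes = mat(zeros((shape(dataMatrix)[0], 1)))
    for stump in stumps:
        votes += stumpClassify(dataMatrix, stump['dim'], stump['thresh'], stump['ineq'])
    return sign(votes)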

A more advanced bagging method is the random forest.

Boosting is a technique similar to bagging, but whereas the classifiers in bagging are trained independently, the classifiers in boosting are trained sequentially: each new classifier concentrates on the data that the classifiers already built have misclassified.

The output of boosting is a weighted sum of the results of all the classifiers. In bagging the weights are equal; in boosting they differ, and each weight reflects how successful the corresponding classifier was in the previous iteration.

AdaBoost is one such boosting method.

The AdaBoost algorithm can be summarized in three steps:
(1) First, initialize the weight distribution D1 over the training data. With N training examples, every example initially receives the same weight w1 = 1/N.
(2) Then, train a weak classifier hi. During training, if a training example is classified correctly by hi, its weight is decreased when the next training set is constructed; conversely, if an example is misclassified, its weight is increased. The reweighted sample set is used to train the next classifier, and the training process iterates in this way.
(3) Finally, combine the trained weak classifiers into a strong classifier. After all the weak classifiers have been trained, those with low classification error are given larger weights so that they play a larger role in the final classification function, while those with high classification error are given smaller weights and play a smaller role.
In other words, weak classifiers with lower error rates carry larger weights in the final classifier, and vice versa (the exact formulas are written out below).
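Written out explicitly (these are the standard AdaBoost formulas and are consistent with the adaBoostTrainDS code above; \varepsilon_t is the weighted error of the t-th weak classifier, y_i \in \{-1, +1\} the true label of example i, h_t(x_i) the weak classifier's prediction, and Z_t a normalizer that keeps the weights summing to 1):

\alpha_t = \frac{1}{2}\ln\frac{1-\varepsilon_t}{\varepsilon_t},
\qquad
D_i^{(t+1)} = \frac{D_i^{(t)}\, e^{-\alpha_t\, y_i\, h_t(x_i)}}{Z_t},
\qquad
H(x) = \operatorname{sign}\Big(\sum_{t}\alpha_t\, h_t(x)\Big)

Since y_i h_t(x_i) is +1 for a correct prediction and -1 for a mistake, the update shrinks the weights of correctly classified examples and grows the weights of misclassified ones, exactly as described in step (2).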
