
Text Classification in Practice (4): Bi-LSTM Model

1 Overview

  This text classification series will consist of roughly ten articles, covering text classification based on word2vec pre-trained word vectors as well as text classification based on the latest pre-trained models (ELMo, BERT, etc.). The series covers:

  word2vec pre-trained word vectors

  textCNN model

  charCNN model

  Bi-LSTM model

  Bi-LSTM + Attention model

  RCNN model

  Adversarial LSTM model

  Transformer model

  ELMo pre-trained model

  BERT pre-trained model

  All of the code is available in the textClassifier repository. If you find it helpful, please give it a star.

 

2 Dataset

  The dataset consists of IMDB movie reviews. There are three data files in the /data/rawData directory: unlabeledTrainData.tsv, labeledTrainData.tsv, and testData.tsv. Text classification requires labeled data, so only labeledTrainData is used. The data is preprocessed in the same way as in Text Classification in Practice (1): word2vec pre-trained word vectors, and the preprocessed file is /data/preprocess/labeledTrain.csv.

 

3 Bi-LSTM Model Structure

  Bi-LSTM, i.e. bidirectional LSTM, captures the contextual information in a sentence better than a unidirectional LSTM. For an introduction to LSTM, see this article. In this implementation a two-layer Bi-LSTM is used for text classification.
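  As a rough, minimal sketch of the idea (TensorFlow 1.x style, with made-up shapes and names that are not part of the model defined below), a single bidirectional layer runs one LSTM forward and one backward over the sequence and concatenates their outputs, so every position sees both left and right context:

# minimal single-layer Bi-LSTM sketch (illustrative only)
import tensorflow as tf

# hypothetical embedded inputs: [batch_size, max_time, embedding_size]
embedded = tf.placeholder(tf.float32, [None, 200, 200])

fwCell = tf.nn.rnn_cell.LSTMCell(num_units=256)
bwCell = tf.nn.rnn_cell.LSTMCell(num_units=256)

# outputs is a tuple (output_fw, output_bw), each of shape [batch_size, max_time, 256]
outputs, states = tf.nn.bidirectional_dynamic_rnn(fwCell, bwCell, embedded, dtype=tf.float32)

# concatenate along the last axis: [batch_size, max_time, 512]
biOutput = tf.concat(outputs, 2)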

 

4 Configuration Parameters

import os
import csv
import time
import datetime
import random
import json

import warnings
from collections import Counter
from math import sqrt

import gensim
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score, accuracy_score, precision_score, recall_score

warnings.filterwarnings("ignore")
# Configuration parameters

class TrainingConfig(object):
    epoches = 6
    evaluateEvery = 100
    checkpointEvery = 100
    learningRate = 0.001
    
class ModelConfig(object):
    embeddingSize = 200
    
    hiddenSizes = [256, 128]  # number of units in each layer of the two-layer LSTM
    
    dropoutKeepProb = 0.5
    l2RegLambda = 0.0
    
class Config(object):
    sequenceLength = 200  # roughly the mean of all sequence lengths
    batchSize = 128
    
    dataSource = "../data/preProcess/labeledTrain.csv"
    
    stopWordSource = "../data/english"
    
    numClasses = 2
    
    rate = 0.8  # proportion of the data used for training
    
    training = TrainingConfig()
    
    model = ModelConfig()

    
# Instantiate the configuration object
config = Config()

 

5 Generating the Training Data

  1) Load the data, split each sentence into word tokens, and remove low-frequency words and stop words.

  2) Map words to indices and build a word-to-index mapping table, which is saved in JSON format so it can be reused at inference time. (Note that some words may not appear in the pre-trained word2vec vectors; such words are simply represented as UNK.)

  3) Read the word vectors from the pre-trained word2vec model and feed them into the model as the initial embedding values.

  4) Split the dataset into a training set and an evaluation set.

# Data preprocessing class: generates the training and evaluation sets

class Dataset(object):
    def __init__(self, config):
        self._dataSource = config.dataSource
        self._stopWordSource = config.stopWordSource  
        
        self._sequenceLength = config.sequenceLength  # every input sequence is padded/truncated to this fixed length
        self._embeddingSize = config.model.embeddingSize
        self._batchSize = config.batchSize
        self._rate = config.rate
        
        self._stopWordDict = {}
        
        self.trainReviews = []
        self.trainLabels = []
        
        self.evalReviews = []
        self.evalLabels = []
        
        self.wordEmbedding = None
        
        self._wordToIndex = {}
        self._indexToWord = {}
        
    def _readData(self, filePath):
        """
        Read the dataset from a csv file
        """
        
        df = pd.read_csv(filePath)
        labels = df["sentiment"].tolist()
        review = df["review"].tolist()
        reviews = [line.strip().split() for line in review]

        return reviews, labels

    def _reviewProcess(self, review, sequenceLength, wordToIndex):
        """
        Represent each review in the dataset as a sequence of indices;
        in wordToIndex, "pad" maps to index 0
        """
        
        reviewVec = np.zeros((sequenceLength))
        sequenceLen = sequenceLength
        
        # check whether the current sequence is shorter than the fixed sequence length
        if len(review) < sequenceLength:
            sequenceLen = len(review)
            
        for i in range(sequenceLen):
            if review[i] in wordToIndex:
                reviewVec[i] = wordToIndex[review[i]]
            else:
                reviewVec[i] = wordToIndex["UNK"]

        return reviewVec

    def _genTrainEvalData(self, x, y, rate):
        """
        Generate the training and evaluation sets
        """
        
        reviews = []
        labels = []
        
        # iterate over all texts and convert the words to their index representation
        for i in range(len(x)):
            reviewVec = self._reviewProcess(x[i], self._sequenceLength, self._wordToIndex)
            reviews.append(reviewVec)
            
            labels.append([y[i]])
            
        trainIndex = int(len(x) * rate)
        
        trainReviews = np.asarray(reviews[:trainIndex], dtype="int64")
        trainLabels = np.array(labels[:trainIndex], dtype="float32")
        
        evalReviews = np.asarray(reviews[trainIndex:], dtype="int64")
        evalLabels = np.array(labels[trainIndex:], dtype="float32")

        return trainReviews, trainLabels, evalReviews, evalLabels
        
    def _genVocabulary(self, reviews):
        """
        Generate the word embedding matrix and the word-index mapping dictionaries; the full dataset can be used here
        """
        
        allWords = [word for review in reviews for word in review]
        
        # remove stop words
        subWords = [word for word in allWords if word not in self._stopWordDict]
        
        wordCount = Counter(subWords)  # count word frequencies
        sortWordCount = sorted(wordCount.items(), key=lambda x: x[1], reverse=True)
        
        # remove low-frequency words
        words = [item[0] for item in sortWordCount if item[1] >= 5]
        
        vocab, wordEmbedding = self._getWordEmbedding(words)
        self.wordEmbedding = wordEmbedding
        
        self._wordToIndex = dict(zip(vocab, list(range(len(vocab)))))
        self._indexToWord = dict(zip(list(range(len(vocab))), vocab))
        
        # save the word-index mappings as JSON so they can be loaded directly at inference time
        with open("../data/wordJson/wordToIndex.json", "w", encoding="utf-8") as f:
            json.dump(self._wordToIndex, f)
        
        with open("../data/wordJson/indexToWord.json", "w", encoding="utf-8") as f:
            json.dump(self._indexToWord, f)
            
    def _getWordEmbedding(self, words):
        """
        Look up the pre-trained word2vec vectors for the words in our dataset
        """
        
        wordVec = gensim.models.KeyedVectors.load_word2vec_format("../word2vec/word2Vec.bin", binary=True)
        vocab = []
        wordEmbedding = []
        
        # 新增 "pad" 和 "UNK", 
        vocab.append("pad")
        vocab.append("UNK")
        wordEmbedding.append(np.random.randn(self._embeddingSize))
        wordEmbedding.append(np.random.randn(self._embeddingSize))
        
        for word in words:
            try:
                vector = wordVec[word]
                vocab.append(word)
                wordEmbedding.append(vector)
            except KeyError:
                print(word + " is not in the pre-trained word vectors")
                
        return vocab, np.array(wordEmbedding)
    
    def _readStopWord(self, stopWordPath):
        """
        Read the stop words
        """
        
        with open(stopWordPath, "r") as f:
            stopWords = f.read()
            stopWordList = stopWords.splitlines()
            # store the stop words in a dict so that lookups are fast
            self._stopWordDict = dict(zip(stopWordList, list(range(len(stopWordList)))))
            
    def dataGen(self):
        """
        Initialize the training and evaluation sets
        """
        
        # initialize the stop words
        self._readStopWord(self._stopWordSource)
        
        # load the dataset
        reviews, labels = self._readData(self._dataSource)
        
        # build the word-index mappings and the word embedding matrix
        self._genVocabulary(reviews)
        
        # generate the training and evaluation sets
        trainReviews, trainLabels, evalReviews, evalLabels = self._genTrainEvalData(reviews, labels, self._rate)
        self.trainReviews = trainReviews
        self.trainLabels = trainLabels
        
        self.evalReviews = evalReviews
        self.evalLabels = evalLabels
        
        
data = Dataset(config)
data.dataGen()

 

6 Generating Batch Data

  Batch data is fed to the model with a generator (using a generator avoids loading all of the data into memory at once).

# output batch data

def nextBatch(x, y, batchSize):
    """
    Generate batch data and yield it from a generator
    """
    # shuffle the data
    perm = np.arange(len(x))
    np.random.shuffle(perm)
    x = x[perm]
    y = y[perm]

    numBatches = len(x) // batchSize

    for i in range(numBatches):
        start = i * batchSize
        end = start + batchSize
        batchX = np.array(x[start: end], dtype="int64")
        batchY = np.array(y[start: end], dtype="float32")

        yield batchX, batchY
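
  As a quick sanity check, the generator can be driven with small toy arrays (the arrays below are made up purely for illustration):

xToy = np.arange(1000).reshape(500, 2)      # 500 fake "reviews" of length 2
yToy = np.zeros((500, 1), dtype="float32")

for batchX, batchY in nextBatch(xToy, yToy, batchSize=128):
    print(batchX.shape, batchY.shape)       # (128, 2) (128, 1), 3 batches in total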

 

7 Bi-LSTM Model

# Build the model
class BiLSTM(object):
    """
    Bi-LSTM for text classification
    """
    def __init__(self, config, wordEmbedding):

        # define the model inputs
        self.inputX = tf.placeholder(tf.int32, [None, config.sequenceLength], name="inputX")
        self.inputY = tf.placeholder(tf.float32, [None, 1], name="inputY")
        
        self.dropoutKeepProb = tf.placeholder(tf.float32, name="dropoutKeepProb")
        
        # define the l2 loss
        l2Loss = tf.constant(0.0)
        
        # word embedding layer
        with tf.name_scope("embedding"):

            # initialize the embedding matrix with the pre-trained word vectors
            self.W = tf.Variable(tf.cast(wordEmbedding, dtype=tf.float32, name="word2vec"), name="W")
            # look up the embedding for each input word index; shape [batch_size, sequence_length, embedding_size]
            self.embeddedWords = tf.nn.embedding_lookup(self.W, self.inputX)
            
        # define the two-layer bidirectional LSTM structure
        with tf.name_scope("Bi-LSTM"):
            fwHiddenLayers = []
            bwHiddenLayers = []
            for idx, hiddenSize in enumerate(config.model.hiddenSizes):
                with tf.name_scope("Bi-LSTM" + str(idx)):
                    # forward LSTM cell
                    lstmFwCell = tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.LSTMCell(num_units=hiddenSize, state_is_tuple=True),
                                                                 output_keep_prob=self.dropoutKeepProb)
                    # backward LSTM cell
                    lstmBwCell = tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.LSTMCell(num_units=hiddenSize, state_is_tuple=True),
                                                                 output_keep_prob=self.dropoutKeepProb)

                fwHiddenLayers.append(lstmFwCell)
                bwHiddenLayers.append(lstmBwCell)

            # stack the cells into a multi-layer LSTM; with state_is_tuple=True each layer's state is kept as an LSTMStateTuple (c, h), otherwise the states are concatenated along the columns
            fwMultiLstm = tf.nn.rnn_cell.MultiRNNCell(cells=fwHiddenLayers, state_is_tuple=True)
            bwMultiLstm = tf.nn.rnn_cell.MultiRNNCell(cells=bwHiddenLayers, state_is_tuple=True)

            # use dynamic rnn, which accepts variable-length sequences; if no lengths are given, the full sequence length is used
            # outputs is a tuple (output_fw, output_bw); each element has shape [batch_size, max_time, hidden_size], with the same hidden_size for fw and bw
            # self.current_state is the final state, a tuple (state_fw, state_bw); each is a per-layer tuple of LSTMStateTuple(c, h)
            outputs, self.current_state = tf.nn.bidirectional_dynamic_rnn(fwMultiLstm, bwMultiLstm, self.embeddedWords, dtype=tf.float32)
        
        # concatenate the fw and bw outputs: [batch_size, time_step, hidden_size * 2]
        concatedOutput = tf.concat(outputs, 2)
        
        # take the output at the last time step as the input to the fully connected layer
        finalOutput = concatedOutput[:, -1, :]
        
        outputSize = config.model.hiddenSizes[-1] * 2  # the final output concatenates fw and bw, so multiply the hidden size by 2
        output = tf.reshape(finalOutput, [-1, outputSize])  # reshape to the input dimension of the fully connected layer
        
        # fully connected output layer
        with tf.name_scope("output"):
            outputW = tf.get_variable(
                "outputW",
                shape=[outputSize, 1],
                initializer=tf.contrib.layers.xavier_initializer())
            
            outputB = tf.Variable(tf.constant(0.1, shape=[1]), name="outputB")
            l2Loss += tf.nn.l2_loss(outputW)
            l2Loss += tf.nn.l2_loss(outputB)
            self.predictions = tf.nn.xw_plus_b(output, outputW, outputB, name="predictions")
            # self.predictions are logits, so apply a sigmoid before thresholding at 0.5
            self.binaryPreds = tf.cast(tf.greater_equal(tf.sigmoid(self.predictions), 0.5), tf.float32, name="binaryPreds")
        
        # compute the binary cross-entropy loss
        with tf.name_scope("loss"):
            
            losses = tf.nn.sigmoid_cross_entropy_with_logits(logits=self.predictions, labels=self.inputY)
            self.loss = tf.reduce_mean(losses) + config.model.l2RegLambda * l2Loss

 

8 Defining the Metrics Functions

# performance metric functions

def mean(item):
    return sum(item) / len(item)


def genMetrics(trueY, predY, binaryPredY):
    """
    Compute accuracy, auc, precision and recall
    """
    auc = roc_auc_score(trueY, predY)
    accuracy = accuracy_score(trueY, binaryPredY)
    precision = precision_score(trueY, binaryPredY)
    recall = recall_score(trueY, binaryPredY)
    
    return round(accuracy, 4), round(auc, 4), round(precision, 4), round(recall, 4)
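
  For example, on a tiny made-up batch of logits (values chosen only for illustration):

trueY = np.array([1, 0, 1, 1])
predY = np.array([2.3, -1.2, 0.4, 1.8])                        # raw logits from the model
binaryPredY = (1 / (1 + np.exp(-predY)) >= 0.5).astype("float32")

print(genMetrics(trueY, predY, binaryPredY))                   # (1.0, 1.0, 1.0, 1.0)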

 

9 Training the Model

  During training we write TensorBoard summaries and save the model in two different ways.

# Train the model

# get the training and evaluation sets
trainReviews = data.trainReviews
trainLabels = data.trainLabels
evalReviews = data.evalReviews
evalLabels = data.evalLabels

wordEmbedding = data.wordEmbedding

# define the computation graph
with tf.Graph().as_default():

    session_conf = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)
    session_conf.gpu_options.allow_growth=True
    session_conf.gpu_options.per_process_gpu_memory_fraction = 0.9  # limit the fraction of GPU memory used

    sess = tf.Session(config=session_conf)
    
    # define the session
    with sess.as_default():
        lstm = BiLSTM(config, wordEmbedding)
        
        globalStep = tf.Variable(0, name="globalStep", trainable=False)
        # define the optimizer and pass in the learning rate
        optimizer = tf.train.AdamOptimizer(config.training.learningRate)
        # compute the gradients, returning (gradient, variable) pairs
        gradsAndVars = optimizer.compute_gradients(lstm.loss)
        # apply the gradients to the variables to create the training op
        trainOp = optimizer.apply_gradients(gradsAndVars, global_step=globalStep)
        
        # write summaries for TensorBoard
        gradSummaries = []
        for g, v in gradsAndVars:
            if g is not None:
                tf.summary.histogram("{}/grad/hist".format(v.name), g)
                tf.summary.scalar("{}/grad/sparsity".format(v.name), tf.nn.zero_fraction(g))
        
        outDir = os.path.abspath(os.path.join(os.path.curdir, "summarys"))
        print("Writing to {}\n".format(outDir))
        
        lossSummary = tf.summary.scalar("loss", lstm.loss)
        summaryOp = tf.summary.merge_all()
        
        trainSummaryDir = os.path.join(outDir, "train")
        trainSummaryWriter = tf.summary.FileWriter(trainSummaryDir, sess.graph)
        
        evalSummaryDir = os.path.join(outDir, "eval")
        evalSummaryWriter = tf.summary.FileWriter(evalSummaryDir, sess.graph)
        
        
        # define the saver, keeping at most the 5 most recent checkpoints
        saver = tf.train.Saver(tf.global_variables(), max_to_keep=5)
        
        # one way to save the model: export a SavedModel (pb format)
        builder = tf.saved_model.builder.SavedModelBuilder("../model/Bi-LSTM/savedModel")
        sess.run(tf.global_variables_initializer())

        def trainStep(batchX, batchY):
            """
            Training step
            """   
            feed_dict = {
              lstm.inputX: batchX,
              lstm.inputY: batchY,
              lstm.dropoutKeepProb: config.model.dropoutKeepProb
            }
            _, summary, step, loss, predictions, binaryPreds = sess.run(
                [trainOp, summaryOp, globalStep, lstm.loss, lstm.predictions, lstm.binaryPreds],
                feed_dict)
            timeStr = datetime.datetime.now().isoformat()
            acc, auc, precision, recall = genMetrics(batchY, predictions, binaryPreds)
            print("{}, step: {}, loss: {}, acc: {}, auc: {}, precision: {}, recall: {}".format(timeStr, step, loss, acc, auc, precision, recall))
            trainSummaryWriter.add_summary(summary, step)

        def devStep(batchX, batchY):
            """
            Evaluation step
            """
            feed_dict = {
              lstm.inputX: batchX,
              lstm.inputY: batchY,
              lstm.dropoutKeepProb: 1.0
            }
            summary, step, loss, predictions, binaryPreds = sess.run(
                [summaryOp, globalStep, lstm.loss, lstm.predictions, lstm.binaryPreds],
                feed_dict)
            
            acc, auc, precision, recall = genMetrics(batchY, predictions, binaryPreds)
            
            evalSummaryWriter.add_summary(summary, step)
            
            return loss, acc, auc, precision, recall
        
        for i in range(config.training.epoches):
            # train the model
            print("start training model")
            for batchTrain in nextBatch(trainReviews, trainLabels, config.batchSize):
                trainStep(batchTrain[0], batchTrain[1])

                currentStep = tf.train.global_step(sess, globalStep) 
                if currentStep % config.training.evaluateEvery == 0:
                    print("\nEvaluation:")
                    
                    losses = []
                    accs = []
                    aucs = []
                    precisions = []
                    recalls = []
                    
                    for batchEval in nextBatch(evalReviews, evalLabels, config.batchSize):
                        loss, acc, auc, precision, recall = devStep(batchEval[0], batchEval[1])
                        losses.append(loss)
                        accs.append(acc)
                        aucs.append(auc)
                        precisions.append(precision)
                        recalls.append(recall)
                        
                    time_str = datetime.datetime.now().isoformat()
                    print("{}, step: {}, loss: {}, acc: {}, auc: {}, precision: {}, recall: {}".format(time_str, currentStep, mean(losses), 
                                                                                                       mean(accs), mean(aucs), mean(precisions),
                                                                                                       mean(recalls)))
                    
                if currentStep % config.training.checkpointEvery == 0:
                    # the other way to save the model: write checkpoint files
                    path = saver.save(sess, "../model/Bi-LSTM/model/my-model", global_step=currentStep)
                    print("Saved model checkpoint to {}\n".format(path))
                    
        inputs = {"inputX": tf.saved_model.utils.build_tensor_info(lstm.inputX),
                  "keepProb": tf.saved_model.utils.build_tensor_info(lstm.dropoutKeepProb)}

        outputs = {"binaryPreds": tf.saved_model.utils.build_tensor_info(lstm.binaryPreds)}

        prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(inputs=inputs, outputs=outputs,
                                                                                      method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
        legacy_init_op = tf.group(tf.tables_initializer(), name="legacy_init_op")
        builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING],
                                            signature_def_map={"predict": prediction_signature}, legacy_init_op=legacy_init_op)

        builder.save()
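
  After training, the exported SavedModel can be loaded back for inference. The snippet below is only a sketch: it assumes the export above succeeded and that reviewVec is a hypothetical preprocessed index sequence of length sequenceLength (built with the saved wordToIndex.json); the tensor names are looked up from the "predict" signature rather than hard-coded.

# inference sketch: load the SavedModel exported above (illustrative only)
with tf.Session(graph=tf.Graph()) as sess:
    metaGraph = tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING],
                                           "../model/Bi-LSTM/savedModel")
    signature = metaGraph.signature_def["predict"]

    inputXName = signature.inputs["inputX"].name
    keepProbName = signature.inputs["keepProb"].name
    binaryPredsName = signature.outputs["binaryPreds"].name

    # reviewVec: hypothetical preprocessed review, shape [sequenceLength]
    preds = sess.run(binaryPredsName, feed_dict={inputXName: [reviewVec], keepProbName: 1.0})
    print(preds)  # e.g. [[1.]] for a positive prediction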