
Keras: Transfer Learning Based on InceptionV3 (Without the Fully Connected Layers)

1. ImageDataGenerator

def image_preprocess():
    # Training-set image generator: augment the data via these parameters
    train_datagen = ImageDataGenerator(
        preprocessing_function=preprocess_input,
        rotation_range=30,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

    # Validation-set image generator: no augmentation, only preprocessing
    val_datagen = ImageDataGenerator(
        preprocessing_function=preprocess_input)

    # Training and validation data
    train_generator = train_datagen.flow_from_directory(
        train_dir,
        target_size=(IM_WIDTH, IM_HEIGHT),
        batch_size=batch_size,
        class_mode='categorical')
    validation_generator = val_datagen.flow_from_directory(
        val_dir,
        target_size=(IM_WIDTH, IM_HEIGHT),
        batch_size=batch_size,
        class_mode='categorical')
    return train_generator, validation_generator
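flow_from_directory infers one class per sub-directory of the data root and, by default, assigns class indices in alphanumeric order of the folder names. The directory names below are hypothetical; this stdlib-only sketch mimics the class_indices mapping the generators above would produce:

```python
import os
import tempfile

# Hypothetical dataset layout: one sub-directory per class, as expected
# by flow_from_directory (train_dir/<class_name>/<image files>)
train_dir = tempfile.mkdtemp()
for class_name in ["cats", "dogs", "horses"]:
    os.makedirs(os.path.join(train_dir, class_name))

# Keras sorts the sub-directory names alphanumerically and numbers them;
# this mirrors the class_indices attribute of the returned generator
classes = sorted(
    d for d in os.listdir(train_dir)
    if os.path.isdir(os.path.join(train_dir, d))
)
class_indices = {name: i for i, name in enumerate(classes)}
print(class_indices)  # {'cats': 0, 'dogs': 1, 'horses': 2}
```

Keeping the same folder names in train_dir and val_dir is what guarantees both generators use the same label mapping.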

2. Loading the InceptionV3 Model (Without the Fully Connected Layers)

We use an InceptionV3 model with pre-trained ImageNet weights, but exclude the top classifier (the top classifier being the fully connected layers).

base_model = InceptionV3(weights='imagenet', include_top=False)

3. Adding a New Top Classifier

def add_new_last_layer(base_model, nb_classes):
    x = base_model.output
    x = GlobalAveragePooling2D()(x)  # collapse the spatial grid into a flat feature vector
    x = Dense(FC_SIZE, activation='relu')(x)
    predictions = Dense(nb_classes, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=predictions)
    return model
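GlobalAveragePooling2D simply averages each feature map over its spatial dimensions, turning the convolutional output (for a 299x299 input, an 8x8x2048 tensor per image) into a flat 2048-d vector that the Dense layers can consume. A small numpy stand-in for that reduction:

```python
import numpy as np

# Mock convolutional output: batch of 2 images, 8x8 spatial grid, 2048
# channels (the shape InceptionV3 without its top produces for 299x299 input)
features = np.random.rand(2, 8, 8, 2048)

# Global average pooling = mean over the height and width axes
pooled = features.mean(axis=(1, 2))
print(pooled.shape)  # (2, 2048)
```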

4. Training the Top Classifier

Freeze all layers of base_model, so that the new classifier is trained on stable bottleneck features from the pre-trained network.

def setup_to_transfer_learn(model, base_model):
    # Freeze the entire pre-trained base so only the new top layers train
    for layer in base_model.layers:
        layer.trainable = False
    model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

setup_to_transfer_learn(model, base_model)
history_tl = model.fit_generator(
        train_generator,
        epochs=nb_epoch,
        steps_per_epoch=nb_train_samples // batch_size,
        validation_data=validation_generator,
        validation_steps=nb_val_samples // batch_size,
        class_weight='auto')
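Note that class_weight='auto' was only accepted by older Keras releases; current versions expect an explicit mapping from class index to weight. A hedged sketch of computing balanced weights by hand (the per-class sample counts here are made up):

```python
# Hypothetical per-class sample counts for an imbalanced training set
counts = {0: 500, 1: 100, 2: 400}
total = sum(counts.values())
n_classes = len(counts)

# "Balanced" weighting: total / (n_classes * count), so rarer classes
# get proportionally larger weights (the formula scikit-learn's
# compute_class_weight uses for the 'balanced' mode)
class_weight = {c: total / (n_classes * n) for c, n in counts.items()}
print(class_weight)  # class 1 (the rarest) gets the largest weight, ~3.33
```

The resulting dict can be passed directly as the class_weight argument of fit_generator (or fit in newer Keras).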

5. Fine-tuning the Top Layers

Freeze part of the network's layers and fine-tune the rest.

Fine-tuning starts from a pre-trained network and retrains a small subset of its weights on the new dataset. It should be done with a very low learning rate, typically using the SGD optimizer.

def setup_to_finetune(model):
    # Freeze the bottom NB_IV3_LAYERS_TO_FREEZE layers; train everything above
    for layer in model.layers[:NB_IV3_LAYERS_TO_FREEZE]:
        layer.trainable = False
    for layer in model.layers[NB_IV3_LAYERS_TO_FREEZE:]:
        layer.trainable = True
    model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
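The slicing in setup_to_finetune splits model.layers at index NB_IV3_LAYERS_TO_FREEZE: everything below stays frozen, everything above trains (for InceptionV3 this cut-off is often set to 172 in published examples, but any index works). A mock illustration with plain objects:

```python
class MockLayer:
    """Stand-in for a Keras layer: only the trainable flag matters here."""
    def __init__(self):
        self.trainable = True

NB_TO_FREEZE = 5  # stand-in for NB_IV3_LAYERS_TO_FREEZE
layers = [MockLayer() for _ in range(8)]

# Same slicing as setup_to_finetune: freeze the bottom, train the top
for layer in layers[:NB_TO_FREEZE]:
    layer.trainable = False
for layer in layers[NB_TO_FREEZE:]:
    layer.trainable = True

frozen = sum(not l.trainable for l in layers)
print(frozen, len(layers) - frozen)  # 5 3
```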

setup_to_finetune(model)  # freeze part of the model's layers
history_ft = model.fit_generator(
        train_generator,
        steps_per_epoch=nb_train_samples // batch_size,
        epochs=nb_epoch,
        validation_data=validation_generator,
        validation_steps=nb_val_samples // batch_size,
        class_weight='auto')

Source code: