
Feature Extraction with Keras Transfer Learning

Applications

Keras' Applications module provides Keras models with pre-trained weights. These models can be used for prediction, feature extraction, and fine-tuning.

The pre-trained weights are downloaded to ~/.keras/models/ and are loaded automatically when a model is instantiated.

Available models

Models for image classification, with weights trained on ImageNet:

  • Xception
  • VGG16
  • VGG19
  • ResNet50
  • InceptionV3

All of these models (except Xception) are compatible with both Theano and TensorFlow, and will configure themselves automatically based on the image dimension ordering set in ~/.keras/keras.json. For example, if you set image_dim_ordering=tf, any loaded model will be built following TensorFlow's dimension ordering, i.e. "Width-Height-Depth".
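
A quick, minimal sketch of checking which ordering is currently active (the field names in ~/.keras/keras.json vary slightly between Keras versions, so treat this as illustrative):

```python
from keras import backend as K

# 'tf' means (width, height, channels); 'th' means (channels, width, height).
# The value is read from the "image_dim_ordering" entry in ~/.keras/keras.json.
print(K.image_dim_ordering())
```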

Model for automatic music tagging (takes Mel-spectrograms as input):

  • MusicTaggerCRNN


Examples for the image classification models

Classifying ImageNet classes with ResNet50

```python
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np

model = ResNet50(weights='imagenet')

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
# Predicted: [(u'n02504013', u'Indian_elephant', 0.82658225), (u'n01871265', u'tusker', 0.1122357), (u'n02504458', u'African_elephant', 0.061040461)]
```

Extracting features with VGG16

```python
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np

model = VGG16(weights='imagenet', include_top=False)

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

features = model.predict(x)
```
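
With the TensorFlow dimension ordering and a 224x224 input, features has shape (1, 7, 7, 512). A minimal sketch of flattening it into one vector per image for a downstream classifier (the classifier itself is not part of the original example):

```python
import numpy as np

# collapse the spatial and channel dimensions into a single feature vector
feature_vectors = features.reshape(features.shape[0], -1)  # shape (1, 25088)

# feature_vectors can now be fed to any classifier trained on your own data,
# e.g. a linear SVM or a small fully-connected network
```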

Extracting features from an arbitrary intermediate layer of VGG19

```python
from keras.applications.vgg19 import VGG19
from keras.preprocessing import image
from keras.applications.vgg19 import preprocess_input
from keras.models import Model
import numpy as np

base_model = VGG19(weights='imagenet')
model = Model(input=base_model.input, output=base_model.get_layer('block4_pool').output)

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

block4_pool_features = model.predict(x)
```
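
If you are unsure which layer names are available, a minimal sketch for listing them from the same base_model:

```python
# print each layer's index and name so you can choose another
# intermediate layer, e.g. 'block3_pool' or 'block5_conv4'
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)
```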

Fine-tuning InceptionV3 on a new dataset

```python
from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 200 classes
predictions = Dense(200, activation='softmax')(x)

# this is the model we will train
model = Model(input=base_model.input, output=predictions)

# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

# compile the model (should be done *after* setting layers to non-trainable)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

# train the model on the new data for a few epochs
model.fit_generator(...)

# at this point, the top layers are well trained and we can start fine-tuning
# convolutional layers from inception V3. We will freeze the bottom N layers
# and train the remaining top layers.

# let's visualize layer names and layer indices to see how many layers
# we should freeze:
for i, layer in enumerate(base_model.layers):
   print(i, layer.name)

# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 172 layers and unfreeze the rest:
for layer in model.layers[:172]:
   layer.trainable = False
for layer in model.layers[172:]:
   layer.trainable = True

# we need to recompile the model for these modifications to take effect
# we use SGD with a low learning rate
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')

# we train our model again (this time fine-tuning the top 2 inception blocks
# alongside the top Dense layers)
model.fit_generator(...)
```
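
The fit_generator calls above are deliberately left elided. A minimal sketch, under an assumed directory layout and assumed hyperparameters, of how the new data could be fed in with ImageDataGenerator (Keras 1.x-style arguments, matching the API used in this example):

```python
from keras.preprocessing.image import ImageDataGenerator

# hypothetical directory with one sub-folder per class, to match the
# 200-class Dense head above; paths and sizes are assumptions for illustration
datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = datagen.flow_from_directory(
    'data/train',
    target_size=(299, 299),   # InceptionV3's default input size
    batch_size=32,
    class_mode='categorical')

# Keras 1.x-style call; tune samples_per_epoch and nb_epoch to your dataset
model.fit_generator(train_generator, samples_per_epoch=2000, nb_epoch=5)
```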

Building InceptionV3 over a custom input tensor

```python
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Input

# this could also be the output of a different Keras model or layer
input_tensor = Input(shape=(224, 224, 3))  # this assumes K.image_dim_ordering() == 'tf'

model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=True)
```

Model documentation


## Xception model

```python
keras.applications.xception.Xception(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
```

Xception V1 model, with weights pre-trained on ImageNet.

On ImageNet, this model achieves a top-1 validation accuracy of 0.790 and a top-5 validation accuracy of 0.945.

Note that this model is currently only usable with the TensorFlow backend, since it relies on the "SeparableConvolution" layer; consequently it only supports the 'tf' dimension ordering (width, height, channels).

The default input image size for this model is 299x299.
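
A minimal usage sketch (assuming the TensorFlow backend and 'tf' dimension ordering), extracting convolutional features in the same way as the VGG16 example above:

```python
from keras.applications.xception import Xception, preprocess_input
from keras.preprocessing import image
import numpy as np

# Xception expects 299x299 inputs by default
model = Xception(weights='imagenet', include_top=False)

img = image.load_img('elephant.jpg', target_size=(299, 299))
x = np.expand_dims(image.img_to_array(img), axis=0)
x = preprocess_input(x)

features = model.predict(x)
```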

Arguments

  • include_top: whether to include the 3 fully-connected layers at the top of the network
  • weights: None means random initialization (no pre-trained weights are loaded); 'imagenet' means loading the pre-trained ImageNet weights
  • input_tensor: an optional Keras tensor to use as the image input of the model
  • input_shape: optional shape tuple, to be specified only if include_top is False. It must be a tuple of length 3, and the width and height must be no smaller than 71.

Returns

A Keras model instance.

References

License

These weights were trained by the Keras authors themselves and are released under the MIT license.


## VGG16 model

```python
keras.applications.vgg16.VGG16(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
```

VGG16 model, with weights pre-trained on ImageNet.

This model can be used with both the Theano and TensorFlow backends, and accepts both the 'th' and 'tf' input dimension orderings.

The default input size for this model is 224x224.

Arguments

  • include_top: whether to include the 3 fully-connected layers at the top of the network
  • weights: None means random initialization (no pre-trained weights are loaded); 'imagenet' means loading the pre-trained ImageNet weights
  • input_tensor: an optional Keras tensor to use as the image input of the model
  • input_shape: optional shape tuple, to be specified only if include_top is False. It must be a tuple of length 3 whose dimension ordering depends on image_dim_ordering; the width and height must be no smaller than 48.

Returns

A Keras model instance.

References

License

These weights are ported from the ones released by the Oxford VGG group and are subject to the Creative Commons Attribution License.


## VGG19 model

```python
keras.applications.vgg19.VGG19(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
```

VGG19 model, with weights pre-trained on ImageNet.

This model can be used with both the Theano and TensorFlow backends, and accepts both the 'th' and 'tf' input dimension orderings.

The default input size for this model is 224x224.

Arguments

  • include_top: whether to include the 3 fully-connected layers at the top of the network
  • weights: None means random initialization (no pre-trained weights are loaded); 'imagenet' means loading the pre-trained ImageNet weights
  • input_tensor: an optional Keras tensor to use as the image input of the model
  • input_shape: optional shape tuple, to be specified only if include_top is False. It must be a tuple of length 3 whose dimension ordering depends on image_dim_ordering; the width and height must be no smaller than 48.

Returns

A Keras model instance.

References

License

These weights are ported from the ones released by the Oxford VGG group and are subject to the Creative Commons Attribution License.


## ResNet50 model

```python
keras.applications.resnet50.ResNet50(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
```

A 50-layer residual network model, with weights pre-trained on ImageNet.

This model can be used with both the Theano and TensorFlow backends, and accepts both the 'th' and 'tf' input dimension orderings.

The default input size for this model is 224x224.

Arguments

  • include_top: whether to include the fully-connected layer at the top of the network
  • weights: None means random initialization (no pre-trained weights are loaded); 'imagenet' means loading the pre-trained ImageNet weights
  • input_tensor: an optional Keras tensor to use as the image input of the model
  • input_shape: optional shape tuple, to be specified only if include_top is False. It must be a tuple of length 3 whose dimension ordering depends on image_dim_ordering; the width and height must be no smaller than 197.

Returns

A Keras model instance.

References

License

These weights are ported from the ones released by Kaiming He and are subject to the MIT License.


## InceptionV3 model

```python
keras.applications.inception_v3.InceptionV3(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
```

InceptionV3 network, with weights pre-trained on ImageNet.

This model can be used with both the Theano and TensorFlow backends, and accepts both the 'th' and 'tf' input dimension orderings.

The default input size for this model is 299x299.

Arguments

  • include_top: whether to include the fully-connected layer at the top of the network
  • weights: None means random initialization (no pre-trained weights are loaded); 'imagenet' means loading the pre-trained ImageNet weights
  • input_tensor: an optional Keras tensor to use as the image input of the model
  • input_shape: optional shape tuple, to be specified only if include_top is False. It must be a tuple of length 3 whose dimension ordering depends on the image_dim_ordering in use; the width and height must be no smaller than 139.

Returns

A Keras model instance.

References

License

These weights were trained by the Keras authors themselves and are released under the MIT License.


## MusicTaggerCRNN model

```python
keras.applications.music_tagger_crnn.MusicTaggerCRNN(weights='msd', input_tensor=None, include_top=True)
```

This model is a convolutional-recurrent model that takes a vectorized MelSpectrogram of a piece of music as input and outputs its genre tags. You can use keras.applications.music_tagger_crnn.preprocess_input to vectorize a music file into a spectrogram. Note that using this function requires [Librosa](http://librosa.github.io/librosa/); see the sketch at the end of this section.

Arguments

  • include_top: whether to include the fully-connected layer at the top of the network
  • weights: None means random initialization (no pre-trained weights are loaded); 'msd' means loading the pre-trained weights (trained on the Million Song Dataset)
  • input_tensor: an optional Keras tensor to use as the input of the model

Returns

A Keras model instance.

References

License

These weights were trained by the Keras authors themselves and are released under the MIT License.
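
The original document refers to a usage example for this model that is not included above, so here is a minimal sketch of what tagging a song could look like (the audio file path is an assumption, and Librosa must be installed for preprocess_input to work):

```python
from keras.applications.music_tagger_crnn import MusicTaggerCRNN, preprocess_input
import numpy as np

# load the model with weights pre-trained on the Million Song Dataset
model = MusicTaggerCRNN(weights='msd')

# vectorize an audio file into a Mel-spectrogram (requires Librosa)
audio_path = 'example_song.mp3'   # assumed path
melgram = preprocess_input(audio_path)
melgrams = np.expand_dims(melgram, axis=0)

# predict tag probabilities for the song
tag_probabilities = model.predict(melgrams)
print(tag_probabilities)
```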

Source

https://github.com/MoyanZitto/keras-cn/blob/master/docs/other/application.md