A Simple Keras Application (Three-Class Classification), Adapted from cifar10vgg16

The following is a three-class classification problem adapted from the cifar10vgg16 example:

from __future__ import print_function  # brings the Python 3 print() function to older Python 2 interpreters
import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.models import Sequential
from keras.models import model_from_json
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras import optimizers
from keras import backend as K
from keras import regularizers
from keras.applications.imagenet_utils import preprocess_input

import scipy.misc
import numpy as np
import matplotlib.pyplot as plt
import h5py


#from keras.datasets import cifar10
#from keras.layers.core import Lambda
#from matplotlib.pyplot import imshow


class teeth3vgg:
    def __init__(self,train=False):
        self.num_classes = 3
        self.weight_decay = 0.0005  # weight decay (L2 penalty) to curb overfitting
        self.x_shape = [32,32,3]

        self.model = self.build_model()
        if train:
            self.model = self.train(self.model)
        else:
            #self.model.load_weights('weight.h5')

            # load the saved model architecture and weights
            with open('my_model_architecture.json') as f:
                self.model = model_from_json(f.read())
            self.model.load_weights('my_model_weights.h5')


    def build_model(self):
        # Build a VGG-style network for 3 classes, with massive dropout and weight decay as described in the original cifar10vgg paper.

        model = Sequential()
        weight_decay = self.weight_decay

        model.add(Conv2D(64, (3, 3), padding='same',input_shape=self.x_shape,kernel_regularizer=regularizers.l2(weight_decay)))
        # kernel_regularizer applies an L2 penalty to this layer's weights
        model.add(Activation('relu'))
        model.add(BatchNormalization())
        # re-normalizes the previous layer's activations over each batch, pushing their mean toward 0 and standard deviation toward 1
        model.add(Dropout(0.3))
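        # For reference, BatchNormalization learns a per-channel gamma and beta
        # and computes, over each batch:
        #   y = gamma * (x - batch_mean) / sqrt(batch_var + epsilon) + beta
        # which keeps activations near zero mean / unit variance so the deep
        # stack below stays trainable at a higher learning rate.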

        model.add(Conv2D(64, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())

        model.add(MaxPooling2D(pool_size=(2, 2)))

        model.add(Conv2D(128, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())
        model.add(Dropout(0.4))

        model.add(Conv2D(128, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())

        model.add(MaxPooling2D(pool_size=(2, 2)))

        model.add(Conv2D(256, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())
        model.add(Dropout(0.4))

        model.add(Conv2D(256, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())
        model.add(Dropout(0.4))

        model.add(Conv2D(256, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())

        model.add(MaxPooling2D(pool_size=(2, 2)))


        model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())
        model.add(Dropout(0.4))

        model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())
        model.add(Dropout(0.4))

        model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())

        model.add(MaxPooling2D(pool_size=(2, 2)))


        model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())
        model.add(Dropout(0.4))

        model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())
        model.add(Dropout(0.4))

        model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())

        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.5))

        model.add(Flatten())  # flattens the multi-dimensional input to 1-D; the usual bridge from convolutional to fully connected layers
        model.add(Dense(512,kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('relu'))
        model.add(BatchNormalization())

        model.add(Dropout(0.5))
        model.add(Dense(self.num_classes))  # fully connected layer with 3 outputs, one per class
        model.add(Activation('softmax'))
        return model
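    # For reference: the stack above keeps VGG16's 13 convolutional layers
    # (64-64 / 128-128 / 256-256-256 / 512-512-512 / 512-512-512) but replaces
    # VGG16's three fully connected layers with a single 512-unit layer plus a
    # 3-way softmax, since the inputs here are only 32x32.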


    def normalize(self, X_train, X_valida, X_test):
        # This function normalizes inputs to zero mean and unit variance.
        # It is used when training a model.
        # Input: training, validation and test sets
        # Output: the three sets normalized according to the training set statistics.
        mean = np.mean(X_train, axis=(0, 1, 2, 3))  # mean over samples, rows, columns and channels
        std = np.std(X_train, axis=(0, 1, 2, 3))  # standard deviation over the same axes
        X_train = (X_train-mean)/(std+1e-7)
        X_valida = (X_valida- mean)/(std + 1e-7)
        X_test = (X_test-mean)/(std+1e-7)
        return X_train, X_valida, X_test

    def normalize_production(self, x):
        # Normalizes instances at inference time.
        # NOTE: the original cifar10vgg hard-codes the training-set mean/std
        # here; this version instead uses the mean/std of the incoming batch,
        # so inference normalization only approximates the training statistics.
        mean = np.mean(x)
        std = np.std(x)
        return (x-mean)/(std+1e-7)
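    # A stricter sketch of production normalization (hypothetical file name;
    # train() would have to save the statistics first):
    #   stats = np.load('train_stats.npy')   # assumed stored as [mean, std]
    #   return (x - stats[0]) / (stats[1] + 1e-7)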
    def predict(self, x, normalize=True, batch_size=50):
        if normalize:
            x = self.normalize_production(x)
        return self.model.predict(x, batch_size=batch_size)
    def train(self,model):

        #training parameters
        #batch_size = 128
        batch_size = 100
        #maxepoches = 250
        maxepoches = 150
        learning_rate = 0.01
        lr_decay = 1e-6
        lr_drop = 20
        # The data, shuffled and split between train and test sets:

        #(x_train, y_train), (x_test, y_test) = cifar10.load_data()
        train_dataset = h5py.File('data.h5', 'r')
        x_train = np.array(train_dataset['X_train'][:])  # your train set features
        y_train = np.array(train_dataset['y_train'][:])  # your train set labels
        x_valida = np.array(train_dataset['X_valida'][:])  # your valida set features
        y_valida = np.array(train_dataset['y_valida'][:])  # your valida set labels
        x_test = np.array(train_dataset['X_test'][:])  # your test set features
        y_test = np.array(train_dataset['y_test'][:])  # your test set labels
        train_dataset.close()

        x_train = x_train.astype('float32')
        x_valida = x_valida.astype('float32')
        x_test = x_test.astype('float32')
        x_train, x_valida, x_test = self.normalize(x_train, x_valida, x_test)

        y_train = keras.utils.to_categorical(y_train, self.num_classes)
        y_valida = keras.utils.to_categorical(y_valida, self.num_classes)
        y_test = keras.utils.to_categorical(y_test, self.num_classes)

        def lr_scheduler(epoch):
            return learning_rate * (0.5 ** (epoch // lr_drop))
        reduce_lr = keras.callbacks.LearningRateScheduler(lr_scheduler)
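        # With learning_rate=0.01 and lr_drop=20, this schedule halves the rate
        # every 20 epochs: epochs 0-19 -> 0.01, 20-39 -> 0.005, 40-59 -> 0.0025, ...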

        #data augmentation
        datagen = ImageDataGenerator(  # real-time image augmentation generator
            featurewise_center=False,  # set input mean to 0 over the dataset
            samplewise_center=False,  # set each sample mean to 0
            featurewise_std_normalization=False,  # divide inputs by std of the dataset
            samplewise_std_normalization=False,  # divide each input by its std
            zca_whitening=False,  # apply ZCA whitening
            rotation_range=15,  # randomly rotate images in the range (degrees, 0 to 180)
            width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
            height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
            horizontal_flip=True,  # randomly flip images
            vertical_flip=False)  # randomly flip images
        # Compute quantities required for featurewise normalization
        # (std, mean, and principal components if ZCA whitening is applied).
        datagen.fit(x_train)
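        # Note: fit() only matters when featurewise_center,
        # featurewise_std_normalization or zca_whitening is True; all three are
        # False here, so the statistics it computes are never used by flow().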
        print('***********************')
        print(x_train.shape)


        # optimization details: SGD with Nesterov momentum
        sgd = optimizers.SGD(lr=learning_rate, decay=lr_decay, momentum=0.9, nesterov=True)
        model.compile(loss='categorical_crossentropy', optimizer=sgd,metrics=['accuracy'])


        # training loop; the LearningRateScheduler halves the learning rate every 20 epochs (lr_drop).

        historytemp = model.fit_generator(datagen.flow(x_train, y_train,
                                         batch_size=batch_size),
                            steps_per_epoch=x_train.shape[0] // batch_size,
                            epochs=maxepoches,
                            validation_data=(x_valida, y_valida), callbacks=[reduce_lr], verbose=2)
        # verbose sets the logging mode: 0 = no logging to stdout,
        # 1 = progress bar, 2 = one log line per epoch.

        #model.save_weights('weight.h5')
        json_string = model.to_json()  # architecture as a JSON string (get_config() carries the same information as a dict)
        with open('my_model_architecture.json', 'w') as f:
            f.write(json_string)
        model.save_weights('my_model_weights.h5')

        return model

if __name__ == '__main__':


    #(x_train, y_train), (x_test, y_test) = cifar10.load_data()
    train_dataset = h5py.File('data.h5', 'r')
    x_train = np.array(train_dataset['X_train'][:])  # your train set features
    y_train = np.array(train_dataset['y_train'][:])  # your train set labels
    x_valida = np.array(train_dataset['X_valida'][:])  # your valida set features
    y_valida = np.array(train_dataset['y_valida'][:])  # your valida set labels
    x_test = np.array(train_dataset['X_test'][:])  # your test set features
    y_test = np.array(train_dataset['y_test'][:])  # your test set labels
    train_dataset.close()  # safe to close: np.array(...) above copied the data out of the HDF5 file

    x_train = x_train.astype('float32')
    x_valida = x_valida.astype('float32')
    x_test = x_test.astype('float32')

    y_train = keras.utils.to_categorical(y_train, 3)  # convert integer class labels to one-hot format
    y_valida = keras.utils.to_categorical(y_valida, 3)
    y_test = keras.utils.to_categorical(y_test, 3)
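    # e.g. with 3 classes, to_categorical maps 0 -> [1. 0. 0.], 1 -> [0. 1. 0.],
    # 2 -> [0. 0. 1.]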

    print('Data loaded successfully! Here are the first few labels:')
    for i in range(11):
        print(y_train[i])
    #print(x_train[0])

    model = teeth3vgg()

    print('Model constructed!')

    # forward-propagate the validation set to estimate accuracy
    predicted_x = model.predict(x_valida)
    print(predicted_x)

    residuals = (np.argmax(predicted_x, 1) == np.argmax(y_valida, 1))  # argmax returns the index of the largest entry, i.e. the predicted class
    print('------------------------***')
    print(residuals)
    print(sum(residuals))
    print(len(residuals))
    accuracy = sum(residuals)/len(residuals)  # fraction of correct predictions (the == above makes this accuracy, not 0/1 loss)
    print("the validation accuracy is: ", accuracy)

    # forward-propagate the test set to estimate accuracy
    predicted_x1 = model.predict(x_test)
    print(predicted_x1.shape)
    print(y_test.shape)
    #print(model.predict(x_test))

    residuals1 = (np.argmax(predicted_x1, 1) == np.argmax(y_test, 1))  # argmax returns the index of the largest entry
    print('------------------------***')
    #print(residuals1)
    print(sum(residuals1))
    print(len(residuals1))
    accuracy1 = sum(residuals1) / len(residuals1)
    print("the test accuracy is: ", accuracy1)

    '''
    # note: `model` here is the teeth3vgg wrapper, so these calls would need model.model.evaluate(...)
    score = model.evaluate(x_test, y_test, batch_size=10, verbose=1)
    print("loss  :",score[0])
    print("acc  :", score[1])
    
    score = model.evaluate(x_test, y_test, verbose=0)
    print('Val loss:', score[0])
    print('Val accuracy:', score[1])
    '''
    #model.evaluate(x_test, y_test, batch_size=100, show_accuracy=True, verbose=1)  # show_accuracy (Keras 0.x only) displayed the accuracy after each iteration
    img_path = 'E:/MatlabWorkspace/picdst14/247.jpg'
    img = image.load_img(img_path, target_size=(32, 32))
    x = image.img_to_array(img)
    print('Source input image shape:', x.shape)
    x = np.expand_dims(x, 0)  # add a batch dimension at axis 0 (-1 would mean the last axis)
    x = preprocess_input(x)  # ImageNet-style preprocessing (see the note below)
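    # NOTE (assumption about this Keras version): imagenet_utils.preprocess_input
    # defaults to 'caffe' mode, i.e. it flips RGB to BGR and subtracts the
    # ImageNet channel means. Since teeth3vgg.predict() also applies
    # normalize_production() by default, this image is effectively preprocessed
    # twice; passing the raw array, or calling predict(x, normalize=False),
    # would be the more consistent choice.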
    print('Input image shape:', x.shape)
    my_image = scipy.misc.imread(img_path)  # removed in SciPy >= 1.2; imageio.imread is the modern replacement
    plt.imshow(my_image)
    print("class prediction vector [p(0), p(1), p(2)] = ")
    px=model.predict(x)
    print(px)
    #print(model.predict(x_test[3]))
    label = int(np.argmax(px, 1)[0])  # take the scalar class index, not a 1-element array
    if label == 0:
        print('This region is tooth!')
    elif label == 1:
        print('This region is dental plaque!')
    else:
        print('This region is other!')
    K.clear_session()
    '''
    # plotting sketch; assumes the History object returned inside train() is in scope as `history`
    fig, ax = plt.subplots(2, 1)
    ax[0].plot(history.history['loss'], color='b', label="Training loss")
    ax[0].plot(history.history['val_loss'], color='r', label="validation loss", axes=ax[0])
    ax[0].legend(loc='best', shadow=True)
    ax[1].plot(history.history['acc'], color='b', label="Training accuracy")
    ax[1].plot(history.history['val_acc'], color='r', label="Validation accuracy")
    ax[1].legend(loc='best', shadow=True)
    plt.show()
    '''

Tensor: "A tensor is something that transforms like a tensor!" That is, a quantity that transforms between reference frames according to a specific law is a tensor.
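In coordinates this slogan becomes a concrete transformation law. As a standard example (not specific to this post), the components of a rank-(1,1) tensor transform under a change of coordinates $x \to x'$ as

$$T'^{i}{}_{j} = \frac{\partial x'^{i}}{\partial x^{k}} \frac{\partial x^{l}}{\partial x'^{j}}\, T^{k}{}_{l},$$

with summation over repeated indices; a quantity whose components do not follow such a law is not a tensor.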
