86. Using TensorFlow to implement LSTM time-series prediction of a sine function

'''
Created on May 21, 2017

@author: weizhen
'''
# The following program predicts a discretized sine function
import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn

# Load matplotlib so the predicted sine curve can be plotted
import matplotlib as mpl
from tensorflow.contrib.learn.python.learn.estimators.estimator import SKCompat
mpl.use('Agg')
from matplotlib import pyplot as plt

learn = tf.contrib.learn

HIDDEN_SIZE = 30           # number of hidden units in an LSTM cell
NUM_LAYERS = 2             # number of stacked LSTM layers
TIMESTEPS = 10             # truncation length of the recurrent network (window size)
TRAINING_STEPS = 10000     # number of training steps
BATCH_SIZE = 32            # batch size
TRAINING_EXAMPLES = 10000  # number of training examples
TESTING_EXAMPLES = 1000    # number of test examples
SAMPLE_GAP = 0.01          # sampling interval

# Define the function that generates the sine data
def generate_data(seq):
    X = []
    Y = []
    # Item i and the following TIMESTEPS-1 items of the sequence form one
    # input window; item i+TIMESTEPS is the corresponding label. That is,
    # the previous TIMESTEPS points of the sine curve are used to predict
    # the value of the function at point i+TIMESTEPS.
    for i in range(len(seq) - TIMESTEPS - 1):
        X.append([seq[i:i + TIMESTEPS]])
        Y.append([seq[i + TIMESTEPS]])
    return np.array(X, dtype=np.float32), np.array(Y, dtype=np.float32)
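# Worked example (not in the original post): with TIMESTEPS = 10 and a toy
# sequence seq = [0, 1, 2, ..., 19], generate_data returns
# X[0] = [[0, 1, ..., 9]] with label Y[0] = [10], X[1] = [[1, 2, ..., 10]]
# with label Y[1] = [11], and so on: each window of 10 consecutive points
# predicts the point that follows it. Note the extra brackets in
# X.append([...]): X ends up with shape [num_samples, 1, TIMESTEPS], so
# dynamic_rnn below sees a single time step whose feature vector holds the
# 10 window values.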
def LstmCell():
    lstm_cell = rnn.BasicLSTMCell(HIDDEN_SIZE, state_is_tuple=True)
    return lstm_cell

# Define the LSTM model
def lstm_model(X, y):
    cell = rnn.MultiRNNCell([LstmCell() for _ in range(NUM_LAYERS)])
    output, _ = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
    output = tf.reshape(output, [-1, HIDDEN_SIZE])

    # A fully connected layer with no activation function performs the linear
    # regression; the result is then flattened into a one-dimensional array
    predictions = tf.contrib.layers.fully_connected(output, 1, activation_fn=None)

    # Bring predictions and labels to the same shape
    labels = tf.reshape(y, [-1])
    predictions = tf.reshape(predictions, [-1])

    loss = tf.losses.mean_squared_error(labels, predictions)
    train_op = tf.contrib.layers.optimize_loss(loss,
                                               tf.contrib.framework.get_global_step(),
                                               optimizer="Adagrad",
                                               learning_rate=0.1)
    return predictions, loss, train_op

# Training
# Wrap the LSTM model defined above in an estimator
regressor = SKCompat(learn.Estimator(model_fn=lstm_model, model_dir="Models/model_2"))

# Generate the data
test_start = TRAINING_EXAMPLES * SAMPLE_GAP
test_end = (TRAINING_EXAMPLES + TESTING_EXAMPLES) * SAMPLE_GAP
train_X, train_y = generate_data(np.sin(np.linspace(0, test_start, TRAINING_EXAMPLES, dtype=np.float32)))
test_X, test_y = generate_data(np.sin(np.linspace(test_start, test_end, TESTING_EXAMPLES, dtype=np.float32)))

# Fit the training data
regressor.fit(train_X, train_y, batch_size=BATCH_SIZE, steps=TRAINING_STEPS)

# Compute the predictions on the test set
predicted = [[pred] for pred in regressor.predict(test_X)]

# Compute the error; note that np.sqrt is applied, so despite the printed
# label this is actually the root mean squared error (RMSE)
rmse = np.sqrt(((predicted - test_y) ** 2).mean(axis=0))
print("Mean Square Error is:%f" % rmse[0])

plot_predicted, = plt.plot(predicted, label='predicted')
plot_test, = plt.plot(test_y, label='real_sin')
plt.legend([plot_predicted, plot_test], ['predicted', 'real_sin'])
plt.show()  # a no-op under the 'Agg' backend; use plt.savefig('sin.png') to write the figure to disk
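The program above depends on tf.contrib and the old tf.contrib.learn Estimator API, which exist only in TensorFlow 1.x. As a point of reference, here is a minimal sketch of the same sliding-window LSTM regression written against tf.keras (assuming TensorFlow 2.x); the layer sizes and window length mirror the constants above, but the exact numbers it prints will differ from the original run.

import numpy as np
import tensorflow as tf

TIMESTEPS = 10
HIDDEN_SIZE = 30

def generate_data(seq, timesteps=TIMESTEPS):
    # Same sliding window as above, but shaped [num_samples, timesteps, 1]
    # so the LSTM sees one scalar per time step
    X = [seq[i:i + timesteps] for i in range(len(seq) - timesteps - 1)]
    y = [seq[i + timesteps] for i in range(len(seq) - timesteps - 1)]
    return np.array(X, dtype=np.float32)[..., np.newaxis], np.array(y, dtype=np.float32)

test_start = 10000 * 0.01
test_end = (10000 + 1000) * 0.01
train_X, train_y = generate_data(np.sin(np.linspace(0, test_start, 10000, dtype=np.float32)))
test_X, test_y = generate_data(np.sin(np.linspace(test_start, test_end, 1000, dtype=np.float32)))

# Two stacked LSTM layers followed by a linear regression head
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(HIDDEN_SIZE, return_sequences=True, input_shape=(TIMESTEPS, 1)),
    tf.keras.layers.LSTM(HIDDEN_SIZE),
    tf.keras.layers.Dense(1),  # no activation: plain linear regression
])
model.compile(optimizer="adagrad", loss="mse")
model.fit(train_X, train_y, batch_size=32, epochs=5)

predicted = model.predict(test_X).squeeze()
rmse = np.sqrt(np.mean((predicted - test_y) ** 2))
print("RMSE: %f" % rmse)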

Running the script produces the following output:

2017-05-21 17:43:49.057377: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
2017-05-21 17:43:49.057871: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-21 17:43:49.058284: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-21 17:43:49.058626: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-21 17:43:49.058981: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-21 17:43:49.059897: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-05-21 17:43:49.060207: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-21 17:43:49.060843: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Mean Square Error is:0.001686
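The cpu_feature_guard lines above are only warnings: the prebuilt TensorFlow binary was compiled without SSE/AVX/FMA support, so it cannot use those instruction sets, but the results are unaffected. If they clutter the output, they can be suppressed by setting TF_CPP_MIN_LOG_LEVEL before TensorFlow is imported:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # '2' filters out INFO and WARNING messages
import tensorflow as tf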