How to use Different Batch Sizes when Training and Predicting with LSTMs

Keras uses fast symbolic mathematical libraries as a backend, such as TensorFlow and Theano.

A downside of using these libraries is that the shape and size of your data must be defined once up front and held constant regardless of whether you are training your network or making predictions.

On sequence prediction problems, it may be desirable to use a large batch size when training the network and a batch size of 1 when making predictions in order to predict the next step in the sequence.

In this tutorial, you will discover how you can address this problem and even use different batch sizes during training and predicting.

After completing this tutorial, you will know:

  • How to design a simple sequence prediction problem and develop an LSTM to learn it.
  • How to vary an LSTM configuration for online and batch-based learning and predicting.
  • How to vary the batch size used for training from that used for predicting.

Let’s get started.

How to use Different Batch Sizes for Training and Predicting in Python with Keras
Photo by steveandtwyla, some rights reserved.

Tutorial Overview

This tutorial is divided into 6 parts, as follows:

  1. On Batch Size
  2. Sequence Prediction Problem Description
  3. LSTM Model and Varied Batch Size
  4. Solution 1: Online Learning (Batch Size = 1)
  5. Solution 2: Batch Forecasting (Batch Size = N)
  6. Solution 3: Copy Weights

Tutorial Environment

A working Python 2 or 3 environment is assumed to be installed.

This includes SciPy with NumPy and Pandas. Keras version 2.0 or higher must be installed with either the TensorFlow or Theano backend.
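If you are unsure of your installation, a quick way to confirm the Keras version is a short script like the following (a minimal sketch, assuming Keras imports cleanly):

# print the installed Keras version to confirm it is 2.0 or higher
import keras
print('Keras version: %s' % keras.__version__)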


On Batch Size

A benefit of using Keras is that it is built on top of symbolic mathematical libraries such as TensorFlow and Theano for fast and efficient computation. This is needed with large neural networks.

A downside of using these efficient libraries is that the shape of your data must be defined up front and held fixed. Specifically, this includes the batch size.

The batch size limits the number of samples to be shown to the network before a weight update can be performed. For example, with 9 training samples and a batch size of 3, the weights are updated 3 times per epoch. This same limitation is then imposed when making predictions with the fit model.

Specifically, the batch size used when fitting your model controls how many predictions you must make at a time.

This is often not a problem when you want to make the same number of predictions at a time as the batch size used during training.

This does become a problem when you wish to make fewer predictions than the batch size. For example, you may get the best results with a large batch size, but are required to make predictions for one observation at a time on something like a time series or sequence problem.

This is why it may be desirable to have a different batch size when fitting the network to training data than when making predictions on test data or new input data.
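To make the constraint concrete, the sketch below defines a stateful LSTM with a fixed batch size of 9; the shapes here are purely illustrative, and the remainder of this tutorial develops a full working example.

# minimal sketch: batch_input_shape fixes the batch dimension when the model is defined
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM

model = Sequential()
# (batch size, timesteps, features) - the batch size is now baked in as 9
model.add(LSTM(10, batch_input_shape=(9, 1, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
# predicting a single sample (a batch of 1) would now raise a ValueError:
# yhat = model.predict(one_sample.reshape(1, 1, 1), batch_size=1)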

In this tutorial, we will explore different ways to solve this problem.

Sequence Prediction Problem Description

We will use a simple sequence prediction problem as the context to demonstrate solutions to varying the batch size between training and prediction.

A sequence prediction problem makes a good case for a varied batch size as you may want to have a batch size equal to the training dataset size (batch learning) during training and a batch size of 1 when making predictions for one-step outputs.

The sequence prediction problem involves learning to predict the next step in the following 10-step sequence:

[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

We can create this sequence in Python as follows:

length = 10
sequence = [i/float(length) for i in range(length)]
print(sequence)

Running the example prints our sequence:

[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

We must convert the sequence to a supervised learning problem. That means when 0.0 is shown as an input pattern, the network must learn to predict the next step as 0.1.

We can do this in Python using the Pandas shift() function as follows:

from pandas import concat
from pandas import DataFrame
# create sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# create X/y pairs
df = DataFrame(sequence)
df = concat([df, df.shift(1)], axis=1)
df.dropna(inplace=True)
print(df)

Running the example prints all of the input and output pairs; the second column holds the current step and the first column holds the next step to be predicted.

1  0.1  0.0
2  0.2  0.1
3  0.3  0.2
4  0.4  0.3
5  0.5  0.4
6  0.6  0.5
7  0.7  0.6
8  0.8  0.7
9  0.9  0.8

We will be using a recurrent neural network called a long short-term memory (LSTM) network to learn the sequence. As such, we must transform the input patterns from a 2D array (9 rows with 1 column) to a 3D array with the shape [samples, timesteps, features], where timesteps is 1 because there is only one timestep per observation on each row.

We can do this using the NumPy reshape() function as follows:

from pandas import concat
from pandas import DataFrame
# create sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# create X/y pairs
df = DataFrame(sequence)
df = concat([df, df.shift(1)], axis=1)
df.dropna(inplace=True)
# convert to LSTM friendly format
values = df.values
# the second column is the current step (input), the first is the next step (output)
X, y = values[:, 1], values[:, 0]
X = X.reshape(len(X), 1, 1)
print(X.shape, y.shape)

Running the example creates X and y arrays ready for use with an LSTM and prints their shape.

(9, 1, 1) (9,)

LSTM Model and Varied Batch Size

In this section, we will design an LSTM network for the problem.

The training batch size will cover the entire training dataset (batch learning) and predictions will be made one at a time (one-step prediction). We will show that although the model learns the problem, one-step predictions result in an error.

We will use an LSTM network fit for 1000 epochs.

The weights will be updated at the end of each training epoch (batch learning) meaning that the batch size will be equal to the number of training observations (9).

For these experiments, we will require fine-grained control over when the internal state of the LSTM is updated. Normally, Keras clears the LSTM state at the end of each batch, but we can take control by making the LSTM stateful and calling model.reset_states() to manage this state manually. This will be needed in later sections.

The network has one input, a hidden layer with 10 units, and an output layer with 1 unit. The default tanh activation functions are used in the LSTM units and a linear activation function in the output layer.

A mean squared error loss function is used for this regression problem, together with the efficient Adam optimization algorithm.

The example below configures and creates the network.

# configure network
n_batch = len(X)
n_epoch = 1000
n_neurons = 10
# design network
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

We will fit the network to all of the examples each epoch and reset the state of the network at the end of each epoch manually.

# fit network
for i in range(n_epoch):
    model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()

Finally, we will forecast each step in the sequence one at a time.

This requires a batch size of 1, which is different from the batch size of 9 used to fit the network, and will result in an error when the example is run.

# online forecast
for i in range(len(X)):
    testX, testy = X[i], y[i]
    testX = testX.reshape(1, 1, 1)
    yhat = model.predict(testX, batch_size=1)
    print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))

Below is the complete code example.

from pandas import DataFrame
from pandas import concat
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# create sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# create X/y pairs
df = DataFrame(sequence)
df = concat([df, df.shift(1)], axis=1)
df.dropna(inplace=True)
# convert to LSTM friendly format
values = df.values
# the second column is the current step (input), the first is the next step (output)
X, y = values[:, 1], values[:, 0]
X = X.reshape(len(X), 1, 1)
# configure network
n_batch = len(X)
n_epoch = 1000
n_neurons = 10
# design network
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
# fit network
for i in range(n_epoch):
    model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()
# online forecast
for i in range(len(X)):
    testX, testy = X[i], y[i]
    testX = testX.reshape(1, 1, 1)
    yhat = model.predict(testX, batch_size=1)
    print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))

Running the example fits the model without a problem, then raises an error when making the one-step prediction.

The error reported is as follows (the exact wording may vary with your Keras and backend versions):

ValueError: Cannot feed value of shape (1, 1, 1) for Tensor 'lstm_1_input:0', which has shape '(9, 1, 1)'

Solution 1: Online Learning (Batch Size = 1)

One solution to this problem is to fit the model using online learning.

This is where the batch size is set to a value of 1 and the network weights are updated after each training example.

This can have the effect of faster learning, but also adds instability to the learning process as the weights vary widely with each batch.

Nevertheless, this will allow us to make one-step forecasts on the problem. The only change required is setting n_batch to 1 as follows:

n_batch = 1

The complete code listing is provided below; it is the same as the previous complete example with n_batch set to 1.
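# complete online-learning example: identical to the listing above with n_batch = 1
from pandas import DataFrame
from pandas import concat
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# create sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# create X/y pairs
df = DataFrame(sequence)
df = concat([df, df.shift(1)], axis=1)
df.dropna(inplace=True)
# convert to LSTM friendly format
values = df.values
X, y = values[:, 1], values[:, 0]
X = X.reshape(len(X), 1, 1)
# configure network
n_batch = 1
n_epoch = 1000
n_neurons = 10
# design network
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
# fit network
for i in range(n_epoch):
    model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()
# online forecast
for i in range(len(X)):
    testX, testy = X[i], y[i]
    testX = testX.reshape(1, 1, 1)
    yhat = model.predict(testX, batch_size=1)
    print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))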