Minimal RNN example in tensorflow

Anona112 · Dec 22, 2015

Trying to implement a minimal toy RNN example in TensorFlow. The goal is to learn a mapping from the input data to the target data, similar to this wonderful concise example in theanets.

Update: We're getting there. The only part remaining is to make it converge (and to make it less convoluted). Could someone help turn the following into running code or provide a simple example?

import tensorflow as tf
from tensorflow.python.ops import rnn_cell

init_scale = 0.1
num_steps = 7
num_units = 7
input_data = [1, 2, 3, 4, 5, 6, 7]
target = [2, 3, 4, 5, 6, 7, 7]
#target = [1,1,1,1,1,1,1] #converges, but not what we want


batch_size = 1

with tf.Graph().as_default(), tf.Session() as session:
  # Placeholder for the inputs and target of the net
  # inputs = tf.placeholder(tf.int32, [batch_size, num_steps])
  input1 = tf.placeholder(tf.float32, [batch_size, 1])
  inputs = [input1 for _ in range(num_steps)]
  outputs = tf.placeholder(tf.float32, [batch_size, num_steps])

  gru = rnn_cell.GRUCell(num_units)
  initial_state = state = tf.zeros([batch_size, num_units])
  loss = tf.constant(0.0)

  # setup model: unroll
  for time_step in range(num_steps):
    if time_step > 0: tf.get_variable_scope().reuse_variables()
    step_ = inputs[time_step]
    output, state = gru(step_, state)
    loss += tf.reduce_sum(abs(output - target))  # all norms work equally well? NO!
  final_state = state

  optimizer = tf.train.AdamOptimizer(0.1)  # CONVERGEs sooo much better
  train = optimizer.minimize(loss)  # let the optimizer train

  numpy_state = initial_state.eval()
  session.run(tf.initialize_all_variables())
  for epoch in range(10):  # now
    for i in range(7): # feed fake 2D matrix of 1 byte at a time ;)
      feed_dict = {initial_state: numpy_state, input1: [[input_data[i]]]} # no
      numpy_state, current_loss,_ = session.run([final_state, loss,train], feed_dict=feed_dict)
    print(current_loss)  # hopefully going down, always stuck at 189, why!?

Answer

Lukasz Kaiser · Dec 23, 2015

I think there are a few problems with your code, but the idea is right.

The main issue is that you're using a single tensor for inputs and outputs, as in:
inputs = tf.placeholder(tf.int32, [batch_size, num_steps]).

In TensorFlow the RNN functions take a list of tensors (because num_steps can vary in some models). So you should construct inputs like this:
inputs = [tf.placeholder(tf.int32, [batch_size, 1]) for _ in xrange(num_steps)]

Then you need to take care of the fact that your inputs are int32s, but an RNN cell works on float vectors; that's what embedding_lookup is for.
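A minimal sketch of that step, assuming a small vocabulary of 8 symbols (vocab_size and embedded_inputs are illustrative names, not from your code):

vocab_size = 8  # assumed: ids 0..7 cover the question's input_data
embedding = tf.get_variable("embedding", [vocab_size, num_units])
# Each placeholder is [batch_size, 1]; flatten the ids to [batch_size]
# so each lookup yields a [batch_size, num_units] float tensor.
embedded_inputs = [tf.nn.embedding_lookup(embedding, tf.reshape(inp, [-1]))
                   for inp in inputs]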

And finally you'll need to adapt your feed to supply a value for each placeholder in the input list.
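Concretely, that means one feed entry per unrolled step instead of reusing a single placeholder; a sketch, reusing input_data from your code:

feed_dict = {}
for t in range(num_steps):
  feed_dict[inputs[t]] = [[input_data[t]]]  # one [batch_size, 1] value per step
session.run(train, feed_dict=feed_dict)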

I think the PTB tutorial is a reasonable place to look, but if you want an even more minimal example of an out-of-the-box RNN, take a look at some of the rnn unit tests, e.g., this one: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/kernel_tests/rnn_test.py#L164
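For reference, here is one way the pieces could fit together. This is a rough sketch assuming the TF 0.6-era rnn.rnn helper; the output projection (w, b), the squared-error loss, and all variable names are illustrative choices, not a drop-in version of your code:

import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell

num_steps, num_units, batch_size, vocab_size = 7, 7, 1, 8
input_data = [1, 2, 3, 4, 5, 6, 7]
target_data = [2, 3, 4, 5, 6, 7, 7]

with tf.Graph().as_default(), tf.Session() as session:
  # One int32 placeholder per unrolled step, as described above.
  inputs = [tf.placeholder(tf.int32, [batch_size, 1]) for _ in range(num_steps)]
  targets = [tf.placeholder(tf.float32, [batch_size, 1]) for _ in range(num_steps)]

  # Embed the int ids into float vectors the GRU can consume.
  embedding = tf.get_variable("embedding", [vocab_size, num_units])
  embedded = [tf.nn.embedding_lookup(embedding, tf.reshape(i, [-1]))
              for i in inputs]

  cell = rnn_cell.GRUCell(num_units)
  initial_state = tf.zeros([batch_size, num_units])
  # rnn.rnn unrolls the cell over the input list and handles variable reuse,
  # replacing the manual reuse_variables() loop.
  outputs, state = rnn.rnn(cell, embedded, initial_state=initial_state)

  # Project each [batch_size, num_units] output to a single prediction
  # and compare it against the target for that step only.
  w = tf.get_variable("w", [num_units, 1])
  b = tf.get_variable("b", [1])
  predictions = [tf.matmul(o, w) + b for o in outputs]
  loss = tf.add_n([tf.reduce_sum(tf.square(p - t))
                   for p, t in zip(predictions, targets)])

  train = tf.train.AdamOptimizer(0.1).minimize(loss)
  session.run(tf.initialize_all_variables())

  feed_dict = {}
  for t in range(num_steps):
    feed_dict[inputs[t]] = [[input_data[t]]]
    feed_dict[targets[t]] = [[float(target_data[t])]]
  for epoch in range(100):
    current_loss, _ = session.run([loss, train], feed_dict=feed_dict)
  print(current_loss)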