There are several classes in tf.nn that relate to RNNs. In the examples I find on the web, tf.nn.dynamic_rnn and tf.nn.rnn seem to be used interchangeably, or at least I cannot figure out why one is used in place of the other. What is the difference?
From RNNs in Tensorflow, a Practical Guide and Undocumented Features by Denny Britz, published on August 21, 2016:
tf.nn.rnn creates an unrolled graph for a fixed RNN length. That means, if you call tf.nn.rnn with inputs having 200 time steps, you are creating a static graph with 200 RNN steps. First, graph creation is slow. Second, you're unable to pass in sequences longer than the 200 steps you originally specified.
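To make the contrast concrete, here is a minimal sketch of static unrolling with tf.nn.rnn, using the TF 0.x-era API (the shapes, placeholder, and BasicLSTMCell choice are illustrative assumptions; in later releases the function lives on as tf.nn.static_rnn and tf.unpack becomes tf.unstack):

```python
import tensorflow as tf

batch_size, num_steps, input_size, hidden_size = 32, 200, 50, 128

# A batch of sequences with a fixed number of time steps:
# [batch_size, num_steps, input_size].
inputs = tf.placeholder(tf.float32, [batch_size, num_steps, input_size])

# tf.nn.rnn expects a Python list of num_steps tensors, each of shape
# [batch_size, input_size] -- one tensor per time step.
inputs_as_list = tf.unpack(inputs, axis=1)  # tf.unstack in TF >= 1.0

cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)

# This bakes 200 copies of the cell into the graph at construction
# time; a sequence of 201 steps cannot be fed into it later.
outputs, final_state = tf.nn.rnn(cell, inputs_as_list, dtype=tf.float32)
```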
tf.nn.dynamic_rnn solves this. It uses a tf.while_loop to dynamically construct the graph when it is executed. That means graph creation is faster and you can feed batches of variable size.
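For comparison, a minimal sketch of the dynamic version under the same illustrative assumptions (the placeholder shapes and cell choice are mine, not from the quoted post):

```python
import tensorflow as tf

batch_size, input_size, hidden_size = 32, 50, 128

# The time dimension is left as None, so each batch fed at run time
# may have a different number of steps.
inputs = tf.placeholder(tf.float32, [batch_size, None, input_size])

# Optional per-example lengths: outputs past each length are zeroed
# and the final state is copied through for padded rows.
seq_lengths = tf.placeholder(tf.int32, [batch_size])

cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)

# Only a single RNN step is placed in the graph; a while loop iterates
# it at execution time, so graph construction is fast and any sequence
# length works.
outputs, final_state = tf.nn.dynamic_rnn(
    cell, inputs, sequence_length=seq_lengths, dtype=tf.float32)
```

With this graph, one sess.run call can consume a batch of 100-step sequences and the next a batch of 1,000-step sequences, which is what the quoted passage means by feeding batches of variable size.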