I am trying an Op that is not behaving as expected.
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    train_dataset = tf.placeholder(tf.int32, shape=[128, 2])
    embeddings = tf.Variable(
        tf.random_uniform([50000, 64], -1.0, 1.0))
    embed = tf.nn.embedding_lookup(embeddings, train_dataset)
    embed = tf.reduce_sum(embed, reduction_indices=0)
So I need to know the dimensions of the Tensor embed. I know it can be done at run time, but that's too much work for such a simple operation. What's the easier way to do it?
I see most people confused about tf.shape(tensor) and tensor.get_shape(). Let's make it clear:
tf.shape
tf.shape is used for the dynamic shape. If your tensor's shape can change at run time, use it.
An example: the input is an image with changeable width and height, and we want to resize it to half its size; then we can write something like:
new_height = tf.shape(image)[0] / 2
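To make that concrete, here is a minimal runnable sketch (assuming the TF 1.x API used above; the placeholder named image and the use of tf.image.resize_images are my additions for illustration):

import tensorflow as tf
import numpy as np

# Height and width are unknown at graph-construction time (None),
# so only tf.shape, evaluated at run time, can tell us their values.
image = tf.placeholder(tf.float32, shape=[None, None, 3])
dynamic_shape = tf.shape(image)          # int32 Tensor, value known only when run
new_height = dynamic_shape[0] // 2       # integer division for pixel counts
new_width = dynamic_shape[1] // 2
resized = tf.image.resize_images(image, tf.stack([new_height, new_width]))

with tf.Session() as sess:
    out = sess.run(resized, feed_dict={image: np.zeros((480, 640, 3))})
    print(out.shape)                     # (240, 320, 3)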
tensor.get_shape
tensor.get_shape is used for the fixed (static) shape, which means the tensor's shape can be deduced from the graph without running it.
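Applied to the question, the shape of embed is fully deducible from the graph, so it can be read off without a Session (a sketch reusing the question's TF 1.x code):

import tensorflow as tf

# Same graph as in the question.
train_dataset = tf.placeholder(tf.int32, shape=[128, 2])
embeddings = tf.Variable(tf.random_uniform([50000, 64], -1.0, 1.0))
embed = tf.nn.embedding_lookup(embeddings, train_dataset)  # shape (128, 2, 64)
embed = tf.reduce_sum(embed, reduction_indices=0)          # shape (2, 64)

print(embed.get_shape())            # (2, 64) -- no Session needed
print(embed.get_shape().as_list())  # [2, 64]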
Conclusion:
tf.shape can be used almost anywhere, but tensor.get_shape only works for shapes that can be deduced from the graph.
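A short side-by-side sketch of the two (TF 1.x; the placeholder x is just for illustration):

import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, shape=[None, 64])

print(x.get_shape())   # (?, 64)  -- static shape, first dimension unknown
print(tf.shape(x))     # a Tensor; its value only exists at run time

with tf.Session() as sess:
    print(sess.run(tf.shape(x), feed_dict={x: np.zeros((3, 64))}))  # [ 3 64]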