Bidirectional LSTM output question in PyTorch

ZH LIU · Oct 26, 2018 · Viewed 9.3k times

Hi, I have a question about how to collect the correct result from a Bi-LSTM module's output.

Suppose I have a sequence of length 10 feeding into a single-layer bidirectional LSTM with 100 hidden units:

lstm = nn.LSTM(5, 100, 1, bidirectional=True)

output will be of shape:

[10 (seq_length), 1 (batch),  200 (num_directions * hidden_size)]
# or according to the doc, can be viewed as
[10 (seq_length), 1 (batch),  2 (num_directions), 100 (hidden_size)]
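(These shapes can be verified with a quick check; the random input below is just an illustration, using the dimensions from the code above:)

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(5, 100, 1, bidirectional=True)

# (seq_length, batch, input_size) = (10, 1, 5)
inp = torch.randn(10, 1, 5)
output, (h_n, c_n) = lstm(inp)

print(output.shape)                      # torch.Size([10, 1, 200])
print(output.view(10, 1, 2, 100).shape)  # torch.Size([10, 1, 2, 100])
```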

If I want to get the 3rd (1-indexed) input's output in both directions (two 100-dim vectors), how can I do it correctly?

I know output[2, 0] will give me a 200-dim vector. Does this 200-dim vector represent the output of the 3rd input in both directions?

One thing bothering me is that when feeding the sequence in reverse, the 3rd (1-indexed) output vector is calculated from the 8th (1-indexed) input, right?

Will PyTorch automatically take care of this and group the output with direction in mind?

Thanks!

Answer

MBT · Oct 29, 2018

Yes, when using a BiLSTM the hidden states of the directions are just concatenated (the second part after the middle is the hidden state for feeding in the reversed sequence).
So splitting up in the middle works just fine.

Since reshaping works from the rightmost to the leftmost dimension, you won't have any problems separating the two directions.
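You can also confirm the alignment directly by comparing the output against the final hidden states h_n: the forward direction's final state is produced at the last time step, while the backward direction's final state (having read the whole sequence in reverse) sits at the first time step. A small sketch, assuming the dimensions from the question:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lstm = nn.LSTM(5, 100, 1, bidirectional=True)
inp = torch.randn(10, 1, 5)
output, (h_n, _) = lstm(inp)

# forward direction: its final state is emitted at the LAST time step
assert torch.allclose(output[-1, :, :100], h_n[0], atol=1e-6)

# backward direction: its final state is emitted at the FIRST time step,
# so output[t] groups both directions at input position t
assert torch.allclose(output[0, :, 100:], h_n[1], atol=1e-6)
```

So yes, output[2, 0, :100] is the forward hidden state at input position 3, and output[2, 0, 100:] is the backward hidden state at that same input position.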


Here is a small example:

import torch

# so these are your original hidden states for each direction
# in this case hidden size is 5, but this works for any size
direction_one_out = torch.tensor(range(5))
direction_two_out = torch.tensor(list(reversed(range(5))))
print('Direction one:')
print(direction_one_out)
print('Direction two:')
print(direction_two_out)

# before outputting they will be concatenated
# I'm adding the batch and sequence-length dimensions here; in this case seq length is 1
hidden = torch.cat((direction_one_out, direction_two_out), dim=0).view(1, 1, -1)
print('\nYour hidden output:')
print(hidden, hidden.shape)

# trivial case, reshaping for one hidden state
hidden_reshaped = hidden.view(1, 1, 2, -1)
print('\nReshaped:')
print(hidden_reshaped, hidden_reshaped.shape)

# This works just as well for arbitrary sequence lengths, as you can see here
# I've set the sequence length to 5, but this will work for any other value as well
print('\nThis also works for multiple hidden states in a tensor:')
multi_hidden = hidden.expand(5, 1, 10)
print(multi_hidden, multi_hidden.shape)
print('Directions can be split up just like this:')
multi_hidden = multi_hidden.view(5, 1, 2, 5)
print(multi_hidden, multi_hidden.shape)

Output:

Direction one:
tensor([0, 1, 2, 3, 4])
Direction two:
tensor([4, 3, 2, 1, 0])

Your hidden output:
tensor([[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]]]) torch.Size([1, 1, 10])

Reshaped:
tensor([[[[0, 1, 2, 3, 4],
          [4, 3, 2, 1, 0]]]]) torch.Size([1, 1, 2, 5])

This also works for multiple hidden states in a tensor:
tensor([[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],

        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],

        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],

        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],

        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]]]) torch.Size([5, 1, 10])
Directions can be split up just like this:
tensor([[[[0, 1, 2, 3, 4],
          [4, 3, 2, 1, 0]]],


        [[[0, 1, 2, 3, 4],
          [4, 3, 2, 1, 0]]],


        [[[0, 1, 2, 3, 4],
          [4, 3, 2, 1, 0]]],


        [[[0, 1, 2, 3, 4],
          [4, 3, 2, 1, 0]]],


        [[[0, 1, 2, 3, 4],
          [4, 3, 2, 1, 0]]]]) torch.Size([5, 1, 2, 5])

Hope this helps! :)