PyTorch - contiguous()

MBT · Feb 21, 2018 · Viewed 57.4k times

I was going through this example of an LSTM language model on GitHub (link). What it does in general is pretty clear to me. But I'm still struggling to understand what calling contiguous() does, which occurs several times in the code.

For example, in lines 74/75 of the code, the input and target sequences of the LSTM are created. The data (stored in ids) is 2-dimensional, where the first dimension is the batch size.

for i in range(0, ids.size(1) - seq_length, seq_length):
    # Get batch inputs and targets
    inputs = Variable(ids[:, i:i+seq_length])
    targets = Variable(ids[:, (i+1):(i+1)+seq_length].contiguous())

So as a simple example, when using batch size 1 and seq_length 10, inputs and targets look like this:

inputs Variable containing:
0     1     2     3     4     5     6     7     8     9
[torch.LongTensor of size 1x10]

targets Variable containing:
1     2     3     4     5     6     7     8     9    10
[torch.LongTensor of size 1x10]

So, in general, my question is: what does contiguous() do and why do I need it?

Further, I don't understand why the method is called for the target sequence but not for the input sequence, as both variables consist of the same data.

How can targets be non-contiguous while inputs are still contiguous?

EDIT: I tried leaving out the call to contiguous(), but this leads to an error message when computing the loss.

RuntimeError: invalid argument 1: input is not contiguous at .../src/torch/lib/TH/generic/THTensor.c:231

So obviously calling contiguous() in this example is necessary.

(To keep this readable I avoided posting the full code here; it can be found via the GitHub link above.)

Thanks in advance!

Answer

Shital Shah · Sep 7, 2018

There are a few operations on a Tensor in PyTorch that do not change the contents of the tensor, but only change how indices are converted into byte locations in memory. These operations include:

narrow(), view(), expand() and transpose()

For example, when you call transpose(), PyTorch doesn't generate a new tensor with a new layout; it just modifies meta information in the Tensor object so that the offset and strides describe the new shape. The transposed tensor and the original tensor really do share the same memory!

import torch

x = torch.randn(3, 2)
y = torch.transpose(x, 0, 1)  # y is a view of x; no data is copied
x[0, 0] = 42
print(y[0, 0])
# prints 42, because x and y share the same storage

This is where the concept of contiguous comes in. In the example above, x is contiguous but y is not, because its memory layout differs from that of a tensor of the same shape made from scratch. Note that the word "contiguous" is a bit misleading: it's not that the content of the tensor is spread out across disconnected blocks of memory. The bytes are still allocated in one block of memory, but the order of the elements is different!
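To see this concretely, here is a minimal sketch (the stride values in the comments assume the 3x2 shape from the snippet above):

import torch

x = torch.randn(3, 2)
y = torch.transpose(x, 0, 1)

print(x.is_contiguous())  # True:  elements are laid out row after row
print(y.is_contiguous())  # False: same storage, but traversed in a different order

# The strides show how many elements to skip per step along each dimension:
print(x.stride())  # (2, 1) for the 3x2 tensor
print(y.stride())  # (1, 2) for the 2x3 transposed view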

When you call contiguous(), it actually makes a copy of the tensor (if it is not contiguous already) so that the order of its elements in memory is the same as if a tensor of the same shape had been created from scratch.
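A small sketch of that, continuing the transposed-tensor setup from above (the variable names are just for illustration):

import torch

x = torch.randn(3, 2)
y = x.t()            # non-contiguous view sharing x's storage
z = y.contiguous()   # copies the data into a fresh, row-major block

print(z.is_contiguous())  # True
print(z.stride())         # (3, 1), as if a 2x3 tensor were built from scratch

x[0, 0] = 42
print(y[0, 0].item())  # 42.0: the view still shares memory with x
print(z[0, 0].item())  # unchanged: the contiguous copy does not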

Normally you don't need to worry about this. If PyTorch expects a contiguous tensor and yours isn't one, you will get RuntimeError: input is not contiguous, and then you just add a call to contiguous().
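For example, a common place this bites is view(), which needs the requested shape to be compatible with the tensor's memory layout; a transposed tensor typically isn't, so flattening it fails until you add contiguous(). The exact wording of the error message varies across PyTorch versions, but the fix is the same:

import torch

y = torch.randn(3, 2).t()   # non-contiguous transposed view

try:
    y.view(-1)              # fails: layout is incompatible with a flat view
except RuntimeError as e:
    print(e)                # message varies by version, but points at contiguity

flat = y.contiguous().view(-1)   # works after adding the call
print(flat.shape)                # torch.Size([6])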