"RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead"?

JobHunter69 · Jul 28, 2019 · Viewed 35.7k times

I am trying to use a pre-trained model. Here's where the problem occurs

Isn't the model supposed to take in a simple colored image? Why is it expecting a 4-dimensional input?

RuntimeError                              Traceback (most recent call last)
<ipython-input-51-d7abe3ef1355> in <module>()
     33 
     34 # Forward pass the data through the model
---> 35 output = model(data)
     36 init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
     37 

5 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    336                             _pair(0), self.dilation, self.groups)
    337         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 338                         self.padding, self.dilation, self.groups)
    339 
    340 

RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead

Where

inception = models.inception_v3()
model = inception.to(device)

Answer

Shai · Jul 28, 2019

As Usman Ali wrote in his comment, PyTorch (and most other DL toolboxes) expects a batch of images as input. Thus you need to call

output = model(data[None, ...])  

This inserts a singleton "batch" dimension into your input data, turning a 3D tensor of shape [3, 224, 224] into a 4D tensor of shape [1, 3, 224, 224].
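To see what the `[None, ...]` indexing does, here is a small sketch using NumPy (which shares this indexing semantics with PyTorch tensors); the array contents are placeholders:

```python
import numpy as np

# A single "image" of shape (3, 224, 224), matching the error message.
data = np.zeros((3, 224, 224))

# Indexing with None prepends a singleton batch dimension.
batched = data[None, ...]
print(data.shape)     # (3, 224, 224)
print(batched.shape)  # (1, 3, 224, 224)
```

In PyTorch, `data.unsqueeze(0)` achieves the same thing and reads a bit more explicitly.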

Please also note that the model you are using (inception_v3) expects a different input size, 3x299x299, not 3x224x224.