What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of TensorFlow?
In my opinion, 'VALID' means there will be no zero padding outside the edges when we do max pooling.

According to A guide to convolution arithmetic for deep learning, there is no padding in the pooling operator, i.e. one should just use 'VALID' in TensorFlow.

But what is 'SAME' padding for max pooling in TensorFlow?
If you like ascii art:
"VALID"
= without padding:
inputs: 1 2 3 4 5 6 7 8 9 10 11 (12 13)
|________________| dropped
|_________________|
"SAME"
= with zero padding:
pad| |pad
inputs: 0 |1 2 3 4 5 6 7 8 9 10 11 12 13|0 0
|________________|
|_________________|
|________________|
In this example:

- Input width = 13
- Filter width = 6
- Stride = 5

Notes:

- "VALID" only ever drops the right-most columns (or bottom-most rows).
- "SAME" tries to pad evenly left and right, but if the number of columns to be added is odd, it will add the extra column to the right, as is the case in this example (the same logic applies vertically: there may be an extra row of zeros at the bottom).

Edit:
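The arithmetic behind the diagram can be sketched in plain Python. This is a sketch of the standard SAME/VALID size and padding formulas, not TensorFlow's actual implementation, and the helper names are mine:

```python
import math

def valid_output_size(in_size, filter_size, stride):
    # VALID: only complete windows count; nothing is padded,
    # trailing elements that don't fill a window are dropped.
    return math.ceil((in_size - filter_size + 1) / stride)

def same_output_size(in_size, stride):
    # SAME: the output size depends only on input size and stride.
    return math.ceil(in_size / stride)

def same_padding(in_size, filter_size, stride):
    # Total zeros needed so that same_output_size windows fit,
    # split left/right with any extra zero going to the right.
    out = same_output_size(in_size, stride)
    total = max((out - 1) * stride + filter_size - in_size, 0)
    return total // 2, total - total // 2  # (pad_left, pad_right)

# The example above: input width 13, filter width 6, stride 5.
print(valid_output_size(13, 6, 5))  # 2 windows; 12 and 13 are dropped
print(same_output_size(13, 5))      # 3 windows
print(same_padding(13, 6, 5))       # (1, 2): one zero left, two right
```

The result (1, 2) matches the picture: a single 0 on the left and two 0s on the right.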
About the name:

- With "SAME" padding, if you use a stride of 1, the layer's outputs will have the same spatial dimensions as its inputs.
- With "VALID" padding, there are no "made-up" padding inputs. The layer uses only valid input data.
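To see both modes end to end, here is a small NumPy sketch of 1-D max pooling (my own illustration, not TensorFlow's implementation) applied to the 1..13 example above:

```python
import math
import numpy as np

def max_pool_1d(x, filter_size, stride, padding):
    x = np.asarray(x, dtype=float)
    if padding == "SAME":
        out = math.ceil(len(x) / stride)
        total = max((out - 1) * stride + filter_size - len(x), 0)
        # Pad with zeros; the extra zero goes to the right when total is odd.
        x = np.pad(x, (total // 2, total - total // 2))
    else:  # VALID: drop whatever does not fill a complete window
        out = math.ceil((len(x) - filter_size + 1) / stride)
    return [x[i * stride : i * stride + filter_size].max() for i in range(out)]

inputs = list(range(1, 14))                   # 1 .. 13
print(max_pool_1d(inputs, 6, 5, "VALID"))     # [6.0, 11.0]
print(max_pool_1d(inputs, 6, 5, "SAME"))      # [5.0, 10.0, 13.0]
# With stride 1, SAME keeps the spatial size: 13 outputs for 13 inputs.
print(len(max_pool_1d(inputs, 6, 1, "SAME")))  # 13
```

Note the first SAME window is max(0, 1, 2, 3, 4, 5) = 5 because of the left zero, exactly as in the ASCII diagram, and the stride-1 SAME case shows where the name comes from.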