I have been trying to compute the convolution of a 2D matrix using SciPy and NumPy, but have failed. For SciPy I tried sepfir2d and scipy.signal.convolve, and Convolve2D for NumPy. Is there a simple function like conv2 in MATLAB for Python?
Here is an example:
A= [ 5 4 5 4;
3 2 3 2;
5 4 5 4;
3 2 3 2 ]
I want to convolve it with [0.707 0.707]
And the result as by conv2 from Matlab is
3.5350 6.3630 6.3630 6.3630 2.8280
2.1210 3.5350 3.5350 3.5350 1.4140
3.5350 6.3630 6.3630 6.3630 2.8280
2.1210 3.5350 3.5350 3.5350 1.4140
Is there a function to compute this output in Python? I would be grateful for a response.
There are a number of different ways to do it with scipy, but 2D convolution isn't directly included in numpy. (It's also easy to implement with an fft using only numpy, if you need to avoid a scipy dependency.)
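For instance, here is a minimal numpy-only sketch of a "full" 2D convolution done via the FFT (the function name is just for illustration); it should reproduce the conv2 result from the question up to floating-point rounding:

```python
import numpy as np

def fft_convolve2d_full(a, b):
    """Numpy-only 'full' 2D convolution via the FFT (illustrative sketch)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # 'full' output shape: each dimension is a.shape[i] + b.shape[i] - 1
    shape = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    # Zero-pad both arrays to the full output size, multiply in the
    # frequency domain, then transform back.
    return np.fft.irfft2(np.fft.rfft2(a, shape) * np.fft.rfft2(b, shape), shape)

A = np.array([[5, 4, 5, 4],
              [3, 2, 3, 2],
              [5, 4, 5, 4],
              [3, 2, 3, 2]])
kernel = np.array([[0.707, 0.707]])
print(np.round(fft_convolve2d_full(A, kernel), 3))
```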
scipy.signal.convolve2d, scipy.signal.convolve, scipy.signal.fftconvolve, and scipy.ndimage.convolve will all handle a 2D convolution (the last three are N-d) in different ways.
scipy.signal.fftconvolve does the convolution in the fft domain (where it's a simple multiplication). This is much faster in many cases, but it can lead to very small differences in edge effects compared to the direct discrete convolution, and your data will be coerced into floating point with this particular implementation. Additionally, there's unnecessary memory usage when convolving a small array with a much larger array. All in all, fft-based methods can be dramatically faster, but there are some common use cases where scipy.signal.fftconvolve is not an ideal solution.
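As a quick sketch on the arrays from the question, fftconvolve returns the full convolution as floating point even though the input is integer:

```python
import numpy as np
from scipy.signal import fftconvolve

A = np.array([[5, 4, 5, 4],
              [3, 2, 3, 2],
              [5, 4, 5, 4],
              [3, 2, 3, 2]])          # integer input
kernel = np.array([[0.707, 0.707]])

result = fftconvolve(A, kernel)       # mode='full' by default, like conv2
print(result.dtype)                   # float64 -- input is coerced to float
print(np.round(result, 3))
```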
scipy.signal.convolve2d, scipy.signal.convolve, and scipy.ndimage.convolve all use a discrete convolution implemented in C, but they implement it in different ways.
scipy.ndimage.convolve keeps the same data type, and gives you control over the location of the output to minimize memory usage. If you're convolving uint8s (e.g. image data), it's often the best option. The output will always be the same shape as the first input array, which makes sense for images, but perhaps not for more general convolution. ndimage.convolve gives you a lot of control over how edge effects are handled through the mode kwarg (which functions completely differently than scipy.signal's mode kwarg).
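A small sketch of how that might look on uint8 image-like data (the values are just the question's example reused for illustration):

```python
import numpy as np
from scipy import ndimage

image = np.array([[5, 4, 5, 4],
                  [3, 2, 3, 2],
                  [5, 4, 5, 4],
                  [3, 2, 3, 2]], dtype=np.uint8)
weights = np.array([[1, 1]])

# Output has the same shape (and, by default, the same dtype) as `image`.
# `mode` controls edge handling: 'reflect' (default), 'constant', 'nearest',
# 'mirror', or 'wrap' -- unrelated to scipy.signal's mode kwarg.
result = ndimage.convolve(image, weights, mode='nearest')
print(result.shape, result.dtype)     # (4, 4) uint8

# The `output` argument lets you write into a preallocated array
# (or request a specific dtype) to keep memory usage down.
out = np.empty_like(image)
ndimage.convolve(image, weights, output=out, mode='constant', cval=0)
```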
Avoid scipy.signal.convolve if you're working with 2D arrays. It works for the N-d case, but it's suboptimal for 2D arrays, and scipy.signal.convolve2d exists to do the exact same thing a bit more efficiently. The convolution functions in scipy.signal give you control over the output shape using the mode kwarg. (By default, they'll behave just like MATLAB's conv2.) This is useful for general mathematical convolution, but less useful for image processing. However, scipy.signal.convolve2d is generally slower than scipy.ndimage.convolve.
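To illustrate the mode kwarg on the question's arrays (the default 'full' matches conv2's output size):

```python
import numpy as np
from scipy.signal import convolve2d

A = np.array([[5, 4, 5, 4],
              [3, 2, 3, 2],
              [5, 4, 5, 4],
              [3, 2, 3, 2]])
kernel = np.array([[0.707, 0.707]])

print(convolve2d(A, kernel, mode='full').shape)   # (4, 5) -- default, like conv2
print(convolve2d(A, kernel, mode='same').shape)   # (4, 4) -- same shape as A
print(convolve2d(A, kernel, mode='valid').shape)  # (4, 3) -- fully-overlapping part only
```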
There are a lot of different options, partly due to duplication in the different submodules of scipy, and partly because there are different ways to implement a convolution with different performance tradeoffs.
If you can give a bit more detail about your use case, we can recommend a better solution. If you're convolving two arrays of roughly the same size, and they're already floats, fftconvolve is an excellent choice. Otherwise, scipy.ndimage.convolve may beat it.