Parallelise python loop with numpy arrays and shared-memory

tiago · Oct 25, 2012 · Viewed 17k times

I am aware of several questions and answers on this topic, but haven't found a satisfactory answer to this particular problem:

What is the easiest way to do a simple shared-memory parallelisation of a python loop where numpy arrays are manipulated through numpy/scipy functions?

I am not looking for the most efficient way; I just want something that is simple to implement and doesn't require a significant rewrite when the loop is not run in parallel, much like what OpenMP provides in lower-level languages.

The best answer I've seen in this regard is this one, but it is rather clunky: it requires recasting the loop body as a function that takes a single argument, adds several lines of shared-array conversion crud, seems to require that the parallel function is called from __main__, and doesn't work well from the interactive prompt (where I spend a lot of my time). Roughly, the pattern looks like the sketch below.
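For reference, a rough sketch of that pattern (the per-row function, array shape, and names here are just placeholders; the shared-array conversion and the __main__ guard are exactly the parts I'd like to avoid):

import ctypes
import multiprocessing as mp
import numpy as np

n_rows, n_cols = 200, 2000

def init(arr):
    # Make the shared buffer visible inside the worker processes.
    global shared_arr
    shared_arr = arr

def process_row(i):
    # Re-wrap the shared buffer as a numpy array in each worker.
    x = np.frombuffer(shared_arr.get_obj()).reshape(n_rows, n_cols)
    x[i, :] = np.cos(x[i, :])

if __name__ == '__main__':
    shared_arr = mp.Array(ctypes.c_double, n_rows * n_cols)
    pool = mp.Pool(initializer=init, initargs=(shared_arr,))
    pool.map(process_row, range(n_rows))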

With all of Python's simplicity, is this really the best way to parallelise a loop? Really? This is trivial to parallelise in OpenMP fashion.

I have painstakingly read through the opaque documentation of the multiprocessing module, only to find out that it is so general that it seems suited to everything but simple loop parallelisation. I am not interested in setting up Managers, Proxies, Pipes, etc.; I just have a simple, fully parallel loop with no communication between tasks. Using MPI to parallelise such a simple situation seems like overkill, not to mention that it would be memory-inefficient in this case.

I haven't had time to learn about the multitude of shared-memory parallel packages for Python, but was wondering if someone with more experience can show me a simpler way. Please do not suggest serial optimisation techniques such as Cython (I already use it), or parallel numpy/scipy functions such as BLAS-backed operations (my case is more general, and more parallel).

Answer

pv. · Oct 26, 2012

With Cython parallel support:

# asd.pyx
from cython.parallel cimport prange

import numpy as np

def foo():
    cdef int i, j, n

    x = np.zeros((200, 2000), float)

    n = x.shape[0]
    # prange distributes the iterations over OpenMP threads; the GIL is
    # re-acquired inside each iteration so that numpy can be called.
    for i in prange(n, nogil=True):
        with gil:
            for j in range(100):
                x[i,:] = np.cos(x[i,:])

    return x

On a 2-core machine:

$ cython asd.pyx
$ gcc -fPIC -fopenmp -shared -o asd.so asd.c -I/usr/include/python2.7
$ export OMP_NUM_THREADS=1
$ time python -c 'import asd; asd.foo()'
real    0m1.548s
user    0m1.442s
sys 0m0.061s

$ export OMP_NUM_THREADS=2
$ time python -c 'import asd; asd.foo()'
real    0m0.602s
user    0m0.826s
sys 0m0.075s

This runs fine in parallel, since np.cos (like other ufuncs) releases the GIL.

If you want to use this interactively:

# asd.pyxbld
def make_ext(modname, pyxfilename):
    from distutils.extension import Extension
    return Extension(name=modname,
                     sources=[pyxfilename],
                     extra_link_args=['-fopenmp'],
                     extra_compile_args=['-fopenmp'])

and (remove asd.so and asd.c first):

>>> import pyximport
>>> pyximport.install(reload_support=True)
>>> import asd
>>> q1 = asd.foo()
# Go to an editor and change asd.pyx
>>> reload(asd)
>>> q2 = asd.foo()

So yes, in some cases you can parallelize just by using threads. OpenMP is essentially a fancy wrapper around threading, so Cython is only needed here for the nicer syntax. Without Cython, you can use the threading module directly --- it works much like multiprocessing (and is probably more robust), but you don't need to do anything special to declare arrays as shared memory. A sketch follows.
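For instance, a minimal plain-Python sketch of the same loop using threads (the thread count and row chunking here are arbitrary choices, not part of the original code), which again relies on np.cos releasing the GIL:

import threading
import numpy as np

def foo_threaded(num_threads=2):
    x = np.zeros((200, 2000), float)

    def work(rows):
        # Ordinary numpy code; the ufunc releases the GIL, so threads overlap.
        for i in rows:
            for j in range(100):
                x[i, :] = np.cos(x[i, :])

    chunks = np.array_split(np.arange(x.shape[0]), num_threads)
    threads = [threading.Thread(target=work, args=(chunk,)) for chunk in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x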

However, not all operations release the GIL, so YMMV for the performance.

***

And another possibly useful link scraped from other Stack Overflow answers --- joblib's Parallel, which offers another interface to multiprocessing: http://packages.python.org/joblib/parallel.html
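A minimal sketch of what that interface looks like (assuming joblib is installed; note that by default it uses worker processes, so rows are pickled to the workers and the results returned, rather than operating on shared memory):

from joblib import Parallel, delayed
import numpy as np

def cos_row(row):
    # Each call runs in a worker process; the transformed row is returned.
    for j in range(100):
        row = np.cos(row)
    return row

x = np.zeros((200, 2000), float)
rows = Parallel(n_jobs=2)(delayed(cos_row)(x[i, :]) for i in range(x.shape[0]))
x = np.vstack(rows)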