I have a fairly straightforward nested for loop that iterates over four arrays:
for a in a_grid:
    for b in b_grid:
        for c in c_grid:
            for d in d_grid:
                do_some_stuff(a, b, c, d)  # perform calculations and write to file
Maybe this isn't the most efficient way to perform calculations over a 4D grid to begin with. I know joblib
is capable of parallelizing two nested for loops like this, but I'm having trouble generalizing it to four nested loops. Any ideas?
I usually use code of this form:
#!/usr/bin/env python3
import itertools
import multiprocessing

# Generate values for each parameter
a = range(10)
b = range(10)
c = range(10)
d = range(10)

# Generate a list of tuples, where each tuple is one combination of parameters.
# The list contains every possible combination (the Cartesian product).
paramlist = list(itertools.product(a, b, c, d))

# A function which processes one tuple of parameters
def func(params):
    a, b, c, d = params
    return a * b * c * d

if __name__ == '__main__':
    # Create one worker process per core; the guard above keeps child
    # processes from re-running the pool setup on spawn-based platforms.
    with multiprocessing.Pool() as pool:
        # Distribute the parameter sets evenly across the cores
        res = pool.map(func, paramlist)
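(With Python 3.3+ you can also use pool.starmap(func, paramlist) and write func(a, b, c, d) directly, instead of unpacking a single tuple inside the function.)

Since you mention joblib specifically: the same flattening idea carries over. A minimal sketch, assuming joblib is installed and using the toy func above; itertools.product replaces the four nested loops, and Parallel/delayed replaces the Pool:

import itertools
from joblib import Parallel, delayed

def func(a, b, c, d):
    return a * b * c * d

# Flatten the 4D grid into one stream of parameter tuples and fan the
# calls out across all available cores (n_jobs=-1).
results = Parallel(n_jobs=-1)(
    delayed(func)(a, b, c, d)
    for a, b, c, d in itertools.product(range(10), range(10), range(10), range(10))
)

Either way, the key step is the same: collapse the four loops into a single iterable of parameter combinations, then map one worker function over it.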