I'm working on a render farm, and I need my clients to be able to launch multiple instances of a renderer without blocking, so the client can still receive new commands. I've got that working correctly, but I'm having trouble terminating the created processes.
At the global level, I define my pool (so that I can access it from any function):

from multiprocessing import Pool

p = Pool(2)
I then call my renderer with apply_async:
for i in range(totalInstances):
    p.apply_async(render, (allRenderArgs[i], args[2]), callback=renderFinished)
p.close()
That function finishes, launches the processes in the background, and waits for new commands. I've made a simple command that will kill the client and stop the renders:
def close():
    '''
    close this client instance
    '''
    tn.write("say " + USER + " is leaving the farm\r\n")
    try:
        p.terminate()
    except Exception as e:
        print(e)
    sys.exit()
It doesn't seem to raise an error (the except clause would print it), and Python exits, but the background processes are still running. Can anyone recommend a better way of controlling these launched programs?
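One likely cause: Pool.terminate() only kills the pool's own worker processes. If render launches the actual renderer as an external program, those grandchildren are not tracked by the pool and survive. A sketch of one workaround, assuming a POSIX system and hypothetical launch_render/kill_render helpers (here "sleep 60" plays the part of a long-running render), is to put each renderer in its own process group and kill the whole group:

```python
import os
import signal
import subprocess

def launch_render(cmd):
    # start_new_session=True puts the renderer in its own process group,
    # so killing that group also kills any children the renderer spawns
    return subprocess.Popen(cmd, start_new_session=True)

def kill_render(proc):
    # signal the whole process group, not just the immediate child
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
    proc.wait()

proc = launch_render(["sleep", "60"])
kill_render(proc)
```

After kill_render returns, proc.returncode is negative (terminated by signal) rather than a normal exit status.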
I found a solution: stop the pool in a separate thread, like this:
import signal
import sys
import threading

def close_pool():
    global pool
    pool.close()
    pool.terminate()
    pool.join()

def term(*args, **kwargs):
    sys.stderr.write('\nStopping...')
    # calling httpd.shutdown() directly here would block,
    # so both shutdowns run in their own threads
    stophttp = threading.Thread(target=httpd.shutdown)
    stophttp.start()
    stoppool = threading.Thread(target=close_pool)
    stoppool.daemon = True
    stoppool.start()

signal.signal(signal.SIGTERM, term)
signal.signal(signal.SIGINT, term)
signal.signal(signal.SIGQUIT, term)
This has worked fine every time I've tested it.
signal.SIGINT: interrupt from the keyboard (Ctrl+C); the default action is to raise KeyboardInterrupt.
signal.SIGKILL: kill signal; it cannot be caught, blocked, or ignored.
signal.SIGTERM: termination signal.
signal.SIGQUIT: quit with core dump.