I've been using subprocess.check_output() for some time to capture output from subprocesses, but ran into some performance problems under certain circumstances. I'm running this on a RHEL6 machine.
The calling Python environment is linux-compiled and 64-bit. The subprocess I'm executing is a shell script which eventually fires off a Windows python.exe process via Wine (why this foolishness is required is another story). As input to the shell script, I'm piping in a small bit of Python code that gets passed off to python.exe.
While the system is under moderate/heavy load (40 to 70% CPU utilization), I've noticed that using subprocess.check_output(cmd, shell=True) can result in a significant delay (up to ~45 seconds) after the subprocess has finished execution before the check_output call returns. Output from ps -efH during this time shows the called subprocess as sh <defunct>, until it finally returns with a normal zero exit status.
Conversely, using subprocess.call(cmd, shell=True) to run the same command under the same moderate/heavy load causes the subprocess to return immediately with no delay, with all output printed to STDOUT/STDERR (rather than returned from the function call).
Why is there such a significant delay only when check_output() is redirecting the STDOUT/STDERR output into its return value, and not when call() simply prints it back to the parent's STDOUT/STDERR?
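For concreteness, here is a minimal sketch of the two invocations being compared, with a trivial echo standing in for the real shell-script/Wine pipeline:

```python
import subprocess

# Trivial command standing in for the real shell-script/Wine pipeline.
cmd = "echo hello"

# call(): the child's stdout/stderr go straight to the parent's;
# only the exit status is returned.
rc = subprocess.call(cmd, shell=True)

# check_output(): the child's stdout is redirected into a pipe and
# returned from the function (as bytes on Python 3).
out = subprocess.check_output(cmd, shell=True)
```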
Reading the docs, both subprocess.call and subprocess.check_output are use cases of subprocess.Popen. One minor difference is that check_output will raise a CalledProcessError if the subprocess returns a non-zero exit status. The greater difference is emphasized in the bit about check_output (my emphasis):
The full function signature is largely the same as that of the Popen constructor, except that stdout is not permitted as it is *used internally*. All other supplied arguments are passed directly through to the Popen constructor.
So how is stdout "used internally"? Let's compare call and check_output:
def call(*popenargs, **kwargs):
    return Popen(*popenargs, **kwargs).wait()

def check_output(*popenargs, **kwargs):
    if 'stdout' in kwargs:
        raise ValueError('stdout argument not allowed, it will be overridden.')
    process = Popen(stdout=PIPE, *popenargs, **kwargs)
    output, unused_err = process.communicate()
    retcode = process.poll()
    if retcode:
        cmd = kwargs.get("args")
        if cmd is None:
            cmd = popenargs[0]
        raise CalledProcessError(retcode, cmd, output=output)
    return output
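That non-zero-exit difference is easy to see in isolation (a small sketch using a deliberately failing shell command):

```python
import subprocess

# call() just hands back the status, whatever it is.
rc = subprocess.call("exit 3", shell=True)          # rc == 3, no exception

# check_output() turns the same status into a CalledProcessError.
try:
    subprocess.check_output("exit 3", shell=True)
    err = None
except subprocess.CalledProcessError as e:
    err = e
```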
Now we have to look at Popen.communicate as well. Doing this, we notice that for one pipe, communicate does several things which take more time than simply returning Popen().wait(), as call does.
For one thing, communicate processes stdout=PIPE whether you set shell=True or not. Clearly, call does not. It just lets your shell spout whatever... making it a security risk, as the Python documentation describes.
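As an aside, the risk the docs warn about is shell interpretation of untrusted strings; a minimal illustration, using a hypothetical unsafe string:

```python
import subprocess

# A hypothetical untrusted string containing a shell metacharacter.
unsafe = "hello; echo INJECTED"

# With shell=True the whole line goes to /bin/sh, so the semicolon
# starts a second command:
out_shell = subprocess.check_output("echo " + unsafe, shell=True)

# With an argument list (no shell), the string is passed as plain data:
out_list = subprocess.check_output(["echo", unsafe])
```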
Secondly, in the case of check_output(cmd, shell=True) (just one pipe)... whatever your subprocess sends to stdout is processed by a thread in the _communicate method. And Popen must join the thread (wait on it) before additionally waiting on the subprocess itself to terminate!
Plus, more trivially, it processes stdout as a list which must then be joined into a string.
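A simplified sketch of that extra bookkeeping (not the actual CPython source, just the shape of it): drain the pipe in chunks into a list, wait on the child, then join the chunks into one string.

```python
import subprocess

proc = subprocess.Popen("echo hello", shell=True, stdout=subprocess.PIPE)

chunks = []
while True:
    data = proc.stdout.read(4096)   # drain the pipe in chunks
    if not data:                    # EOF once the child closes stdout
        break
    chunks.append(data)
proc.stdout.close()

retcode = proc.wait()               # only now wait on the child itself
output = b"".join(chunks)           # the list is joined into one string
```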
In short, even with minimal arguments, check_output spends a lot more time in Python processes than call does.