bash: force exec'd process to have unbuffered stdout

bstpierre · Jul 26, 2010 · Viewed 18.6k times

I've got a script like:

#!/bin/bash
exec /usr/bin/some_binary > /tmp/my.log 2>&1

The problem is that some_binary sends all of its logging to stdout, and because stdout is redirected to a file it becomes block-buffered, so I only see output in chunks of a few lines at a time. This is annoying when something gets stuck and I need to see what the last line says.

Is there any way to make stdout unbuffered before I do the exec, so that it affects some_binary and gives more useful logging?

(The wrapper script is only setting a few environment variables before the exec, so a solution in perl or python would also be feasible.)

Answer

zaga · Jul 30, 2010

GNU coreutils-8.5 also has the stdbuf command to modify I/O stream buffering:

http://www.pixelbeat.org/programming/stdio_buffering/

So, in your example case, simply invoke:

stdbuf -oL /usr/bin/some_binary > /tmp/my.log 2>&1

This will make text appear immediately, line by line (as soon as a line is completed with the end-of-line "\n" character). If you want fully unbuffered output, use -o0 instead.
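
If you want to keep the structure of the wrapper script from the question, stdbuf can be combined with exec (a minimal sketch, assuming stdbuf is available on the PATH):

#!/bin/bash
# set whatever environment variables are needed here, then replace the
# shell with the line-buffered binary so log lines land in the file as
# soon as they are written
exec stdbuf -oL /usr/bin/some_binary > /tmp/my.log 2>&1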

This approach may be preferable if you do not want to introduce a dependency on expect via the unbuffer command. The unbuffer route, on the other hand, is needed if you have to fool some_binary into thinking that its standard output is a real tty.
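
For comparison, the unbuffer route would look roughly like this (a sketch, assuming the expect package, which ships unbuffer, is installed):

#!/bin/bash
# unbuffer runs the binary under a pseudo-tty, so stdio treats stdout as
# a terminal and line-buffers it even though it is redirected to a file
exec unbuffer /usr/bin/some_binary > /tmp/my.log 2>&1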