Heroku describes logs in its Twelve-Factor App manifest as simple event streams:
Logs are the stream of aggregated, time-ordered events collected from the output streams of all running processes and backing services. Logs in their raw form are typically a text format with one event per line (though backtraces from exceptions may span multiple lines). Logs have no fixed beginning or end, but flow continuously as long as the app is operating.
Additionally, apps should simply write logs to `stdout`, leaving the task to the "environment":
A twelve-factor app never concerns itself with routing or storage of its output stream. It should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to stdout. During local development, the developer will view this stream in the foreground of their terminal to observe the app’s behavior.
In staging or production deploys, each process’ stream will be captured by the execution environment, collated together with all other streams from the app, and routed to one or more final destinations for viewing and long-term archival. These archival destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment. Open-source log routers (such as Logplex and Fluent) are available for this purpose.
So what's the best way to achieve this in a docker environment in terms of reliability, efficiency and ease of use? I think the following questions come to mind:
- Is it safe to rely on Docker's own log facility (`docker logs`)?
- Is it safe to run docker undetached and consider its output as the logging stream?
- Can `stdout` be redirected to a file directly (disk space)?
- If using a file, should it be inside the docker image or a bound volume (`docker run --volume=[]`)?
- Is logrotation required?
- Is it safe to redirect `stdout` directly into a logshipper (and which logshipper)?
- Is a named pipe (aka FIFO) an option?
Docker 1.6 introduced the notion of logging drivers to offer more control over log output. The `--log-driver` flag configures where `stdout` and `stderr` from the process running in a container should be directed. See also Configuring Logging drivers.
Several drivers are available. Note that all of these except `json-file` disable the use of `docker logs` to gather container logs.

| Driver | Description |
| --- | --- |
| `none` | No logs are collected for the container. |
| `json-file` | The default. Logs are written to `/var/lib/docker/containers/<containerid>/<containerid>-json.log` on the host and remain available via `docker logs`. |
| `syslog` | Use `--log-opt` to direct log messages to a specified syslog via TCP, UDP or Unix domain socket. Also disables `docker logs`. |
| `journald` | Sends log messages to the systemd journal. |
| `gelf` \* | Sends log messages to a GELF endpoint such as Graylog or Logstash. |
| `fluentd` \* | Sends log messages to a fluentd collector. |
| `awslogs` \*\* | Sends log messages to Amazon CloudWatch Logs. |

\* New in Docker 1.8
\*\* New in Docker 1.9
For example:
docker run --log-driver=syslog --log-opt syslog-address=tcp://10.0.0.10:1514 ...
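Along the same lines, if a fluentd collector is already running, the `fluentd` driver can be pointed at it. A rough sketch, assuming a fluentd instance on the local host listening on its default forward port:

```
# Assumes a fluentd instance listening on its default forward port (24224)
# on the local host.
docker run --log-driver=fluentd --log-opt fluentd-address=localhost:24224 ...
```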
This is the Docker-recommended solution for software that writes its log messages to `stdout` and `stderr`. Some software, however, does not write log messages to `stdout`/`stderr`; it writes to log files or to syslog, for example. In those cases, some of the details from the original answer below still apply. To recap:
- If the app writes to a local log file, mount a volume from the host (or use a data-only container) into the container and write log messages to that location (see the sketch below).
- If the app writes to syslog, there are several options; one is to mount the host's syslog socket (`/dev/log`) into the container using `-v /dev/log:/dev/log`, so messages are handled by the host's syslog daemon.

Don't forget that any logs within a container should be rotated just as they would be on a host OS.
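Here's a minimal sketch of both options; the image name `myapp` and the host path `/var/log/myapp` are placeholders:

```
# The app writes to a file: bind-mount a host directory over the app's log
# directory so the files live on the host and can be rotated there.
docker run -d -v /var/log/myapp:/var/log/myapp myapp

# The app writes to syslog: mount the host's syslog socket into the container
# so messages are handled by the host's syslog daemon.
docker run -d -v /dev/log:/dev/log myapp
```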
Is it safe to rely on Docker's own log facility (docker logs)?
`docker logs` prints the entire stream each time, not just new logs, so it's not appropriate. `docker logs --follow` will give `tail -f`-like functionality, but then you have a docker CLI command running all the time. Thus while it is safe to run `docker logs`, it's not optimal.
Is it safe to run docker undetached and consider its output as the logging stream?
You can start containers with systemd and not daemonize them, thus capturing all of their stdout in the systemd journal, which can then be managed by the host however you'd like.
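As a rough sketch of that approach (the unit and image name `myapp` are placeholders), a unit file could look like the following; the output is then available via `journalctl -u myapp`:

```
# /etc/systemd/system/myapp.service -- "myapp" is a hypothetical name/image.
[Unit]
Description=myapp container
Requires=docker.service
After=docker.service

[Service]
# No -d flag: docker run stays in the foreground, so its stdout/stderr are
# captured in the journal.
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp myapp
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
```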
Can stdout be redirected to a file directly (disk space)?
You could do this with `docker run ... > logfile` of course, but it feels brittle and harder to automate and manage.
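For completeness, that would look something like the sketch below (image name and log path are placeholders); note that stderr has to be redirected explicitly or it only shows up on the terminal:

```
# Redirect both stdout and stderr of the container process to a file on the
# host; "myapp" and the log path are hypothetical.
docker run myapp > /var/log/myapp.log 2>&1
```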
If using a file, should it be inside the docker image or a bound volume (docker run --volume=[])?
If you write inside the container then you need to run logrotate or something in the container to manage the log files. Better to mount a volume from the host and control it using the host's log rotation daemon.
Is logrotation required?
Sure, if the app writes logs you need to rotate them just as you would in a native OS environment. But it's harder if you write them inside the container, because the log file's location isn't predictable from the host: with devicemapper as the storage driver, for example, the file would live under `/var/lib/docker/devicemapper/mnt/<containerid>/rootfs/...`, and some ugly wrapper would be needed to have logrotate find the logs under that path.
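If the logs live on a host-mounted volume instead (as recommended above), a plain logrotate stanza on the host is enough. A sketch, with a placeholder path:

```
# /etc/logrotate.d/myapp -- hypothetical stanza for a log directory that is
# bind-mounted into the container; copytruncate avoids having to signal the
# process inside the container.
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```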
Is it safe to redirect stdout directly into a logshipper (and which logshipper)?
Better to use syslog and let the log collector deal with syslog.
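If you really want to take the container's stdout down the syslog road yourself, one rough option is to pipe it into the host's syslog with `logger` and let the collector pick it up from there (the image name and tag are placeholders):

```
# Pipe the container's stdout/stderr into the host's syslog under a tag;
# "myapp" is a hypothetical image name and tag.
docker run myapp 2>&1 | logger -t myapp
```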
Is a named pipe (aka FIFO) an option?
A named pipe isn't ideal because if the reading end of the pipe dies, the writer (the container) will get a broken pipe. Even if that event is handled by the app, it will be blocked until there is a reader again. Plus it circumvents `docker logs`.
See also this post on fluentd with docker.
See Jeff Lindsay's tool logspout, which collects logs from running containers and routes them however you want.
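A typical invocation looks roughly like the following; logspout needs the Docker socket mounted so it can attach to each container's log stream, and the image name and destination URI here are placeholders to adapt:

```
# Route every container's stdout/stderr to a remote syslog endpoint.
docker run -d --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog://logs.example.com:514
```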
Finally, note that stdout from the container is logged to a file on the host at `/var/lib/docker/containers/<containerid>/<containerid>-json.log`.
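Each line in that file is a JSON object with `log`, `stream` and `time` fields, so a quick way to watch the raw output on the host (usually as root) is something like:

```
# Follow a container's output straight from the json-file on the host;
# <containerid> must be filled in, and jq must be installed.
tail -f /var/lib/docker/containers/<containerid>/<containerid>-json.log | jq -r .log
```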