Supervisord does not show stdout from processes
I'm trying to capture the logs of my app with supervisord in a Docker container.
Here's my supervisord.conf:
[supervisord]
logfile=/dev/null
nodaemon=true

[program:autofs]
command=automount -f
redirect_stderr=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0

[program:split-pdf]
command=bin/split-pdf-server
directory=/root/split-pdf
redirect_stderr=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
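For context, this setup runs supervisord as the container's main process. A minimal Dockerfile sketch, assuming the config above is copied to /etc/supervisor/supervisord.conf (the base image, package names, and paths are illustrative):

```dockerfile
FROM ubuntu:15.04

# Illustrative only: install supervisor plus whatever the app needs.
RUN apt-get update && apt-get install -y supervisor autofs

COPY supervisord.conf /etc/supervisor/supervisord.conf

# Run supervisord in the foreground (nodaemon=true) as PID 1 so the
# container stays up and supervisord's stdout is what `docker logs` shows.
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
```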
After starting the container everything works, and I can see the results of my app running (it creates PDF files on a network share), but the log shows no output from my app:
2015-07-02 00:39:26,119 CRIT Supervisor running as root (no user in config file)
2015-07-02 00:39:26,124 INFO supervisord started with pid 5
2015-07-02 00:39:27,127 INFO spawned: 'split-pdf' with pid 8
2015-07-02 00:39:27,130 INFO spawned: 'autofs' with pid 9
2015-07-02 00:39:28,132 INFO success: split-pdf entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2015-07-02 00:39:28,132 INFO success: autofs entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
That's the only output I see when attaching to the Docker container.
I'm on Ubuntu 15.04 with Docker 1.7.0.
This is not a duplicate of this question, because I'm running more than one process in the container.
It turns out everything works, just with some delay. When I built a container for another app that produces many more log messages, the messages did appear in the log file, but delayed.
The first app I was testing with wrote only two log lines per task, which suggested there was some kind of buffer that had to fill before anything was flushed to the log file.
This is caused by pipe buffering: the process's stdout is a pipe to supervisord, so libc block-buffers it instead of line-buffering as it would for a terminal.
I was able to work around the problem by running Python in unbuffered mode:
$ docker run -e PYTHONUNBUFFERED=1 imagename
See supervisor-stdout issue #10 for a discussion on this.
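If setting PYTHONUNBUFFERED on every container is inconvenient, the same effect can be achieved inside the program by flushing stdout explicitly after each log line. A minimal sketch (the helper name and message are illustrative, not from the original app):

```python
import sys

def log(message):
    # Write the line and flush immediately so it reaches supervisord
    # (and therefore `docker logs`) right away, instead of sitting in
    # the block buffer that libc uses when stdout is a pipe.
    sys.stdout.write(message + "\n")
    sys.stdout.flush()

log("split-pdf: processed job 42")
```

On Python 3.3+, `print(message, flush=True)` is an equivalent one-liner.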