I am launching a Django application on AWS Elastic Beanstalk. I'd like to run a background task or worker in order to run Celery.
I can not find whether this is possible or not. If yes, how could it be achieved?
Here is what I am doing right now, but it produces an event type error every time.
container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
  50_sqs_email:
    command: "./manage.py celery worker --loglevel=info"
    leader_only: true
As @chris-wheadon suggested in his comment, you should try to run celery as a daemon in the background. AWS Elastic Beanstalk already uses supervisord to run some daemon processes, so you can leverage that to run celeryd and avoid creating a custom AMI for this. It works nicely for me.
What I do is programmatically add a celeryd config file to the instance after the app is deployed to it by EB. The tricky part is that the file needs to set the required environment variables for the daemon (such as AWS access keys, if you use S3 or other services in your app).
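To make the "supervisord notation" point concrete, here is a minimal Python sketch of that transformation, equivalent to the tr/sed pipeline in the script below. The function name and sample values are mine, not part of the actual deployment script:

```python
def to_supervisord_env(env_file_text: str) -> str:
    """Convert lines like `export KEY="value"` into supervisord's
    comma-separated environment notation: KEY="value",KEY2="value2"."""
    entries = []
    for line in env_file_text.splitlines():
        line = line.strip()
        if not line.startswith("export "):
            continue
        entry = line[len("export "):]
        # supervisord refers to the inherited PATH as %(ENV_PATH)s
        entry = entry.replace("$PATH", "%(ENV_PATH)s")
        entries.append(entry)
    return ",".join(entries)

sample = ('export DJANGO_SETTINGS_MODULE="myapp.settings"\n'
          'export PATH="/opt/python/run/venv/bin:$PATH"\n')
print(to_supervisord_env(sample))
# DJANGO_SETTINGS_MODULE="myapp.settings",PATH="/opt/python/run/venv/bin:%(ENV_PATH)s"
```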
Below is a copy of the script that I use. Add it to the .ebextensions folder that configures your EB environment.
The setup script creates a file in the /opt/elasticbeanstalk/hooks/appdeploy/post/ folder (documentation) that exists on all EB instances. Any shell script in there is executed post-deployment. The shell script that is placed there works as follows:
1. In the celeryenv variable, the virtualenv environment is stored in a format that follows the supervisord notation: a comma-separated list of env variables.
2. Then the script creates a variable celeryconf that contains the configuration file as a string, which includes the previously parsed env variables.
3. This variable is then piped into a file called celery.conf, a supervisord configuration file for the celery daemon.
4. Finally, the path to the newly created file is added to the main supervisord.conf file, if it is not already there.
Here is a copy of the script:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery worker -A myappname --loglevel=INFO
      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
        echo "[include]" | tee -a /opt/python/etc/supervisord.conf
        echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
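For reference, after the environment substitution the generated /opt/python/etc/celery.conf ends up looking roughly like this (the environment values shown are illustrative, from a hypothetical app, and most of the supervisord options are elided):

```
[program:celeryd]
command=/opt/python/run/venv/bin/celery worker -A myappname --loglevel=INFO
directory=/opt/python/current/app
user=nobody
environment=DJANGO_SETTINGS_MODULE="myappname.settings",PATH="/opt/python/run/venv/bin:%(ENV_PATH)s"
```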