I need to combine php-fpm and nginx in one Dockerfile for production deployment.
So is it better to:
(1) Start the Dockerfile from the php:7.1.8-fpm image and install nginx on top of it?
(2) Or do you recommend starting from the nginx image and installing php-fpm with apt-get?
PS: I do not have a docker-compose option for production deployment. In my development environment I already use docker-compose and can easily build a multi-container app from the two images, but our organization's DevOps team does not support docker-compose-based deployments in production.
Installing Nginx is much simpler than installing PHP, so it is easier to install Nginx into a ready-to-use official PHP image than the other way around. Here is an example Dockerfile that shows how to do that, including the installation of a few PHP extensions:
FROM php:7.2-fpm
RUN apt-get update -y \
    && apt-get install -y nginx
# PHP_CPPFLAGS are used by the docker-php-ext-* scripts
ENV PHP_CPPFLAGS="$PHP_CPPFLAGS -std=c++11"
RUN docker-php-ext-install pdo_mysql \
    && docker-php-ext-install opcache \
    && apt-get install libicu-dev -y \
    && docker-php-ext-configure intl \
    && docker-php-ext-install intl \
    && apt-get remove libicu-dev icu-devtools -y
RUN { \
    echo 'opcache.memory_consumption=128'; \
    echo 'opcache.interned_strings_buffer=8'; \
    echo 'opcache.max_accelerated_files=4000'; \
    echo 'opcache.revalidate_freq=2'; \
    echo 'opcache.fast_shutdown=1'; \
    echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/php-opcache-cfg.ini
COPY nginx-site.conf /etc/nginx/sites-enabled/default
COPY entrypoint.sh /etc/entrypoint.sh
COPY --chown=www-data:www-data . /var/www/mysite
WORKDIR /var/www/mysite
EXPOSE 80 443
ENTRYPOINT ["/etc/entrypoint.sh"]
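For reference, the build context for this Dockerfile is assumed to look roughly like this (the layout is only an illustration that matches the COPY lines above and the Symfony web/ document root used below; adjust it to your project):
.
├── Dockerfile
├── nginx-site.conf
├── entrypoint.sh
└── web/
    └── app.php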
The nginx-site.conf file contains your Nginx HTTP host configuration. The example below is for a Symfony app:
server {
    root /var/www/mysite/web;
    include /etc/nginx/default.d/*.conf;
    index app.php index.php index.html index.htm;
    client_max_body_size 30m;
    location / {
        try_files $uri $uri/ /app.php$is_args$args;
    }
    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        # Mitigate https://httpoxy.org/ vulnerabilities
        fastcgi_param HTTP_PROXY "";
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index app.php;
        include fastcgi.conf;
    }
}
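Note that fastcgi_pass 127.0.0.1:9000; works here because PHP-FPM in the official php:*-fpm images listens on TCP port 9000 by default and both processes run in the same container, so no separate PHP upstream host is needed (unlike a two-container docker-compose setup).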
The entrypoint.sh script starts both Nginx and php-fpm when the container starts (otherwise only php-fpm would be started, as that is the default command of the official PHP image):
#!/usr/bin/env bash
service nginx start
php-fpm
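With the Dockerfile, nginx-site.conf and entrypoint.sh in place, you can build and run everything as a single container, for example (the mysite image name and the port mapping below are just an illustration):
docker build -t mysite .
docker run -d -p 80:80 mysite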
Of course, running two processes in one container is not the best approach from a best-practices perspective, but I hope this answers your question.
Update:
If you get a permission denied error on the entrypoint.sh file, check that the file has the executable permission set if you're building on Linux, or add RUN chmod +x /etc/entrypoint.sh to the Dockerfile if you're building on Windows (files copied from Windows end up in the container without the executable permission).
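For example, the chmod can go right after the COPY of the entrypoint script in the Dockerfile above:
COPY entrypoint.sh /etc/entrypoint.sh
RUN chmod +x /etc/entrypoint.sh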
If you're running under Google Cloud Run, keep in mind that Nginx starts before PHP-FPM and initializes much faster. As a result, when Cloud Run sends the first request, Nginx is already listening but PHP-FPM is not ready yet, so the request fails. To fix that, change your entrypoint to start PHP-FPM before Nginx:
#!/usr/bin/env sh
set -e
php-fpm -D
nginx -g 'daemon off;'
This script was tested under Alpine Linux only, but I expect it to work on other images as well. It starts php-fpm in the background first (the -D flag daemonizes it), and then runs Nginx in the foreground without exiting. This way Nginx only starts listening on its ports after PHP-FPM is initialized.