I'm new to the Docker/Kubernetes world... I was asked whether I could deploy a container using args to modify its behavior (typically, whether the app runs in the "master" or "slave" version), which I did. Maybe not the optimal solution, but it works:
Here is a simple test to verify it. I made a custom image with a script inside, role.sh:
#!/bin/sh
ROLE=$1
echo "You are running "$ROLE" version of your app"
Dockerfile:
FROM centos:7.4.1708
COPY ./role.sh /usr/local/bin
RUN chmod a+x /usr/local/bin/role.sh
ENV ROLE=""
ARG ROLE
ENTRYPOINT ["role.sh"]
CMD ["${ROLE}"]
If I start this container with docker using the following command:
docker run -dit --name test docker.local:5000/test master
I end up with the following log, which is exactly what I am looking for:
You are running master version of your app
Now I want the same behavior on Kubernetes, using a YAML file. I tried several ways, but none worked.
YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: master-pod
  labels:
    app: test-master
spec:
  containers:
  - name: test-master-container
    image: docker.local:5000/test
    command: ["role.sh"]
    args: ["master"]
I saw so many different ways to do this and I must say that I still don't get the difference between ARG and ENV.
I also tried with
- name: test-master-container
  image: docker.local:5000/test
  env:
  - name: ROLE
    value: master
and
- name: test-master-container
  image: docker.local:5000/test
  args:
  - master
but none of these worked; my pods always end up in the CrashLoopBackOff state. Thanks in advance for your help!
In terms of specific fields:

- command: matches Docker's "entrypoint" concept, and whatever is specified here is run as the main process of the container. You don't need to specify a command: in a pod spec if your Dockerfile has a correct ENTRYPOINT already.
- args: matches Docker's "command" concept, and whatever is specified here is passed as command-line arguments to the entrypoint. (There is a sketch of this mapping right after this list.)
- ARG specifies a build-time configuration setting for an image. The expansion rules and interaction with environment variables are a little odd. In my experience this has a couple of useful use cases ("which JVM version do I actually want to build against?"), but every container built from an image will have the same inherited ARG value; it's not a good mechanism for run-time configuration.
- For most other Dockerfile settings (ENV variables, EXPOSEd ports, a default CMD, especially VOLUME) there's no particular need to "declare" them in the Dockerfile to be able to set them at run time.
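As a quick sketch of how command: and args: line up with the image from the question (note that this container prints one line and exits, which by itself is enough to produce CrashLoopBackOff; see the end of this answer):

- name: test-master-container
  image: docker.local:5000/test
  command: ["role.sh"]   # replaces the Dockerfile ENTRYPOINT; optional here, since that ENTRYPOINT is already correct
  args: ["master"]       # replaces the Dockerfile CMD; arrives as $1 in role.sh

Note that ARG never appears at this level: it is only consulted while the image is built (docker build --build-arg ROLE=master), while ENV values actually exist in the running container.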
There are a couple of more-or-less equivalent ways to do what you're describing. (I will use docker run syntax for the sake of compactness.) Probably the most flexible way is to have ROLE set as an environment variable; when you run the entrypoint script you can assume $ROLE has a value, but it's worth checking:
#!/bin/sh
# --> I expect $ROLE to be set
# --> Pass some command to run as additional arguments
if [ -z "$ROLE" ]; then
echo "Please set a ROLE environment variable" >&2
exit 1
fi
echo "You are running $ROLE version of your app"
exec "$@"
docker run --rm -e ROLE=some_role docker.local:5000/test /bin/true
In this case you can specify a default ROLE in the Dockerfile if you want to:
FROM centos:7.4.1708
COPY ./role.sh /usr/local/bin
RUN chmod a+x /usr/local/bin/role.sh
ENV ROLE="default_role"
ENTRYPOINT ["role.sh"]
A second path is to take the role as a command-line parameter:
#!/bin/sh
# --> pass a role name, then a command, as parameters
ROLE="$1"
if [ -z "$ROLE" ]; then
echo "Please pass a role as a command-line option" >&2
exit 1
fi
echo "You are running $ROLE version of your app"
shift # drops first parameter
export ROLE # makes it an environment variable
exec "$@"
docker run --rm docker.local:5000/test some_role /bin/true
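The matching Kubernetes fragment for this variant passes the role and then the command, all through args: (again a sketch, with /bin/true as a placeholder):

- name: test-master-container
  image: docker.local:5000/test
  args: ["master", "/bin/true"]   # $1 is the role; role.sh execs the rest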
I would probably prefer the environment-variable path, both because it is a little easier to supply multiple unrelated options and because it does not mix "settings" and "the command" in the "command" part of the Docker invocation.
As to why your pod is "crashing": Kubernetes generally expects pods to be long-running, so if you write a container that just prints something and exits, Kubernetes will restart it, and when it doesn't stay up, it will always wind up in CrashLoopBackOff state. For what you're trying to do right now, don't worry about it and look at the kubectl logs of the pod. Consider setting the pod spec's restart policy if this bothers you.
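For example, with the pod name from the question:

kubectl logs master-pod

And if the restarts bother you, in the pod spec (restartPolicy defaults to Always):

spec:
  restartPolicy: Never   # don't restart the pod when its main process exits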