I have a Jenkins pipeline (Jenkinsfile) which builds a new image and starts a container from that image. It works well the first time, but on subsequent runs I want the previous container to be stopped and removed first. My Jenkinsfile is as follows:
node {
    def commit_id
    stage('Preparation') {
        checkout scm
        sh "git rev-parse --short HEAD > .git/commit-id"
        commit_id = readFile('.git/commit-id').trim()
    }
    stage('docker build/push') {
        docker.withRegistry('https://index.docker.io/v1/', 'dockerhub') {
            def app = docker.build("my-docker-id/my-api:${commit_id}", '.').push()
        }
    }
    stage('docker stop container') {
        def apiContainer = docker.container('api-server')
        apiContainer.stop()
    }
    stage('docker run container') {
        def apiContainer = docker.image("my-docker-id/my-api:${commit_id}").run("--name api-server --link mysql_server:mysql --publish 3100:3100")
    }
}
The 'docker stop container' stage is failing, because I don't know the right API to look up the container and stop it. Thanks.
As in this Jenkinsfile, you can use sh steps instead. That way, you can use lines like:
sh 'docker ps -f name=zookeeper -q | xargs --no-run-if-empty docker container stop'
sh 'docker container ls -a -f name=zookeeper -q | xargs -r docker container rm'
That ensures a container (here named zookeeper), if it was running, is first stopped and then removed.
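Applied to the pipeline in the question, the failing stage could be replaced with something like the following sketch; it reuses the api-server container name from the question and assumes the docker CLI and GNU xargs are available on the agent:

stage('docker stop container') {
    // Stop the container if it is running, then remove it if it exists;
    // xargs -r (--no-run-if-empty) skips the command when no ID matches.
    sh 'docker ps -q -f name=api-server | xargs -r docker container stop'
    sh 'docker container ls -a -q -f name=api-server | xargs -r docker container rm'
}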
Michael A. points out in the comments that this is not a proper solution, and that it assumes Docker is installed on the agent (slave).
He refers to jenkinsci/plugins/docker/workflow/Docker.groovy, but a container method for the Docker class is not implemented yet.
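In other words, there is no API to look up an existing container by name. What the Docker Pipeline plugin does provide is the Container object returned by image.run(), which has a stop() method that stops and removes the container. Within a single build that is enough, as in this minimal sketch (the image and container names echo the question; the tag is illustrative). It does not help with a container left over from a previous build, which is why the sh approach above is needed:

node {
    def apiContainer
    stage('docker run container') {
        // run() returns a Container object exposing id and stop()
        def image = docker.image('my-docker-id/my-api:latest')
        apiContainer = image.run('--name api-server --publish 3100:3100')
    }
    stage('docker stop container') {
        // stop() stops and removes the container started above
        apiContainer.stop()
    }
}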
Update August 2018:
Pieter Vogelaar points out in the comments the post "Jenkinsfile Docker pipeline multi stage" that he wrote:
By using a global pipelineContext object, it's possible to use the returned container object in a further stage.
It is a pipelineContext global variable of type LinkedHashMap. The Jenkinsfile programming language is Groovy; in Groovy, this comes close to the equivalent of a JavaScript object. This variable makes it possible to share data or objects between stages.
So this is a Declarative Pipeline, starting with:
// Initialize a LinkedHashMap / object to share between stages
def pipelineContext = [:]
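Expanding that snippet into a minimal sketch (the stage, image, and key names below are illustrative, not taken from Pieter Vogelaar's post), the shared map can carry the Container object from one stage to another:

// Initialize a LinkedHashMap / object to share between stages
def pipelineContext = [:]

pipeline {
    agent any
    stages {
        stage('docker run container') {
            steps {
                script {
                    // Store the Container object returned by run() in the shared map
                    def image = docker.image('my-docker-id/my-api:latest')
                    pipelineContext.apiContainer = image.run('--name api-server --publish 3100:3100')
                }
            }
        }
        stage('docker stop container') {
            steps {
                script {
                    // The same object is still reachable in a later stage
                    pipelineContext.apiContainer.stop()
                }
            }
        }
    }
}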