
I'm running a build in Jenkins which is running inside Kubernetes.

When the job starts, a container starts in the cluster and the build runs inside it.

A step in the build requires building an RPM using rpmbuild, and for that purpose I use this docker image.

So what happens is that when the job starts, it fires up a Jenkins slave jnlp dind container with a volume mount of /var/run/docker.sock:/var/run/docker.sock; it then pulls the code from git and runs a Makefile to build the code.

When the build process finishes, I want to fire up another container (the one containing rpmbuild), so from within the jnlp dind container I run:

docker run -e WORKSPACE=/home/jenkins/agent/workspace/product-Provisioner-3.17 \
  -e PROV_NAME=product-provisioner \
  -v ${WORKSPACE}:/srv --privileged -t \
  registry/rpmbuild:latest \
  /usr/bin/rpmbuild -v -bb --clean --define '_workspace /srv' /srv/provisioner/SPECS/${PROV_NAME}.spec

But for some reason the volume is not mounted, and when I run kubectl exec -it jnlp-container sh, I see that the /srv directory doesn't contain the files from ${WORKSPACE}.

Running the rpmbuild container with just --help works, so the docker-inside-docker-inside-docker setup itself works; only the volume isn't mounted properly.

As far as I know, running a container from within another container should be doable.

What could be the reason that /srv is not mounted properly?

Itai Ganot

1 Answer


So this happens to me often, and I'm sure there is a better solution, such as using docker volumes, but this is what I do.

First, the reason it doesn't work as expected: we think of the relationship between the first container and the second container as being "nested", like this...

k8 -> jenkins -> rpmbuild

Right?

That's what I thought was going on... but it's actually like this:

k8 -> jenkins
k8 -> rpmbuild

So they are not "nested"; they actually run in parallel, side by side.

So when you try to mount a path that only exists inside the jenkins container, like /var/jenkins_home/some_workspace/rpmbuild, that path gets sent back to the docker daemon running on the K8s node. And the daemon is like... I don't have that path on me.
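A minimal shell sketch of the consequence (all paths here are hypothetical examples, not taken from your setup): since the daemon resolves `-v` source paths on the node, what you actually need is to translate the in-container path into the node-side path before handing it to docker.

```shell
# Hypothetical example: the Jenkins container sees its home at
# /var/jenkins_home, but on the node that directory really lives at
# /tmp/jenkins_home (the bind-mount source).
container_path=/var/jenkins_home/workspace/rpmbuild
host_root=/tmp/jenkins_home

# The docker daemon resolves -v sources on the node, so we must hand it
# the node-side path: swap the in-container prefix for the host prefix.
host_path="${host_root}${container_path#/var/jenkins_home}"
echo "$host_path"
```

Running this prints `/tmp/jenkins_home/workspace/rpmbuild`, which is the path the node's daemon can actually find.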

So what I do on my non-k8 setup is mount the jenkins home to someplace like /tmp/jenkins_home/ (don't actually use /tmp, this is just an example).

Jenkins = docker run -v /tmp/jenkins_home:/var/jenkins_home

Now when I want to start the rpm container:

rpmbuild = docker run -v /tmp/jenkins_home/some_workspace/rpmbuild:/srv

Now your rpmbuild container will have the files like you expected it to. I'm not sure exactly how you will implement this on K8s, but what I did was pass the host location where the jenkins home directory is mounted into the jenkins container as an environment variable, like -e DIND_JENKINS_HOME=/tmp/jenkins_home. Then the pipeline logic builds the path to pass to the rpmbuild container:

    List parts = env.WORKSPACE.split('/workspace/')
    rpmWorkspace = "${env.DIND_JENKINS_HOME}/workspace/${parts[1]}"

Then just start rpmbuild with docker run -v ${rpmWorkspace}:/srv
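Put together, a minimal shell sketch of what that invocation should expand to (the values are hypothetical, mirroring the workspace name from the question):

```shell
# Hypothetical values mirroring the question; DIND_JENKINS_HOME is the
# node-side path the Jenkins home was bind-mounted from.
DIND_JENKINS_HOME=/tmp/jenkins_home
rpmWorkspace="${DIND_JENKINS_HOME}/workspace/product-Provisioner-3.17"

# Build the docker invocation; the -v source is a node path, so the
# daemon behind /var/run/docker.sock can actually find it.
cmd="docker run -v ${rpmWorkspace}:/srv registry/rpmbuild:latest"
echo "$cmd"
```

The key point is only that the `-v` source resolves on the node, not inside the jnlp container.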

This took me a long time to figure out, and it was incredibly frustrating; googling did not really help. So I hope this clears things up for you, and bravo for coming forward to ask this question.

Levi