I would like to use images from DockerHub. I trust the applications that the containers are for, and I trust the maintainers to update the DockerHub image if their application gets an update. However, I do not trust them to run docker build --pull every time the base image they use gets an update (for example when they use FROM debian:bullseye-slim in their Dockerfile).

What I want: to continuously check that everything (including the "platform", i.e. the Debian base image in this example) is up to date, without building everything myself (if possible).

My main problem: there seems to be no general way to determine the base image of a pulled Docker image (see https://stackoverflow.com/questions/35310212/docker-missing-layer-ids-in-output/35312577#35312577 and other Stack Exchange questions about the same thing). Therefore I cannot automatically and generically compare the layers of the image behind my running container with the upstream base image (or even compare the date of the last update of the application image with that of the base image).
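Here is roughly the kind of check I have in mind (a sketch with placeholder image names, assuming the base image's layer diff IDs would show up as a prefix of the application image's RootFS layers; as the linked answer explains, this heuristic is not reliable in general, e.g. when the maintainer squashes layers):

```bash
#!/usr/bin/env bash
# Sketch only: image names are placeholders, and the heuristic fails if the
# maintainer squashes layers or builds from a different base image.
set -euo pipefail

base=debian:bullseye-slim
app=someapp/someapp:1.2.3   # hypothetical application image

docker pull "$base" >/dev/null
docker pull "$app" >/dev/null

# .RootFS.Layers lists an image's uncompressed layer diff IDs.
base_layers=$(docker inspect --format '{{range .RootFS.Layers}}{{println .}}{{end}}' "$base")
app_layers=$(docker inspect --format '{{range .RootFS.Layers}}{{println .}}{{end}}' "$app")

# If the application image was built on the current base image, the base's
# diff IDs should be a prefix of the application image's layer list.
n=$(echo "$base_layers" | wc -l)
if [ "$(echo "$app_layers" | head -n "$n")" = "$base_layers" ]; then
  echo "$app appears to be built on the current $base"
else
  echo "$app does not share its lowest layers with the current $base"
fi
```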

Is there a way to do what I described above? Or do I really have to

a) fully trust the maintainer of the DockerHub image to keep their build pipeline in order, or

b) build everything myself (in which case: why would DockerHub even exist instead of a Dockerfile hub)?
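For completeness, option b) would boil down to a periodic rebuild along these lines (source directory, image name and registry are placeholders), which is exactly the effort I am trying to avoid:

```bash
# Periodic rebuild against the latest base image (all names are placeholders).
# --pull re-fetches the FROM image; --no-cache forces every layer to be rebuilt.
git -C someapp-src pull
docker build --pull --no-cache -t registry.example.com/someapp:latest someapp-src
docker push registry.example.com/someapp:latest
```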

Edit: In case it is relevant: this is sort of an extension of this older question: How to handle security updates within Docker containers?

1 Answer

It sounds like you can achieve this by using an external service like Snyk to scan your images, and by pinning full image versions (a specific tag or digest) instead of "latest".
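For example, pinning to a dated tag or an immutable content digest (the tag below is only an example) makes it unambiguous which image you are actually running:

```bash
# Pull a specific dated tag instead of a floating tag like "latest"
# (the tag shown is only an example):
docker pull debian:bullseye-20240101-slim

# Show the immutable content digests of images you have already pulled,
# so they can be referenced as debian@sha256:<digest>:
docker images --digests debian
```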

Snyk will let you know if the image has any known security vulnerabilities. Additionally, Snyk can scan both your Docker image and your code to check whether you have a vulnerable library (from an npm install, for example).
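As a rough sketch (assuming the Snyk CLI is installed and authenticated; the image name is a placeholder), scanning both the image and the application's own dependencies could look like this:

```bash
# Scan a container image for known vulnerabilities in its OS packages
# and bundled libraries (image name is a placeholder):
snyk container test someapp/someapp:1.2.3

# Scan the application's dependency manifest (e.g. package.json)
# in the current directory:
snyk test

# Keep monitoring the image instead of running a one-off test:
snyk container monitor someapp/someapp:1.2.3
```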

Furthermore, if you're using a cloud provider's container registry such as GCP's Container Registry or AWS ECR, you can also leverage the provider's container security scanning service.
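For instance, with AWS ECR (repository name and tag are placeholders), basic scanning can be enabled per repository and the findings queried afterwards; this is only a sketch, not a complete setup:

```bash
# Enable scan-on-push for an ECR repository (names are placeholders):
aws ecr put-image-scanning-configuration \
  --repository-name someapp \
  --image-scanning-configuration scanOnPush=true

# Trigger a scan of an already-pushed image and read the findings:
aws ecr start-image-scan --repository-name someapp --image-id imageTag=1.2.3
aws ecr describe-image-scan-findings --repository-name someapp --image-id imageTag=1.2.3
```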