10. Docker Security Best Practices | The Complete Docker Handbook.
Welcome to Article 10 of The Complete Docker Handbook.
In Article 9, we mastered advanced Docker Compose techniques like scaling, health checks, and profiles. Your applications are now robust and manageable. However, there is one critical pillar remaining: Security.
By default, Docker is convenient, but not always secure.
- Many official images run as the root user.
- Images may contain known vulnerabilities (CVEs).
- Secrets are often accidentally hardcoded in Dockerfiles.
- A runaway container can consume all your server's resources.
In this article, we will shift our mindset from "making it work" to "making it safe." You will learn how to scan images, harden containers, manage secrets properly, and limit resources to prevent Denial of Service (DoS) attacks.
1. Image Security: Scanning for Vulnerabilities
Docker images are built on layers of existing software. Sometimes, those layers contain known security flaws (Common Vulnerabilities and Exposures, or CVEs).
Why Scan?
Imagine building your app on python:3.9. If that base image has a critical security flaw in its underlying Linux OS, your app is vulnerable too, even if your code is perfect.
Tools for Scanning
- Docker Scout: Docker's official tool (integrated into Docker Desktop).
- Trivy: A popular, open-source vulnerability scanner by Aqua Security.
- Grype: Another excellent open-source scanner.
Example: Scanning with Trivy
- Install Trivy: (Follow instructions at aquasecurity.github.io/trivy).
- Run Scan:
```bash
trivy image my-python-app
```

- Analyze Output:

```text
my-python-app (alpine 3.14.2)
=============================
Total: 5 (UNKNOWN: 0, LOW: 2, MEDIUM: 1, HIGH: 2, CRITICAL: 0)
```

- Action: If you see HIGH or CRITICAL vulnerabilities, update your base image (e.g., change FROM python:3.9 to FROM python:3.9.18).
2. Running as Non-Root User
By default, most containers run as the root user. If a hacker exploits a vulnerability in your app, they gain root access to the container. While container isolation exists, breaking out of a root container is easier than breaking out of a non-root container.
Best Practice: Create a User in Dockerfile
❌ Insecure:
```dockerfile
FROM python:3.9
COPY . .
CMD ["python", "app.py"]
# Runs as root by default
```
✅ Secure:
```dockerfile
FROM python:3.9

# 1. Create a group and user
RUN groupadd -r appgroup && useradd -r -g appgroup appuser

WORKDIR /app

COPY . .

# 2. Change ownership of files
RUN chown -R appuser:appgroup /app

# 3. Switch to the non-root user
USER appuser

CMD ["python", "app.py"]
```
Why this matters: If the container is compromised, the attacker only has the permissions of appuser, not the root of the host system.
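The USER directive enforces this at build time; as a defense-in-depth measure, the application itself can also verify it is not running as root. A minimal sketch in Python (the helper name `is_non_root` is illustrative, not part of any standard API):

```python
import os

def is_non_root(uid=None):
    """Return True when the process is not running as root (UID 0).

    Pass an explicit uid for testing; by default the current
    process's real UID is checked via os.getuid().
    """
    if uid is None:
        uid = os.getuid()
    return uid != 0
```

Calling `is_non_root()` at startup and exiting with a clear error message lets the app fail fast instead of silently running with full privileges when someone forgets the USER directive.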
3. Secrets Management
Never hardcode passwords, API keys, or tokens in your Dockerfile or docker-compose.yml if that file is committed to Git.
❌ Bad Practice
```dockerfile
# Dockerfile
ENV DB_PASSWORD=supersecret123
```
```yaml
# docker-compose.yml
services:
  db:
    environment:
      - POSTGRES_PASSWORD=supersecret123
```
Risk: Anyone with access to the image or repo can see the password.
✅ Good Practice: Environment Variables
Use a .env file for local development and inject secrets at runtime.
- Create .env:

```text
DB_PASSWORD=supersecret123
```

- Add to .gitignore:

```text
.env
```

- Reference in Compose:

```yaml
services:
  db:
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
```
✅ Best Practice: Docker Secrets (Swarm/K8s)
For high-security production environments (especially Docker Swarm), use Docker Secrets. This encrypts secrets at rest and in transit.
```yaml
services:
  db:
    secrets:
      - db_password
secrets:
  db_password:
    external: true
```
Note: For standard Compose setups, .env files with strict permissions are the standard approach.
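Application code can support both styles with a small helper: read from the Swarm secret file when it exists, otherwise fall back to an environment variable. A sketch (the helper name and the `_FILE` override convention are illustrative; the `/run/secrets/<name>` path is where Swarm mounts secrets):

```python
import os

def read_secret(name):
    """Return a secret from a Docker secret file or the environment.

    Docker Swarm mounts each secret at /run/secrets/<name>. For plain
    Compose setups, fall back to an environment variable of the same
    name in upper case (e.g. db_password -> DB_PASSWORD).
    """
    path = os.environ.get(name.upper() + "_FILE", "/run/secrets/" + name)
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    return os.environ.get(name.upper())
```

This keeps the application code identical across local development (`.env` file), plain Compose (environment variables), and Swarm (secret files).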
4. Resource Limits (Preventing DoS)
A container with no limits can consume all available RAM or CPU on your host. This can crash your server or affect other containers (Denial of Service).
Setting Limits in Compose
You can restrict resources per service in your docker-compose.yml.
```yaml
services:
  api:
    build: .
    deploy:
      resources:
        limits:
          cpus: '0.5'    # Max 50% of one CPU core
          memory: 512M   # Max 512 MB RAM
        reservations:
          cpus: '0.25'
          memory: 256M
```
Setting Limits in CLI
```bash
docker run --memory="512m" --cpus="0.5" my-app
```
Why this matters: If your app has a memory leak, it will crash itself when it hits 512MB, rather than taking down your entire server.
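An application can also discover its own limit at runtime by reading the cgroup interface, which is useful for sizing caches or worker pools. This sketch assumes cgroup v2, where the limit appears in `/sys/fs/cgroup/memory.max` (the helper name is illustrative):

```python
def memory_limit_bytes(path="/sys/fs/cgroup/memory.max"):
    """Return the container's memory limit in bytes.

    Returns None when no limit is set (the file contains "max") or the
    file is absent (e.g. cgroup v1 hosts, or running outside Docker).
    """
    try:
        with open(path) as f:
            raw = f.read().strip()
    except OSError:
        return None
    return None if raw == "max" else int(raw)
```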
5. General Hardening Techniques
1. Read-Only Filesystem
If your app doesn't need to write files (other than to volumes), make the container filesystem read-only.
```yaml
services:
  api:
    read_only: true
    tmpfs:
      - /tmp  # Allow writing only to temp folder
```
Benefit: Prevents attackers from writing malicious scripts to the container disk.
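With `read_only: true`, writes anywhere except the tmpfs mount fail with a permission error, so application code should route scratch files through the temp directory. A small sketch using the standard `tempfile` module (the helper name is illustrative):

```python
import os
import tempfile

def write_scratch(data, directory="/tmp"):
    """Write scratch bytes under the tmpfs mount and return the file path.

    In a read-only container, /tmp (declared as tmpfs above) is the
    only writable location, so avoid hardcoded paths inside the app
    directory; mkstemp also gives each write a unique, private file.
    """
    fd, path = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path
```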
2. Drop Capabilities
Linux capabilities grant specific privileges (like changing network settings). Drop what you don't need.
```yaml
services:
  api:
    cap_drop:
      - ALL
    cap_add:
      - CHOWN  # Only add back what is strictly needed
```
3. Keep Images Updated
Regularly rebuild your images to pull in security patches from base images.
```bash
docker compose pull
docker compose up -d --build
```
6. Security Checklist for Production
Before deploying any container to production, ask these questions:
- Scan: Have I scanned the image for vulnerabilities?
- User: Is the container running as a non-root user?
- Secrets: Are passwords removed from the Dockerfile and Git repo?
- Limits: Are CPU and Memory limits defined?
- Network: Are unnecessary ports closed? (e.g., Database not exposed to public internet).
- Updates: Is the base image up to date?
Summary Checklist
By the end of this article, you should be able to:
- Scan Docker images for vulnerabilities using tools like Trivy.
- Configure a Dockerfile to run as a non-root user.
- Manage secrets using .env files and .gitignore.
- Set CPU and Memory limits to prevent resource exhaustion.
- Apply hardening techniques like read-only filesystems.
What's Next?
You now know how to build secure, efficient, and scalable containerized applications. But manually building and pushing images every time you change code is slow and error-prone.
In the modern DevOps world, we automate this process.
In Article 11, we will integrate Docker into CI/CD Pipelines.
- Automating builds with GitHub Actions.
- Automatically pushing images to Docker Hub.
- Triggering deployments on code commit.
Link: Read Article 11: CI/CD Integration with Docker
Challenge: Take your existing Dockerfile. Add a USER command to run as non-root. Run a vulnerability scan (using Docker Desktop's built-in scanner or Trivy) and try to reduce the number of vulnerabilities by updating the base image version.
Next Up: CI/CD Integration with Docker