Docker Security Best Practices: Rootless, Read-Only, and Scanning

May 01, 2026

The Container Security Paradox

Containers feel secure because they are isolated. You pull an image, run it, and it operates in its own namespace with its own filesystem. But this perception of security is misleading. By default, Docker containers run as root, share the host kernel, and can potentially escape their sandbox if misconfigured. A compromised container running as root with the --privileged flag is effectively a root shell on your host machine.

Docker security is not a feature you enable — it is a discipline you practice. This guide walks through every major security concern, from the images you build to the containers you run, and provides concrete steps to harden your Docker deployment.

Sobering Statistic: According to recent security audits, over 50% of Docker images on public registries contain at least one critical vulnerability, and many production deployments still run containers as root. The good news is that every issue in this article has a straightforward fix.

Never Run Containers as Root

This is the single most important Docker security practice. By default, the process inside a container runs as root (UID 0). While container namespaces provide some isolation, a kernel vulnerability or container escape exploit gives an attacker root access to the host.

Method 1: USER Directive in Dockerfile

```dockerfile
# Dockerfile
FROM node:20-alpine
WORKDIR /app

# Create a non-root user
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -D appuser

# Install dependencies as root (needs write access)
COPY package*.json ./
RUN npm ci --only=production

# Copy application code
COPY --chown=appuser:appgroup . .

# Switch to non-root user
USER appuser

EXPOSE 3000
CMD ["node", "server.js"]
```

Method 2: Runtime User Override

```shell
# Run any image as a non-root user
$ docker run -d --user 1001:1001 nginx:alpine
```

```yaml
# In docker-compose.yml
services:
  web:
    image: myapp:latest
    user: "1001:1001"
```

Verify your containers: Run docker exec mycontainer whoami on every running container. If the answer is "root," you have work to do.
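
That whoami check can be scripted across every running container. A minimal sketch — the `check_user` helper is hypothetical, and the audit loop assumes access to a Docker host:

```shell
# Hypothetical helper: classify a container by the user its main process runs as.
check_user() {
  # $1 = output of `docker exec <container> whoami`
  if [ "$1" = "root" ]; then
    echo "NEEDS-FIX"
  else
    echo "OK"
  fi
}

# Audit every running container (requires a Docker host):
# for c in $(docker ps --format '{{.Names}}'); do
#   printf '%s: %s\n' "$c" "$(check_user "$(docker exec "$c" whoami)")"
# done

check_user root      # → NEEDS-FIX
check_user appuser   # → OK
```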

Rootless Docker: Defense in Depth

Even better than running containers as non-root is running the Docker daemon itself as a non-root user. This is called Rootless Docker, and it means that even if an attacker escapes the container, they only have the privileges of a regular user — not root.

```shell
# Install rootless Docker
$ dockerd-rootless-setuptool.sh install

# Set environment variables
$ export PATH=/usr/bin:$PATH
$ export DOCKER_HOST=unix:///run/user/1000/docker.sock

# Verify rootless mode
$ docker info | grep -i rootless
 Security Options: ... rootless
```

Rootless Docker Limitations

| Feature | Regular Docker | Rootless Docker |
|---|---|---|
| Port binding < 1024 | Yes | No (use port > 1024) |
| Host networking | Yes | No |
| Overlay filesystems | Yes | Limited (fuse-overlayfs) |
| Cgroup management | Full | Cgroups v2 only |
| Container escape impact | Root access | User access only |

Read-Only Containers

If your application does not need to write to the filesystem, make the entire container read-only. This prevents attackers from dropping malware, modifying binaries, or creating backdoors inside a compromised container.

```shell
# Run with read-only filesystem
$ docker run -d \
    --read-only \
    --tmpfs /tmp:rw,size=64m \
    --tmpfs /var/run:rw,size=1m \
    myapp:latest
```

```yaml
# docker-compose.yml
services:
  api:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp:rw,size=64m
      - /var/run:rw,size=1m
```

The tmpfs mounts provide writable temporary storage in RAM for directories that the application needs to write to (like /tmp for session files or /var/run for PID files). These are ephemeral — data is lost when the container stops.

Read-Only + tmpfs is an incredibly effective combination. Even if an attacker gains code execution inside the container, they cannot install tools, modify the application, or persist any changes. The only writable areas are RAM-backed tmpfs mounts that disappear on restart.
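
A quick way to confirm the lockdown is to probe paths from inside the container. The `probe_write` helper below is a sketch you could run via `docker exec` (the container name `myapp` is a placeholder):

```shell
# Sketch: report whether a directory is writable from the current process.
probe_write() {
  if touch "$1/.probe" 2>/dev/null; then
    rm -f "$1/.probe"
    echo "writable"
  else
    echo "read-only"
  fi
}

# Inside a --read-only container, only the tmpfs mounts should pass, e.g.:
# docker exec myapp sh -c '<define probe_write>; probe_write /tmp; probe_write /usr/bin'
probe_write /tmp   # tmpfs (or a normal host /tmp) → writable
```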

Image Scanning: Find Vulnerabilities Before Production

Every Docker image is built on layers of software — base OS packages, runtime libraries, application dependencies. Any of these can contain known vulnerabilities. Image scanning tools check every package against vulnerability databases (CVE lists) and alert you to issues.

Popular Scanning Tools

| Tool | Type | Cost | Integration |
|---|---|---|---|
| Docker Scout | Built into Docker CLI | Free tier | Docker Desktop, CLI, CI/CD |
| Trivy | Open source (Aqua) | Free | CLI, CI/CD, Kubernetes |
| Grype | Open source (Anchore) | Free | CLI, CI/CD |
| Snyk Container | SaaS + CLI | Freemium | IDE, CI/CD, registries |
| Clair | Open source (Quay) | Free | Registry integration |

Scanning in Practice

```shell
# Docker Scout (built-in)
$ docker scout cves myapp:latest
  Target: myapp:latest
  CRITICAL: 2  HIGH: 5  MEDIUM: 12  LOW: 28
  CVE-2024-3094   xz-utils (liblzma)  CRITICAL  Backdoor / remote code execution
  CVE-2024-21626  runc                HIGH      Container escape

# Trivy (comprehensive, fast)
$ trivy image myapp:latest
  myapp:latest (alpine 3.19.1)
  Total: 47 (CRITICAL: 2, HIGH: 5, MEDIUM: 12, LOW: 28)

# Trivy - fail CI if critical vulnerabilities found
$ trivy image --exit-code 1 --severity CRITICAL myapp:latest

# Grype
$ grype myapp:latest
```

CI/CD Integration: Add image scanning to your CI/CD pipeline and configure it to fail the build on CRITICAL or HIGH vulnerabilities. Scanning in production is too late — catch issues before images reach your servers.
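
As one illustration, a GitHub Actions job using the aquasecurity/trivy-action — the image name, tag scheme, and workflow layout here are placeholders, not a prescribed setup:

```yaml
# .github/workflows/scan.yml (sketch)
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan with Trivy, fail on CRITICAL/HIGH
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          exit-code: '1'
          severity: 'CRITICAL,HIGH'
```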

Minimal Base Images: Smaller Is Safer

Every package in your image is a potential attack surface. The fewer packages you include, the fewer vulnerabilities you expose. Choose base images deliberately.

| Base Image | Size | Packages | Shell? | Best For |
|---|---|---|---|---|
| ubuntu:24.04 | ~78 MB | ~100+ | Yes | Development, debugging |
| alpine:3.19 | ~7 MB | ~15 | Yes (ash) | General production use |
| distroless | ~2-20 MB | ~0 | No | Maximum security |
| scratch | 0 MB | 0 | No | Static binaries (Go, Rust) |

Using Distroless Images

Google's distroless images contain only the application runtime and its dependencies — no shell, no package manager, no utilities. An attacker who gains code execution cannot run bash, curl, wget, or any other tool.

```dockerfile
# Multi-stage build with distroless
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o /server

FROM gcr.io/distroless/static-debian12
COPY --from=builder /server /server
USER nonroot:nonroot
ENTRYPOINT ["/server"]
```
The contrast is stark: ubuntu:24.04 is roughly 78 MB with 100+ packages of attack surface, while distroless/static is about 2 MB with zero packages.

Never Use --privileged

The --privileged flag is the nuclear option. It gives the container full access to all host devices, disables all security restrictions (AppArmor, seccomp, capabilities), and lets the container interact with the host kernel directly — including loading kernel modules. A privileged container is effectively identical to running as root on the host.

```shell
# NEVER do this in production
$ docker run --privileged myapp:latest

# Instead, grant ONLY the specific capabilities needed
$ docker run --cap-drop ALL \
    --cap-add NET_BIND_SERVICE \
    myapp:latest
```

Linux Capabilities: Fine-Grained Privileges

Instead of --privileged, Docker supports adding and dropping individual Linux capabilities:

| Capability | Purpose | Keep? |
|---|---|---|
| NET_BIND_SERVICE | Bind ports below 1024 | If needed |
| CHOWN | Change file ownership | Rarely |
| SYS_ADMIN | Mount filesystems, admin operations | Almost never |
| SYS_PTRACE | Debug other processes | Development only |
| NET_RAW | Raw sockets (ping) | Usually not |
| SYS_MODULE | Load kernel modules | Never |

Best Practice: Start with --cap-drop ALL and add back only what your application actually needs. Most applications need zero additional capabilities beyond Docker's already-reduced default set.
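
You can check which capabilities a container actually holds by reading the CapEff bitmask from /proc/1/status inside it. The `has_cap` helper below is a sketch; bit numbers come from linux/capability.h, where CAP_NET_BIND_SERVICE is bit 10:

```shell
# Decode whether a CapEff hex mask (as shown in /proc/<pid>/status)
# contains a given capability bit.
has_cap() {
  mask=$(printf '%d' "0x$1")
  if [ $(( (mask >> $2) & 1 )) -eq 1 ]; then echo "yes"; else echo "no"; fi
}

# Inside a container: grep CapEff /proc/1/status
has_cap 00000000a80425fb 10   # Docker's default cap set → yes
has_cap 0000000000000000 10   # after --cap-drop ALL → no
```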

Seccomp Profiles: System Call Filtering

Seccomp (Secure Computing Mode) filters which system calls a container can make. Docker includes a default seccomp profile that blocks approximately 44 of the 300+ Linux system calls. You can create custom profiles for even tighter restrictions.

```shell
# Check if seccomp is active
$ docker info | grep -i seccomp
 Security Options: seccomp

# Use a custom seccomp profile
$ docker run --security-opt seccomp=/path/to/profile.json myapp:latest

# NEVER disable seccomp (unless debugging)
$ docker run --security-opt seccomp=unconfined myapp:latest  # BAD!
```
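
For reference, a custom profile follows this JSON shape. This is a deliberately tiny, illustrative allow-list — a real profile needs far more syscalls than shown here:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "exit", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```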

AppArmor and SELinux

Docker integrates with Linux Security Modules (LSM) to provide mandatory access control. On Ubuntu/Debian systems, this is AppArmor; on RHEL/CentOS/Fedora, it is SELinux.

AppArmor (Ubuntu/Debian)

Docker applies a default AppArmor profile (docker-default) to every container. This profile restricts file access, mount operations, and network capabilities. Custom profiles can further restrict container behavior.

```shell
$ docker run --security-opt apparmor=my-custom-profile myapp:latest
```

SELinux (RHEL/Fedora)

With SELinux in enforcing mode, Docker applies the container_t type to containers. This prevents containers from accessing host files, even if mounted improperly.

```shell
$ docker run --security-opt label=type:container_t myapp:latest
```

Secrets Management

Credentials, API keys, and certificates must never be baked into Docker images or passed as plain environment variables (visible in docker inspect).

Anti-Patterns vs Best Practices

Bad: hardcoded in the Dockerfile

```dockerfile
ENV DB_PASSWORD=mysecret123
```

Visible in image layers, Docker history, and any registry. Anyone who pulls the image gets the credentials.

Good: runtime secrets

```shell
$ docker run \
    -v /secrets/db_pass:/run/secrets/db:ro \
    myapp:latest
```

Application reads from file. Secret is not in image, not in environment, and the mount is read-only.
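
In an entrypoint script, that file-first pattern can be sketched as follows — the `read_secret` helper and paths are illustrative, not a standard API:

```shell
# Hypothetical entrypoint helper: read a secret from a mounted file,
# falling back to an environment variable only if no file is mounted.
read_secret() {
  file="$1"; fallback="$2"
  if [ -r "$file" ]; then
    cat "$file"
  else
    printf '%s' "$fallback"
  fi
}

DB_PASS="$(read_secret /run/secrets/db "${DB_PASSWORD:-}")"
# exec node server.js   # the secret never appears in `docker inspect`
```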

```shell
# Docker Swarm secrets (encrypted at rest)
$ echo "super_secret_password" | docker secret create db_password -
```

```yaml
# docker-compose.yml (Swarm mode)
services:
  web:
    image: myapp:latest
    secrets:
      - db_password

secrets:
  db_password:
    external: true

# Secret available at /run/secrets/db_password inside container
```

Resource Limits: Preventing Denial of Service

Without resource limits, a single misbehaving container can consume all host CPU, memory, or disk I/O, effectively taking down every other container on the same host.

```shell
# Command line resource limits
$ docker run -d \
    --memory=512m \
    --memory-swap=512m \
    --cpus=1.5 \
    --pids-limit=100 \
    myapp:latest
```

| Flag | Purpose | Recommended For |
|---|---|---|
| --memory | Hard memory limit | Always set |
| --memory-swap | Memory + swap limit (set equal to --memory to disable swap) | Always set |
| --cpus | CPU core limit (e.g., 1.5 = 1.5 cores) | Always set |
| --pids-limit | Maximum process count (prevents fork bombs) | Always set |
| --ulimit nofile=1024 | File descriptor limit | As needed |
| --storage-opt size=10G | Container writable layer size limit | If available |
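
The same limits can be expressed in docker-compose.yml. The field names below follow the Compose specification, though support varies by Compose version — verify against your installation:

```yaml
services:
  api:
    image: myapp:latest
    mem_limit: 512m
    memswap_limit: 512m   # equal to mem_limit => swap disabled
    cpus: 1.5
    pids_limit: 100
```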

Docker Content Trust: Image Signing

Docker Content Trust (DCT) ensures that the images you pull are exactly what the publisher intended — not tampered with in transit or at rest in the registry.

```shell
# Enable Docker Content Trust
$ export DOCKER_CONTENT_TRUST=1

# Now pulls and pushes require signed images
$ docker pull nginx:latest
Pull (1 of 1): nginx:latest@sha256:abc123...
Tagging nginx:latest@sha256:abc123... as nginx:latest
sha256:abc123...: Pulling from library/nginx
Digest: sha256:abc123...
Status: Image is up to date for nginx:latest
```

Docker Bench Security: Automated Auditing

Docker Bench for Security is an official script that checks dozens of common security best practices against your Docker host and containers. It is based on the CIS Docker Benchmark.

```shell
# Run Docker Bench Security
$ docker run --rm --net host --pid host --userns host \
    --cap-add audit_control \
    -v /var/lib:/var/lib:ro \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    -v /etc:/etc:ro \
    docker/docker-bench-security

[INFO] 1 - Host Configuration
[PASS] 1.1 - Ensure a separate partition for containers exists
[WARN] 1.2 - Ensure only trusted users in docker group
[INFO] 4 - Container Images and Build File
[WARN] 4.1 - Ensure a user for the container has been created
[PASS] 4.6 - Ensure HEALTHCHECK instructions have been added
```

Supply Chain Security Checklist

  • Pin image versions with digests (nginx:alpine@sha256:abc...), never use :latest in production
  • Use official images or verified publishers from Docker Hub
  • Scan images in CI/CD pipeline before deployment
  • Enable Docker Content Trust for image signing verification
  • Keep base images updated (automate with Dependabot or Renovate)
  • Review Dockerfiles for secrets, unnecessary packages, and root usage
  • Use multi-stage builds to exclude build tools from production images
  • Store images in a private registry with access controls
  • Implement image admission policies in your orchestrator
  • Audit and rotate container credentials regularly
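
The first item — digest pinning — is easy to enforce mechanically in CI. A sketch of such a guard (`is_digest_pinned` is a hypothetical helper):

```shell
# Reject image references that are not pinned to an immutable digest.
is_digest_pinned() {
  case "$1" in
    *@sha256:*) echo "pinned" ;;
    *)          echo "unpinned" ;;
  esac
}

is_digest_pinned "nginx:alpine"                  # → unpinned
is_digest_pinned "nginx:alpine@sha256:abc123"    # → pinned
```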

Docker Security with Panelica

Panelica enforces container isolation with per-container Cgroups v2 resource limits and integrates Docker containers within the panel's 5-layer security isolation architecture. Every Docker container deployed through Panelica is automatically placed within the user's cgroup slice, inheriting CPU, memory, I/O, and process count limits. This means even if a container attempts to consume unlimited resources, the cgroup enforcement stops it before it affects other users or the host system.

The panel's Docker module also enforces RBAC: each user can only see and manage containers labeled with their user ID. Root and admin users have broader visibility based on their role hierarchy, but no user can accidentally — or deliberately — interact with another user's containers.

Defense in Depth: Panelica combines Cgroups v2 resource enforcement, namespace isolation, AppArmor profiles, and RBAC access controls to secure Docker containers. This multi-layer approach means that even if one security mechanism is bypassed, others remain in effect.

Security Hardening Checklist Summary

| Category | Action | Priority |
|---|---|---|
| User | Run as non-root (USER in Dockerfile) | Critical |
| Capabilities | Drop all, add only needed | Critical |
| Filesystem | Use --read-only with tmpfs | High |
| Images | Scan for CVEs in CI/CD | Critical |
| Base Images | Use minimal (Alpine, distroless) | High |
| Resources | Set memory, CPU, PID limits | Critical |
| Secrets | Never hardcode, use file mounts | Critical |
| Network | Bind to 127.0.0.1, use internal networks | High |
| Privileged | Never use --privileged | Critical |
| Signing | Enable Docker Content Trust | High |

Conclusion

Docker security is not about a single configuration — it is a layered defense strategy. Start with the highest-impact changes: run containers as non-root, scan images for vulnerabilities, set resource limits, and never use --privileged. Then layer on read-only filesystems, minimal base images, seccomp profiles, and image signing for comprehensive protection.

The principle of least privilege applies to every aspect of container security. Drop all capabilities and add back only what you need. Use internal networks and bind ports to localhost. Mount filesystems as read-only. Choose distroless over Ubuntu. Every unnecessary permission you remove is one less attack vector an adversary can exploit. Security is not a destination — it is a continuous practice of reducing your attack surface while monitoring for new threats.
