Tutorial


March 31, 2026


Why Docker Compose Still Wins for Production Deployments

Kubernetes gets all the headlines. But for the vast majority of teams running services on one or a few servers, Docker Compose remains the most pragmatic tool available. No cluster overhead, no control plane to babysit, no YAML sprawl across 12 files just to run a blog.

This guide gives you 15 complete, production-ready Docker Compose stacks. Each one is tested and includes proper environment variable management, named volumes, health checks, and notes on what to watch out for in production. Copy, adjust your domains and passwords, and deploy.

Docker Compose Basics: A Quick Recap

If you already know Compose well, skip ahead. If you're coming from Docker CLI or need a refresher, here's the structure you'll see in every stack below.

The Core Structure

services:
  app:
    image: myapp:1.2.3        # Always pin versions — never :latest in production
    container_name: myapp
    restart: unless-stopped
    environment:
      - ENV_VAR=value
    env_file:
      - .env
    ports:
      - "127.0.0.1:8080:8080" # Bind to localhost — let your reverse proxy handle public
    volumes:
      - app_data:/var/lib/app  # Named volume, not bind mount
    networks:
      - internal
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  db:
    image: postgres:17-alpine
    restart: unless-stopped
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  app_data:
  db_data:

networks:
  internal:
    driver: bridge

Essential Commands

# Start in detached mode
docker compose up -d

# View running services
docker compose ps

# Follow logs for all services
docker compose logs -f

# Follow logs for one service
docker compose logs -f app

# Stop and remove containers (keep volumes)
docker compose down

# Stop and remove containers AND volumes (data loss!)
docker compose down -v

# Pull latest images and recreate containers
docker compose pull && docker compose up -d

# Execute a command inside a running container
docker compose exec app bash

# Restart a single service
docker compose restart app

One important habit: always bind ports to 127.0.0.1 (e.g., 127.0.0.1:8080:8080) rather than omitting the host IP, which binds to 0.0.0.0. This prevents a service from being exposed publicly before your reverse proxy is in place.
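One way to catch slips before they ship: a minimal shell sketch (the filename and regex are assumptions, not a hard rule) that flags port mappings with no host IP.

```shell
#!/bin/sh
# Flag compose port mappings that omit a host IP (Docker binds those to 0.0.0.0).
# Lines like `- "127.0.0.1:8080:8080"` pass; lines like `- "9000:9000"` are flagged.
check_public_ports() {
  grep -nE '^[[:space:]]*-[[:space:]]*"?[0-9]+:[0-9]+' "$1" || true
}

# Demo against a throwaway file
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
services:
  app:
    ports:
      - "127.0.0.1:8080:8080"
      - "9000:9000"
EOF
check_public_ports "$tmp"   # prints the line number of the 9000 mapping
rm -f "$tmp"
```

Intentional public ports (like Gitea's 2222 for SSH below) will be flagged too; the point is a conscious review before deploy, not automation.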


Stack 1: WordPress + MySQL + Redis

The most deployed stack on the internet. Redis handles object caching and cuts database load dramatically on busy WordPress sites. For maintenance tasks, run WP-CLI from the companion wordpress:cli image against the same volumes and network (the fpm image itself doesn't ship WP-CLI).

# .env file
MYSQL_ROOT_PASSWORD=change_this_root_pass
MYSQL_DATABASE=wordpress
MYSQL_USER=wordpress
MYSQL_PASSWORD=change_this_wp_pass
WORDPRESS_TABLE_PREFIX=wp_

# docker-compose.yml
services:
  db:
    image: mysql:8.0
    container_name: wp_mysql
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - wp_db:/var/lib/mysql
    networks:
      - wp_internal
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p${MYSQL_ROOT_PASSWORD}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: wp_redis
    restart: unless-stopped
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - wp_redis:/data
    networks:
      - wp_internal
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3

  wordpress:
    image: wordpress:6.7-php8.3-fpm-alpine
    container_name: wp_app
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: ${MYSQL_USER}
      WORDPRESS_DB_PASSWORD: ${MYSQL_PASSWORD}
      WORDPRESS_DB_NAME: ${MYSQL_DATABASE}
      WORDPRESS_TABLE_PREFIX: ${WORDPRESS_TABLE_PREFIX}
      WORDPRESS_CONFIG_EXTRA: |
        define('WP_REDIS_HOST', 'redis');
        define('WP_REDIS_PORT', 6379);
        define('WP_CACHE', true);
    volumes:
      - wp_data:/var/www/html
    networks:
      - wp_internal
      - wp_proxy

  nginx:
    image: nginx:1.27-alpine
    container_name: wp_nginx
    restart: unless-stopped
    ports:
      - "127.0.0.1:8001:80"
    volumes:
      - wp_data:/var/www/html:ro
      - ./nginx-wp.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - wordpress
    networks:
      - wp_proxy

volumes:
  wp_db:
  wp_redis:
  wp_data:

networks:
  wp_internal:
    driver: bridge
  wp_proxy:
    driver: bridge

Post-deploy notes: Install the Redis Object Cache plugin and activate it. Create an nginx-wp.conf with FastCGI pass to wordpress:9000. Point your reverse proxy to 127.0.0.1:8001.
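The nginx-wp.conf referenced above can start from a minimal sketch like this (document root and the wordpress:9000 upstream match the volumes and service names in the stack; tune buffers and caching for real traffic):

```nginx
# nginx-wp.conf: minimal FastCGI config for the wordpress fpm container
server {
    listen 80;
    server_name _;
    root /var/www/html;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass wordpress:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```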


Stack 2: Ghost Blog + MySQL

Ghost is a clean, fast publishing platform. The Node.js runtime is lightweight and the admin interface is excellent for non-technical writers.

# .env
MYSQL_ROOT_PASSWORD=change_this
MYSQL_DATABASE=ghost
MYSQL_USER=ghost
MYSQL_PASSWORD=change_this_ghost
GHOST_URL=https://blog.yourdomain.com
[email protected]
GHOST_MAIL_HOST=smtp.yourdomain.com
GHOST_MAIL_PORT=587
GHOST_MAIL_USER=your_smtp_user
GHOST_MAIL_PASS=your_smtp_pass

# docker-compose.yml
services:
  db:
    image: mysql:8.0
    container_name: ghost_mysql
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - ghost_db:/var/lib/mysql
    networks:
      - ghost_internal
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5

  ghost:
    image: ghost:5-alpine
    container_name: ghost_app
    restart: unless-stopped
    ports:
      - "127.0.0.1:8002:2368"
    depends_on:
      db:
        condition: service_healthy
    environment:
      url: ${GHOST_URL}
      database__client: mysql
      database__connection__host: db
      database__connection__user: ${MYSQL_USER}
      database__connection__password: ${MYSQL_PASSWORD}
      database__connection__database: ${MYSQL_DATABASE}
      mail__transport: SMTP
      mail__from: ${GHOST_MAIL_FROM}
      mail__options__host: ${GHOST_MAIL_HOST}
      mail__options__port: ${GHOST_MAIL_PORT}
      mail__options__auth__user: ${GHOST_MAIL_USER}
      mail__options__auth__pass: ${GHOST_MAIL_PASS}
      NODE_ENV: production
    volumes:
      - ghost_data:/var/lib/ghost/content
    networks:
      - ghost_internal

volumes:
  ghost_db:
  ghost_data:

networks:
  ghost_internal:
    driver: bridge

Post-deploy notes: The Ghost admin is at https://blog.yourdomain.com/ghost. First-run setup creates the admin account. Ghost requires the url variable to match the exact URL (including https) — wrong URL causes asset loading failures.


Stack 3: Nextcloud + MariaDB + Redis

Self-hosted cloud storage, calendar, contacts, and collaboration. Redis handles file locking and session caching, which is essential for performance on multi-user setups.

# .env
MYSQL_ROOT_PASSWORD=change_this
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_PASSWORD=change_this_nc
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=change_this_admin
NEXTCLOUD_TRUSTED_DOMAINS=cloud.yourdomain.com
REDIS_HOST_PASSWORD=change_this_redis

# docker-compose.yml
services:
  db:
    image: mariadb:11.4
    container_name: nc_mariadb
    restart: unless-stopped
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - nc_db:/var/lib/mysql
    networks:
      - nc_internal
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: nc_redis
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_HOST_PASSWORD}
    volumes:
      - nc_redis:/data
    networks:
      - nc_internal

  nextcloud:
    image: nextcloud:29-apache
    container_name: nc_app
    restart: unless-stopped
    ports:
      - "127.0.0.1:8003:80"
    depends_on:
      db:
        condition: service_healthy
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      NEXTCLOUD_ADMIN_USER: ${NEXTCLOUD_ADMIN_USER}
      NEXTCLOUD_ADMIN_PASSWORD: ${NEXTCLOUD_ADMIN_PASSWORD}
      NEXTCLOUD_TRUSTED_DOMAINS: ${NEXTCLOUD_TRUSTED_DOMAINS}
      REDIS_HOST: redis
      REDIS_HOST_PASSWORD: ${REDIS_HOST_PASSWORD}
      PHP_MEMORY_LIMIT: 1G
      PHP_UPLOAD_LIMIT: 10G
    volumes:
      - nc_data:/var/www/html
    networks:
      - nc_internal

  cron:
    image: nextcloud:29-apache
    container_name: nc_cron
    restart: unless-stopped
    entrypoint: /cron.sh
    depends_on:
      - nextcloud
    volumes:
      - nc_data:/var/www/html
    networks:
      - nc_internal

volumes:
  nc_db:
  nc_redis:
  nc_data:

networks:
  nc_internal:
    driver: bridge

Post-deploy notes: The cron service handles Nextcloud background jobs. After first start, configure your reverse proxy with proper headers (X-Forwarded-For, X-Forwarded-Proto). If Nextcloud warns about the reverse proxy or generates wrong URLs, add your proxy's IP to the trusted_proxies array in config.php and set overwrite.cli.url.
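A host Nginx sketch for the proxy block (server block and TLS omitted; the body size limit matches the PHP_UPLOAD_LIMIT set in the stack):

```nginx
# Proxy block for Nextcloud on the host reverse proxy
location / {
    proxy_pass http://127.0.0.1:8003;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    client_max_body_size 10G;   # match PHP_UPLOAD_LIMIT
}
```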


Stack 4: Gitea (Self-Hosted Git) + PostgreSQL

Gitea is a lightweight GitHub alternative. Fast, resource-efficient, and feature-complete for most teams. SSH on port 2222 handles git over SSH without conflicting with your system SSH.

# .env
POSTGRES_DB=gitea
POSTGRES_USER=gitea
POSTGRES_PASSWORD=change_this_gitea
GITEA_DOMAIN=git.yourdomain.com
GITEA_SSH_PORT=2222
GITEA_ADMIN_PASSWD=change_this_admin
GITEA_SECRET_KEY=change_this_64char_random_string

# docker-compose.yml
services:
  db:
    image: postgres:17-alpine
    container_name: gitea_db
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - gitea_db:/var/lib/postgresql/data
    networks:
      - gitea_internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  gitea:
    image: gitea/gitea:1.22
    container_name: gitea_app
    restart: unless-stopped
    ports:
      - "127.0.0.1:8004:3000"
      - "2222:22"
    depends_on:
      db:
        condition: service_healthy
    environment:
      USER_UID: 1000
      USER_GID: 1000
      GITEA__database__DB_TYPE: postgres
      GITEA__database__HOST: db:5432
      GITEA__database__NAME: ${POSTGRES_DB}
      GITEA__database__USER: ${POSTGRES_USER}
      GITEA__database__PASSWD: ${POSTGRES_PASSWORD}
      GITEA__server__DOMAIN: ${GITEA_DOMAIN}
      GITEA__server__SSH_DOMAIN: ${GITEA_DOMAIN}
      GITEA__server__SSH_PORT: ${GITEA_SSH_PORT}
      GITEA__server__ROOT_URL: https://${GITEA_DOMAIN}
      GITEA__security__SECRET_KEY: ${GITEA_SECRET_KEY}
      GITEA__service__DISABLE_REGISTRATION: "false"
    volumes:
      - gitea_data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - gitea_internal

volumes:
  gitea_db:
  gitea_data:

networks:
  gitea_internal:
    driver: bridge

Post-deploy notes: First visit triggers the installation page; most settings are pre-filled from environment variables. Open port 2222 in your firewall for SSH git access. To serve git over the standard port 22 instead, use Gitea's SSH container passthrough setup (a host-side git user plus an authorized_keys shim that forwards into the container).


Stack 5: Uptime Kuma (Monitoring)

Uptime Kuma monitors your services and sends alerts via Telegram, Discord, Slack, email, and more. The UI is excellent and the setup is genuinely simple: no separate database required.

# docker-compose.yml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime_kuma
    restart: unless-stopped
    ports:
      - "127.0.0.1:8005:3001"
    volumes:
      - kuma_data:/app/data
    healthcheck:
      test: ["CMD", "extra/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '0.3'
          memory: 256M
volumes:
  kuma_data:

Post-deploy notes: Uptime Kuma stores everything in SQLite inside the named volume. Set up your monitors, then configure notification channels immediately — you want alerts before a problem occurs, not after. Use docker compose pull && docker compose up -d to update (it handles migrations automatically).


Stack 6: Portainer (Docker GUI)

Portainer gives you a web UI to manage Docker containers, images, volumes, and networks. The Community Edition is free and covers everything you need for single-server use.

# docker-compose.yml
services:
  portainer:
    image: portainer/portainer-ce:2.21
    container_name: portainer
    restart: unless-stopped
    ports:
      - "127.0.0.1:8006:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - portainer_data:/data
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9000"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '0.2'
          memory: 128M

volumes:
  portainer_data:

Post-deploy notes: The first admin user must be created within 5 minutes of starting Portainer; after that, it disables the setup page until you restart the container. The Docker socket is mounted read-only (:ro), but note that this only restricts filesystem operations, not API calls through the socket, so anyone with Portainer access effectively has root on the host. Protect it accordingly.


Stack 7: Traefik Reverse Proxy + Auto SSL

Traefik acts as the edge router for all your other stacks. It auto-discovers containers via Docker labels and provisions Let's Encrypt SSL automatically. This stack runs alongside your other stacks as their shared entry point; since it binds ports 80 and 443, it takes the place of a host-level Nginx, so run one or the other.

# .env
[email protected]

# docker-compose.yml
services:
  traefik:
    image: traefik:v3.1
    container_name: traefik
    restart: unless-stopped
    command:
      - --api.dashboard=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --entrypoints.web.http.redirections.entryPoint.to=websecure
      - --entrypoints.web.http.redirections.entryPoint.scheme=https
      - --certificatesresolvers.letsencrypt.acme.tlschallenge=true
      - --certificatesresolvers.letsencrypt.acme.email=${ACME_EMAIL}
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik_certs:/letsencrypt
    labels:
      - traefik.enable=true
      - traefik.http.routers.dashboard.rule=Host(`traefik.yourdomain.com`)
      - traefik.http.routers.dashboard.entrypoints=websecure
      - traefik.http.routers.dashboard.tls.certresolver=letsencrypt
      - traefik.http.routers.dashboard.service=api@internal
      - traefik.http.routers.dashboard.middlewares=auth
      - traefik.http.middlewares.auth.basicauth.users=admin:$$apr1$$HASH_HERE  # htpasswd -nb admin password
    networks:
      - traefik_public

volumes:
  traefik_certs:

networks:
  traefik_public:
    name: traefik_public
    driver: bridge

To expose any other container through Traefik, add these labels to that container and connect it to the traefik_public network:

labels:
  - traefik.enable=true
  - traefik.http.routers.myapp.rule=Host(`myapp.yourdomain.com`)
  - traefik.http.routers.myapp.entrypoints=websecure
  - traefik.http.routers.myapp.tls.certresolver=letsencrypt
  - traefik.http.services.myapp.loadbalancer.server.port=8080

networks:
  - traefik_public

Post-deploy notes: Certificates are stored in acme.json inside the traefik_certs named volume, and Traefik creates that file with the right permissions itself. If you bind-mount it from the host instead, create it first: touch acme.json && chmod 600 acme.json. Generate the basic auth hash with: docker run --rm httpd:2.4-alpine htpasswd -nb admin yourpassword, and escape $ signs as $$ in the label.
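The $$ escaping trips people up; this sketch doubles every $ in a hash so it can be pasted into a Compose label (the hash shown is a made-up placeholder, not a real credential):

```shell
#!/bin/sh
# Compose interpolates `$` in labels, so each `$` in an htpasswd hash must be doubled.
HASH='admin:$apr1$abcdefgh$0123456789abcdefghijk'   # placeholder hash
ESCAPED=$(printf '%s' "$HASH" | sed -e 's/\$/\$\$/g')
printf '%s\n' "$ESCAPED"
```

Pipe the real htpasswd output through the same sed to do both steps at once.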


Stack 8: MinIO (S3-Compatible Object Storage)

MinIO gives you S3-compatible object storage on your own server. Use it for application file uploads, backup targets, or as a CDN origin. The console UI is clean and the API is fully compatible with AWS S3 SDKs.

# .env
MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=change_this_minio_pass_16chars_min
MINIO_DOMAIN=s3.yourdomain.com

# docker-compose.yml
services:
  minio:
    image: minio/minio:RELEASE.2024-11-07T00-52-20Z
    container_name: minio
    restart: unless-stopped
    ports:
      - "127.0.0.1:9000:9000"   # S3 API
      - "127.0.0.1:9001:9001"   # Console
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
      MINIO_DOMAIN: ${MINIO_DOMAIN}
      MINIO_BROWSER_REDIRECT_URL: https://console.${MINIO_DOMAIN}
    command: server /data --console-address ":9001"
    volumes:
      - minio_data:/data
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 30s
      timeout: 20s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G

volumes:
  minio_data:

Post-deploy notes: MinIO requires the root password to be at least 8 characters. Set up service accounts (access key + secret key) for individual applications instead of using root credentials. For virtual-hosted style URLs (bucketname.s3.yourdomain.com), your reverse proxy needs wildcard DNS and wildcard SSL.
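A host Nginx sketch for the S3 API endpoint (server names and TLS paths are placeholders; ignore_invalid_headers off and the unlimited body size follow MinIO's reverse proxy guidance):

```nginx
# S3 API endpoint; the wildcard server_name covers virtual-hosted style
# bucket URLs (requires wildcard DNS and a wildcard certificate)
server {
    listen 443 ssl;
    server_name s3.yourdomain.com *.s3.yourdomain.com;
    # ssl_certificate / ssl_certificate_key lines omitted

    ignore_invalid_headers off;
    client_max_body_size 0;       # let MinIO enforce object size limits

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```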


Stack 9: Plausible Analytics (Privacy-First)

Plausible is a lightweight, GDPR-compliant alternative to Google Analytics. No cookies, no personal data, no consent banners required. The ClickHouse database handles high event volumes efficiently.

# .env
POSTGRES_PASSWORD=change_this
SECRET_KEY_BASE=change_this_64char_hex_string
TOTP_VAULT_KEY=change_this_32char_base64
BASE_URL=https://analytics.yourdomain.com

# docker-compose.yml
services:
  mail:
    image: bytemark/smtp
    container_name: plausible_smtp
    restart: unless-stopped
    networks:
      - plausible_internal

  plausible_db:
    image: postgres:17-alpine
    container_name: plausible_pg
    restart: unless-stopped
    volumes:
      - plausible_db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    networks:
      - plausible_internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  plausible_events_db:
    image: clickhouse/clickhouse-server:24.3-alpine
    container_name: plausible_ch
    restart: unless-stopped
    volumes:
      - plausible_events:/var/lib/clickhouse
      - ./clickhouse/logs.xml:/etc/clickhouse-server/config.d/logs.xml:ro
    networks:
      - plausible_internal
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    ulimits:
      nofile:
        soft: 262144
        hard: 262144

  plausible:
    image: ghcr.io/plausible/community-edition:v2.1
    container_name: plausible_app
    restart: unless-stopped
    command: sh -c "sleep 10 && /entrypoint.sh db createdb && /entrypoint.sh db migrate && /entrypoint.sh run"
    ports:
      - "127.0.0.1:8009:8000"
    depends_on:
      plausible_db:
        condition: service_healthy
      plausible_events_db:
        condition: service_healthy
    environment:
      BASE_URL: ${BASE_URL}
      SECRET_KEY_BASE: ${SECRET_KEY_BASE}
      TOTP_VAULT_KEY: ${TOTP_VAULT_KEY}
      DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD}@plausible_db/plausible_db
      CLICKHOUSE_DATABASE_URL: http://plausible_events_db:8123/plausible_events_db
      MAILER_ADAPTER: Bamboo.Mua
      SMTP_HOST_ADDR: mail
      SMTP_HOST_PORT: 25
    networks:
      - plausible_internal

volumes:
  plausible_db:
  plausible_events:

networks:
  plausible_internal:
    driver: bridge

Create the minimal ClickHouse logs config to suppress noisy log output:

# clickhouse/logs.xml
<clickhouse>
  <logger>
    <level>warning</level>
    <console>true</console>
  </logger>
  <query_thread_log remove="remove"/>
  <query_log remove="remove"/>
  <text_log remove="remove"/>
  <trace_log remove="remove"/>
  <metric_log remove="remove"/>
  <asynchronous_metric_log remove="remove"/>
</clickhouse>

Post-deploy notes: Generate SECRET_KEY_BASE with openssl rand -hex 64 and TOTP_VAULT_KEY with openssl rand -base64 32. The first user to register becomes the site owner.
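Both commands from the note, packaged so the output can be redirected straight into .env:

```shell
#!/bin/sh
# Generate the two Plausible secrets; append the output to your .env file.
SECRET_KEY_BASE=$(openssl rand -hex 64)     # 64 random bytes -> 128 hex characters
TOTP_VAULT_KEY=$(openssl rand -base64 32)   # 32 random bytes, base64-encoded
printf 'SECRET_KEY_BASE=%s\nTOTP_VAULT_KEY=%s\n' "$SECRET_KEY_BASE" "$TOTP_VAULT_KEY"
```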


Stack 10: Grafana + Prometheus + Node Exporter

The classic observability stack. Prometheus scrapes metrics from Node Exporter (system-level) and any other exporters you add. Grafana visualizes everything. Useful for monitoring the host server alongside your application metrics.

# docker-compose.yml
services:
  prometheus:
    image: prom/prometheus:v2.55.1
    container_name: prometheus
    restart: unless-stopped
    ports:
      - "127.0.0.1:9090:9090"
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus
      - --storage.tsdb.retention.time=30d
      - --web.enable-lifecycle
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    networks:
      - monitoring

  node_exporter:
    image: prom/node-exporter:v1.8.2
    container_name: node_exporter
    restart: unless-stopped
    command:
      - --path.rootfs=/host
    volumes:
      - /:/host:ro,rslave
    pid: host
    networks:
      - monitoring

  grafana:
    image: grafana/grafana:11.3.0
    container_name: grafana
    restart: unless-stopped
    ports:
      - "127.0.0.1:8010:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: change_this_admin_pass
      GF_USERS_ALLOW_SIGN_UP: "false"
      GF_SERVER_ROOT_URL: https://grafana.yourdomain.com
    volumes:
      - grafana_data:/var/lib/grafana
    depends_on:
      - prometheus
    networks:
      - monitoring

volumes:
  prometheus_data:
  grafana_data:

networks:
  monitoring:
    driver: bridge

The prometheus.yml scrape config:

# prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']

  - job_name: node
    static_configs:
      - targets: ['node_exporter:9100']

Post-deploy notes: In Grafana, add Prometheus as a data source (http://prometheus:9090) and import dashboard ID 1860 (Node Exporter Full) — it's the best pre-built system metrics dashboard available.
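Clicking through the UI works, but the data source can also be provisioned from a file using Grafana's standard provisioning format. A sketch (the filename and the extra volume line are assumptions):

```yaml
# grafana-datasources.yml: mount into the grafana service with an extra volume line:
#   - ./grafana-datasources.yml:/etc/grafana/provisioning/datasources/prometheus.yml:ro
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```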


Stack 11: n8n (Workflow Automation) + PostgreSQL

n8n connects APIs, services, and tools with visual workflows. Think Zapier, but self-hosted and with full code access. This stack uses PostgreSQL for persistence; the default SQLite storage isn't suitable for production use.

# .env
POSTGRES_DB=n8n
POSTGRES_USER=n8n
POSTGRES_PASSWORD=change_this_n8n
N8N_ENCRYPTION_KEY=change_this_32char_random
N8N_HOST=n8n.yourdomain.com
WEBHOOK_URL=https://n8n.yourdomain.com

# docker-compose.yml
services:
  db:
    image: postgres:17-alpine
    container_name: n8n_db
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - n8n_db:/var/lib/postgresql/data
    networks:
      - n8n_internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: n8nio/n8n:1.68.0
    container_name: n8n_app
    restart: unless-stopped
    ports:
      - "127.0.0.1:8011:5678"
    depends_on:
      db:
        condition: service_healthy
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: db
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: ${POSTGRES_DB}
      DB_POSTGRESDB_USER: ${POSTGRES_USER}
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_HOST: ${N8N_HOST}
      N8N_PORT: 5678
      N8N_PROTOCOL: https
      WEBHOOK_URL: ${WEBHOOK_URL}
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
      N8N_METRICS: "true"
      GENERIC_TIMEZONE: UTC
    volumes:
      - n8n_data:/home/node/.n8n
    networks:
      - n8n_internal
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G

volumes:
  n8n_db:
  n8n_data:

networks:
  n8n_internal:
    driver: bridge

Post-deploy notes: Generate N8N_ENCRYPTION_KEY with openssl rand -hex 16. All workflow credentials are encrypted with this key — back it up separately. Losing it means losing access to all stored credentials.


Stack 12: Vaultwarden (Bitwarden-Compatible Password Manager)

Vaultwarden is a lightweight, unofficial Bitwarden server implementation written in Rust. It's fully compatible with all official Bitwarden clients and uses a fraction of the resources of the official server.

# .env
VAULTWARDEN_ADMIN_TOKEN=change_this_long_random_token
VAULTWARDEN_DOMAIN=vault.yourdomain.com
VAULTWARDEN_SIGNUPS_ALLOWED=false

# docker-compose.yml
services:
  vaultwarden:
    image: vaultwarden/server:1.32.1
    container_name: vaultwarden
    restart: unless-stopped
    ports:
      - "127.0.0.1:8012:80"
    environment:
      DOMAIN: https://${VAULTWARDEN_DOMAIN}
      ADMIN_TOKEN: ${VAULTWARDEN_ADMIN_TOKEN}
      SIGNUPS_ALLOWED: ${VAULTWARDEN_SIGNUPS_ALLOWED}
      # WEBSOCKET_ENABLED was removed in Vaultwarden 1.29; WebSocket traffic is served on the main port
      SMTP_HOST: smtp.yourdomain.com
      SMTP_FROM: [email protected]
      SMTP_PORT: 587
      SMTP_SECURITY: starttls
      SMTP_USERNAME: your_smtp_user
      SMTP_PASSWORD: your_smtp_pass
      LOG_LEVEL: warn
    volumes:
      - vaultwarden_data:/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/alive"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '0.2'
          memory: 128M

volumes:
  vaultwarden_data:

Post-deploy notes: Set SIGNUPS_ALLOWED=false immediately after creating your account. The admin panel is at /admin — you can invite users from there. Your reverse proxy must support WebSocket connections for real-time vault sync across devices.
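For the WebSocket requirement, a host Nginx sketch (server block and TLS omitted; the Upgrade/Connection headers are what enable real-time sync):

```nginx
# Proxy block for Vaultwarden with WebSocket support
location / {
    proxy_pass http://127.0.0.1:8012;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```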


Stack 13: Outline Wiki + PostgreSQL + Redis + MinIO

Outline is a modern, fast team knowledge base. It requires more dependencies than most apps — PostgreSQL, Redis, and an S3-compatible store for file uploads. Worth it for the quality of the result.

# .env
POSTGRES_PASSWORD=change_this
REDIS_URL=redis://redis:6379
SECRET_KEY=change_this_32byte_hex
UTILS_SECRET=change_this_another_32byte_hex
AWS_ACCESS_KEY_ID=outline_minio_user
AWS_SECRET_ACCESS_KEY=change_this_minio_pass
MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=change_this_minio_pass
OUTLINE_URL=https://wiki.yourdomain.com
OIDC_CLIENT_ID=your_oidc_id
OIDC_CLIENT_SECRET=your_oidc_secret
OIDC_AUTH_URI=https://your-auth-server/authorize
OIDC_TOKEN_URI=https://your-auth-server/token
OIDC_USERINFO_URI=https://your-auth-server/userinfo

# docker-compose.yml
services:
  postgres:
    image: postgres:17-alpine
    container_name: outline_db
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: outline
      POSTGRES_USER: outline
    volumes:
      - outline_db:/var/lib/postgresql/data
    networks:
      - outline_internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U outline"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: outline_redis
    restart: unless-stopped
    volumes:
      - outline_redis:/data
    networks:
      - outline_internal

  minio:
    image: minio/minio:RELEASE.2024-11-07T00-52-20Z
    container_name: outline_minio
    restart: unless-stopped
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    volumes:
      - outline_minio:/data
    networks:
      - outline_internal

  outline:
    image: outlinewiki/outline:0.80.0
    container_name: outline_app
    restart: unless-stopped
    ports:
      - "127.0.0.1:8013:3000"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
      minio:
        condition: service_started
    environment:
      DATABASE_URL: postgres://outline:${POSTGRES_PASSWORD}@postgres:5432/outline
      REDIS_URL: ${REDIS_URL}
      URL: ${OUTLINE_URL}
      SECRET_KEY: ${SECRET_KEY}
      UTILS_SECRET: ${UTILS_SECRET}
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      AWS_REGION: us-east-1
      AWS_S3_UPLOAD_BUCKET_URL: http://minio:9000
      AWS_S3_UPLOAD_BUCKET_NAME: outline
      AWS_S3_FORCE_PATH_STYLE: "true"
      FILE_STORAGE: s3
      OIDC_CLIENT_ID: ${OIDC_CLIENT_ID}
      OIDC_CLIENT_SECRET: ${OIDC_CLIENT_SECRET}
      OIDC_AUTH_URI: ${OIDC_AUTH_URI}
      OIDC_TOKEN_URI: ${OIDC_TOKEN_URI}
      OIDC_USERINFO_URI: ${OIDC_USERINFO_URI}
      OIDC_DISPLAY_NAME: Login
      OIDC_USERNAME_CLAIM: email
    networks:
      - outline_internal
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G

volumes:
  outline_db:
  outline_redis:
  outline_minio:

networks:
  outline_internal:
    driver: bridge

Post-deploy notes: Outline requires an authentication provider — OIDC, Google, Slack, or GitHub. The OIDC option works with Authentik, Keycloak, or any compliant provider. Create the MinIO bucket named outline and the service account before starting Outline.


Stack 14: Umami Analytics + PostgreSQL

Umami is a simpler Plausible alternative — lighter on dependencies (no ClickHouse required), still privacy-respecting and GDPR-compliant. Good choice when you want analytics without the overhead of a columnar database.

# .env
POSTGRES_DB=umami
POSTGRES_USER=umami
POSTGRES_PASSWORD=change_this_umami
APP_SECRET=change_this_32char_random

# docker-compose.yml
services:
  db:
    image: postgres:17-alpine
    container_name: umami_db
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - umami_db:/var/lib/postgresql/data
    networks:
      - umami_internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  umami:
    image: ghcr.io/umami-software/umami:postgresql-v2.13.2
    container_name: umami_app
    restart: unless-stopped
    ports:
      - "127.0.0.1:8014:3000"
    depends_on:
      db:
        condition: service_healthy
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      DATABASE_TYPE: postgresql
      APP_SECRET: ${APP_SECRET}
      DISABLE_TELEMETRY: 1
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

volumes:
  umami_db:

networks:
  umami_internal:
    driver: bridge

Post-deploy notes: Default login is admin / umami — change it immediately after first login. Add the tracking script to your sites from the websites dashboard. The script is minimal (~2KB) and non-blocking.
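The tracking snippet Umami generates looks like the following — the src host and data-website-id here are placeholders; copy the real values from the websites dashboard:

```html
<script
  defer
  src="https://analytics.yourdomain.com/script.js"
  data-website-id="your-website-id"
></script>
```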


Stack 15: Immich (Photo Management) + PostgreSQL + Redis

Immich is the best self-hosted alternative to Google Photos. It supports mobile backups via iOS and Android apps, face recognition, object search, timeline view, and album sharing. Resource-heavy but worth it for photo management at scale.

# .env
POSTGRES_DB=immich
POSTGRES_USER=immich
POSTGRES_PASSWORD=change_this_immich
UPLOAD_LOCATION=/data/immich/upload
IMMICH_VERSION=v1.122.3
# docker-compose.yml
services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION}
    container_name: immich_server
    restart: unless-stopped
    ports:
      - "127.0.0.1:8015:2283"
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    environment:
      DB_HOSTNAME: database
      DB_USERNAME: ${POSTGRES_USER}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_DATABASE_NAME: ${POSTGRES_DB}
      REDIS_HOSTNAME: redis
    depends_on:
      database:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - immich_internal

  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION}
    container_name: immich_ml
    restart: unless-stopped
    volumes:
      - immich_model_cache:/cache
    environment:
      DB_HOSTNAME: database
      DB_USERNAME: ${POSTGRES_USER}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_DATABASE_NAME: ${POSTGRES_DB}
    networks:
      - immich_internal
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G

  redis:
    image: redis:7-alpine
    container_name: immich_redis
    restart: unless-stopped
    networks:
      - immich_internal
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  database:
    image: ghcr.io/immich-app/postgres:14-v0.3.0
    container_name: immich_db
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_INITDB_ARGS: --data-checksums
    volumes:
      - immich_db:/var/lib/postgresql/data
    networks:
      - immich_internal
    healthcheck:
      test: pg_isready --dbname='${POSTGRES_DB}' --username='${POSTGRES_USER}' || exit 1; Chksum="$$(psql --dbname='${POSTGRES_DB}' --username='${POSTGRES_USER}' --tuples-only --no-align --command='SELECT COALESCE(SUM(checksum_failures), 0) FROM pg_stat_database')"; echo "checksum failure count is $$Chksum"; [ "$$Chksum" = '0' ] || exit 1
      interval: 5m
      start_interval: 30s
      start_period: 5m

volumes:
  immich_db:
  immich_model_cache:

networks:
  immich_internal:
    driver: bridge

Post-deploy notes: Immich uses a custom PostgreSQL image with pgvecto.rs for vector similarity search (face recognition). Do not substitute with a standard PostgreSQL image — it will fail to start. The machine learning container downloads models on first run (~600MB). Ensure at least 4GB RAM on the host for comfortable operation. Always pin the IMMICH_VERSION and update both server and machine-learning containers together.
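A minimal lockstep-upgrade sketch: both images read ${IMMICH_VERSION}, so bumping that one variable and re-pulling updates server and machine learning together. The version tag below is a hypothetical example, and the .env written here is a stand-in for illustration:

```shell
set -eu
# Bump the pinned version — both compose services reference ${IMMICH_VERSION},
# so a single edit keeps server and ML on the same release.
NEW_VERSION="v1.123.0"                      # hypothetical target — check the release notes first
printf 'IMMICH_VERSION=v1.122.3\n' > .env   # stand-in .env for this illustration
sed -i "s/^IMMICH_VERSION=.*/IMMICH_VERSION=${NEW_VERSION}/" .env
cat .env                                    # → IMMICH_VERSION=v1.123.0
# Then, on the real host:
# docker compose pull && docker compose up -d
```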


Production Best Practices

1. Always Pin Image Versions

Never use :latest in production. A routine docker compose pull followed by docker compose up -d can silently upgrade your database or application to a breaking version.

# Bad
image: postgres:latest

# Good
image: postgres:17.2-alpine

2. Use Named Volumes, Not Bind Mounts (For Data)

Named volumes are managed by Docker and work predictably across systems. Bind mounts are fine for configuration files, but never use them for database data or application state.

# Bad — permission issues, host-path dependency
volumes:
  - ./data/postgres:/var/lib/postgresql/data

# Good
volumes:
  - postgres_data:/var/lib/postgresql/data

3. Separate Secrets Into .env Files

Never hardcode passwords in docker-compose.yml. Use a .env file and add it to .gitignore immediately.

# .gitignore
.env
.env.*
!.env.example
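Several stacks above need long random secrets (APP_SECRET and the like). A quick way to generate one, assuming openssl is installed — as it is on virtually every Linux host:

```shell
# 32 random bytes, hex-encoded — yields a 64-character secret
openssl rand -hex 32
```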

4. Add Health Checks to Every Service

Without health checks, depends_on only waits for the container to start — not for the service inside to be ready. A database container can be "running" while PostgreSQL is still initializing.

healthcheck:
  test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
  interval: 10s
  timeout: 5s
  retries: 5
  start_period: 30s

5. Set Resource Limits

Without limits, one runaway container can starve others on the same host.

deploy:
  resources:
    limits:
      cpus: '0.5'
      memory: 512M
    reservations:
      cpus: '0.1'
      memory: 128M

6. Configure Logging

Default Docker logging has no rotation — logs accumulate indefinitely.

logging:
  driver: "json-file"
  options:
    max-size: "10m"
    max-file: "3"

7. Use Network Isolation

Create separate internal networks for each stack. Only the services that need public access (typically just the app or nginx container) should be connected to the proxy network.

networks:
  internal:        # No external access
    driver: bridge
  proxy:           # Connected to Traefik or host proxy
    external: true
    name: traefik_public
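A service that must be reachable by the proxy joins both networks; everything else stays on internal only. A sketch — traefik_public is assumed to match your proxy's actual network name:

```yaml
services:
  app:
    networks:
      - internal   # talks to the database
      - proxy      # reachable by Traefik
  db:
    networks:
      - internal   # never attached to the proxy network
```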

8. Bind Ports to Localhost

If you're using a reverse proxy, your application ports should not be publicly accessible. Always bind to 127.0.0.1:

# Exposes port 8080 publicly — dangerous
ports:
  - "8080:8080"

# Only accessible from localhost — correct
ports:
  - "127.0.0.1:8080:8080"

Reverse Proxy Patterns

Most of the stacks above bind to a local port. You need a reverse proxy to terminate SSL and route traffic. Here are the three most common patterns.

Traefik (Docker-Native, Auto-SSL)

Best for Docker-heavy setups. Traefik discovers services automatically via container labels and provisions Let's Encrypt SSL without manual configuration. See Stack 7 above for the full setup.

Nginx with docker-gen

The classic approach for teams already running Nginx. docker-gen watches for new containers and automatically generates Nginx config. Pair it with acme-companion for auto SSL.

# Reference implementation:
# https://github.com/nginx-proxy/nginx-proxy
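For orientation, a minimal sketch of the nginx-proxy and acme-companion pair — the image tags and email are examples, so check the project's README for current releases before deploying:

```yaml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:1.6
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro  # docker-gen watches containers here

  acme-companion:
    image: nginxproxy/acme-companion:2.4
    restart: unless-stopped
    environment:
      DEFAULT_EMAIL: admin@yourdomain.com   # example address
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  certs:
  vhost:
  html:
  acme:
```

Backend services then advertise themselves by setting VIRTUAL_HOST and LETSENCRYPT_HOST environment variables; nginx-proxy picks them up automatically.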

Caddy as Reverse Proxy

The simplest option. Caddy handles HTTPS by default with zero configuration for Let's Encrypt.

# Caddyfile
app.yourdomain.com {
    reverse_proxy 127.0.0.1:8001
}

grafana.yourdomain.com {
    reverse_proxy 127.0.0.1:8010
}

Quick Reference

Stack                          | RAM (min) | Local Port | Use Case
WordPress + MySQL + Redis      | 512 MB    | 8001       | CMS, blogs, e-commerce
Ghost + MySQL                  | 256 MB    | 8002       | Publishing, newsletters
Nextcloud + MariaDB + Redis    | 512 MB    | 8003       | File storage, calendar, contacts
Gitea + PostgreSQL             | 256 MB    | 8004       | Git hosting
Uptime Kuma                    | 64 MB     | 8005       | Uptime monitoring
Portainer                      | 64 MB     | 8006       | Docker management GUI
Traefik                        | 64 MB     | 80/443     | Reverse proxy, auto-SSL
MinIO                          | 256 MB    | 9000/9001  | S3-compatible object storage
Plausible + ClickHouse         | 1 GB      | 8009       | Privacy-first web analytics
Grafana + Prometheus           | 512 MB    | 8010       | Metrics & dashboards
n8n + PostgreSQL               | 512 MB    | 8011       | Workflow automation
Vaultwarden                    | 64 MB     | 8012       | Password manager
Outline + PostgreSQL + MinIO   | 512 MB    | 8013       | Team wiki / knowledge base
Umami + PostgreSQL             | 256 MB    | 8014       | Simple web analytics
Immich + PostgreSQL + Redis    | 2 GB      | 8015       | Photo backup & management

Running Containers Inside Panelica

If you're managing your server with Panelica, Docker Compose stacks can be deployed directly through the built-in Docker Manager. The interface exposes container management, volume inspection, network isolation, and resource limits — all within the per-user isolation model that keeps containers from interfering with each other or the host system.

Each user's containers run under a dedicated cgroup slice with memory and CPU limits enforced at the kernel level. No root access required for end users, and no container escapes into other users' file systems.

The Docker App Templates feature in Panelica provides one-click deployment for common stacks — including several from this guide — with subdomain routing and SSL handled automatically.
