Tutorial

Docker Networking Explained: Bridge, Host, Overlay, and Port Mapping

April 30, 2026

Why Docker Networking Matters

Docker containers are isolated by design, but isolation without communication is useless. Your web application needs to talk to its database. Your API gateway needs to route traffic to microservices. Your monitoring stack needs to scrape metrics from every container. Docker networking is the system that makes all of this possible while maintaining security boundaries.

Understanding Docker networking is not optional for anyone running containers in production. Misconfigured networks lead to containers that cannot find each other, databases exposed to the internet, and troubleshooting sessions that consume hours. This guide covers every Docker network driver, shows you how containers discover each other through DNS, explains port mapping in detail, and walks through real security configurations.

Prerequisites: Basic Docker knowledge (pulling images, running containers). You should be comfortable with TCP/IP concepts like ports, IP addresses, and DNS. A machine with Docker installed is required to follow along.

Docker Network Drivers Overview

Docker provides several network drivers, each designed for different use cases. When you create a network, you choose a driver that determines how containers on that network communicate.

| Driver  | Isolation | Performance | Use Case                                      | Cross-Host |
|---------|-----------|-------------|-----------------------------------------------|------------|
| bridge  | High      | Good        | Default, single-host containers               | No         |
| host    | None      | Best        | Maximum performance, no NAT overhead          | No         |
| overlay | High      | Medium      | Docker Swarm multi-host communication         | Yes        |
| macvlan | High      | Best        | Containers appear as physical devices on LAN  | Yes (L2)   |
| none    | Complete  | N/A         | No networking at all                          | No         |

Bridge Networks: The Foundation

The bridge driver is Docker's default and most commonly used network type. When Docker starts, it creates a default bridge network named bridge (backed by the docker0 interface on the host). Every container that does not specify a network joins this default bridge automatically.

Default Bridge vs Custom Bridge

There is a critical distinction between the default bridge and custom bridge networks that many beginners miss:

Default Bridge (Avoid)

  • Containers communicate by IP only
  • No automatic DNS resolution
  • All containers share the same network
  • No isolation between unrelated containers
  • Must use --link (deprecated) for name resolution

Custom Bridge (Recommended)

  • Containers communicate by name
  • Automatic DNS resolution
  • Network-level isolation between services
  • Can connect/disconnect containers at runtime
  • Configurable subnets, gateways, and options
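
To see the difference in practice, here is a quick sketch; the container names and the IP shown are illustrative:

# On the default bridge, containers cannot resolve each other by name
$ docker run -d --name c1 alpine sleep 3600
$ docker run -d --name c2 alpine sleep 3600
$ docker exec c1 ping -c 1 c2
ping: bad address 'c2'

# They can still reach each other by IP
$ docker inspect c2 --format '{{.NetworkSettings.IPAddress}}'
172.17.0.3
$ docker exec c1 ping -c 1 172.17.0.3

On a custom bridge network, the same lookup succeeds by name, as the next example shows.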

Creating and Using Custom Bridge Networks

# Create a custom bridge network
$ docker network create --driver bridge my-app-network
a1b2c3d4e5f6...

# Create with a specific subnet
$ docker network create \
    --driver bridge \
    --subnet 172.20.0.0/16 \
    --gateway 172.20.0.1 \
    my-custom-network

# Run containers on the custom network
$ docker run -d --name web --network my-app-network nginx
$ docker run -d --name api --network my-app-network node:18-alpine

# Now 'web' can reach 'api' by name!
$ docker exec web curl http://api:3000/health
{"status": "ok"}
Key Insight: On a custom bridge network, Docker runs an embedded DNS server that resolves container names to their IP addresses. This is why curl http://api:3000 works — Docker resolves "api" to the container's IP automatically.
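
You can verify this from inside any container on a custom network: its /etc/resolv.conf points at the embedded DNS server (the exact options lines vary by Docker version).

$ docker exec web cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0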

How Bridge Networking Works Under the Hood

[Diagram: container eth0 (172.20.0.2) → veth pair (virtual cable) → bridge docker0 (172.20.0.1) → iptables NAT (MASQUERADE) → host eth0 → internet]

Each container gets a virtual Ethernet interface (eth0) connected to the bridge via a veth pair — a virtual network cable with one end in the container and the other on the bridge. The bridge acts as a Layer 2 switch, forwarding packets between containers. For outbound internet access, iptables NAT rules masquerade container traffic behind the host's IP.
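
You can observe both pieces directly on the host. The interface name and rule output below are illustrative; yours will differ slightly:

# List the veth interfaces attached to the docker0 bridge
$ ip link show master docker0
5: veth1a2b3c@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... master docker0 ...

# Show the NAT rule that masquerades outbound container traffic
$ sudo iptables -t nat -L POSTROUTING -n | grep MASQUERADE
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0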

Host Networking: Maximum Performance

The host driver removes all network isolation between the container and the host. The container shares the host's network stack directly — same IP address, same ports, no NAT translation.

# Run nginx directly on the host network
$ docker run -d --name nginx-host --network host nginx

# nginx is now listening on the host's port 80 directly
$ curl http://localhost:80
<html>Welcome to nginx!</html>
When to use host networking: Only when you need absolute maximum network performance (eliminating NAT overhead) or when the container needs to bind to many dynamic ports (like a SIP server). The trade-off is zero network isolation — the container can see all host network interfaces, all ports, and all traffic.

Performance Comparison

| Metric              | Bridge         | Host      | Difference          |
|---------------------|----------------|-----------|---------------------|
| Latency (local)     | ~0.05 ms       | ~0.02 ms  | ~60% lower          |
| Throughput (iperf3) | ~30 Gbps       | ~42 Gbps  | ~40% higher         |
| CPU overhead        | Moderate (NAT) | None      | Eliminated          |
| Network isolation   | Full           | None      | Security trade-off  |

Overlay Networks: Multi-Host Communication

Overlay networks extend Docker networking across multiple hosts, enabling containers on different machines to communicate as if they were on the same local network. This is the foundation of Docker Swarm's service discovery.

# Initialize Swarm (required for overlay)
$ docker swarm init

# Create an overlay network
$ docker network create --driver overlay --attachable my-overlay

# Containers on different hosts can now communicate
# Host A:
$ docker run -d --name api --network my-overlay myapp:latest

# Host B:
$ docker run -d --name worker --network my-overlay myworker:latest

# 'worker' on Host B can reach 'api' on Host A by name

Overlay networks use VXLAN tunneling to encapsulate container-to-container traffic inside UDP packets that traverse the physical network. This adds some latency overhead compared to bridge networks, but provides seamless multi-host connectivity.
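
If you want to confirm the encapsulation on a Swarm node, you can capture the VXLAN data traffic, which uses UDP port 4789 by default (eth0 below is a placeholder for your host's physical interface):

$ sudo tcpdump -ni eth0 udp port 4789
# VXLAN-encapsulated packets appear whenever containers on different hosts exchange traffic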

Macvlan Networks: Containers as First-Class Network Citizens

Macvlan makes containers appear as physical devices on your LAN. Each container gets its own MAC address and IP address directly on the host's network segment. This is ideal for legacy applications that expect to be directly addressable on the LAN.

# Create a macvlan network on the host's physical interface
$ docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o parent=eth0 \
    my-macvlan

# Run a container with a specific LAN IP
$ docker run -d --name legacy-app \
    --network my-macvlan \
    --ip 192.168.1.50 \
    myapp:latest

# Container is reachable directly at 192.168.1.50
$ ping 192.168.1.50
Macvlan Caveat: The host cannot communicate with macvlan containers directly because of the way macvlan filters traffic at the kernel level. To work around this, create a macvlan sub-interface on the host. This is an advanced topic but important to know before deploying.
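
A common workaround looks roughly like this; the interface name and addresses are examples, so pick an unused IP on your own LAN:

# Create a macvlan sub-interface ("shim") on the host
$ sudo ip link add macvlan-shim link eth0 type macvlan mode bridge
$ sudo ip addr add 192.168.1.223/32 dev macvlan-shim
$ sudo ip link set macvlan-shim up

# Route traffic for the container's IP through the shim
$ sudo ip route add 192.168.1.50/32 dev macvlan-shim

# The host can now reach the macvlan container
$ ping -c 1 192.168.1.50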

Port Mapping Deep Dive

Port mapping (publishing) is how you make a containerized service accessible from outside the Docker network. It sets up iptables rules to forward traffic from a host port to a container port.

Port Mapping Syntax

# Map host port 8080 to container port 80
$ docker run -d -p 8080:80 nginx

# Bind to a specific host interface only
$ docker run -d -p 127.0.0.1:8080:80 nginx

# Let Docker choose a random host port
$ docker run -d -p 80 nginx

# Map a UDP port
$ docker run -d -p 5353:53/udp dns-server

# Map multiple ports
$ docker run -d -p 80:80 -p 443:443 nginx
Security Critical: When you use -p 3306:3306, Docker creates iptables rules that bypass UFW/firewalld. Your MySQL container becomes accessible from the internet even if you have a firewall blocking port 3306. Always bind to 127.0.0.1 for services that should only be accessible locally: -p 127.0.0.1:3306:3306.
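
A quick way to sanity-check the binding (the container name and password below are placeholders):

$ docker run -d --name mysql-local \
    -e MYSQL_ROOT_PASSWORD=change-me \
    -p 127.0.0.1:3306:3306 \
    mysql:8

# Published only on the loopback interface, so other machines cannot reach it
$ docker port mysql-local
3306/tcp -> 127.0.0.1:3306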

Viewing Port Mappings

$ docker port mycontainer
80/tcp -> 0.0.0.0:8080
443/tcp -> 0.0.0.0:8443

# See the actual iptables rules Docker created
$ sudo iptables -t nat -L DOCKER -n
Chain DOCKER (2 references)
target     prot opt source       destination
DNAT       tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:8080 to:172.17.0.2:80

Container DNS Resolution

Docker's embedded DNS server is one of its most powerful features. On custom networks, every container can reach every other container by name. Understanding how this works helps you troubleshoot connectivity issues.

[Diagram: app container runs curl http://db:5432 → Docker DNS at 127.0.0.11 resolves db to 172.20.0.3 → connection reaches the db container]
# Check DNS resolution inside a container
$ docker exec web nslookup db
Server:    127.0.0.11
Address:   127.0.0.11#53

Non-authoritative answer:
Name:    db
Address: 172.20.0.3

# Docker Compose services are discoverable by service name
# In docker-compose.yml, if you have a service named "postgres",
# other services can reach it at hostname "postgres"

DNS Aliases and Network Aliases

# Give a container multiple DNS names
$ docker run -d --name postgres-primary \
    --network my-network \
    --network-alias db \
    --network-alias database \
    postgres:16

# Container is reachable as 'postgres-primary', 'db', or 'database'

Docker Compose Networking

Docker Compose simplifies networking significantly. By default, Compose creates a single network for your entire stack, and every service is automatically attached to it. Services can reach each other using their service name as the hostname.
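
As a minimal sketch of that default behavior (image names are placeholders):

# docker-compose.yml -- no networks section at all
services:
  web:
    image: nginx:alpine
  api:
    image: myapp:latest

# Compose creates a single network (typically <project>_default) and attaches both
# services, so 'web' can reach 'api' by hostname with no extra configuration.

When different tiers need isolation from each other, you declare multiple networks explicitly, as in the next example.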

# docker-compose.yml
services:
  api:
    image: myapp:latest
    networks:
      - frontend
      - backend
  db:
    image: postgres:16
    networks:
      - backend      # Only on backend - not reachable from nginx
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - frontend     # Can reach 'api' but NOT 'db'

networks:
  frontend:
  backend:
    internal: true   # No internet access
Security Pattern: Use separate networks to enforce the principle of least privilege. Your reverse proxy only needs to reach the application server, never the database. By placing the database on an internal network that only the application can access, you prevent both external access and accidental internet exposure.

Network Management Commands

| Command                                | Description                                              |
|----------------------------------------|----------------------------------------------------------|
| docker network ls                      | List all networks                                        |
| docker network create NAME             | Create a new network                                     |
| docker network inspect NAME            | Show detailed network info (containers, subnet, gateway) |
| docker network connect NET CONTAINER   | Attach a running container to a network                  |
| docker network disconnect NET CONTAINER| Detach a container from a network                        |
| docker network rm NAME                 | Remove a network (no active containers)                  |
| docker network prune                   | Remove all unused networks                               |
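
For example, attaching a running container to an additional network and detaching it again (the network and container names are illustrative):

$ docker network connect backend api
$ docker network disconnect backend api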

Inspecting Networks

$ docker network inspect my-app-network
[
    {
        "Name": "my-app-network",
        "Driver": "bridge",
        "IPAM": {
            "Config": [{ "Subnet": "172.20.0.0/16", "Gateway": "172.20.0.1" }]
        },
        "Containers": {
            "a1b2c3...": { "Name": "web", "IPv4Address": "172.20.0.2/16" },
            "d4e5f6...": { "Name": "api", "IPv4Address": "172.20.0.3/16" }
        }
    }
]

Network Security Best Practices

1. Always use custom bridge networks. The default bridge network has no DNS resolution and no isolation. Create purpose-specific networks like frontend, backend, and monitoring.
2. Use internal networks for databases. Set internal: true on networks containing your database, cache, and other backend services. This prevents containers on those networks from accessing the internet (see the CLI sketch after this list).
3. Bind ports to 127.0.0.1. Instead of -p 3306:3306 (accessible from anywhere), use -p 127.0.0.1:3306:3306 (localhost only). Docker's port mapping bypasses firewall rules.
4. Minimize port exposure. If containers only need to communicate with each other on a shared network, do not publish any ports at all. DNS-based service discovery handles internal communication.
5. Use network segmentation. Place your web-facing container on one network and your database on another. Only the application server should bridge both networks.
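
A minimal sketch of point 2 using the CLI rather than Compose (the network name is just an example):

# Create an internal network: containers on it get no route to the internet
$ docker network create --internal backend

# Outbound traffic fails, while container-to-container traffic on 'backend' still works
$ docker run --rm --network backend alpine ping -c 1 -W 2 1.1.1.1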

Troubleshooting Docker Networking

When containers cannot communicate, follow this systematic debugging approach:

# Step 1: Are both containers on the same network?
$ docker inspect web --format '{{json .NetworkSettings.Networks}}' | jq
$ docker inspect db --format '{{json .NetworkSettings.Networks}}' | jq

# Step 2: Can you resolve the hostname?
$ docker exec web nslookup db

# Step 3: Can you ping (ICMP)?
$ docker exec web ping -c 3 db

# Step 4: Can you reach the service port?
$ docker exec web curl -v http://db:5432

# Step 5: Is the target container listening?
$ docker exec db ss -tlnp
LISTEN  0  128  *:5432  *:*  users:(("postgres",pid=1,fd=3))

# Step 6: Check iptables rules
$ sudo iptables -L DOCKER -n -v
| Symptom                              | Common Cause                      | Fix                                 |
|--------------------------------------|-----------------------------------|-------------------------------------|
| Cannot resolve hostname              | Containers on default bridge      | Use a custom bridge network         |
| Connection refused                   | Service not started or wrong port | Check docker logs and ss -tlnp      |
| Connection timed out                 | Different networks / firewall     | Verify both are on the same network |
| Port works locally, not remotely     | Bound to 127.0.0.1                | Bind to 0.0.0.0 or omit IP          |
| Service accessible despite firewall  | Docker bypasses UFW               | Use DOCKER-USER iptables chain      |
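
As a sketch of that last fix, rules inserted into the DOCKER-USER chain are evaluated before Docker's own forwarding rules, so they apply even to published ports. The interface and subnet below are placeholders:

# Drop traffic to containers arriving on eth0 unless it comes from a trusted subnet
$ sudo iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -j DROP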

Docker Networking with Panelica

Panelica's Docker module manages container networking through the panel, making port mapping and network isolation point-and-click simple. Administrators can create custom networks, assign containers to networks, and configure port mappings through the panel's GUI without writing any Docker commands. Each user's containers are automatically placed in isolated networks, and the RBAC system ensures users can only see and manage their own network configurations.

Integrated Reverse Proxy: When you deploy a web-facing container through Panelica, the panel automatically configures the nginx reverse proxy to route traffic to your container. You get SSL termination, domain management, and container networking handled in a single workflow.

Conclusion

Docker networking is the glue that holds containerized applications together. We have covered the four main network drivers — bridge for standard container communication, host for maximum performance, overlay for multi-host deployments, and macvlan for LAN-native containers. You now understand how DNS resolution works between containers, why custom bridge networks are always better than the default, how port mapping interacts (and sometimes conflicts) with firewalls, and how to secure your network topology with segmentation and internal networks.

The most important takeaway: always use custom bridge networks with internal: true for backend services, bind published ports to 127.0.0.1 when possible, and use Docker Compose's multi-network support to enforce proper isolation. Master these patterns and you will have a secure, debuggable, and production-ready container network every time.
