Deep dive into container-to-container communication, custom networks, service discovery, and real-world AI microservice stacks with OpenClaw + Kimi 2.5.
By default, containers are isolated. Docker networking is the layer that lets them talk to each other and to the outside world, with full control over what can reach what.
Docker ships with multiple network drivers. Choosing the right one changes security, performance, and connectivity significantly.
**Bridge (user-defined).** Creates an isolated software-defined network. Containers on the same bridge can reach each other by service name (automatic DNS). Containers on different bridges are fully isolated. Use this for production multi-service apps.
**Host.** Container shares the host's network stack directly, with no NAT overhead. The container's ports are host ports. Much faster for throughput-heavy services, but you lose network isolation. Use for performance-critical services like Nginx or Prometheus.
**Overlay.** Spans multiple Docker hosts (used by Docker Swarm; Kubernetes achieves the same with its own CNI plugins). Creates a virtual network that lets containers on different machines talk as if they're local. Required for distributed deployments.
**Macvlan.** Assigns a real MAC address to the container, making it appear as a physical device on your network. Useful for legacy apps that need to be directly on the LAN. Requires promiscuous mode on the NIC.
| Driver | DNS by name | Isolation | Multi-host | Best For |
|---|---|---|---|---|
| bridge (default) | No (by IP) | Shared | No | Quick testing |
| bridge (user-defined) | Yes | Isolated | No | Production apps ✓ |
| host | Yes (host) | None | No | Max performance |
| overlay | Yes | Isolated | Yes | Swarm/Kubernetes |
| none | No | Full | No | Batch jobs, no net |
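The difference between the default and a user-defined bridge is easy to see from the CLI. A quick sketch (requires a running Docker daemon; the network and container names are throwaway):

```shell
# Create an isolated user-defined bridge
docker network create ai-net

# Start a container on it, then reach it BY NAME from a second container --
# Docker's embedded DNS resolves "redis" only because both share ai-net
docker run -d --name redis --network ai-net redis:7.2-alpine
docker run --rm --network ai-net redis:7.2-alpine redis-cli -h redis ping
# → PONG  (on the default bridge, resolving "redis" by name would fail)

# Clean up: remove the container before the network
docker rm -f redis && docker network rm ai-net
```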
Once containers are on the same Docker network, Docker's embedded DNS server lets them find each other by service name. No IPs needed: the name IS the address.
Name a service `api-gateway` in docker-compose, and every other container on that network can resolve `http://api-gateway:8000` automatically. Docker runs an internal DNS server at 127.0.0.11 inside each container.

**HTTP/REST.** Most common. Service A calls `http://service-b:8001/api/data`. Simple, universal, great for synchronous request-response. Use for API calls between services.
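A minimal compose sketch of name-based discovery (the service names, image, and port here are illustrative, not a real stack):

```yaml
# docker-compose.yml -- both services join the default compose network,
# so "service-b" resolves via Docker's embedded DNS at 127.0.0.11
services:
  service-a:
    image: curlimages/curl:8.8.0
    command: ["curl", "-s", "http://service-b:8001/api/data"]
    depends_on:
      - service-b
  service-b:
    image: my-api:latest   # hypothetical image listening on port 8001
```

Note that `service-b` needs no `ports:` section: container-to-container traffic never touches the host.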
**gRPC.** Google's high-performance RPC protocol over HTTP/2. Binary protocol (protobuf), much faster and smaller than JSON. Ideal for internal microservices with strict latency needs.
**Message queues.** Async communication via RabbitMQ or Redis Streams. Service A drops a message; Service B picks it up when ready. Decouples services completely, which is great for AI inference queues.
**WebSockets.** Persistent bidirectional connections. Service A keeps a socket open to Service B for streaming data, like streaming LLM token output back to the frontend in real time.
**Shared volumes.** Containers mount the same named volume. One writes files; another reads them. Common for batch ML pipelines where a preprocessor writes data that a model container reads.
**Redis Pub/Sub.** Redis acts as a message broker. Services publish events to channels; subscribers receive them instantly. Very low latency, simple to set up, great for real-time coordination.
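As one sketch of the queue pattern in compose (the producer and worker images are hypothetical placeholders for your own services):

```yaml
services:
  rabbitmq:
    image: rabbitmq:3.13-management
    # no ports: needed -- producer and worker reach it as amqp://rabbitmq:5672

  producer:
    image: my-producer:latest       # hypothetical: publishes inference jobs
    environment:
      AMQP_URL: amqp://guest:guest@rabbitmq:5672/
    depends_on:
      - rabbitmq

  worker:
    image: my-model-worker:latest   # hypothetical: consumes jobs when ready
    environment:
      AMQP_URL: amqp://guest:guest@rabbitmq:5672/
    depends_on:
      - rabbitmq
```

Because producer and worker only share the broker, either side can restart or scale without the other noticing.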
Modern AI applications aren't a single container. They're composed of multiple specialized services: an API gateway, an LLM proxy, a vector database, a cache layer, and the AI model API itself.
Let's build a complete, production-style AI stack using LiteLLM (an OpenAI-compatible proxy) connecting to Kimi k2 (Moonshot AI), all wired together through Docker networks. The pattern works with any OpenAI-compatible endpoint.
Every service calls `http://litellm:4000/v1/chat/completions` and LiteLLM routes it to the right provider. Think of it as the "OpenClaw" / universal LLM adapter layer.

The backend services live on `ai-net`; Nginx is only on `public-net`. Even if Nginx were compromised, it cannot reach the database: they're on completely different Docker networks with no route between them.

Production containers need visibility. Docker provides built-in logging, and integrates cleanly with Prometheus + Grafana for metrics.
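A sketch of that two-network layout in compose. The image tags and the `api-gateway` service that bridges the networks are assumptions for illustration, not the article's exact stack:

```yaml
networks:
  public-net:
  ai-net:

services:
  nginx:
    image: nginx:1.27-alpine
    ports:
      - "80:80"                    # the only container exposed to the outside
    networks: [public-net]         # no route to ai-net at all

  api-gateway:
    image: my-api-gateway:latest   # hypothetical app image; bridges both networks
    networks: [public-net, ai-net]

  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    environment:
      LITELLM_MASTER_KEY: ${LITELLM_MASTER_KEY}   # injected from .env, never hardcoded
    networks: [ai-net]             # reachable as http://litellm:4000, ai-net only

  postgres:
    image: postgres:16.3
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    networks: [ai-net]             # no ports: section -- invisible to host and nginx
```

Nginx can only see `api-gateway`; `postgres` and `litellm` simply don't exist from its point of view.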
Patterns that separate hobby projects from real deployments.
Use multiple FROM stages to separate build dependencies from the final runtime image. A Go app that builds to 1.4GB can ship as a 12MB final image.
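A minimal multi-stage sketch for a Go service (paths and the binary name are illustrative):

```dockerfile
# Stage 1: full Go toolchain -- gigabytes of compilers and caches,
# but it exists only at build time and never ships
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# Static binary so it can run in an empty base image
RUN CGO_ENABLED=0 go build -o /server .

# Stage 2: the final image contains nothing but the binary
FROM scratch
COPY --from=build /server /server
ENTRYPOINT ["/server"]
```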
Never run containers as root. Add `RUN adduser --disabled-password appuser` and `USER appuser` to your Dockerfile. Limits blast radius if a container is compromised.
Use `python:3.12.4-slim`, not `python:latest`. Pinned tags make builds reproducible and prevent surprise breakage when base images update.
Order Dockerfile instructions from least-to-most-changing. Copy requirements.txt and install deps before copying your app code β so code changes don't invalidate the deps layer.
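The last three tips combine into one Dockerfile shape (the `main.py` entrypoint is an illustrative assumption):

```dockerfile
# Pinned tag, never :latest
FROM python:3.12.4-slim

WORKDIR /app

# Deps first: this layer stays cached until requirements.txt itself changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# App code last: day-to-day edits here don't invalidate the deps layer above
COPY . .

# Drop root before the process starts
RUN adduser --disabled-password appuser
USER appuser

CMD ["python", "main.py"]
```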
Always set memory and CPU limits in compose. Without limits, one runaway container can starve all others on the same host.
Set restart: unless-stopped on critical services so they survive reboots and crashes automatically without manual intervention.
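Limits and restart policy together, as a compose sketch (newer Compose versions honor `deploy.resources` outside Swarm; the image name is hypothetical):

```yaml
services:
  model-api:
    image: my-model-api:latest
    restart: unless-stopped    # survives crashes and host reboots automatically
    deploy:
      resources:
        limits:
          cpus: "2.0"          # hard cap: at most 2 CPU cores
          memory: 4G           # OOM-killed rather than starving neighbors
```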
The Docker default settings are not production-secure. Here's the hardening checklist every deployment needs.
Postgres and Redis should have NO ports: section in compose. They're accessible by service name within Docker networks β there's never a reason to expose them to the host.
Never hardcode API keys in Dockerfiles or compose files. Use .env files, Docker secrets, or environment injection from a vault at runtime.
Add read_only: true to services that don't need to write. If an attacker gets code execution, they can't modify the container filesystem.
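The three checklist items above, sketched in compose (the `app` image is a placeholder):

```yaml
services:
  redis:
    image: redis:7.2-alpine
    # no ports: section -- reachable only by name from this network, never the host
    read_only: true            # attacker with code execution can't modify the rootfs
    tmpfs:
      - /tmp                   # writable scratch space despite the read-only rootfs

  app:
    image: my-app:latest
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}   # injected from .env at runtime, never baked in
```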
Use docker scout cves myimage:tag or Trivy to scan for known CVEs in your base images and dependencies before deploying.
Add `.env` and any file containing secrets to your `.gitignore` immediately. The most common Docker security breach is accidentally committing API keys to a public repo where Docker images are built in CI.

A few WSL2 tips for Docker Desktop on Windows: keep project files on the Linux filesystem (e.g. `/home/yourname/projects`), not the Windows filesystem (`C:\Users\...`); file I/O is 10-20x faster. Run `wsl --shutdown` and then `wsl` to restart WSL2 if Docker acts up. In `%USERPROFILE%\.wslconfig`, set `memory=8GB` and `processors=4` to cap WSL2 resource usage.
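The `.wslconfig` settings mentioned above, as a complete file (the values are the article's examples; tune them to your machine):

```ini
# %USERPROFILE%\.wslconfig -- lives on the Windows side, not inside WSL
[wsl2]
memory=8GB
processors=4
```

Run `wsl --shutdown` after editing so the new limits take effect on the next start.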