How to Run Multiple Containers with Docker Compose: A Developer's Guide
As developers building modern applications, we need easy ways to run the complex multi-service architectures that power our apps, in development and production alike. This is where containerization, and specifically tools like Docker Compose, comes in handy.
Docker Compose allows developers to define, build, and launch multi-container applications by using declarative YAML-formatted compose files. With a single command you can provision and connect the databases, API services, message queues, caches, and everything else that makes up your microservices architecture.
In this comprehensive guide written from a developer's perspective, we will dig into the key capabilities of Docker Compose for running groups of containers, including:
- Compose file basics: services, environment variables and volumes
- Networking and communicating between containers
- Building images as part of compose workflows
- Launching containers for development, test, and CI/CD
- Deploying changes and reconfiguring containers
- Scaling containers in development and production
- Comparing Docker Compose to Kubernetes
Let's get started exploring how Docker Compose can make managing containers for multi-service apps much easier!
Why Docker Compose Matters for Developers
First, it's important to understand why container adoption has grown so quickly among development teams. According to recent surveys from Aqua Security:
- 83% of organizations now use container technologies
- On average companies run 41% of their apps on containers
- By 2025 analysts forecast 500 million enterprise apps will run in containers
Containers package up entire runtime environments into portable, isolated processes that run consistently on any platform – Linux, Windows, cloud VMs and more. This makes migrating and scaling applications much simpler compared to virtual machines and bare metal servers.
However, most modern applications have outgrown the single container model. Mobile apps might link a React frontend container to Node, Python and PostgreSQL services. E-commerce sites can run dozens of containers for the web storefront, recommendation and payment services, inventory databases etc.
And herein lies the challenge for developers – how to reliably manage all these containers and their connectivity. Doing this manually with Docker run and bash scripts may work initially but does not scale. This is where Docker Compose comes in as a simpler way to define and run multi-container applications, compared to full-blown orchestrators like Kubernetes.
Compose File 101
The foundation of Docker Compose is the YAML-formatted docker-compose file that describes the containers (services) that make up an application stack:
services:
  frontend:
    image: myapp:react
  users:
    image: mongo
  products:
    image: postgres:14
In this simple example we describe three services: a React frontend, a MongoDB database and a PostgreSQL database. That's just scratching the surface, as compose files can describe far larger applications with 20 or more services.
Some key aspects that can be configured per service:
- Images – The container image to use, including optional tags and custom repos
- Ports – External port mappings to internal container ports
- Environment – Environment variables loaded from Docker, compose files or external files
- Volumes – Local storage volumes mounted into the container filesystem
- Networks – The internal virtual networks to join for service-to-service communication
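Taken together, a single service entry might combine these options like so (a sketch; the image name, port numbers and paths here are illustrative placeholders):

```yaml
services:
  webapp:
    image: myapp:latest        # image with an optional tag
    ports:
      - "8080:3000"            # host port 8080 -> container port 3000
    environment:
      NODE_ENV: development    # inline environment variable
    volumes:
      - appdata:/app/data      # named volume mounted into the container
    networks:
      - frontend               # join a named virtual network

volumes:
  appdata:

networks:
  frontend:
```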
Beyond per-service configuration, compose offers powerful features for the overall application:
- Reliable networking between all services for DNS-based access
- Image building tied to Dockerfiles for each component
- Container restart policies when containers exit or crash
- Resource limitations on CPU, memory usage etc.
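For instance, restart behavior and resource caps can be declared per service. The `deploy.resources` section below is part of the Compose specification, honored by Docker Swarm and recent Compose releases; the limit values are illustrative:

```yaml
services:
  worker:
    image: myworker:latest
    restart: on-failure        # restart the container if it exits non-zero
    deploy:
      resources:
        limits:
          cpus: "0.50"         # cap at half a CPU core
          memory: 256M         # cap memory usage
```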
And with just three key commands – docker-compose build, docker-compose up and docker-compose down – developers can fully build and control the entire application lifecycle. Now let's explore some key aspects of Docker Compose for developers in more detail.
Defining Environment Variables
A major appeal of containers for developers is encapsulating an application with its required environment dependencies into a single packaged unit. This allows moving code between development laptops, testing servers, CI pipelines and production clusters very reliably.
Docker Compose adds value by centralizing the environment configuration for an entire application stack in one place – the compose file itself:
services:
  webapp:
    image: myapp
    env_file:
      - config/.env
  db:
    image: postgres:14
    environment:
      POSTGRES_DB: myapp
The webapp service loads variables from an external file, very useful for secret values and config that changes across environments.
The db service defines the values directly in the compose file, great for static config that stays the same.
This allows developers to fully configure service interactions externally without rebuilding images.
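The external file itself is just plain KEY=value lines. A hypothetical config/.env might look like this (every name and value here is an illustrative placeholder):

```ini
DATABASE_URL=postgres://db:5432/myapp
SESSION_SECRET=change-me-in-production
LOG_LEVEL=debug
```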
Linking Containers on the Same Network
Integrating containers that need to communicate is where Docker Compose really shines through for developers trying to build an application from mix-and-match components.
Each container launched with Compose joins a common app-specific network so that services can reach each other using their service names as predictable DNS hostnames, for example:
webapp -> makes API call to -> http://api:3000
This networking happens automatically behind the scenes based on service names. There is no need to manually configure container IPs, port mappings, or hosts files. Developers can just focus on application logic rather than networking code.
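In practice, this means connection strings can reference service names directly (a sketch; the credentials, port and database name are placeholders):

```yaml
services:
  webapp:
    image: myapp
    environment:
      # "db" resolves to the db service's container on the shared network
      DATABASE_URL: postgres://postgres@db:5432/myapp
  db:
    image: postgres:14
```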
Network configurations can get quite advanced with custom drivers and segmented networks per concern:
networks:
  frontend:
  backend:
    driver: overlay
services:
  webapp:
    networks:
      - frontend
  api:
    networks:
      - frontend
      - backend
Here our front-facing webapp can talk to the api service, but back-end databases could be firewalled from external traffic.
Persisting App Data with Volumes
Another aspect developers must consider with containerized apps is how data persists across deployments. Containers are ephemeral by default: removing a container destroys its writable filesystem layer, and any data inside it is lost.
Docker Compose solves this with named volumes, managed by Docker and mounted at designated paths in the container:
volumes:
  dbdata:
    driver: nfs
  cache:
    driver: local
services:
  db:
    volumes:
      - dbdata:/var/lib/db
  webapp:
    volumes:
      - cache:/tmp
This persistently stores critical database content on a shared NFS server, while keeping transient cache files locally on the host.
The actual volume resources are created automatically at runtime without extra setup. Developers don't have to manually mount storage or configure permissions, streamlining development.
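Alongside named volumes, bind mounts are a common development pattern for live code reloading (a sketch; the host and container paths are placeholders):

```yaml
services:
  webapp:
    image: myapp
    volumes:
      # bind-mount local source into the container so edits
      # made on the host are immediately visible inside it
      - ./src:/app/src
```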
Building Container Images
While it's possible to run all your application services using pre-built images from Docker Hub, many apps need custom images tailored to their code, frameworks and dependencies.
Docker Compose features native image building tied to Dockerfiles stored alongside application code:
services:
  webapp:
    build:
      context: ./code
      dockerfile: webapp.Dockerfile
Running docker-compose build will fire up Docker and build the webapp image from our Dockerfile. The resulting image can then be referenced by other services in the same compose file.
The context directory allows sharing common files between multiple services' builds. Compose also caches image layers, so rebuilding is very fast as developers make code changes.
For larger apps, custom images may be handled in a CI/CD pipeline instead while developers use stock images. But having image build capabilities handy helps workflows.
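The referenced webapp.Dockerfile might look something like this (a hypothetical sketch for a Node.js app; the base image and commands are assumptions, not taken from this guide's project):

```dockerfile
# Hypothetical Dockerfile for the webapp service
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci            # install dependencies from the lockfile
COPY . .
CMD ["npm", "start"]
```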
Launching Containers for Development
Once we've defined all our application services, variables, volumes etc. – how do we start things running?
With a single command, docker-compose up will:
- Validate the compose file
- Pull required images
- Build any custom images
- Create networks
- Start and connect containers
- Tail container logs
This brings up the full application with all its services in the foreground, shutting down gracefully on Ctrl+C.
For non-interactive background execution more suited to production, run:
docker-compose up -d
Here the containers run detached in the background, manageable via docker-compose logs, docker-compose ps and docker-compose stop. Running silently in the background avoids a terminal full of scrolling logs.
Either option offers developers a very quick way to move code from their laptop into a real live running environment for testing. Just tweak the compose file as services scale up.
And common actions like building images, restarting containers, or recreating them from scratch are handled through simple intuitive commands.
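Compose also supports layering a docker-compose.override.yml on top of the base file, which docker-compose up reads automatically; it is a common way to keep development-only settings out of the main file (the values below are illustrative):

```yaml
# docker-compose.override.yml: merged automatically on top of
# docker-compose.yml when running docker-compose up
services:
  webapp:
    environment:
      NODE_ENV: development
    ports:
      - "3000:3000"   # expose the dev server locally
```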
Deploying Application Changes
As developers build features or fix bugs, they'll frequently rebuild images and restart containers to test code changes.
Docker Compose supports powerful reconfiguration abilities without fully resetting your app stack after updates:
# Rebuild/recreate any changed services
docker-compose up --build
# Restart containers to reload env, app server etc
docker-compose restart webapp
# Stop services without destroying containers
docker-compose stop
# Clean up containers and volumes
docker-compose down
Only updated services need to be rebuilt after code changes. And restarting containers to refresh the application is much faster than a full reprovision.
Docker caches image layers aggressively, so rebuilds that reuse mostly unchanged layers are very fast. Together this makes the build-run-test cycle extremely tight as developers iterate and test locally.
Running & Scaling Services in Production
Once an application graduates from a developer's laptop, we need ways to deploy docker-compose apps onto production infrastructure reliably.
When it comes to moving containers into production, orchestrators like Kubernetes are very popular. But using Docker Compose directly in production is also a viable approach:
# Scale out services
docker-compose up -d --scale webapp=10 --scale api=5
# Stopping
docker-compose stop webapp
# Zero-downtime deploy (Docker Swarm)
docker service update --image myapp:2.1 webapp
The scaling and stopping commands use Docker Compose directly, while zero-downtime updates rely on Docker Swarm: the same compose file can be deployed to a Swarm cluster with docker stack deploy, which handles clustering, scheduling, and gracefully rolling out updates behind the scenes.
Running real production workloads directly with Compose may not suit larger complex applications. But for simpler apps and microservices it allows sticking with familiar Compose constructs at scale.
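When targeting Swarm with docker stack deploy, scaling and rolling-update behavior can also be declared in the compose file itself via the deploy section (a sketch; replica counts and timings are illustrative):

```yaml
services:
  webapp:
    image: myapp:2.1
    deploy:
      replicas: 10            # desired number of container instances
      update_config:
        parallelism: 2        # update two replicas at a time
        delay: 10s            # wait between update batches
```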
Comparing Docker Compose to Kubernetes
Developers very quickly start hitting the edges of Docker Compose capabilities as application complexity increases:
- Handling failovers of stateful services
- Fine-grained cluster management and scheduling
- Multiple environments and releases
- Canary deployments, A/B testing etc
Migrating to dedicated orchestrators like Kubernetes becomes necessary for very large container fleets with 100s of services. These offer powerful cluster management, deployment features, and native high availability.
However, Kubernetes does ramp up initial complexity significantly through its own set of objects, controllers, operators and custom resource definitions. This can frustrate developers just looking to build and test applications rapidly, as adding infrastructure logic distracts from coding.
Docker Compose hits a nice sweet spot for accelerating multi-container development while avoiding advanced orchestrator overhead. It's a great on-ramp for teams starting with containers, while still relevant even alongside Kubernetes in production.
Key Takeaways for Developers
Docker Compose makes it simple for developers to locally build, run and test complex multi-container applications as part of standard coding workflows:
- Declarative YAML for readable, reusable container environment configs
- Powerful container linking for inter-service communication on private networks
- Image building tied to code check-ins and testing cycles
- End-to-end app visibility with aggregated logging and monitoring
- Simplified deployment from laptops to test, staging and even production
For modern containerized application stacks, Docker Compose is an essential tool for any developer working with microservices. It tames container complexity, accelerating development velocity.