Docker Compose in Production: A Practical Guide (No Fluff)


Should You Run Plain Docker Compose in Production?

If you’re still debating whether to run plain Docker Compose in production in 2026, stop looking for a "yes" or "no" answer. The real question is whether you’re prepared to handle the operational gaps that Compose leaves wide open. Most engineers treat Compose like a set-and-forget tool, but it’s actually a manual reconciliation engine that assumes you’re the one doing the heavy lifting.

Here’s the reality: Compose is a fantastic fit for single-node deployments, edge computing, or long-tail services that don't justify the massive overhead of a Kubernetes cluster. It’s simple, declarative, and easy to reason about. But because it lacks a persistent control plane, it won't clean up after itself. If you don't build guardrails, your production host will eventually choke on its own debris.

The Orphan Container Problem

The most common failure mode I see is the "zombie" container. When you remove a service from your docker-compose.yaml and run docker compose up -d, the old container keeps running. It’s detached from your project but still hogging ports and memory.

You won't see it in docker compose ps, which is why it stays hidden for months. The fix is simple but mandatory: always use the --remove-orphans flag.

docker compose up -d --remove-orphans

This tells Compose to remove any container that belongs to the project but whose service is no longer defined in your current file. If you're shipping software to customers, you need to automate this (see the sketch below). If you don't, you'll eventually deal with support tickets about "the old version still answering on port 8080."
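If you deploy from a script or a CI job, bake the flag in so nobody can forget it. A minimal sketch, assuming a bash deploy step and a hypothetical project directory of /opt/myapp:

#!/usr/bin/env bash
# Deploy sketch: pull fresh images, recreate changed services, and
# remove containers the current compose file no longer defines.
set -euo pipefail

cd /opt/myapp            # hypothetical project directory

docker compose pull
docker compose up -d --remove-orphans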

Managing Disk Bloat

Docker is a disk-space hog by design. Every docker compose pull leaves the old image on your drive, and the default json-file log driver will happily write until your disk is full. When the disk hits 100%, Docker stops writing metadata, and your containers start failing in ways that make no sense.

Don't wait for an outage to check your storage. Use docker system df -v to see exactly what’s eating your space. To prevent the log-driven death spiral, cap your logs at the daemon level in /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

After a systemctl restart docker, every new container will rotate its logs at 10MB, giving each container a 30MB ceiling (three 10MB files) and preventing the "disk full" scenario that kills production hosts. Note that daemon-level log options only apply to containers created after the restart; recreate existing ones (docker compose up -d --force-recreate) to pick up the change.
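The log cap doesn't touch old images, though. To reclaim the images that each docker compose pull leaves behind, schedule a regular prune. A minimal sketch of a weekly cron script, assuming nothing on the host needs unused images older than a week (the path is hypothetical):

#!/usr/bin/env bash
# /etc/cron.weekly/docker-prune  (hypothetical location)
# Delete images created more than 7 days ago that no container uses,
# plus stale build cache. Images in use are never removed.
set -euo pipefail

docker image prune --all --force --filter "until=168h"
docker builder prune --force --filter "until=168h"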

The Health Check Trap

Here is the part nobody talks about: adding a healthcheck (a healthcheck: block in your Compose file, or a HEALTHCHECK instruction in the Dockerfile) does absolutely nothing to restart an unhealthy container. Docker Engine will report the status, but it won't act on it. The restart: unless-stopped policy only triggers if the container process actually exits.

If your app is stuck in a deadlock or a zombie state, it will stay "unhealthy" forever while your users suffer. You need an external watchdog or a sidecar process to monitor the health status and force a restart.
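A minimal watchdog sketch, assuming your services define healthchecks and that a blunt docker restart is an acceptable recovery; run it every minute from cron or a systemd timer:

#!/usr/bin/env bash
# Watchdog sketch: Docker records health status but never acts on it,
# so restart any container that currently reports unhealthy.
set -euo pipefail

for name in $(docker ps --filter "health=unhealthy" --format '{{.Names}}'); do
  echo "$(date -Is) restarting unhealthy container: $name"
  docker restart "$name"
done

If you'd rather not maintain the script yourself, projects such as willfarrell/autoheal package the same pattern as a sidecar container that watches the Docker socket.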

Running plain Docker Compose in production is perfectly viable if you treat it as a manual orchestration layer. You have to be the one to prune images, cap logs, and monitor health. If you aren't willing to manage these operational gaps, you’re better off moving to a managed platform. Try this today and share what you find in the comments.
