My MongoDB container wouldn’t restart. No matter what I tried, Docker just threw this back at me:
Cannot restart container sudden-mongodb: mkdir /mnt/.../overlay2/.../merged: no space left on device
Here’s how I diagnosed it, fixed it, and made sure it won’t happen again.
The Symptoms
The container was technically “running” but stuck in health: starting status. The health check never reported healthy because MongoDB itself was in a crash loop, and every restart attempt failed with the same no space left on device error.
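You can see this state directly from the CLI. A small sketch (the helper name is mine, and it falls back to a message where Docker isn't reachable; the .State.Health fields only exist for containers that define a HEALTHCHECK):

```shell
#!/usr/bin/env bash
# Report a container's health state via docker inspect.
check_health() {
  docker inspect --format '{{.State.Health.Status}}' "$1" 2>/dev/null \
    || echo "unknown (docker unavailable or no such container)"
}
check_health sudden-mongodb
```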
Finding the Root Cause
First, I checked the disk:
df -h /mnt/HC_Volume_104227187
Filesystem Size Used Avail Use%
/dev/sdb 9.8G 9.8G 0 100%
100% full. That explains it. But what filled it up?
du -h --max-depth=2 /mnt/HC_Volume_104227187/docker/ | sort -rh | head -10
The culprit was sitting right there — Docker container logs. The MongoDB container alone had accumulated a 4 GB log file. Another container had 728 MB. Together with Docker images and overlay layers, the 9.8 GB volume was completely maxed out.
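To see exactly which log files are eating the disk, you can size them directly. A sketch, assuming the default json-file log driver; CONTAINERS_DIR is a variable I added so you can point it at a copy while experimenting:

```shell
#!/usr/bin/env bash
# With the default json-file driver, each container's stdout/stderr log
# lives at .../containers/<id>/<id>-json.log.
CONTAINERS_DIR="${CONTAINERS_DIR:-/var/lib/docker/containers}"
# Print "<bytes><TAB><path>" for every container log, biggest first.
find "$CONTAINERS_DIR" -name '*-json.log' -printf '%s\t%p\n' 2>/dev/null \
  | sort -rn | head -5
```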
What Happened Inside MongoDB
Looking at the MongoDB logs told the full crash story:
WiredTiger error message: pwrite: failed to write 128 bytes at offset 0
error_str: "No space left on device"
MongoDB’s storage engine (WiredTiger) tried to write to its journal, couldn’t because the disk was full, and panicked. Literally — it issued a WT_PANIC and aborted. On every subsequent restart attempt, the same thing happened because Docker couldn’t even create the overlay filesystem needed to start the container.
The Fix
Step 1: Free up space by truncating the bloated logs.
# Truncate MongoDB container log (4 GB → 0)
truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log
# Truncate other large container logs
truncate -s 0 /var/lib/docker/containers/<other-id>/<other-id>-json.log
This only clears stdout/stderr logs — your actual MongoDB data is untouched.
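If several containers have bloated logs, the same truncate can be applied in bulk. A sketch under the default json-file driver; the 100 MB threshold and the LOG_DIR variable are my choices, not from the incident:

```shell
#!/usr/bin/env bash
# Zero out every container log above 100 MB. truncate -s 0 keeps the file
# (and Docker's open file handle) valid, so no container restart is needed.
LOG_DIR="${LOG_DIR:-/var/lib/docker/containers}"
find "$LOG_DIR" -name '*-json.log' -size +100M -print \
  -exec truncate -s 0 {} \; 2>/dev/null
```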
Step 2: Prune unused Docker images.
docker image prune -f
This freed another ~2.5 GB of old, unused images.
Step 3: Restart the container.
docker restart sudden-mongodb
After about 20 seconds, MongoDB was back to healthy status. Disk usage dropped from 100% to 29%.
Preventing It From Happening Again
The real problem was that Docker’s default logging has no size limit. Containers will happily write logs until the disk is full. The fix is configuring log rotation in /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "100m",
"max-file": "3"
}
}
This caps each container’s logs at 300 MB max (3 rotated files × 100 MB each). After editing the file, restart the Docker daemon:
systemctl restart docker
Note: Existing containers will pick up the new log settings on their next restart or recreate. New containers get them immediately.
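One caution: a typo in daemon.json will stop dockerd from starting at all, so it's worth validating the JSON before restarting. A small sketch (the staging path is my choice; python3 is used purely as a JSON checker):

```shell
#!/usr/bin/env bash
# Write the rotation config to a staging file, validate it, then (as root)
# move it into place and restart the daemon.
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "JSON OK"
# sudo mv /tmp/daemon.json /etc/docker/daemon.json && sudo systemctl restart docker
```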
Key Takeaways
- `no space left on device` is Docker’s way of ruining your morning. Always check `df -h` first when containers refuse to start.
- Docker container logs grow unbounded by default. A busy container can fill a disk in days.
- Always configure log rotation in `daemon.json`. It’s one of those things you set once and forget — until you forget to set it.
- `truncate -s 0` is your friend. It clears log files instantly without needing to stop the container, and it doesn’t touch your actual data.
- Run `docker image prune` periodically. Old images pile up fast, especially if you’re doing frequent deployments.
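The periodic prune can be automated. A sketch of a weekly /etc/cron.d entry (the schedule, filename, and one-week `until` filter are my choices, not from the incident):

```shell
#!/usr/bin/env bash
# Weekly: remove unused images older than 7 days (168h). The cron.d format
# requires a user field ("root") between the schedule and the command.
CRON_LINE='0 3 * * 0 root docker image prune -f --filter "until=168h"'
printf '%s\n' "$CRON_LINE"   # install as root, e.g. into /etc/cron.d/docker-prune
```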