Fix: Linux No Space Left on Device (Disk Full Error)
Quick Answer
How to fix 'No space left on device' errors on Linux — find what is consuming disk space with df and du, clean up logs, Docker images, old kernels, and temporary files, and prevent disk full situations.
The Error
A command fails with:
No space left on device

Or writing a file fails:
cp largefile.tar.gz /var/backups/
cp: error writing '/var/backups/largefile.tar.gz': No space left on device

Or a process crashes with no obvious reason and logs show:
OSError: [Errno 28] No space left on device
write error: No space left on deviceOr your application stops working — database writes fail, logs stop updating, or the web server returns 500 errors — all caused silently by a full disk.
Why This Happens
Linux filesystems have a fixed capacity. When that capacity is exhausted, no new data can be written — not even to append to log files. Common culprits:
- Log files that grow unbounded — application logs, system logs, or journal logs that are never rotated or truncated.
- Docker images and containers — unused images, stopped containers, and dangling volumes accumulate silently.
- Old Linux kernel packages — every kernel update leaves the old kernel installed until explicitly removed.
- Large temporary files — /tmp, build artifacts, core dumps.
- Database files or backups — databases that grow continuously without archiving.
- Inode exhaustion — the filesystem runs out of inodes (file metadata slots) before running out of raw space. df -h shows space available, but df -i shows inodes at 100%.
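Both failure modes can be checked at once. A minimal sketch, assuming GNU df; the default mount point and the 95% threshold are example values, not anything prescribed by this article:

```shell
#!/bin/sh
# Check both space and inode usage for one filesystem (GNU df assumed).
# The mount point and the 95% threshold are example values — adjust as needed.
MOUNT="${1:-/}"
space=$(df --output=pcent "$MOUNT" | tail -1 | tr -dc '0-9')
inodes=$(df --output=ipcent "$MOUNT" | tail -1 | tr -dc '0-9')
inodes="${inodes:-0}"   # some filesystems report '-' for inode usage
if [ "$space" -ge 95 ]; then
  echo "$MOUNT: disk space nearly exhausted (${space}% used)"
fi
if [ "$inodes" -ge 95 ]; then
  echo "$MOUNT: inodes nearly exhausted (${inodes}% in use)"
fi
```

If it prints nothing, neither resource is near the threshold on that filesystem.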
Fix 1: Identify the Problem with df and du
Start by confirming which filesystem is full and then finding what is consuming space:
Check filesystem usage:
df -h

Output:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 50G 0 100% /
/dev/sdb1 200G 45G 155G 23% /data
tmpfs 7.8G 1.2G 6.6G 16% /dev/shm

The 100% on /dev/sda1 is the problem. Now find what is using the space:
Find the largest directories:
# Top-level directories on the root filesystem
du -h --max-depth=1 / 2>/dev/null | sort -rh | head -20
# Drill into a specific directory
du -h --max-depth=2 /var 2>/dev/null | sort -rh | head -20

Find the largest files:
# Find files larger than 100MB anywhere on the filesystem
find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null | sort -k5 -rh | head -20
# Or use du to find large files quickly
find /var/log -type f -exec du -h {} \; | sort -rh | head -20

Check inode usage (if df -h shows space but writes still fail):
df -i

Output:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 3276800 3276800 0 100% /

If IUse% is 100%, you have run out of inodes. This happens when a directory contains millions of small files. Find the culprit:
# Find directories with the most files
find / -xdev -type d -exec sh -c 'echo "$(find "$1" -maxdepth 1 | wc -l) $1"' _ {} \; 2>/dev/null | sort -rn | head -20

Fix 2: Clean Up Log Files
Log files are the most common cause of unexpected disk fill on production servers:
Check current log sizes:
du -h /var/log/* 2>/dev/null | sort -rh | head -20
# Check systemd journal size
journalctl --disk-usage

Truncate large log files (immediate, no restart needed):
# Truncate to zero without deleting (safe for running processes)
truncate -s 0 /var/log/nginx/access.log
truncate -s 0 /var/log/nginx/error.log
# Or using shell redirection
> /var/log/myapp/application.log

Warning: Do not rm a log file that a running process has open — the process keeps writing to the deleted inode, and the disk space is not freed until the process is restarted. Use truncate or > instead, which empties the file while keeping the inode open.
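A quick way to see this behavior for yourself — a throwaway demo using a temp file, with a shell file descriptor standing in for the running process:

```shell
# Demo: rm does not free space while the file is still open (throwaway temp file).
f=$(mktemp)                       # stand-in for a busy log file
exec 3>"$f"                       # open a write descriptor, as a daemon would
echo "log data" >&3
rm "$f"                           # unlink the name; the inode and its blocks survive
ls -l /proc/$$/fd | grep deleted  # fd 3 now points at "... (deleted)"
exec 3>&-                         # only closing the descriptor releases the space
```

On a real server the equivalent of closing the descriptor is restarting the process that still holds the file, which is why truncate is the safer first move.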
Vacuum systemd journal logs:
# Keep only the last 500MB of journal logs
journalctl --vacuum-size=500M
# Keep only logs from the last 7 days
journalctl --vacuum-time=7d
# Check the result
journalctl --disk-usage

Configure logrotate to prevent recurrence:
# /etc/logrotate.d/myapp
/var/log/myapp/*.log {
daily
rotate 7
compress
delaycompress
missingok
notifempty
copytruncate
}

Force logrotate to run immediately:
logrotate -f /etc/logrotate.d/myapp

Fix 3: Clean Up Docker
Docker is a notorious disk consumer — unused images, stopped containers, and volumes accumulate silently:
Check Docker disk usage:
docker system df

Output:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 47 12 18.5GB 14.2GB (76%)
Containers 23 3 1.2GB 1.1GB (92%)
Local Volumes 18 5 8.4GB 6.2GB (73%)
Build Cache - - 4.1GB 4.1GB

Remove everything unused in one command:
# Remove stopped containers, unused networks, dangling images, and build cache
docker system prune -f
# Also remove unused volumes (careful — this deletes data)
docker system prune -f --volumes
# Remove unused images (not just dangling — includes tagged images not used by any container)
docker image prune -a -f

Targeted cleanup:
# Remove all stopped containers
docker container prune -f
# Remove dangling images (untagged layers)
docker image prune -f
# Remove all images not used by any container
docker image prune -a -f
# Remove unused volumes
docker volume prune -f
# Remove unused networks
docker network prune -f

Pro Tip: Add docker system prune -f to a weekly cron job on Docker hosts to prevent gradual disk fill. Combine with --volumes only if you are certain the pruned volumes are not needed.
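A minimal sketch of such a job, installed as a weekly cron script — the path and log file below are assumptions, adapt them to your host:

```shell
#!/bin/sh
# Example weekly cleanup, installed as /etc/cron.weekly/docker-prune
# (path and log file are assumptions — adjust for your environment).
# Add --volumes only if you are certain no pruned volume holds needed data.
docker system prune -f >> /var/log/docker-prune.log 2>&1
```

Files in /etc/cron.weekly must be executable (chmod +x), and on Debian-based systems the filename must not contain a dot, or run-parts will skip it.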
Fix 4: Remove Old Linux Kernel Packages
Every apt upgrade installs a new kernel but leaves the old one installed. Over time these accumulate:
Check installed kernels:
dpkg --list | grep linux-image
# or
uname -r # Shows the currently running kernel — do not remove this one

Remove old kernels automatically (Ubuntu/Debian):
# This removes kernels no longer needed to boot
apt autoremove --purge
# Or explicitly remove old kernel packages
apt purge linux-image-5.15.0-89-generic linux-headers-5.15.0-89-generic

For systems with many old kernels:
# List kernels not currently running
dpkg --list | grep linux-image | grep -v $(uname -r) | awk '{print $2}'
# Remove them all (confirm the currently running kernel is not in the list)
apt purge $(dpkg --list | grep linux-image | grep -v $(uname -r | sed 's/-generic//') | grep -v linux-image-generic | awk '{print $2}')

On RHEL/CentOS/Fedora:
# Automatically keep only the last 2 kernel versions
dnf remove --oldinstallonly --setopt installonly_limit=2 kernel
# Or manually
rpm -qa | grep kernel
yum remove kernel-old-version

Fix 5: Find and Remove Large Temporary Files and Core Dumps
Clean temporary directories:
# /tmp is usually safe to clear (check first for important files)
ls -lah /tmp | sort -k5 -rh | head -20
rm -rf /tmp/*
# Clear the apt cache
apt clean
apt autoclean
# Clear pip cache
pip cache purge
# Clear npm cache
npm cache clean --force

Find and remove core dumps:
# Core dumps can be gigabytes each
find / -name "core" -type f -size +100M 2>/dev/null
find / -name "core.*" -type f -size +100M 2>/dev/null
# Remove them
find / -name "core" -type f -delete 2>/dev/null
find / -name "core.*" -type f -delete 2>/dev/null
# Check where core dumps are being written
cat /proc/sys/kernel/core_pattern

Clean build artifacts:
# Node.js projects
find /home /var /srv -name "node_modules" -type d -prune 2>/dev/null | head -20
# Remove specific node_modules you no longer need
# Maven/Gradle caches
du -sh ~/.m2/repository
du -sh ~/.gradle/caches
# Python pip cache
du -sh ~/.cache/pip
pip cache purge

Fix 6: Free Space Immediately When Disk Is 100% Full
When disk is completely full, you cannot even create new files to help diagnose. Use these techniques to get immediate space:
Delete the most obvious large files first:
# Find and delete large rotated log files
find /var/log -name "*.gz" -o -name "*.1" -o -name "*.2" | xargs du -sh | sort -rh | head -20
find /var/log -name "*.gz" -delete
# Delete old journal logs
journalctl --vacuum-size=100M

Clear the package cache:
apt clean # Clears /var/cache/apt/archives — often several GB
yum clean all
dnf clean allIf you have a large swap file on the root filesystem:
ls -lh /swapfile
# Temporarily turn off swap to reclaim space
swapoff /swapfile
# Then delete or move it after freeing other space

Use lsof to find deleted files still held open by processes:
# Deleted files still consuming space because a process has them open
lsof | grep deleted | sort -k7 -rn | head -20
# Output shows the file size in the 7th column
# Restart the listed process to release the space

Real-world scenario: A web server logs to /var/log/nginx/access.log. A logrotate failure causes the file to grow to 40GB. df -h shows 100%. Running rm /var/log/nginx/access.log deletes the file but nginx still holds it open — disk space is not freed. The fix: truncate -s 0 /var/log/nginx/access.log (or restart nginx after deleting the file). Always check lsof | grep deleted when df -h shows a full disk but du totals do not match.
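The df-versus-du mismatch can be checked mechanically. A sketch that compares the two figures for one filesystem — the default mount point (/) is an example, and a large positive gap is the signal that deleted-but-open files are eating space:

```shell
#!/bin/sh
# Compare df's used-space figure with what du can account for on one filesystem.
# A large gap usually means deleted files are still held open by a process.
MOUNT="${1:-/}"
df_used=$(df -k --output=used "$MOUNT" | tail -1 | tr -dc '0-9')
du_used=$(du -sxk "$MOUNT" 2>/dev/null | awk '{print $1}')
gap_mb=$(( (df_used - du_used) / 1024 ))
echo "df reports ${df_used}K used, du accounts for ${du_used}K (gap: ~${gap_mb}MB)"
```

Small gaps are normal (filesystem metadata, files du cannot read); gaps of many gigabytes point straight at lsof | grep deleted.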
Fix 7: Prevent Disk Full in the Future
Set up disk usage alerts:
# Simple cron job to email when disk exceeds 80%
# /etc/cron.d/disk-alert
0 * * * * root df -h | awk 'NR>1 && $5+0 >= 80 {print $0}' | mail -s "Disk Alert: $(hostname)" [email protected]

Configure log retention in your applications:
# For applications using Winston (Node.js)
const winston = require('winston');
require('winston-daily-rotate-file');
const transport = new winston.transports.DailyRotateFile({
filename: 'application-%DATE%.log',
maxSize: '50m',
maxFiles: '14d', // Keep 14 days of logs
});

For systemd journal — set a permanent size limit:
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=2G
SystemKeepFree=1G
MaxFileSec=1month

# Apply changes
systemctl restart systemd-journald

Monitor disk usage with a proper tool:
# Install ncdu for interactive disk usage explorer
apt install ncdu
ncdu /
# ncdu shows an interactive tree — navigate with arrow keys, delete with 'd'

Still Not Working?
Verify it is actually a disk space issue and not permissions. ENOSPC (errno 28) means no space, but confirm with df -h. If df -h shows available space but writes still fail, check inodes with df -i.
Check for filesystem errors. A corrupted filesystem can report incorrect free space:
# Check filesystem status (do not run on a mounted filesystem)
fsck -n /dev/sda1

Check for disk quotas. User or group quotas can limit a specific user's disk usage even if the overall filesystem has space:
quota -u username
repquota -a

Check for a full /boot partition. /boot is often a small separate partition (500MB–1GB) that fills up with old kernel files. Run df -h /boot to check it separately.
df -h /boot
# If full, remove old kernels as described in Fix 4

For related Linux issues, see Fix: Linux Too Many Open Files and Fix: Linux Cron Job Not Running.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.