Fix: Linux No Space Left on Device (Disk Full Error)

FixDevs

Quick Answer

How to fix 'No space left on device' errors on Linux — find what is consuming disk space with df and du, clean up logs, Docker images, old kernels, and temporary files, and prevent disk full situations.

The Error

A command fails with:

No space left on device

Or writing a file fails:

cp largefile.tar.gz /var/backups/
cp: error writing '/var/backups/largefile.tar.gz': No space left on device

Or a process crashes with no obvious reason and logs show:

OSError: [Errno 28] No space left on device
write error: No space left on device

Or your application stops working — database writes fail, logs stop updating, or the web server returns 500 errors — all caused silently by a full disk.

Why This Happens

Linux filesystems have a fixed capacity. When that capacity is exhausted, no new data can be written — not even to append to log files. Common culprits:

  • Log files that grow unbounded — application logs, system logs, or journal logs that are never rotated or truncated.
  • Docker images and containers — unused images, stopped containers, and dangling volumes accumulate silently.
  • Old Linux kernel packages — every kernel update leaves the old kernel installed until explicitly removed.
  • Large temporary files in /tmp, build artifacts, and core dumps.
  • Database files or backups — databases that grow continuously without archiving.
  • Inode exhaustion — the filesystem runs out of inodes (file metadata slots) before running out of raw space. df -h shows space available, but df -i shows inodes at 100%.

Fix 1: Identify the Problem with df and du

Start by confirming which filesystem is full and then finding what is consuming space:

Check filesystem usage:

df -h

Output:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   50G     0 100% /
/dev/sdb1       200G   45G  155G  23% /data
tmpfs           7.8G  1.2G  6.6G  16% /dev/shm

The 100% on /dev/sda1 is the problem. Now find what is using the space:

Find the largest directories:

# Top-level directories on the root filesystem
du -h --max-depth=1 / 2>/dev/null | sort -rh | head -20

# Drill into a specific directory
du -h --max-depth=2 /var 2>/dev/null | sort -rh | head -20

Find the largest files:

# Find files larger than 100MB anywhere on the filesystem
find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null | sort -k5 -rh | head -20

# Or use du to find large files quickly
find /var/log -type f -exec du -h {} \; | sort -rh | head -20

Check inode usage (if df -h shows space but writes still fail):

df -i

Output:

Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/sda1      3276800 3276800       0  100% /

If IUse% is 100%, you have run out of inodes. This happens when a directory contains millions of small files. Find the culprit:

# Find directories with the most files
find / -xdev -type d -exec sh -c 'echo "$(find "$1" -maxdepth 1 | wc -l) $1"' _ {} \; 2>/dev/null | sort -rn | head -20
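The checks above can be combined into a single triage pass. A minimal sketch, assuming GNU coreutils df (the function name and the 90% threshold are illustrative choices):

```shell
# Report any filesystem at or above the given percent usage,
# for both blocks (df -P) and inodes (df -iP).
fs_triage() {
    t="$1"
    echo "== Block usage >= ${t}% =="
    df -hP | awk -v t="$t" 'NR > 1 && $5+0 >= t { print $6, "at", $5 }'
    echo "== Inode usage >= ${t}% =="
    df -iP | awk -v t="$t" 'NR > 1 && $5+0 >= t { print $6, "at", $5 }'
}

fs_triage 90
```

With a threshold of 90 this prints only filesystems that need attention; an empty section means that resource is fine.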

Fix 2: Clean Up Log Files

Log files are the most common cause of unexpected disk fill on production servers:

Check current log sizes:

du -h /var/log/* 2>/dev/null | sort -rh | head -20

# Check systemd journal size
journalctl --disk-usage

Truncate large log files (immediate, no restart needed):

# Truncate to zero without deleting (safe for running processes)
truncate -s 0 /var/log/nginx/access.log
truncate -s 0 /var/log/nginx/error.log

# Or using shell redirection
> /var/log/myapp/application.log

Warning: Do not rm a log file that a running process has open — the process keeps writing to the deleted inode, and the disk space is not freed until the process is restarted. Use truncate or > instead, which empties the file while keeping the inode open.
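The trap is easy to demonstrate: while a file descriptor stays open, the kernel keeps the deleted inode alive. A minimal sketch (the path is illustrative):

```shell
demo=/tmp/openfile-demo.log
dd if=/dev/zero of="$demo" bs=1M count=10 status=none

exec 3>>"$demo"   # hold the file open, as a running daemon would
rm "$demo"        # the directory entry is gone...

# ...but the 10MB inode survives as long as fd 3 is open:
ls -l "/proc/$$/fd/3"   # link target ends in "(deleted)"

exec 3>&-         # only closing the fd releases the space
```

This is exactly why truncate -s 0 works where rm does not: it shrinks the inode the process is still writing to.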

Vacuum systemd journal logs:

# Keep only the last 500MB of journal logs
journalctl --vacuum-size=500M

# Keep only logs from the last 7 days
journalctl --vacuum-time=7d

# Check the result
journalctl --disk-usage

Configure logrotate to prevent recurrence:

# /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}

Force logrotate to run immediately:

logrotate -f /etc/logrotate.d/myapp

Fix 3: Clean Up Docker

Docker is a notorious disk consumer — unused images, stopped containers, and volumes accumulate silently:

Check Docker disk usage:

docker system df

Output:

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          47        12        18.5GB    14.2GB (76%)
Containers      23        3         1.2GB     1.1GB (92%)
Local Volumes   18        5         8.4GB     6.2GB (73%)
Build Cache     -         -         4.1GB     4.1GB

Remove everything unused in one command:

# Remove stopped containers, unused networks, dangling images, and build cache
docker system prune -f

# Also remove unused volumes (careful — this deletes data)
docker system prune -f --volumes

# Remove unused images (not just dangling — includes tagged images not used by any container)
docker image prune -a -f

Targeted cleanup:

# Remove all stopped containers
docker container prune -f

# Remove dangling images (untagged layers)
docker image prune -f

# Remove all images not used by any container
docker image prune -a -f

# Remove unused volumes
docker volume prune -f

# Remove unused networks
docker network prune -f

Pro Tip: Add docker system prune -f to a weekly cron job on Docker hosts to prevent gradual disk fill. Combine with --volumes only if you are certain the pruned volumes are not needed.
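The tip above might look like this as a cron entry. A sketch only: the file path, schedule, and log location are illustrative, not required names:

```shell
# /etc/cron.d/docker-prune -- weekly cleanup, Sundays at 03:00
# Removes stopped containers, dangling images, unused networks, and
# build cache; volumes are deliberately left alone.
0 3 * * 0 root docker system prune -f >> /var/log/docker-prune.log 2>&1
```

Logging the output gives you a record of how much space each run reclaimed.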

Fix 4: Remove Old Linux Kernel Packages

Every kernel update installed through apt leaves the previous kernel package in place. Over time these accumulate:

Check installed kernels:

dpkg --list | grep linux-image
# or
uname -r  # Shows the currently running kernel — do not remove this one

Remove old kernels automatically (Ubuntu/Debian):

# This removes kernels no longer needed to boot
apt autoremove --purge

# Or explicitly remove old kernel packages
apt purge linux-image-5.15.0-89-generic linux-headers-5.15.0-89-generic

For systems with many old kernels:

# List kernels not currently running
dpkg --list | grep linux-image | grep -v $(uname -r) | awk '{print $2}'

# Remove them all (confirm the currently running kernel is not in the list)
apt purge $(dpkg --list | grep linux-image | grep -v $(uname -r | sed 's/-generic//') | grep -v linux-image-generic | awk '{print $2}')

On RHEL/CentOS/Fedora:

# Automatically keep only the last 2 kernel versions
dnf remove --oldinstallonly --setopt installonly_limit=2 kernel

# Or manually
rpm -qa | grep kernel
yum remove kernel-old-version

Fix 5: Find and Remove Large Temporary Files and Core Dumps

Clean temporary directories:

# /tmp is usually safe to clear (check first for important files)
ls -lah /tmp | sort -k5 -rh | head -20
rm -rf /tmp/*

# Clear the apt cache
apt clean
apt autoclean

# Clear pip cache
pip cache purge

# Clear npm cache
npm cache clean --force

Find and remove core dumps:

# Core dumps can be gigabytes each
find / -name "core" -type f -size +100M 2>/dev/null
find / -name "core.*" -type f -size +100M 2>/dev/null

# Remove them
find / -name "core" -type f -delete 2>/dev/null
find / -name "core.*" -type f -delete 2>/dev/null

# Check where core dumps are being written
cat /proc/sys/kernel/core_pattern
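If dumps keep coming back, you can cap or disable them at the source. A sketch of the common knobs (the values shown are examples, not recommendations):

```shell
# Disable core dumps for the current shell session and its children
ulimit -c 0
ulimit -c          # prints: 0

# Persist for all users by adding to /etc/security/limits.conf (root):
#   * hard core 0

# On systemd systems, /etc/systemd/coredump.conf caps total usage
# (MaxUse=, KeepFree=, ProcessSizeMax=)
```

Disabling dumps entirely trades away debuggability, so capping them via systemd-coredump is often the safer middle ground.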

Clean build artifacts:

# Node.js projects
find /home /var /srv -name "node_modules" -type d -prune 2>/dev/null | head -20
# Remove specific node_modules you no longer need

# Maven/Gradle caches
du -sh ~/.m2/repository
du -sh ~/.gradle/caches

# Python pip cache
du -sh ~/.cache/pip
pip cache purge

Fix 6: Free Space Immediately When Disk Is 100% Full

When disk is completely full, you cannot even create new files to help diagnose. Use these techniques to get immediate space:

Delete the most obvious large files first:

# Find and delete large rotated log files
find /var/log \( -name "*.gz" -o -name "*.1" -o -name "*.2" \) | xargs du -sh | sort -rh | head -20
find /var/log -name "*.gz" -delete

# Delete old journal logs
journalctl --vacuum-size=100M

Clear the package cache:

apt clean    # Clears /var/cache/apt/archives — often several GB
yum clean all
dnf clean all

If you have a large swap file on the root filesystem:

ls -lh /swapfile
# Temporarily turn off swap to reclaim space
swapoff /swapfile
# Then delete or move it after freeing other space

Use lsof to find deleted files still held open by processes:

# Deleted files still consuming space because a process has them open
lsof | grep deleted | sort -k7 -rn | head -20
# The SIZE/OFF field (7th column) shows the file size
# Restart the listed process to release the space

Real-world scenario: A web server logs to /var/log/nginx/access.log. A logrotate failure causes the file to grow to 40GB. df -h shows 100%. Running rm /var/log/nginx/access.log deletes the file but nginx still holds it open — disk space is not freed. The fix: truncate -s 0 /var/log/nginx/access.log (or restart nginx after deleting the file). Always check lsof | grep deleted when df -h shows full disk but du totals do not match.
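That mismatch can be checked directly. A minimal sketch comparing df's used figure with what du can actually see (the function name and target directory are illustrative; point it at the mount point of the full filesystem):

```shell
# fs_gap DIR: df's used KB for DIR's filesystem vs. what du can see
# under DIR. On a mount point, a large positive gap suggests space
# held by deleted-but-open files (confirm with: lsof | grep deleted).
fs_gap() {
    df_kb=$(df -kP "$1" | awk 'NR == 2 { print $3 }')
    du_kb=$(du -sxk "$1" 2>/dev/null | awk '{ print $1 }')
    echo "df used: ${df_kb} KB, du sees: ${du_kb} KB, gap: $(( df_kb - du_kb )) KB"
}

fs_gap /var/log
```

A small gap is normal (metadata, files changing mid-scan); tens of gigabytes points at a deleted-but-open file.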

Fix 7: Prevent Disk Full in the Future

Set up disk usage alerts:

# Simple cron job to email when disk exceeds 80%
# /etc/cron.d/disk-alert
0 * * * * root df -h | awk 'NR>1 && $5+0 >= 80 {print $0}' | mail -s "Disk Alert: $(hostname)" [email protected]
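The same threshold logic can also live in a small filter that reads df output on stdin, which makes it easy to test and to reuse outside cron (the function name and threshold are illustrative):

```shell
# Print "MOUNT at PCT%" for every filesystem at or above the threshold.
# Usage: df -hP | alert_filter 80
alert_filter() {
    awk -v t="$1" 'NR > 1 && $5+0 >= t { print $6 " at " $5 }'
}

df -hP | alert_filter 80
```

Wire its non-empty output into mail, a chat webhook, or whatever alerting you already run.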

Configure log retention in your applications:

# For applications using Winston (Node.js)
const winston = require('winston');
require('winston-daily-rotate-file');

const transport = new winston.transports.DailyRotateFile({
  filename: 'application-%DATE%.log',
  maxSize: '50m',
  maxFiles: '14d',  // Keep 14 days of logs
});

For systemd journal — set a permanent size limit:

# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=2G
SystemKeepFree=1G
MaxFileSec=1month
# Apply changes
systemctl restart systemd-journald

Monitor disk usage with a proper tool:

# Install ncdu for interactive disk usage explorer
apt install ncdu
ncdu /

# ncdu shows an interactive tree — navigate with arrow keys, delete with 'd'

Still Not Working?

Verify it is actually a disk space issue and not permissions. ENOSPC (errno 28) means no space, but confirm with df -h. If df -h shows available space but writes still fail, check inodes with df -i.

Check for filesystem errors. A corrupted filesystem can report incorrect free space:

# Check filesystem status (do not run on a mounted filesystem)
fsck -n /dev/sda1

Check for disk quotas. User or group quotas can limit a specific user’s disk usage even if the overall filesystem has space:

quota -u username
repquota -a

Check for a full /boot partition. /boot is often a small separate partition (500MB–1GB) that fills up with old kernel files, so check it separately:

df -h /boot
# If full, remove old kernels as described in Fix 4

For related Linux issues, see Fix: Linux Too Many Open Files and Fix: Linux Cron Job Not Running.

FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
