Fix: EMFILE Too Many Open Files / ulimit Error on Linux
Quick Answer
How to fix EMFILE "too many open files" errors on Linux and Node.js: what causes them (low ulimit file descriptor limits, file handle leaks) and how to increase the limits permanently.
The Error
Your application crashes or refuses new connections with:
Error: EMFILE: too many open files, open '/app/data/file.txt'
Or in Node.js:
Error: EMFILE: too many open files, watch
    at FSWatcher.<anonymous> (node:internal/fs/watchers:244:19)
Or from the OS:
bash: /tmp/test.sh: Too many open files
ulimit: open files: cannot modify limit: Operation not permitted
Or in system logs:
kernel: VFS: file-max limit 65536 reached
The process hit the maximum number of file descriptors it is allowed to open simultaneously.
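The condition is easy to reproduce deliberately. The sketch below assumes bash and python3 on Linux; the subshell keeps the lowered limit from affecting your session:

```shell
# Lower the soft limit in a subshell, then open files until EMFILE (errno 24 on Linux).
( ulimit -n 32
  python3 -c '
fds = []
try:
    while True:
        fds.append(open("/dev/null"))
except OSError as e:
    print("EMFILE after", len(fds), "opens, errno", e.errno)
' )
```

The exact number of successful opens varies with how many descriptors the interpreter already holds, but the failure is always EMFILE once the soft limit is reached.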
Why This Happens
Every open file, network socket, pipe, and device is represented by a file descriptor (FD). Linux enforces limits on how many FDs a process (and a user) can have open at once:
- Soft limit (ulimit -n): the current limit enforced for the process. Default is often 1024 on older systems or 65536 on newer ones.
- Hard limit (ulimit -Hn): the maximum the soft limit can be raised to without root privileges.
- System-wide limit (/proc/sys/fs/file-max): the total FDs the kernel allows across all processes.
Common causes:
- Default soft limit too low (1024) for applications that open many files or connections.
- File descriptor leak — files/sockets opened but never closed.
- Too many watched files (webpack, Jest, Node.js file watchers).
- Heavy concurrency — a server handling thousands of simultaneous connections.
- Docker containers inheriting the host’s limits incorrectly.
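To see how close a process is to its limit before anything crashes, you can compare its open FD count to its soft limit. A minimal sketch for the current shell, assuming a Linux /proc filesystem:

```shell
# Compare this shell's (approximate) open FD count to its soft limit.
open_fds=$(ls /proc/self/fd | wc -l)
soft_limit=$(ulimit -Sn)
echo "open FDs: $open_fds / soft limit: $soft_limit"
```

If the first number is anywhere near the second, raising the limit only buys time; look for a leak as well (Fix 6).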
Fix 1: Increase the Limit for the Current Session
Check and raise the limit immediately (applies to the current shell session only):
# Check current soft limit
ulimit -n
# Check hard limit
ulimit -Hn
# Raise soft limit to hard limit (no root needed)
ulimit -n $(ulimit -Hn)
# Raise to a specific value (must be ≤ hard limit)
ulimit -n 65536
# Verify
ulimit -n
This only affects the current shell and processes started from it. It resets after logout.
Pro Tip: After raising the limit with ulimit -n, restart your application from the same shell session. A process inherits the shell's limits at the moment it starts; changing ulimit after the process is running has no effect on that process.
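You can watch this inheritance in action with a subshell (a sketch; lowering the soft limit is always allowed, while raising it past the hard limit is not):

```shell
# A child shell snapshots the limit at fork time; changes inside it
# never propagate back to the parent.
( ulimit -n 1024; echo "child soft limit: $(ulimit -n)" )
echo "parent soft limit: $(ulimit -n)"
```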
Fix 2: Increase Limits Permanently for a User
Edit /etc/security/limits.conf to persist the change across reboots and logins:
sudo nano /etc/security/limits.conf
Add these lines (replace ubuntu with your username, or use * for all users):
# /etc/security/limits.conf
ubuntu soft nofile 65536
ubuntu hard nofile 65536
# For all users:
* soft nofile 65536
* hard nofile 65536
# For root (requires separate entry):
root soft nofile 65536
root hard nofile 65536
Apply the change:
Log out and log back in, or start a new session. Verify:
ulimit -n   # Should show 65536
Also edit /etc/pam.d/common-session if limits are not applying:
# Add this line if not present:
session required pam_limits.so
Fix 3: Increase the System-Wide File Descriptor Limit
If many processes are hitting limits simultaneously, raise the kernel’s global cap:
# Check current system-wide limit
cat /proc/sys/fs/file-max
# Check currently open FDs system-wide
cat /proc/sys/fs/file-nr
# Output: open_fds unused_fds max_fds
# Temporarily increase (resets on reboot)
sudo sysctl -w fs.file-max=2097152
# Permanently increase — add to /etc/sysctl.conf
echo "fs.file-max = 2097152" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p   # Apply without reboot
Fix 4: Fix Limits for systemd Services
If your application runs as a systemd service, limits.conf does not apply by default — systemd manages its own limits. Edit the service file:
sudo systemctl edit myapp.service
Add:
[Service]
LimitNOFILE=65536Or edit the service file directly:
sudo nano /etc/systemd/system/myapp.service
[Service]
User=ubuntu
ExecStart=/usr/bin/node /app/server.js
LimitNOFILE=65536
Restart=on-failure
Apply changes:
sudo systemctl daemon-reload
sudo systemctl restart myapp.service
# Verify the limit is applied
cat /proc/$(pgrep -f "node /app/server.js")/limits | grep "open files"
Fix 5: Fix EMFILE in Node.js File Watchers
Node.js uses inotify for file watching on Linux (fs.watch, webpack HMR, Jest). Each watched file or directory consumes an inotify watch, and each watcher object consumes an inotify instance (itself a file descriptor); watches and instances have separate limits:
# Check current inotify limits
cat /proc/sys/fs/inotify/max_user_watches # Default: 8192
cat /proc/sys/fs/inotify/max_user_instances # Default: 128
# Increase watches (fix for "ENOSPC: System limit for number of file watchers reached")
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
For ENOSPC file watcher errors specifically, see Fix: ENOSPC: System limit for number of file watchers reached.
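To see which processes are holding inotify instances right now, you can scan /proc. This is a rough sketch: it counts inotify file descriptors per process, not individual watches, and it can only see processes your user may read:

```shell
# Count inotify instances per PID; the busiest watchers appear first.
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null |
  cut -d/ -f3 | sort | uniq -c | sort -rn | head
```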
Reduce the number of files Node.js watches:
In webpack or Jest config, exclude node_modules and other large directories from watching:
// webpack.config.js
module.exports = {
watchOptions: {
ignored: /node_modules/,
},
};
// jest.config.js
module.exports = {
watchPathIgnorePatterns: ["node_modules", "dist", ".git"],
};
Fix 6: Fix File Descriptor Leaks
If the limit keeps being hit despite raising it, the application may be leaking file descriptors — opening files or connections without closing them:
Check open FDs for a running process:
# Find the PID
pgrep -f "node server.js"
# Count open FDs
ls /proc/<PID>/fd | wc -l
# List open files
lsof -p <PID>
# Watch FD count in real time
watch -n 1 "ls /proc/<PID>/fd | wc -l"
Check which types of FDs are leaking:
lsof -p <PID> | awk '{print $5}' | sort | uniq -c | sort -rn
# REG = regular files
# IPv4/IPv6 = network sockets
# FIFO = pipes
If the count grows continuously, you have a leak. Common Node.js leak patterns:
// Leaked — file opened but never closed
const fd = fs.openSync("data.txt", "r");
// ... forgot fs.closeSync(fd)
// Fixed — use fs.promises with proper cleanup
const fileHandle = await fs.promises.open("data.txt", "r");
try {
const content = await fileHandle.read(...);
} finally {
await fileHandle.close(); // Always close in finally
}
// Better: use streams, which close their FD automatically on end or error
// (createReadStream defaults to autoClose: true)
const stream = fs.createReadStream("data.txt");
Fix 7: Fix Limits in Docker Containers
Docker containers inherit the Docker daemon's ulimit settings by default, which in turn usually come from the host. If those limits are low, containers hit them too:
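To confirm what a container actually sees, run ulimit inside one. A sketch that assumes Docker and the alpine image are available; the guard avoids an error on hosts without Docker:

```shell
# Print the soft FD limit as seen inside a container.
if command -v docker >/dev/null 2>&1; then
  docker run --rm --ulimit nofile=65536:65536 alpine sh -c 'ulimit -n'
else
  echo "docker not available on this host"
fi
```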
Set limits in docker run:
docker run --ulimit nofile=65536:65536 myapp
Set limits in docker-compose.yml:
services:
app:
image: myapp:latest
ulimits:
nofile:
soft: 65536
hard: 65536
Set default limits for all containers in Docker daemon config:
// /etc/docker/daemon.json
{
"default-ulimits": {
"nofile": {
"Name": "nofile",
"Hard": 65536,
"Soft": 65536
}
}
}
Restart the Docker daemon to apply:
sudo systemctl restart docker
Still Not Working?
Check whether root also needs its limits raised. Root has its own limits, and on some systems the * wildcard in /etc/security/limits.conf does not apply to root, so it needs explicit root soft nofile and root hard nofile entries.
Check PAM configuration. If pam_limits.so is not loaded in the PAM session configuration, limits.conf changes have no effect. Verify /etc/pam.d/common-session (Debian/Ubuntu) or /etc/pam.d/system-auth (RHEL) contains session required pam_limits.so.
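A quick check for this, covering both the Debian/Ubuntu and RHEL file locations:

```shell
# Print any pam_limits lines from the common session stacks, or warn if absent.
grep -h pam_limits /etc/pam.d/common-session /etc/pam.d/system-auth 2>/dev/null \
  || echo "pam_limits.so not found: limits.conf will be ignored"
```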
Check per-process vs system limits. A process’s limit (/proc/<PID>/limits) is set at startup and does not change when ulimit or limits.conf changes. The application must be restarted after changing limits for the new limits to take effect.
Verify the actual limit in effect for the process:
cat /proc/<PID>/limits
# Look for: Max open files   65536   65536   files
If this still shows the old limit after changes, the service was not restarted correctly, or the limits were not applied to the user/service that starts the process.
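To spot processes still running with an old limit, you can survey all of /proc in one pass (a sketch assuming a Linux /proc filesystem; entries you cannot read are skipped):

```shell
# List PIDs with the lowest "Max open files" soft limits first.
for d in /proc/[0-9]*; do
  soft=$(awk '/Max open files/ {print $4}' "$d/limits" 2>/dev/null)
  [ -n "$soft" ] && echo "${d#/proc/} $soft"
done | sort -k2 -n | head
```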
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
Related Articles
Fix: Cron Job Not Running on Linux
How to fix cron jobs not running on Linux — caused by PATH issues, missing newlines, permission errors, environment variables not set, and cron daemon not running.
Fix: Linux OOM Killer Killing Processes (Out of Memory)
How to fix Linux OOM killer terminating processes — reading oom_kill logs, adjusting oom_score_adj, adding swap, tuning vm.overcommit, and preventing memory leaks.
Fix: Certbot Certificate Renewal Failed (Let's Encrypt)
How to fix Certbot certificate renewal failures — domain validation errors, port 80 blocked, nginx config issues, permissions, and automating renewals with systemd or cron.
Fix: Docker Compose Environment Variables Not Loading from .env File
How to fix Docker Compose not loading environment variables from .env files — why variables are empty or undefined inside containers, the difference between env_file and variable substitution, and how to debug env var issues.