Linux: Too Many Open Files — How to Fix It
You'll see it as one of these variants, depending on the runtime:

Too many open files
java.io.IOException: Too many open files
Error: EMFILE, too many open files

This error means a process has hit the maximum number of file descriptors it's allowed to open. On Linux, almost everything is a file — sockets, pipes, regular files, and inotify watches all consume file descriptors. When a process runs out, the kernel refuses to open anything new for it.
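You can reproduce the error on purpose by shrinking the soft limit in a throwaway subshell (a quick demo; assumes python3 is available):

```shell
# Lower the fd limit to 32 in a subshell, then try to hold 100 files open.
# The subshell keeps the reduced limit from leaking into your session.
( ulimit -n 32
  python3 -c "fs = [open('/dev/null') for _ in range(100)]" )
# -> OSError: [Errno 24] Too many open files: '/dev/null'
```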

What causes this

Every process has a soft limit and a hard limit on open file descriptors. The default soft limit is often 1024, which is low for servers, databases, or applications that handle many concurrent connections.

You’ll hit this when:

  • A web server or database handles many concurrent connections (each socket = one fd)
  • An application opens files in a loop without closing them (file descriptor leak)
  • A Node.js app watches too many files (each watcher = one fd)
  • A build tool or bundler processes a large project with thousands of files

Fix 1: Check current limits and usage

First, understand where you stand:

# Check the current soft limit for your shell
ulimit -n

# Check system-wide file descriptor usage
cat /proc/sys/fs/file-nr
# Output: <allocated> <free> <max>

# Check how many fds a specific process is using
ls /proc/<PID>/fd | wc -l

# Or use lsof
lsof -p <PID> | wc -l

Fix 2: Increase the limit temporarily

For the current shell session and any processes it spawns:

ulimit -n 65536

This only lasts until you close the terminal. It also can’t exceed the hard limit. Check the hard limit with:

ulimit -Hn
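To illustrate the soft/hard relationship: an unprivileged user can move the soft limit freely below the hard cap, but not above it (the numbers here are examples):

```shell
ulimit -Hn                    # print the hard cap
( ulimit -Sn 1024 )           # lowering the soft limit: always allowed
ulimit -Sn "$(ulimit -Hn)"    # raising it up to the hard cap: allowed
# Raising the soft limit past the hard cap fails for unprivileged users:
# ulimit -n 99999999   ->  "cannot modify limit: Operation not permitted"
```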

Fix 3: Increase the limit permanently

Edit /etc/security/limits.conf to set persistent limits:

# /etc/security/limits.conf
*         soft    nofile    65536
*         hard    nofile    65536

These limits are applied by PAM at login, so they only take effect for new login sessions — log out and back in (or reconnect over SSH) to pick them up.

For systemd services, the limits.conf file is ignored. Set it in the service unit file instead:

# /etc/systemd/system/myapp.service
[Service]
LimitNOFILE=65536

Then reload and restart:

sudo systemctl daemon-reload
sudo systemctl restart myapp
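To confirm the new limit actually took effect, one way is to ask systemd and then check what the running process got (myapp is the example service from above; the /proc path assumes Linux):

```shell
# What systemd thinks the limit is:
systemctl show myapp -p LimitNOFILE

# What the service's main process actually got:
pid=$(systemctl show myapp -p MainPID --value)
grep 'Max open files' /proc/"$pid"/limits
```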

You also need to make sure the system-wide limit is high enough:

# Check system max
cat /proc/sys/fs/file-max

# Increase if needed
echo "fs.file-max = 2097152" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

Fix 4: Fix the application (file descriptor leak)

If your limits are reasonable but you’re still hitting them, the application is probably leaking file descriptors — opening files or connections without closing them:

Python:

# ❌ Leaks the file descriptor if an exception occurs
f = open('data.txt')
data = f.read()
# f.close() never called if an error happens above

# ✅ Context manager guarantees cleanup
with open('data.txt') as f:
    data = f.read()

Java:

// ❌ Leaks if an exception occurs
FileInputStream fis = new FileInputStream("data.txt");
// ...

// ✅ Try-with-resources closes the stream automatically
try (FileInputStream fis = new FileInputStream("data.txt")) {
    // use fis here
}

Use lsof -p <PID> to see what the process has open. If you see thousands of entries pointing to the same file or socket type, that’s your leak.
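To make the leak jump out, one approach is to group the descriptors by what they point to, e.g. via /proc (readlink targets; <PID> is kept as a placeholder for your process):

```shell
# Group a process's fds by target; a leak shows up as one file or
# socket type with a huge count. Replace <PID> with the real PID.
# The sed strips socket/pipe inode numbers so they group together.
ls -l /proc/<PID>/fd | awk 'NR>1 {print $NF}' \
  | sed 's/:\[.*\]//' \
  | sort | uniq -c | sort -rn | head
```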

How to prevent it

  • Set LimitNOFILE=65536 in systemd service files for any production service. The default of 1024 is too low for most server applications.
  • Always use context managers, try-with-resources, or defer (in Go) to ensure files and connections are closed.
  • Monitor open file descriptor counts in production. Most monitoring tools (Prometheus, Datadog) can track process_open_fds and alert before you hit the limit.
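If you don't have a monitoring agent in place, the last point can be approximated with a few lines of shell (a sketch; the PID and the 80% threshold are illustrative):

```shell
pid=$$   # substitute your service's PID
used=$(ls /proc/"$pid"/fd | wc -l)
soft=$(awk '/Max open files/ {print $4}' /proc/"$pid"/limits)
echo "fds in use: $used / $soft"
# Warn when usage crosses 80% of the soft limit:
if [ "$used" -ge $((soft * 80 / 100)) ]; then
  echo "WARNING: process $pid is nearing its fd limit" >&2
fi
```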