I'm working on a Linux application that monitors processes' resource usage and produces a periodic report, but I've run into a problem extracting the number of open files per process.
Listing all open files, grouping them by PID, and counting them takes quite a while.
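Concretely, the slow version looks something like this (lsof is just one way to do the full list-then-group pass, shown here for illustration):
# list every open file on the system, then group by PID (field 2) and count
lsof | awk 'NR > 1 {print $2}' | sort | uniq -c | sort -rn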
How can I get the open file count for each process in Linux?
Have a look at the /proc file system; each process has an fd/ directory there with one entry per open file descriptor:
ls /proc/$pid/fd/ | wc -l
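For a quick sanity check, you can point it at the current shell, using bash's $$ for the shell's own PID:
ls /proc/$$/fd/ | wc -l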
To do this for all processes, loop over the numeric directories in /proc:
cd /proc
for pid in [0-9]*
do
    echo "PID = $pid with $(ls /proc/$pid/fd/ | wc -l) file descriptors"
done
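Note that /proc/<pid>/fd/ is only readable by the process owner (or root), so the counts for other users' processes come out as 0 unless you run with elevated privileges. A minimal sketch of the same loop under sudo:
sudo bash -c 'cd /proc
for pid in [0-9]*
do
    # 2>/dev/null hides errors from processes that exit while the loop runs
    echo "PID = $pid with $(ls "$pid"/fd/ 2>/dev/null | wc -l) file descriptors"
done'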
EDIT: Credit to @Boban for this addendum: You can pipe the output of the script above into the following script to see the ten processes (and their names) which have the most file descriptors open:
...
done | sort -rn -k5 | head | while read -r _ _ pid _ fdcount _
do
    command=$(ps -o cmd -p "$pid" -hc)
    printf "pid = %5d with %4d fds: %s\n" "$pid" "$fdcount" "$command"
done
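(The read -r _ _ pid _ fdcount _ just picks fields 3 and 5 out of each "PID = ... with ... file descriptors" line, i.e. the PID and the count, and sort -rn -k5 orders the lines by that count before head keeps the top ten.)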
Here's another approach to list the top ten processes with the most open fds; it's probably less readable, so I didn't put it first:
find /proc -maxdepth 1 -type d -name '[0-9]*' \
    -exec bash -c "ls {}/fd/ | wc -l | tr '\n' ' '" \; \
    -printf "fds (PID = %P), command: " \
    -exec bash -c "tr '\0' ' ' < {}/cmdline" \; \
    -exec echo \; | sort -rn | head
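If you're calling this from a monitoring script, it may be tidier to wrap the count in a small helper; a sketch (count_fds is just a made-up name):
count_fds() {
    # Hypothetical helper: print the number of open fds for the given PID,
    # or 0 if the process is gone or its fd/ directory isn't readable.
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}
count_fds "$$"   # e.g. the current shell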