mysqldump | mysql yields 'too many open files' error. Why?

user15910 · Sep 17, 2008

I have a RHEL 5 system with a fresh hard drive I just dedicated to the MySQL server. To get things started, I used "mysqldump --host otherhost -A | mysql", even though I noticed the manpage never explicitly recommends this (dumping to an intermediate file is a no-go; we're talking 500 GB of database).

This process fails at random intervals, complaining that too many files are open (at which point mysqld gets the relevant signal and dies, then respawns).

I tried upping the open-file limits via sysctl and ulimit, but the problem persists. What do I do about it?
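
For reference, the limit bumps I tried looked roughly like this (the exact values here are illustrative, not what was actually on the box):

    # raise the system-wide cap on open file handles
    sysctl -w fs.file-max=1000000
    # raise the per-process descriptor limit in the shell that starts mysqld
    ulimit -n 65535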

Answer

VolkerK · Sep 17, 2008

mysqldump by default performs a per-table lock of all involved tables. If you have many tables, that can exceed the number of file descriptors available to the mysql server process. Try --skip-lock-tables or, if locking is imperative, --lock-all-tables.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
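
For example, a sketch of the same pipe with per-table locking disabled (host name taken from the question; -A is the short form of --all-databases):

    mysqldump --host otherhost --all-databases --skip-lock-tables | mysql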

--lock-all-tables, -x

Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables.
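
If you do need a consistent dump, the single-global-lock variant would look like this (again just a sketch against the question's setup):

    mysqldump --host otherhost --all-databases --lock-all-tables | mysql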