I would like to know how it is possible to set an upper limit on the amount of memory MySQL uses on a Linux server.
Right now, MySQL keeps taking up more memory with every new query it receives, so that it eventually runs out of memory. Is there a way to place a limit so that MySQL never uses more than that amount?
MySQL's maximum memory usage very much depends on hardware, your settings and the database itself.
The hardware is the obvious part. The more RAM the merrier, faster disks ftw. Don't believe those monthly or weekly newsletters though. MySQL doesn't scale linearly - not even on Oracle hardware. It's a little trickier than that.
The bottom line is: there is no general rule of thumb for what is recommended for your MySQL setup. It all depends on the current usage and the projections.
MySQL offers countless variables and switches to optimize its behavior. If you run into issues, you really need to sit down and read the (f'ing) manual.
As for the database -- a few important constraints: the table engine (InnoDB, MyISAM, ...), the size of the data, and the indices and how they are used.

Most MySQL tips on stackoverflow will tell you about 5-8 so-called important settings. First off, not all of them matter - e.g. allocating a lot of resources to InnoDB when you don't use InnoDB doesn't make a lot of sense because those resources are wasted.
Or - a lot of people suggest upping the max_connections variable -- well, little do they know it also implies that MySQL will allocate more resources to cater to those connections -- if ever needed. The more obvious solution might be to close the database connection in your DBAL, or to lower the wait_timeout to free those threads.
If you catch my drift -- there's really a lot, lot to read up on and learn.
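To make the memory question a bit more concrete: there is no single knob in MySQL that says "never use more than X MB". What you can do is size the global buffers and the per-connection buffers, because the worst case is roughly global buffers + max_connections * per-connection buffers. A minimal my.cnf sketch -- the numbers are placeholders for a small box, not recommendations, so size them for your own hardware and engine mix:

```
[mysqld]
# Global buffers -- allocated once, shared by all connections.
innodb_buffer_pool_size = 256M   # only pays off if you actually use InnoDB
key_buffer_size         = 64M    # MyISAM index cache; near useless without MyISAM

# Per-connection buffers -- in the worst case every one of max_connections
# threads can allocate these, so keep the product in mind.
sort_buffer_size        = 2M
read_buffer_size        = 1M
read_rnd_buffer_size    = 1M
join_buffer_size        = 1M

# Fewer allowed connections and a shorter idle timeout also cap the worst case.
max_connections         = 100
wait_timeout            = 60
```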
Table engines are a pretty important decision; many people forget about them early on and then suddenly find themselves fighting with a 30 GB MyISAM table which locks up and blocks their entire application.

I don't mean to say MyISAM sucks, but InnoDB can be tweaked to respond nearly as fast as MyISAM, and it offers things like row-level locking on UPDATE, whereas MyISAM locks the entire table when it is written to.
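If you want to see which engines you are actually running before you hand buffers to them -- and maybe convert a table -- something along these lines does it (the schema and table names are made up for the example):

```
-- Which engine does each table in a given schema use?
SELECT table_name, engine, table_rows
FROM information_schema.tables
WHERE table_schema = 'mydb';

-- Convert a table from MyISAM to InnoDB. This rewrites the table,
-- so expect it to take a while and lock things on a big table.
ALTER TABLE mydb.orders ENGINE = InnoDB;
```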
If you're at liberty to run MySQL on your own infrastructure, you might also want to check out Percona Server because, on top of including a lot of contributions from companies like Facebook and Google (they know fast), it also includes Percona's own drop-in replacement for InnoDB, called XtraDB.
See my gist for percona-server (and -client) setup (on Ubuntu): http://gist.github.com/637669
Database size is very, very important -- believe it or not, most people on the Intarwebs have never handled a large and write-intensive MySQL setup, but those really do exist. Some people will troll and say something like, "Use PostgreSQL!!!111", but let's ignore them for now.
The bottom line is: judging from the size, decisions about the hardware have to be made. You can't really make an 80 GB database run fast on 1 GB of RAM.
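To get an honest number for "the size" in the first place, you can ask information_schema instead of guessing:

```
-- Approximate data + index size per schema, in MB.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024) AS total_mb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY total_mb DESC;
```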
Indices are not a case of the more, the merrier: only the indices that are needed should be created, and their usage has to be checked with EXPLAIN. Add to that that MySQL's EXPLAIN is really limited, but it's a start.
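For example -- the table and column names are invented, the point is the workflow:

```
-- Does this query use an index, or does it scan the whole table?
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- If the 'key' column comes back NULL and 'rows' is huge, add the index
-- the query actually needs -- and nothing more.
ALTER TABLE orders ADD INDEX idx_customer (customer_id);
```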
About these my-large.cnf and my-medium.cnf files -- I don't even know who those were written for. Roll your own.
A great start is the tuning primer. It's a bash script (hint: you'll need Linux) which takes the output of SHOW VARIABLES and SHOW STATUS and wraps it into hopefully useful recommendations. If your server has been running for some time, the recommendations will be better since there will be data to base them on.
The tuning primer is not a magic sauce though. You should still read up on all the variables it suggests changing.
I really like to recommend the mysqlperformanceblog. It's a great resource for all kinds of MySQL-related tips. And it's not just MySQL: they also know a lot about the right hardware and recommend setups for AWS, etc. These guys have years and years of experience.
Another great resource is planet-mysql, of course.