When you run an ALTER TABLE statement in MySQL, the whole table is read-locked (allowing concurrent reads, but prohibiting concurrent writes) for the duration of the statement. If it's a big table, INSERT or UPDATE statements could be blocked for a looooong time. Is there a way to do a "hot alter", i.e. add a column in such a way that the table is still updatable throughout the process?
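For concreteness, the kind of statement I mean is something like the following (the table and column names are just made-up examples):

```sql
-- Example only: add a column to a large, busy table.
-- While this runs, concurrent writes to `orders` are blocked.
ALTER TABLE orders ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'new';
```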
Mostly I'm interested in a solution for MySQL but I'd be interested in other RDBMS if MySQL can't do it.
To clarify, my purpose is simply to avoid downtime when a new feature that requires an extra table column is pushed to production. Any database schema will change over time; that's just a fact of life. I don't see why we should accept that these changes must inevitably result in downtime; that's just weak.
The only other option is to do manually what many RDBMS systems do anyway...
- Create a new table with the new column already in place
You can then copy the contents of the old table over a chunk at a time, while always being careful to pick up any INSERT/UPDATE/DELETE that hits the source table in the meantime (this could be managed by a trigger; although that causes a slowdown, it's not a lock), roughly as sketched below.
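A rough sketch of what that could look like in MySQL; the table, column, and trigger names here are invented for illustration, and the real column list would depend on your schema:

```sql
-- 1. New table with the extra column.
CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'new';

-- 2. Mirror writes on the source table into the new one while the copy runs.
--    (Similar AFTER UPDATE / AFTER DELETE triggers would also be needed.)
CREATE TRIGGER orders_sync_insert AFTER INSERT ON orders
FOR EACH ROW
    REPLACE INTO orders_new (id, customer_id, total)
    VALUES (NEW.id, NEW.customer_id, NEW.total);

-- 3. Copy the existing rows across in chunks so each transaction stays short.
INSERT IGNORE INTO orders_new (id, customer_id, total)
SELECT id, customer_id, total
FROM orders
WHERE id BETWEEN 1 AND 10000;   -- repeat with the next id range until caught up
```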
Once finished, rename the source table out of the way, then rename the new table into its place, ideally as a single atomic operation.
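In MySQL, DDL can't really be wrapped in a transaction (it causes an implicit commit), but RENAME TABLE can swap both names in one atomic statement, which gives the same effect. Again, the table names are just placeholders:

```sql
-- Both renames happen atomically, so clients never see `orders` missing.
RENAME TABLE orders TO orders_old, orders_new TO orders;
```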
Once finished, recompile any stored procedures, etc. that use the table. The execution plans will likely no longer be valid.
EDIT:
Some comments have been made about this limitation being a bit poor, so I thought I'd offer a new perspective to show why it is the way it is...