Backup MySQL Amazon RDS

JavaGuy · Dec 17, 2013 · Viewed 11.5k times

I am trying to set up a replica outside of AWS, with the master running on AWS RDS, and I do not want any downtime on my master. So I have set up my slave node, and now I want to back up my current database, which is on AWS RDS.

mysqldump -h RDS ENDPOINT -u root -p --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 --all-databases > /root/dump.sql

I tested it on my VM and it worked fine, but when trying it against RDS it gives me this error:

mysqldump: Couldn't execute 'FLUSH TABLES WITH READ LOCK': Access denied for user 'root'@'%' (using password: YES) (1045)

Is it because I do not have the SUPER privilege, and how do I fix this problem? Can someone please suggest a solution?

Answer

Michael - sqlbot · Dec 17, 2013

RDS does not grant even the master user the SUPER privilege, which is required in order to execute FLUSH TABLES WITH READ LOCK. (This is an unfortunate limitation of RDS.)

The failing statement is generated by the --master-data option, which is, of course, necessary if you want to learn the precise binlog coordinates where the backup begins. FLUSH TABLES WITH READ LOCK acquires a global read lock on all tables, which allows mysqldump to START TRANSACTION WITH CONSISTENT SNAPSHOT (as it does with --single-transaction) and then SHOW MASTER STATUS to obtain the binary log coordinates. After that, it releases the global read lock, because the open transaction will keep the visible data in a state consistent with that log position.
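Roughly, the statement sequence mysqldump issues in this mode looks like the following (a simplified sketch; the exact statements and version-specific comments vary by mysqldump version):

FLUSH TABLES WITH READ LOCK;            -- the statement RDS rejects
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SHOW MASTER STATUS;                     -- file + position, written into the dump as a CHANGE MASTER TO comment
UNLOCK TABLES;                          -- global lock released; the open snapshot keeps the data consistent
-- ... SELECT the contents of each table within this transaction ...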

RDS breaks this mechanism by denying the SUPER privilege and providing no obvious workaround.

There are some hacky options available to work around this, none of them particularly attractive:

  • do the backup during a period of low traffic. If the binlog coordinates have not changed between the time you start the backup and the time the backup begins writing data to the output file or destination server (assuming you used --single-transaction), then this will work, because you know the coordinates didn't change while the process was running.

  • observe the binlog position on the master right before starting the backup, and use those coordinates with CHANGE MASTER TO (a sketch follows below this list). If your master's binlog_format is set to ROW, this should work, though you will likely have to skip past a few initial errors; you should not see any errors after that. This works because row-based replication is very deterministic and will stop if it tries to insert something that's already there or delete something that's already gone. Once past the errors, you will be at the true binlog coordinates where the consistent snapshot actually started.

  • as in the previous item, but, after restoring the backup, try to determine the correct position by using mysqlbinlog --base64-output=decode-rows --verbose to read the master's binlog at the coordinates you obtained, checking your new slave to see which of the events must already have been executed before the snapshot started, and using the coordinates determined this way with CHANGE MASTER TO.

  • use an external process to obtain a read lock on each and every table on the server, which will stop all writes; observe that the binlog position from SHOW MASTER STATUS has stopped incrementing, start the backup, and release those locks.

If you use any of these approaches other than perhaps the last one, it's especially critical that you do table comparisons to be certain your slave is identical to the master once it is running. If you hit subsequent replication errors... then it wasn't.
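Here is a sketch of the second option above, under the assumption that binlog_format is ROW; the endpoint, replication user, and file/position values are placeholders you would substitute with your own:

-- On the RDS master, immediately before starting the dump:
SHOW MASTER STATUS;
-- Suppose it returns File: mysql-bin-changelog.000123, Position: 456789 (hypothetical values).
-- Take the backup with --single-transaction (without --master-data), restore it
-- on the external slave, then point the slave at the recorded coordinates:

CHANGE MASTER TO
  MASTER_HOST = 'your-instance.xxxxxxxx.us-east-1.rds.amazonaws.com',   -- placeholder endpoint
  MASTER_USER = 'repl_user',                                            -- a user you granted REPLICATION SLAVE
  MASTER_PASSWORD = '...',
  MASTER_LOG_FILE = 'mysql-bin-changelog.000123',
  MASTER_LOG_POS  = 456789;
START SLAVE;

-- Expect a few duplicate-key / row-not-found errors for events that were already
-- captured in the snapshot; skip them one at a time (SUPER is available here,
-- since the external slave is self-managed):
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;

-- For the third option, the same coordinates can be fed to
-- mysqlbinlog --base64-output=decode-rows --verbose (adding --read-from-remote-server,
-- since RDS gives no filesystem access to the binlogs) to work out which events
-- the snapshot already contained.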

Probably the safest option -- but also maybe the most annoying, since it seems like it should not be necessary -- is to begin by creating an RDS read replica of your RDS master. Once it is up and synchronized with the master, you can stop replication on the RDS read replica by executing an RDS-provided stored procedure, CALL mysql.rds_stop_replication, which was introduced in RDS 5.6.13 and 5.5.33 and does not require the SUPER privilege.

With replication on the RDS read replica stopped, take your mysqldump from the read replica, which now holds an unchanging data set as of a specific set of master coordinates. Restore this backup to your off-site slave, and then use the read replica's master coordinates from SHOW SLAVE STATUS (Relay_Master_Log_File and Exec_Master_Log_Pos) as your CHANGE MASTER TO coordinates.

The value shown in Exec_Master_Log_Pos on a slave is the start of the next transaction or event to be processed, and that's exactly where your new slave needs to start reading on the master.
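Put together, the sequence looks roughly like this; the host name, replication user, and coordinates are placeholders, and the mysql.rds_* procedures are the ones documented by RDS:

-- On the RDS read replica, once it has caught up with the master:
CALL mysql.rds_stop_replication;
SHOW SLAVE STATUS;   -- note Relay_Master_Log_File and Exec_Master_Log_Pos

-- Dump from the now-frozen read replica (e.g. mysqldump -h replica-endpoint
-- --single-transaction --skip-lock-tables --hex-blob --all-databases)
-- and restore the dump on the external slave.

-- On the external slave, point replication at the RDS master itself, using the
-- coordinates captured from the replica:
CHANGE MASTER TO
  MASTER_HOST = 'your-master.xxxxxxxx.us-east-1.rds.amazonaws.com',   -- placeholder endpoint
  MASTER_USER = 'repl_user',                                          -- placeholder replication user
  MASTER_PASSWORD = '...',
  MASTER_LOG_FILE = 'mysql-bin-changelog.000123',   -- Relay_Master_Log_File from the replica
  MASTER_LOG_POS  = 456789;                         -- Exec_Master_Log_Pos from the replica
START SLAVE;

-- If you want to keep the read replica instead of deleting it, resume it with
-- CALL mysql.rds_start_replication;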

Then you can decommission the RDS read replica once your external slave is up and running.