
Database migrations using triggers in MySQL with MyISAM and replication

by Taavi on October 3, 2011

Last year we purchased the book Refactoring Databases. While the first chapter was preaching to the choir (database migrations are normal around here), there was one extremely valuable gem: the use of database triggers to effect seamless data migrations.

The book targets enterprise environments where there are multiple applications accessing the database concurrently, each with release cycles measured in months and years, and regularly scheduled downtime windows. In contrast, FreshBooks has fewer moving parts (and each part is much smaller), release cycles on the order of weeks (sometimes hours!), and no time is good for downtime. We’re used around the world, and it shows in the server activity logs 24/7.

However, we can still learn a lot from Refactoring Databases. Methods of performing migrations with staggered application releases over the course of months are equally applicable to a normal web app undergoing a zero-downtime deployment with rolling backend restarts onto a new version of the code.

Database triggers are a way of telling the database server to react automatically to some other action, like If This Then That. The Big Three operations that change data are INSERT, UPDATE, and DELETE. One example use of triggers is to create an audit log by instructing the database server to record the effect of every UPDATE statement against a table. From this description it’s tempting to think that a lot of things should be done using triggers! Unfortunately, triggers run on the database server itself (so they can easily become a performance bottleneck), are written in SQL (which makes it hard to express yourself if you need to do any kind of data transformation), are impossible to unit test (any test you can write for them is essentially an integration test), and don’t allow chaining (in MySQL at least; if you need two logical things done, you have to write them as a single trigger, so the Observer and Visitor patterns are not an option here).
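
To make the audit-log idea concrete before moving on, here’s roughly what such a trigger could look like. This is only a sketch: the invoices table, the hand-made invoices_audit table, and their columns are invented for illustration.

-- Hypothetical audit table: record each UPDATE's before/after amount and when it happened.
CREATE TRIGGER `invoices_audit_update`
AFTER UPDATE ON invoices FOR EACH ROW
INSERT INTO invoices_audit (invoice_id, old_amount, new_amount, changed_at)
VALUES (OLD.id, OLD.amount, NEW.amount, NOW());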

Believe it or not, MySQL (even using the MyISAM table engine) has full support for triggers! In the context of supporting data migrations, there are some more specific uses whose benefits outweigh the issues above. Data migration triggers don’t have to exist for very long, and it’s a strong hint to reorganize your release schedule if you find yourself wanting to do more than one thing in a single trigger. Useful things a trigger can do to support a data migration include:

  • keeping two columns in a table in sync (a sketch of this case follows the list)
  • keeping two columns in two different tables in sync
  • creating new column(s) with data synthesized from old one(s) and vice versa
  • anything else you can do with a trigger (no dynamic SQL!)
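
For the first item, a trigger might look something like the sketch below. The clients table and its phone/phone_number columns are hypothetical, standing in for an old column and its replacement during a rename; a matching BEFORE UPDATE trigger would be needed as well, since MySQL triggers are defined per event.

-- Hypothetical rename of clients.phone to clients.phone_number:
-- whichever column the application writes, fill in the other one.
CREATE TRIGGER `clients_phone_sync`
BEFORE INSERT ON clients FOR EACH ROW
SET NEW.phone_number = COALESCE(NEW.phone_number, NEW.phone),
    NEW.phone = COALESCE(NEW.phone, NEW.phone_number);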

However, there are some gotchas to keep in mind when you’re replicating with triggers. With MySQL replication the master records all the data-changing SQL statements it executes, and the slave(s) execute those recorded SQL statements[1]. The slave machine doesn’t check any privileges; it assumes that if the master performed the operation, it was okay. Normally this is perfectly fine, but triggers add an extra layer of indirection.

Let’s try an example. Say that we want to copy data from one table (FOO) into another (BAR) using an AFTER INSERT trigger. The CREATE TRIGGER statement itself will be written to the replication logs and executed blindly by the slave(s); each MySQL instance will then have its own copy of the trigger. Trigger creation on the master requires the SUPER privilege. Note that ALL PRIVILEGES does not include SUPER!
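
Granting that privilege to whichever account runs your migrations looks something like the line below; the 'migrator'@'%' account is made up for illustration, and SUPER is a global privilege, so it can only be granted at the *.* level.

-- Hypothetical deploy account; SUPER can only be granted globally.
GRANT SUPER ON *.* TO 'migrator'@'%';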

CREATE TRIGGER `foo_to_bar`
AFTER INSERT ON foo FOR EACH ROW
INSERT INTO bar (some_column)
VALUES (NEW.some_old_column);

NEW is an alias, available inside a trigger body, for the row being written (an UPDATE trigger also gets OLD, the row’s previous values). When a user issues an INSERT INTO FOO statement, the master checks that this is okay, inserts the row into FOO, runs the trigger (with the privileges of the trigger’s creator), and records only the original INSERT statement for replication.

A slave server sees the INSERT INTO FOO statement, does the insert blindly, and runs the trigger. And that works IFF the user who created the trigger on the master also exists on the slave! If the trigger-creator user doesn’t exist on the slave, replication will fail on that INSERT statement, because at that point the slave is no longer doing “blind-execution” replication: it is firing a trigger exactly as if the INSERT had happened on the local instance, definer checks and all.

“So,” you think to yourself, “I’ll just add the trigger-creator user and restart replication.” Except that this is MyISAM, and there are no transactions…and that original INSERT statement already completed! So restarting replication will cause a duplicate PK error (if you’re lucky). If you are lucky, removing the new row from FOO on the slave will let you restart replication, and everything goes on its merry way. If you weren’t lucky, your slave is now strangely out of sync with the master. Have fun cleaning up!
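
In the lucky case, the cleanup on the slave amounts to something like the sketch below. The 'migrator' account, its password, and the row id are all invented; the parts that matter are that the account matches the trigger’s DEFINER from the master exactly (user and host), and that the half-applied row is removed before replication is restarted.

-- On the slave. SHOW SLAVE STATUS\G shows the stopped state and the failing statement.
CREATE USER 'migrator'@'%' IDENTIFIED BY 'hypothetical_password';
GRANT SUPER ON *.* TO 'migrator'@'%';  -- or whatever privileges the trigger body needs
DELETE FROM foo WHERE id = 42;         -- remove the row the interrupted INSERT left behind
START SLAVE;                           -- replay the INSERT; this time the trigger can fire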

But assuming the trigger-creator user is in place, things will work quite happily. At one point, we had over a million triggers running simultaneously for a few weeks. Of course, before trusting the results we ran some data verifications to ensure we had exactly the right data.
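
What those verifications look like depends on the migration, but the simplest ones are plain queries you can run on both the master and the slaves. As a sketch, for the foo/bar example above and the hypothetical column rename from earlier:

-- Row counts should line up (assuming bar started empty and is only fed by the trigger).
SELECT
  (SELECT COUNT(*) FROM foo) AS foo_rows,
  (SELECT COUNT(*) FROM bar) AS bar_rows;

-- Rows where the two synced columns disagree (<=> is MySQL's NULL-safe equality);
-- this should come back as zero.
SELECT COUNT(*) AS mismatches
FROM clients
WHERE NOT (phone <=> phone_number);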

So be careful the first time you deploy triggers, and enjoy the power they give to change your schema without inconveniencing your users. :)

[1] This is equally a problem for saving the results of local-state-specific functions. For example, it’s a bad idea to do something like INSERT INTO foo VALUES (GET_LOCK('a lock')) because the value returned by GET_LOCK() could be different on the slave versus on the master. Timestamps manage to propagate properly because of extra annotations (read: computer-readable SQL comments) included in the replication logs.