About the UNIX philosophy, why I broke it, and how I moved back to the old track

I have always been fascinated by the cleanliness of UNIX. A tool should do only one thing, but it should do that thing well. The operating system glues all these modules together and gives you the feel of a complex system; you don’t have to take care of huge, bloated software or deal with mysterious bugs that appear at random. Just small bricks of clever software, and the rest is up to you.

Recently I broke this principle, and frankly I am not sure whether it was a bad decision or not.

With MySQL, if you want point-in-time recovery after a disaster, you have to back up not only the database but also the binary logs, as I mentioned in my last blog post. I added a feature to the binlogstreamer: it can clean up binary logs that are older than a given retention period.
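
The exact implementation is not the point here, but to make the idea concrete, here is a minimal sketch in Go of what such a retention-based cleanup can look like. The directory path, the file-name prefix, and the cleanupOldBinlogs function name are illustrative, not the real binlogstreamer API.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// cleanupOldBinlogs removes files from dir whose modification time is older
// than the given retention period. All names here are illustrative only.
func cleanupOldBinlogs(dir string, retention time.Duration) error {
	cutoff := time.Now().Add(-retention)
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		// only touch files that look like binlogs (prefix is an assumption)
		if e.IsDir() || !strings.HasPrefix(e.Name(), "mysql-bin.") {
			continue
		}
		info, err := e.Info()
		if err != nil {
			return err
		}
		if info.ModTime().Before(cutoff) {
			if err := os.Remove(filepath.Join(dir, e.Name())); err != nil {
				return err
			}
			log.Printf("expired binlog removed: %s", e.Name())
		}
	}
	return nil
}

func main() {
	// e.g. keep one week of streamed binlogs
	if err := cleanupOldBinlogs("/var/backups/binlogs", 7*24*time.Hour); err != nil {
		log.Fatal(err)
	}
}
```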

Let’s see how many ways we have to get rid of the old, unwanted (expired) files! We can define it in the provisioning tool, as in “ensure that this directory doesn’t contain files older than N days”, like Puppet’s tidy resource. This is a good solution, clean and easy to read, but I am not sure that Puppet is the right place to keep the logic of a running system. We can also define a recurring cronjob that cleans up the directory, for example with a nicely parameterised find command (something like `find /var/backups/binlogs -type f -mtime +7 -delete`). That is good enough, and moreover it is the UNIXish way, but it adds complexity, which should be avoided when possible. So what is the “good” way? I think the cronjob would be the best, but that is not what I chose.

The database backups here are created with an Ansible playbook: it initialises the MySQL host where the backup will be made as well as the store server where the backups will be kept until they expire. Because this is a playbook, I found it a good idea to keep the cleanup process next to the backup one, ’cause if the backup process fails I definitely don’t want to remove any of the old backups until it is fixed.

Well, I have two backup processes on the database hosts: one takes the backup of the database itself, and the other starts a binlog streamer process which keeps the binlogs saved as well. Because the backup process manages the expiration of the database dumps, it seemed logical to me to make the binlog streamer smart enough to expire binlogs as well. So I decided to add the cleanup functionality to the binlogstreamer app itself.

And now I have started to rethink the whole idea, and I wonder whether it would be better to move the binlog expiration inside the backup process. If the backups are failing, and we keep the old backups to ensure that we at least have something to restore from, then we also need the binlogs next to them, and I definitely don’t want to expire a binlog until the backup it belongs to has been removed as well.
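
To make the difference concrete, here is a sketch of the invariant I am after, in the same illustrative Go style as above (the oldestBackupTime helper and the paths are hypothetical): instead of a fixed retention period, the binlog cutoff is derived from the oldest backup that is still kept, so a binlog can never expire before the backup that needs it.

```go
package main

import (
	"errors"
	"log"
	"os"
	"time"
)

// oldestBackupTime returns the modification time of the oldest file still
// present in the backup directory. Hypothetical helper, illustrative only.
func oldestBackupTime(backupDir string) (time.Time, error) {
	entries, err := os.ReadDir(backupDir)
	if err != nil {
		return time.Time{}, err
	}
	var oldest time.Time
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		info, err := e.Info()
		if err != nil {
			return time.Time{}, err
		}
		if oldest.IsZero() || info.ModTime().Before(oldest) {
			oldest = info.ModTime()
		}
	}
	if oldest.IsZero() {
		return time.Time{}, errors.New("no backups found, refusing to expire binlogs")
	}
	return oldest, nil
}

func main() {
	// Expire binlogs only up to the oldest retained backup, never past it.
	cutoff, err := oldestBackupTime("/var/backups/mysql")
	if err != nil {
		log.Fatal(err) // no backups -> keep every binlog
	}
	log.Printf("binlogs older than %s are safe to remove", cutoff)
}
```

The cleanup loop from the first sketch would then use this cutoff instead of a wall-clock retention period.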

Keep it simple, stupid.
