Currently, we have one database cluster with 15 different schemas – each schema either contains “real” data or holds only metadata.
I guess the next evolutionary step of our database stack would be to split the database cluster vertically along these schemas: move every data schema to a standalone MySQL instance and put its metadata schemas next to it. This could also be a good preparatory project for moving a certain part of the database to, for example, a cloud provider while other parts are still kept on bare metal.
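The “multiple instances on the same hardware” part can be sketched with mysqld_multi, where each `[mysqldN]` group describes one instance. A minimal sketch – the ports, sockets and datadir paths below are placeholders, not our actual layout:

```ini
[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

# one group per instance; e.g. instance 1 holds a data schema,
# instance 2 its metadata schemas
[mysqld1]
socket     = /var/run/mysqld/mysqld1.sock
port       = 3307
pid-file   = /var/lib/mysql1/mysqld1.pid
datadir    = /var/lib/mysql1

[mysqld2]
socket     = /var/run/mysqld/mysqld2.sock
port       = 3308
pid-file   = /var/lib/mysql2/mysqld2.pid
datadir    = /var/lib/mysql2
```

With a config like this, `mysqld_multi start 1,2` brings up both instances side by side on the same box.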
Continue reading “Running multiple instances on the same hardware”
At Percona Live Amsterdam 2015, Peter Boros and I gave a talk about GTID replication.
Here are the slides.
Well, it ended a week ago, but I had too many errands to run, so I couldn’t post anything about it until now.
It was really great, again. This was the third time I attended (2013 London, 2015 Santa Clara), so by now I have met a lot of familiar people – it is true that MySQL has a great community. The chosen city was great – Amsterdam is one of the coolest places in Europe – the hotel was neat, and the programs were astounding too.
The conference sessions were great too – I really enjoyed them all – and because they run in eight parallel tracks, it is not that bad that some sessions recur; if you missed one in spring, you can catch it in autumn.
So, everything was comfy and neat. I hope I’ll attend the next one too …
There were a few topics where I plan to dig deeper in the next weeks:
- ProxySQL, because HAProxy is a good choice, but it only speaks TCP and HTTP, not MySQL
- Semi-Sync replication, because getting rid of replication lag would be useful
- XtraDB Cluster/Galera Cluster, because it seems like a good evolutionary step beyond our current setup
- DB options in the cloud.
So far I was blogging on Kinja, but I’ve decided to move my blog content to my own domain and a WordPress blog.
Kinja is a great place for good posts and good conversations, but the engine is not really made for tech blogs – it is really hard to insert preformatted text such as code or console dumps, and I need to do that often.
I have moved all of my previous posts here; maybe in the future I’ll edit them to fix the code formatting, but right now I don’t have the energy for that.
Anyways, welcome here – let the blogging begin.
Last time I checked how TokuDB can be used as a drop-in replacement for InnoDB. The first impressions were jolly good: way less disk space usage, and the TokuDB host can be part of the current replication cluster.
So far so good.
Continue reading “Getting familiar with TokuDB part 2.”
After TokuDB was announced as a new storage engine for MySQL, it made me very curious, but I didn’t try it out until now.
I’ll check it from different aspects and blog about it step by step. I won’t do any serious benchmarking, just play with it and see whether it could fit into Kinja’s MySQL ecosystem.
I use one of our development servers as a TokuDB playground. Sadly, that hardware is not the same as the database masters’ or the slaves’, so performance tests can’t be run on that piece of metal, but many other avenues are still open.
I’ve installed the TokuDB plugin from the Percona repository. The setup was quite easy and fast, and the documentation is nice.
Continue reading “Getting familiar with TokuDB part 1.”
In an earlier post I showed how to drop a whole database in a very safe way (no replication lag at all). That technique could be used to drop a single table too, but cleaning up a table that way can take hours, if not days, so it is not the most comfortable option. We also don’t want even a small spike of replication lag, so we need to find another solution.
How to remove database in a safe way
When you have to drop a large database, you’ll encounter some problems, mainly replication…
Continue reading “How to drop table in a hacky way”
MySQL replication is great and fairly reliable, but sometimes it can get messed up. The good news is that we can handle this.
Let’s see how replication happens when everything is fine!
Continue reading “A few words about database checksumming”
When you have to drop a large database, you’ll encounter some problems, mainly replication lag. Now I’ll show you how to avoid this.
What can cause replication lag when you drop a database? First, it takes some disk I/O to unlink the files, and secondly, MySQL scans through the buffer pool to check whether it holds pages from that database. On a huge (or at least big) database this can take seconds or even minutes, which means your slaves will accumulate lag for seconds or (of course) even minutes too.
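One way to make the unlink side of this cheap – a sketch of the general hardlink trick, not necessarily the exact steps of the full post; the paths, the schema name and the `*.ibd`-only glob (which assumes `innodb_file_per_table`) are illustrative:

```python
import glob
import os

# Hypothetical paths – adjust the datadir and parking dir to your setup.
DATADIR = "/var/lib/mysql/bigdb"
PARKING = "/var/lib/mysql/.bigdb_trash"

def park_hardlinks(datadir=DATADIR, parking=PARKING):
    """Hardlink every table file, so a later DROP DATABASE only
    removes directory entries instead of freeing gigabytes on disk."""
    os.makedirs(parking, exist_ok=True)
    for path in glob.glob(os.path.join(datadir, "*.ibd")):
        os.link(path, os.path.join(parking, os.path.basename(path)))

def shrink_slowly(parking=PARKING, chunk=1 << 30):
    """After the DROP DATABASE, truncate the parked copies chunk by
    chunk so the filesystem frees space gradually, then unlink them.
    (Insert a sleep between chunks in production to throttle I/O.)"""
    for path in glob.glob(os.path.join(parking, "*.ibd")):
        size = os.path.getsize(path)
        while size > 0:
            size = max(0, size - chunk)
            os.truncate(path, size)
        os.remove(path)
```

The point is that the expensive part – actually giving the blocks back to the filesystem – happens outside the replication stream, at whatever pace you choose.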
Continue reading “How to remove database in a safe way”
We love graphs.
Really love them. I think everyone likes graphs that show the current state of their system, and there is no need to explain why.
Of course, sometimes it is painful to create graphs, but Graphite can make this process easier, so that is what we use.
The Graphite ecosystem makes data collection simple: when you send data to statsd via simple UDP packets, it puts them into a Carbon database, and Graphite draws the lines. The only thing that can make this hard is the question of ‘How do I collect the data to send?’
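The “simple UDP packets” part really is this simple – a minimal sketch; the metric names, host and port are made up, and a real client would reuse the socket:

```python
import socket

def send_statsd(metric, value, mtype="c", host="127.0.0.1", port=8125):
    """Fire-and-forget one metric to statsd over UDP.
    The wire format is just 'name:value|type'."""
    payload = f"{metric}:{value}|{mtype}".encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (host, port))
    sock.close()

# a counter ('c'); statsd aggregates these and flushes to Carbon
send_statsd("web.pageviews", 1, "c")
# a gauge ('g') records an absolute value
send_statsd("db.threads_connected", 42, "g")
```

Counters (`|c`), gauges (`|g`) and timers (`|ms`) cover most graphing needs, and because it is UDP, a dead statsd never blocks the application.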
Well, I’ve checked many ways to solve this problem, but I didn’t find anything simple enough.
So, I wrote a daemon called ‘Mambocollector’ to deal with this problem. It is far from perfect – it has bugs, and I am not sure whether it uses too many resources when it collects a lot of data – but it works fine for my current needs. The project can be found on GitHub; feel free to use it or contribute.
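The core idea behind such a collector can be sketched as a tiny sample-and-ship loop – this is just an illustration of the concept, not Mambocollector’s actual code or metric names; the load average is only an example data source:

```python
import os
import socket
import time

def sample_and_send(sock, host="127.0.0.1", port=8125):
    """Take one sample (here: the 1-minute load average) and ship it
    to statsd as a gauge in a fire-and-forget UDP packet."""
    value = os.getloadavg()[0]
    sock.sendto(f"system.loadavg:{value}|g".encode(), (host, port))
    return value

def collect_forever(interval=10):
    """The whole daemon is essentially this loop; statsd, Carbon and
    Graphite take care of aggregation, storage and drawing."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sample_and_send(sock)
        time.sleep(interval)
```

The tricky part a real daemon adds on top of this is exactly what worried me above: keeping resource usage in check when the number of sampled metrics grows.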
UPDATE: this version was badly buggy, so I rewrote it in Go.