Google Cloud SQL Second Generation is available

Recently Google announced that the second generation of Cloud SQL has left the beta stage and is generally available. I decided to take a look, because the last time I checked it, it looked promising, but I couldn’t take it seriously because there was no SLA.

I have a few databases running on Amazon RDS, but I don’t really like it: it is really hard to see what’s going on inside an instance, and, most important, I can’t make binary backups (with XtraBackup), because there is no access to the host OS. I either have to accept vendor lock-in (snapshots) or use slow logical dumps. I bet this is a fine scenario for a lot of DBAs, but personally, I love working with MySQL because it acts like a real UNIX application and just lives on the system natively.
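For reference, the slow logical dump I mean is just a plain mysqldump over the wire; a minimal sketch, with a placeholder endpoint, user and database name:

```shell
# Logical dump from a remote MySQL instance (e.g. an RDS endpoint).
# Slow on large datasets, but the only non-snapshot option when you
# have no access to the host OS.
# --single-transaction gives a consistent InnoDB snapshot without
# locking the tables for the duration of the dump.
mysqldump \
  --host=mydb.example.rds.amazonaws.com \
  --user=admin \
  --password \
  --single-transaction \
  --routines --triggers \
  mydatabase | gzip > mydatabase.sql.gz
```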

We are moving to Google Cloud Platform (GCP), so using – or at least trying out – Cloud SQL seems a logical decision.

I like GCP more than AWS, because I find Amazon’s products hard and unpleasant (for me) to use. They always invent a lot of fancy names for services, so I constantly have to figure out which name covers which service, and frankly, the UI looks … patchy. GCP is more user-friendly: when I want to do something, it is easy to find on the interface … if it is available, of course. To be honest, AWS is more mature in that respect than Google’s cloud platform; there are way more services working there.

But back to Cloud SQL! Its first generation has limitations which make it too small for us – we simply can’t use it – so let’s see the second generation. It makes nice promises about performance, it is easy to manage (boy, I just love the way it handles replica creation – read replicas, or even failover replicas), and it has a nice, secure way to access the database from a local application (the Cloud SQL Proxy). However, every instance has to get an external IP address (no way!) for that, and that is a sore point for me: the applications reside in Google Compute Engine, which appears as an ‘internal’ network, and then we would have to route our database traffic to external IP addresses (even if in a secure way). No way, sir; again, no way. In our current architecture, applications connect to HAProxies for accessing the database; imagine the scenario where an application has to connect to a proxy, which connects to another proxy, which connects to the database… to me, that doesn’t sound too reliable. That’s a concern of mine, but not a show-stopper.
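To illustrate the parts I do like, this is roughly what replica creation and proxy access look like from the gcloud CLI; a sketch with placeholder instance, project and region names – check the current gcloud reference before relying on the exact flags:

```shell
# Create a read replica of an existing second generation instance.
# (Placeholder names: myinstance, myinstance-replica.)
gcloud sql instances create myinstance-replica \
  --master-instance-name=myinstance

# Run the Cloud SQL Proxy locally; it opens an encrypted tunnel to the
# instance's external IP and exposes it on 127.0.0.1:3306.
cloud_sql_proxy -instances=myproject:europe-west1:myinstance=tcp:3306 &

# The application then connects as if the database were local.
mysql --host=127.0.0.1 --port=3306 --user=appuser --password
```

Note that the proxy encrypts the traffic, but the instance still needs that external IP address for it to connect to – which is exactly my complaint above.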

The show-stopper for us is the fact that right now there is no official way to replicate from an external data source to a second generation Cloud SQL instance. With the first generation it is possible, but with horrible limitations (although the Cloud SQL replica is visible on the console, the console does not provide information about replication status for a replica with an external master instance); with the second generation it is not even mentioned so far. I can imagine this might be solvable with Tungsten Replicator, but I don’t think that is a rabbit hole I want to dive into. (Replicating from the physical DCs is mandatory for a while.)
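Whichever route one ends up with (the first generation’s external master feature, or something like Tungsten Replicator), the prerequisites on the source side are plain MySQL; a minimal sketch, with placeholder user and password:

```shell
# On the external (on-premises) master, binary logging must be enabled
# and the server needs a unique server ID, e.g. in my.cnf:
#   [mysqld]
#   log-bin   = mysql-bin
#   server-id = 1

# Create a dedicated replication user for the cloud-side replica.
mysql -u root -p -e "
  CREATE USER 'repl'@'%' IDENTIFIED BY 'replica-password';
  GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';"

# Note the current binlog coordinates, so the replica can be seeded
# from a dump and start replicating from the right position.
mysql -u root -p -e "SHOW MASTER STATUS;"
```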

I don’t want to be unfair, so I have to say: if I were facing a greenfield software project, where I just had to bring up a database instance for a trendy new application running on Google Cloud Platform, then I think Google Cloud SQL would be a great choice.