Displaying posts with tag: Software and tools
Storage caching options in Linux 3.9 kernel

dm-cache (albeit still classified “experimental”) is in the just-released Linux 3.9 kernel. It deals with generic block devices and uses the device mapper framework. While there have been a few other similar tools flying around, since this one has been adopted into the kernel it looks like it will be the one you’ll be seeing the most of in the future. It also saves sysadmins the hassle of compiling extra stuff for a system.

A typical use is for an SSD to cache an HDD. Similar to a battery-backed RAID controller, the objective is to insulate the application from latency caused by the mechanical device, the most laggy part of which is seek time (measured in milliseconds). Given the relatively high storage capacity of an SSD (in the hundreds of GBs), this allows you to mostly disregard the mechanical latency for writes, and that’s very useful for database systems such as MariaDB.
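For reference, here is a minimal sketch of what assembling such a cache with the device mapper tools can look like. The device names and sizes are placeholders, and the dm-cache table format may change while the target is still experimental, so check Documentation/device-mapper/cache.txt in your kernel tree before relying on it.

# Hypothetical devices: /dev/sdb is the HDD (origin), /dev/sdc1 and /dev/sdc2 are
# small and large partitions on the SSD for cache metadata and cache blocks.
ORIGIN_SECTORS=$(blockdev --getsz /dev/sdb)
# Table format: start length cache <metadata dev> <cache dev> <origin dev>
#               <block size> <#feature args> <feature args> <policy> <#policy args>
dmsetup create cached-disk --table \
  "0 ${ORIGIN_SECTORS} cache /dev/sdc1 /dev/sdc2 /dev/sdb 512 1 writeback default 0"
# The combined device then shows up as /dev/mapper/cached-disk and can be
# formatted and mounted like any other block device.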

That covers writes (for the moment), but …

Optimising Web Servers

I was lucky enough to attend PyCon-AU recently and one talk in particular highlighted the process of web server optimisation.

Graham Dumpleton’s add-in talk, Web Server Bottlenecks And Performance Tuning, is available on YouTube (along with the majority of PyCon-AU talks).

The first big note at the beginning is that the majority of the delay in a user’s perception of a website is caused by the browser rendering the page. Though not covered in the talk, for those who haven’t used the tool YSlow (for Firefox and Chrome) or Google’s Developer Tools (ctrl-alt-I in Chrome), both tools will …

Jetpants: a toolkit for huge MySQL topologies

From a Tumblr engineering blog post:

Tumblr is one of the largest users of MySQL on the web. At present, our data set consists of over 60 billion relational rows, adding up to 21 terabytes of unique relational data. Managing over 200 dedicated database servers can be a bit of a handful, so naturally we engineered some creative solutions to help automate our common processes.

Today, we’re happy to announce the open source release of Jetpants, Tumblr’s in-house toolchain for managing huge MySQL database topologies. Jetpants offers a command suite for easily cloning replicas, rebalancing shards, and performing master …

MySQL Cluster on Raspberry Pi

Earlier this week, Andrew Morgan wrote a piece on running MySQL Cluster on Raspberry Pi. Since the term “Cluster” is hideously overloaded, I’ll note that we’re talking about the NDB cluster storage engine here, a very specific architecture originally acquired by MySQL AB from Ericsson (telco).

Raspberry Pi is a new single-board computer based on the ARM processor series (the same stuff that powers most mobile phones these days), and it can run Linux without any fuss. Interfaces include Ethernet, USB, and HDMI video, and the cost is $25-50. I’m looking to use one for the front-end of a MythTV setup (digital video recorder and TV system); I can just strap the Raspberry Pi to the back of a TV or monitor to do its …

Green HDs and RAID Arrays

Some so-called “Green” hard disks don’t like being in a RAID array. These are primarily SATA drives, and they gain their green credentials by being able to reduce their RPM when not in use, as well as other aggressive power management trickery. That’s all cool and in a way desirable – we want our hardware to use less power whenever possible! – but the time it takes some drives to “wake up” again is longer than a RAID setup is willing to tolerate.
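As a sketch of the kind of workaround involved (assuming the drive actually honours these commands, which varies per model and firmware), you can tell a drive not to spin down or power-manage itself too aggressively with hdparm, and cap its error-recovery time with smartctl:

# Disable the standby (spin-down) timeout on the drive (device name is a placeholder)
hdparm -S 0 /dev/sdb
# Turn off aggressive Advanced Power Management; some drives only accept 254 (max performance)
hdparm -B 255 /dev/sdb
# Limit SCT error recovery to 7 seconds for reads and writes, where supported,
# so a struggling drive reports the error rather than getting dropped from the array
smartctl -l scterc,70,70 /dev/sdb

Such settings may not survive a power cycle, so they typically end up in a boot script.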

First of all, you may wonder why I bother with SATA disks at all for RAID. I’ve written about this before, but they simply deliver plenty for much less money. Higher RPM doesn’t necessarily help you for a db-related (random access) workload, and for tasks like backups speed may not be a primary concern anyway. SATA disks have a shorter command queue than SAS, so that means they might need to seek more – however a smart RAID controller would already arrange its I/O …

HDlatency – now with quick option

I’ve done a minor update to the hdlatency tool (get it from Launchpad); it now has a --quick option to have it only do its tests with 16KB blocks rather than a whole range of sizes. This is much quicker, and 16KB is the InnoDB page size, so it’s the most relevant for MySQL/MariaDB deployments.

However, I didn’t just remove the other stuff, because it can be very helpful in tracking down problems and putting misconceptions to rest. On SANs (and local RAID of course) you have things like block sizes and stripe sizes, and opinions on what might be faster. Interestingly, the real world doesn’t always agree with the opinions.

As Mark Callaghan correctly pointed out when I first published it, hdlatency does not provide anything new in terms of functionality; the db IO tests of sysbench cover it all. A key advantage of hdlatency is that it doesn’t have any …
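For comparison, a rough equivalent of the --quick run using sysbench’s fileio tests (classic sysbench option syntax; the file size, duration and working directory are placeholders) would be something like:

# Prepare the test files in a directory on the storage being measured
sysbench --test=fileio --file-total-size=4G prepare
# Random read/write with 16KB blocks, roughly matching InnoDB page I/O
sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrw \
  --file-block-size=16384 --max-time=120 --max-requests=0 run
# Remove the test files afterwards
sysbench --test=fileio --file-total-size=4G cleanup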

MySQL data backup: going beyond mysqldump

A user on a linux user group mailing list asked about this, and I was one of the people replying. Re-posting here as I reckon it’s of wider interest.

> [...] tens of gigs of data in MySQL databases.
> Some in memory tables, some MyISAM, a fair bit InnoDB. According to my
> understanding, when one doesn’t have several hours to take a DB
> offline and do dbbackup, there was/is ibbackup from InnoBase.. but now
> that MySQL and InnoBase have both been ‘Oracle Enterprised’, said
> product is now restricted to MySQL Enterprise customers..
>
> Some quick searching has suggested Percona XtraBackup as a potential
> FOSS alternative.
> What backup techniques do people employ around these parts for backups
> of large mixed MySQL data sets where downtime *must* be minimised?
>
> Has your backup plan ever been put to the test?

You should put it to the test regularly, not just when it’s needed. …
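To give a flavour of the tools mentioned in the thread, here are illustrative command sketches rather than a complete backup strategy; paths, credentials and directory names are placeholders.

# Logical dump: --single-transaction gives a consistent snapshot of InnoDB tables
# without locking them (MyISAM and MEMORY tables are not covered by that guarantee)
mysqldump --single-transaction --master-data=2 --all-databases > backup.sql
# Physical hot backup of InnoDB with Percona XtraBackup, via its innobackupex wrapper;
# it writes a timestamped directory under the target path
innobackupex /backups/
# Prepare that backup so it is consistent and ready to be restored
innobackupex --apply-log /backups/<timestamped-dir>/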

Importing a file dumped from MySQL with mysqldump into drizzle

As big fans of new technology, we try to keep up to date with what’s happening in the industry. As such, I decided to start using drizzle on my development machine since they announced GA this week. First exercise: import a file dumped from a MySQL server I don’t have access to into drizzle. Normally, you can use drizzledump on the MySQL server and make it dump a drizzle-compatible file. Not in this case, so I decided to sed my way through the various errors. Not pretty, and I hope that at some point we’ll have a tool that can convert a mysqldump file into a drizzle-compatible file, but it works for now. Here’s what I had to do. Note that this is by no means complete, nor does it come with any guarantees; it’s just a starting point.

# This file started by setting a SQL_MODE. That doesn't …
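The listing above is cut off; purely as a hypothetical illustration of the kind of sed cleanup being described (the exact statements drizzle rejects will differ per dump), such edits could look like this:

# Drop the line(s) setting SQL_MODE, which drizzle does not accept (as noted above)
sed -i '/SQL_MODE/d' dump.sql
# Hypothetical further cleanup: remove MySQL conditional comments such as /*!40101 ... */;
sed -i '/^\/\*![0-9]\{5\}.*\*\/;$/d' dump.sql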
PBXT early impressions in production use

With Paul McCullagh’s PBXT storage engine getting integrated into MariaDB 5.1, it’s never been easier to try it out. So we have, on a slave off one of our own production systems, which gets lots of inserts from our Zabbix monitoring system.

That’s possibly an ideal usage profile, since PBXT is a log-based engine (simplistically stated, it indexes its transaction logs, rather than rewriting data from log into index and indexing that), so it should require less disk I/O than, say, InnoDB. And that means it should be particularly suited to workloads like logging, which have lots of inserts on a sustained basis. Note that for short insert bursts you may not see a difference with InnoDB because of caching, but sustain the load and the difference becomes noticeable.
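For anyone wanting to try the same, switching a table to PBXT on a MariaDB 5.1 slave is about this simple; the table name below is a placeholder, and SHOW ENGINES confirms the engine is actually available in your build.

# Check that PBXT is compiled in and enabled
mysql -e "SHOW ENGINES" | grep -i pbxt
# Convert an existing table on the slave to PBXT (placeholder table name)
mysql -e "ALTER TABLE zabbix.history ENGINE=PBXT"

With statement-based replication (the 5.1 default), the slave then applies the incoming inserts through PBXT.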

Because PBXT has such a different/distinct architecture, there’s a lot of learning involved. Together …

Quest for Resilience: Multi-DC Masters

This is a Request for Input. Dual MySQL masters with MMM in a single datacentre are in common use, and other setups like DRBD and of course VM/SAN based failover solutions are conceptually straightforward also. Thus, achieving various forms of resilience within a single datacentre is doable and not costly.

Doing the same across multiple (let’s for simplicity’s sake limit it to two) datacentres is another matter. MySQL replication works well across longer links, and it can use MySQL’s in-built SSL or tools like stunnel. Of course it needs to be kept an eye on, as usual, but since it’s asynchronous, the latency between the datacentres is not a big issue (apart from the fact that the second server gets up-to-date a little bit later).
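As a sketch of that replication-link side of things, configuring a slave in the remote datacentre to replicate over MySQL’s in-built SSL might look like this; host names, credentials and certificate paths are placeholders.

# On the remote slave: point replication at the master over SSL (placeholder values)
mysql -e "CHANGE MASTER TO
    MASTER_HOST='master.dc1.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_SSL=1,
    MASTER_SSL_CA='/etc/mysql/ssl/ca-cert.pem',
    MASTER_SSL_CERT='/etc/mysql/ssl/client-cert.pem',
    MASTER_SSL_KEY='/etc/mysql/ssl/client-key.pem';
  START SLAVE;"
# Verify that both replication threads are running and check the lag
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'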

But as those who have tried will know, having a client (application server) connection to a MySQL instance in a remote datacentre is a whole other matter: latency becomes a big issue and is generally …
