It was not long ago that we were all discussing meeting in person during UDS. We did not have good enough reasons to mobilize the logistics back then, but over time we realized the vibrant MySQL ecosystem on Debian and Ubuntu needed a brainstorming session. While we did try Google Hangout as the […]
Fri, 2014-12-19 09:14 mriddoch
At this time of the year it is traditional, at least in the UK, to look back and reflect on the year that is coming to a close. Since we have just produced the release candidate for MaxScale and are looking forward to the GA release early in the New Year, it seems like a good time to reflect on the events that have brought us to this stage in the story of MaxScale.
Going Public
The start of 2014 also marked the start for MaxScale, with the first public announcements regarding MaxScale and the first downloadable binaries. MaxScale itself had been started internally before that, but we wanted to hold off on letting it out into "the wild" until there was enough functionality to do more than make "what it might be" promises. At first we only had tar files available, and only for CentOS/RedHat Linux distributions; we also have the source …
[Read more]
To process SELECT queries, MySQL sometimes needs the help of temporary tables. These temporary tables can be created either in memory or on disk.
The number of creations of such temporary tables can be found with the following command:
mysql> SHOW GLOBAL STATUS LIKE 'created_tmp%tables';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 4     |
| Created_tmp_tables      | 36    |
+-------------------------+-------+
There are two different reasons why MySQL creates a …
[Read more]
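A quick way to watch this happen is to compare the session-level counters before and after a query that needs a temporary table. This is just a sketch: the table t1 and its status_name column are hypothetical stand-ins for a column with no usable index.

```sql
SHOW SESSION STATUS LIKE 'Created_tmp%tables';

-- a GROUP BY on a column with no usable index typically needs a temporary table
SELECT status_name, COUNT(*) FROM t1 GROUP BY status_name;

-- the counters should have increased for this session
SHOW SESSION STATUS LIKE 'Created_tmp%tables';
```

Using the SESSION scope keeps other connections' activity out of the numbers.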
So this is just a simple example of how to set up a PARTITION and
a SUBPARTITION in MySQL. The concept here is that you have data
in a table with numerous values in a datetime field. You might
have data that is spread across numerous years (most likely
you do). So one way to partition this data is to sort it by year
but then also sort it by month within that yearly
partition.
Below is an example that you can use for consideration.
Consider the test table. Your table will have many more fields, of
course.
CREATE TABLE `t1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`date_time` datetime DEFAULT NOW(),
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
First I will populate the test table with random values for
the date_time field.
delimiter //
CREATE PROCEDURE populate_t1( IN rowsofdata INT )
…
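Building on the table above, the year/month layout the post describes could look something like the following. This is a sketch: the partition names and year ranges are illustrative, and note that MySQL requires the partitioning column to appear in every unique key, which is why date_time is added to the primary key first.

```sql
-- the partitioning column must be part of the primary key
ALTER TABLE t1 DROP PRIMARY KEY, ADD PRIMARY KEY (id, date_time);

ALTER TABLE t1
PARTITION BY RANGE (YEAR(date_time))       -- one range partition per year
SUBPARTITION BY HASH (MONTH(date_time))    -- split each year by month
SUBPARTITIONS 12 (
    PARTITION p2013 VALUES LESS THAN (2014),
    PARTITION p2014 VALUES LESS THAN (2015),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```

EXPLAIN PARTITIONS on a query with a YEAR-bounded WHERE clause will show whether pruning is actually taking effect.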
HAProxy is frequently used as a software load balancer in the MySQL world. Peter Boros, in a past post, explained how to set it up with Percona XtraDB Cluster (PXC) so that it only sends queries to available nodes. The same approach can be used in a regular master-slaves setup to spread the read load across multiple slaves. However with MySQL replication, another factor comes into play: replication lag. In this case the approach mentioned for Percona XtraDB Cluster does not work that well as the check we presented only returns ‘up’ or ‘down’. We would like to be able to tune the weight of a replica inside HAProxy depending on its replication lag. This is what we will do in this post using HAProxy 1.5.
Agent …
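One common way for such an agent to measure the lag it reports back to HAProxy is a heartbeat table kept up to date on the master. This is a sketch assuming a pt-heartbeat-style schema (a heartbeat.heartbeat table with a ts timestamp column), which is an assumption, not something taken from the excerpt above.

```sql
-- run on the replica: seconds elapsed since the last heartbeat row
-- written on the master, i.e. an estimate of replication lag
SELECT TIMESTAMPDIFF(SECOND, MAX(ts), NOW()) AS lag_seconds
FROM heartbeat.heartbeat;
```

The agent can then translate lag_seconds into a weight percentage for HAProxy's agent-check mechanism, lowering the weight of replicas that fall behind.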
[Read more]
Pythian revamped Chegg’s MySQL architecture and processes, and has provided 24×7 operations and incident-management support and continual architectural guidance for Chegg’s MySQL, MongoDB, and Cassandra environments for the past five years: prior to, during, and after their IPO. Pythian also spearheaded an implementation on Amazon Web Services’ cloud for the elasticity needed by an educational application’s cyclical peaks. Chegg’s partnership with Amazon and Pythian has allowed for rapid growth and constant evolution without sacrificing performance. Many years later, Chegg still has no in-house DBAs, and relies on Pythian’s expertise for all their data management needs.
A few days back we were given a MySQL database hosted in EC2 with
around 5TB of data, and asked to move some of the tables to a new
server. The intention was to share the tables between 2 EC2
instances. Since AWS had the option to take online snapshots, the
plan was to take a snapshot, create a new machine from that
snapshot, and drop the unwanted tables. Everything went as
planned up to creating the new machine from the snapshot. The
real challenge was dropping the unwanted tables: it took
around 4 minutes to drop a 20GB table, 20 minutes to drop a
100GB table, and the time kept increasing for larger tables.
MySQL even went "defunct" when we killed the DROP query, and at
times crashed. To track down this issue we executed the DROP
TABLE in one session and checked the processlist from another
session, which gave the below output.
mysql> show processlist \G
*************************** 1. row …
[Read more]
If you’re monitoring MySQL performance on dozens or hundreds of servers, chances are you have a “rainbow chart” – a time-series chart with hundreds of tiny area graphs stacked on top of each other, crawling imperceptibly one pixel at a time across the big-screen monitor in your office. The trouble with these charts is they’re hard to see. It takes many minutes for enough new pixels to display after a change. In the meantime you can’t see the change clearly.
At VividCortex, we think we’ve found a better way to keep tabs on what’s going on in your infrastructure: a bubble visualization. It is compact, and immediately communicates current status and trend, with visualizations that your eye interprets in a glance.
This is the Activity layout on the Hosts dashboard. It’s designed to scale to hundreds, even thousands of hosts. Tabular layouts and strip-charts won’t do the trick, but live, interactive bubbles will. …
[Read more]
TokuDB offers high throughput for write-intensive applications, and the throughput scales with the number of concurrent clients. However, when the binary log is turned on, TokuDB 7.5.2 throughput suffers. The throughput scaling problem is caused by a poor interaction between the binary log group commit algorithm in MySQL 5.6 and the way TokuDB commits transactions. TokuDB 7.5.4 for Percona Server 5.6 fixes this problem, and the result is roughly an order of magnitude increase in SysBench throughput for in-memory workloads.
MySQL uses a two-phase commit protocol to synchronize the MySQL binary log with the recovery logs of the storage engines when a transaction commits. Since fsyncs are used to ensure the durability of the data in the various logs, and fsyncs can be very slow, the fsync can easily become a bottleneck. A …
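The fsync pressure described above is governed by two well-known durability settings, and inspecting them is a reasonable first step when commit throughput drops. A sketch:

```sql
-- sync_binlog = 1 fsyncs the binary log on every commit
-- (group commit batches these fsyncs across transactions)
SHOW GLOBAL VARIABLES LIKE 'sync_binlog';

-- innodb_flush_log_at_trx_commit = 1 fsyncs the engine's
-- recovery log on every commit
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
```

Relaxing either setting trades durability for throughput, which is why fixing group commit, as the post describes, is the preferable path.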
[Read more]
I believe the InnoDB storage engine architecture is great for a lot of online workloads; however, there are no silver bullets in technology, and all design choices have their trade-offs. In this blog post I’m going to talk about one important InnoDB limitation that you should consider.
InnoDB is a multiversion concurrency control (MVCC) storage engine, which means many versions of a single row can exist at the same time – in fact, there can be a huge number of such row versions. Depending on the isolation mode you have chosen, InnoDB might have to keep all row versions going back to the earliest active read view, but at the very least it will have to keep all versions going back to the start of the SELECT query that is currently running.
In most cases this is not a big deal – if you have many short transactions happening you will have only a few row versions to deal with. If you just use the system for reporting queries but do not …
[Read more]
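The backlog of old row versions InnoDB is retaining can be watched via the history list length. It appears in the TRANSACTIONS section of the engine status output, and, as a sketch, it is also exposed as a counter in the INNODB_METRICS table in MySQL 5.6 (assuming that counter is enabled on your build):

```sql
-- look for "History list length" in the TRANSACTIONS section
SHOW ENGINE INNODB STATUS\G

-- the same figure as a queryable counter
SELECT name, count
FROM information_schema.INNODB_METRICS
WHERE name = 'trx_rseg_history_len';
```

A steadily growing history list length is the usual symptom of the long-running-query problem described above: purge cannot discard versions still needed by an old read view.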