Store UUID in an optimized way

A few years ago Peter Zaitsev, in a post titled “To UUID or not to UUID,” wrote: “There is timestamp based part in UUID which has similar properties to auto_increment and which could be used to have values generated at same point in time physically local in BTREE index.”

For this post I’ve rearranged the timestamp part of the UUID (Universally Unique Identifier) and run some benchmarks.
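To make the rearrangement concrete before getting into the benchmarks, here is a minimal sketch of the idea (the table and column names are illustrative, not taken from the post): the UUID is stored as BINARY(16) with its time-ordered fields moved to the front, so values generated close together in time also land close together in the index.

-- Illustrative sketch only: rearrange a UUIDv1 string so the timestamp
-- fields (time_hi, time_mid, time_low) come first, then pack it into 16 bytes.
CREATE TABLE `t_uuid` (
  `id` BINARY(16) NOT NULL,
  `payload` VARCHAR(100),
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

SET @u = UUID();
INSERT INTO `t_uuid` (id, payload)
VALUES (UNHEX(CONCAT(
          SUBSTRING(@u, 15, 4),   -- time_hi_and_version
          SUBSTRING(@u, 10, 4),   -- time_mid
          SUBSTRING(@u,  1, 8),   -- time_low
          SUBSTRING(@u, 20, 4),   -- clock_seq
          SUBSTRING(@u, 25)       -- node
        )),
        'example row');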

Many people store UUIDs as char(36) and use them as the row identity value (PRIMARY KEY) because they are unique across every table, every database and every server, and they allow easy merging of records from different databases. But here comes the problem: using a UUID as the PRIMARY KEY causes the problems described below.

Problems with UUID

  • UUID has 36 characters which makes it bulky.
  • InnoDB stores data in …
[Read more]
MySQL Debian/Ubuntu packaging sprint

Debian/Ubuntu packaging sprint participants. From left: James Page, Norvald H. Ryeng, George Lorch, Akhil Mohan, Otto Kekäläinen, Robie Basak.

Last week, Canonical invited the MySQL packaging team in Debian to a packaging sprint in their London office, and most of us were able to participate. We’ve met online on IRC and at UOSs before, but this was the first time we were all in the same room.

The results of our sprint will soon be available in a .deb near you. Since Debian Jessie is currently in feature freeze, most of it will hit Ubuntu first. The two main things we achieved on the MySQL side were to make MySQL 5.6 ready for Ubuntu Vivid (15.04) and to split MySQL, Percona and MariaDB configuration files. The configuration file split …

[Read more]
Open Source Collaboration: This is how we did it ‘together’!

It was not long ago that we were all discussing meeting in person during UDS. We did not have good enough reasons to mobilize the logistics back then, but over time we realized that the vibrant MySQL ecosystem on Debian and Ubuntu needed a brainstorming session. While we did try Google Hangout as the […]

A Year In The Life Of MaxScale

Fri, 2014-12-19 09:14 mriddoch

 

This time of the year it is traditional, at least in the UK, to look back and reflect on the year that is coming to a close. Since we have just produced the release candidate for MaxScale and are looking forward to the GA release early in the New Year, it seems like a good time to reflect on the events that have brought us to this stage in the story of MaxScale.

Going Public

The start of 2014 also marked the start for MaxScale, with the first public announcements regarding MaxScale and the first downloadable binaries. MaxScale itself had been started internally before that, but we wanted to hold off on letting it out into "the wild" until enough of the functionality was provided to be able to do more than just give "what it might be" type promises. At first we only had tar files available, and only for CentOS/RedHat Linux distributions; we also have the source …

[Read more]
Shinguz: Avoid temporary disk tables with MySQL


To process SELECT queries, MySQL sometimes needs the help of temporary tables. These temporary tables can be created either in memory or on disk.

The number of such temporary tables that have been created can be found with the following command:

mysql> SHOW GLOBAL STATUS LIKE 'created_tmp%tables';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 4     |
| Created_tmp_tables      | 36    |
+-------------------------+-------+
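As a small companion check (a sketch, not part of the article; MySQL 5.6-era servers expose these counters in information_schema.GLOBAL_STATUS), you can also look at the size limits that decide when a temporary table goes to disk, and at the share of temporary tables that ended up there:

-- The in-memory size limit is the smaller of these two variables; results
-- that exceed it (or that the MEMORY engine cannot hold, e.g. TEXT/BLOB
-- columns) are written to disk instead.
SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';
SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';

-- Rough health check: percentage of temporary tables created on disk.
SELECT ROUND(100 *
         (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
           WHERE VARIABLE_NAME = 'Created_tmp_disk_tables') /
         (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
           WHERE VARIABLE_NAME = 'Created_tmp_tables'), 1)
       AS pct_tmp_on_disk;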


There are 2 different reasons why MySQL is creating a …

[Read more]
A MySQL PARTITION and SUBPARTITION Example

So this is just a simple example of how to set up a PARTITION and a SUBPARTITION in MySQL. The concept here is that you have data in a table with numerous values in a datetime field. You might have data that is spread across numerous years (most likely you do). So one way to partition this data is to split it by year and then also split it by month within that yearly partition.

Below is an example that you can use for consideration.

Consider the test table below. Your table will have many more fields, of course.

CREATE TABLE `t1` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `date_time` datetime DEFAULT NOW(),
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;
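As a rough preview of where this example is heading (a sketch only, not necessarily the post's final DDL; the partition names are illustrative), the partitioned version could use yearly RANGE partitions with monthly HASH subpartitions. Note that the partitioning column must be part of the primary key, so date_time is added to it here:

-- Sketch: yearly RANGE partitions, each hash-subpartitioned by month.
CREATE TABLE `t1_partitioned` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `date_time` datetime NOT NULL,
  PRIMARY KEY (`id`, `date_time`)
) ENGINE=InnoDB
PARTITION BY RANGE ( YEAR(date_time) )
SUBPARTITION BY HASH ( MONTH(date_time) )
SUBPARTITIONS 12 (
    PARTITION p2013 VALUES LESS THAN (2014),
    PARTITION p2014 VALUES LESS THAN (2015),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);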


First I will populate the test table with random values for the date_time field.

delimiter //
CREATE PROCEDURE populate_t1( IN rowsofdata INT )

[Read more]
Making HAProxy 1.5 replication lag aware in MySQL

HAProxy is frequently used as a software load balancer in the MySQL world. Peter Boros, in a past post, explained how to set it up with Percona XtraDB Cluster (PXC) so that it only sends queries to available nodes. The same approach can be used in a regular master-slave setup to spread the read load across multiple slaves. However, with MySQL replication another factor comes into play: replication lag. In this case the approach mentioned for Percona XtraDB Cluster does not work that well, as the check we presented only returns ‘up’ or ‘down’. We would like to be able to tune the weight of a replica inside HAProxy depending on its replication lag. This is what we will do in this post using HAProxy 1.5.
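The excerpt stops before the agent details, but as a sketch (not the post's actual script), the number such an agent would act on is the replica's replication lag, which can be read on each slave with:

-- Sketch: an external agent would poll each replica for its lag and answer
-- HAProxy's agent-check with a weight (lower weight for higher lag).
SHOW SLAVE STATUS\G
-- Seconds_Behind_Master in this output is the lag in seconds; it is NULL
-- when replication is stopped, which the agent would report as down.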

Agent …

[Read more]
Chegg

Pythian revamped Chegg’s MySQL architecture and processes and has provided 24×7 operations and incident management support and continual architectural guidance for Chegg’s MySQL, MongoDB, and Cassandra environments for the past five years—prior to, during, and after their IPO. Pythian also spearheaded an implementation on Amazon Web Services’ cloud for the elasticity needed by an educational application’s cyclical peaks. Chegg’s partnership with Amazon and Pythian has allowed for rapid growth and constant evolution without sacrificing performance. Many years later, Chegg still has no in-house DBAs, and relies on Pythian’s expertise for all their data management needs.

Chegg Case Study

Drop Table slow


A few days back we were given a MySQL database hosted in EC2, with around 5TB of data, to move some of the tables to a new server. The intention was to share the tables between 2 EC2 instances. Since AWS has the option to take online snapshots, the plan was to take a snapshot, create a new machine from that snapshot, and drop the unwanted tables. Everything went as planned, up to and including creating the new machine from the snapshot. The real challenge was dropping the unwanted tables. It took around 4 minutes to drop a table whose size was 20GB, and 20 minutes to drop a 100GB table, and the time kept increasing for larger tables. MySQL even went “defunct” when we killed the drop query, and at times it crashed. To track down this issue we executed the drop table in one session and checked the processlist from another session, which gave the output below.
mysql> show processlist \G

*************************** 1. row …
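The excerpt cuts off before the diagnosis, so only as a hedged sketch of what one might check (not the post's conclusion): on large InnoDB servers a DROP TABLE also has to discard the dropped tablespace's pages from the buffer pool and from the adaptive hash index, which is a commonly reported source of exactly this kind of slowness.

-- Settings commonly involved in slow DROP TABLE on large InnoDB instances.
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index';

-- A frequently used workaround (the variable is dynamic) before dropping
-- very large tables:
SET GLOBAL innodb_adaptive_hash_index = OFF;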

[Read more]
Monitor MySQL Performance Interactively With VividCortex

If you’re monitoring MySQL performance on dozens or hundreds of servers, chances are you have a “rainbow chart” – a time-series chart with hundreds of tiny area graphs stacked on top of each other, crawling imperceptibly one pixel at a time across the big-screen monitor in your office. The trouble with these charts is they’re hard to see. It takes many minutes for enough new pixels to display after a change. In the meantime you can’t see the change clearly.

At VividCortex, we think we’ve found a better way to keep tabs on what’s going on in your infrastructure: a bubble visualization. It is compact, and immediately communicates current status and trend, with visualizations that your eye interprets at a glance.

This is the Activity layout on the Hosts dashboard. It’s designed to scale to hundreds, even thousands of hosts. Tabular layouts and strip-charts won’t do the trick, but live, interactive bubbles will. …

[Read more]