Tungsten on the Beach--LA MySQL Meetup on Jan 11, 2012

It is my pleasure to announce that I will be presenting on Tungsten Replicator next Wednesday, January 11th at the Los Angeles MySQL Meetup. The presentation title is Fast, Flexible, and Fun--The Tungsten Replicator Magical Mystery Tour. This talk is going to be fun for two reasons.

First, it's a great opportunity to meet people in the LA MySQL community and talk about my favorite replication software. Tungsten is like a Swiss Army Knife for data replication.  It solves a wide range of problems involving HA, scaling, and data movement.   The presentation gives a quick intro to the replicator, then surveys how to use the most interesting features, including parallel slave apply, multi-master replication, …

[Read more]
SAN vs Local-disk :: innodb_flush_method performance benchmarks

If you’ve been tuning your MySQL database and have wondered what effect the innodb_flush_method setting has on write performance, then this information might help. I’ve recently been doing a lot of baseline load tests to show performance differences between local disk and the new SAN we’re deploying. Since we run InnoDB for everything in production, and writes are very heavy, I decided to run comparison tests between two identical servers to find the best setting for innodb_flush_method (a minimal config sketch follows the hardware list below). We have the following specs for the hardware:

  • Dell R610
  • 24-core Intel Xeon X5670 @ 2.93GHz
  • 72GB ECC RAM
  • Brocade 825 HBA
  • Local disk: RAID-10 15K SAS Ext3 (ugh)
  • SAN: Oracle 7420 with four Intel Xeon X7550 @ 2.00GHz, 512GB RAM, 2TB read cache (MLC SSD), 36GB write cache (SLC SSD), 3 disk shelves populated with 60x 2TB 7200RPM SATA drives set up in a mirrored format with striped …
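
For reference, here is a minimal my.cnf sketch of the three innodb_flush_method values typically compared in tests like this (descriptions per the stock MySQL documentation; all other tuning omitted):

    [mysqld]
    # fdatasync (the default): flush both data and log files with fsync()
    # innodb_flush_method = fdatasync
    # O_DSYNC: open/flush the log files with O_SYNC, fsync() the data files
    # innodb_flush_method = O_DSYNC
    # O_DIRECT: bypass the OS page cache for data files (fsync() still used)
    innodb_flush_method = O_DIRECT
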
[Read more]
How Percona Server handles data corruption more gracefully

I got a question a while ago about how Percona Server handles corrupted data more gracefully than the standard MySQL server from Oracle. The short version is that it won’t crash the whole server.

With standard MySQL from Oracle, if any page of data in InnoDB is found to be corrupt, the entire instance will crash forcefully. This is a good policy if you want to treat your entire data set as a single unit, which is either usable or not. However, this does not reflect reality for many users, who have a lot of data collocated in a single instance. In such cases, it is desirable for the server to continue running, so the corruption in one database does not affect the others.

Percona Server, if you enable this behavior, handles corruption more gracefully by marking only the affected table as corrupt rather than crashing the entire server.
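
As a sketch of what enabling this looks like, assuming the Percona Server 5.5 variable name innodb_corrupt_table_action (an assumption on my part; the option has been renamed between releases, so check the documentation for your exact version):

    [mysqld]
    # "warn" marks only the corrupt table as unusable and keeps the server
    # running; the default, "assert", keeps the stock crash-on-corruption
    # behavior (variable name assumed from the Percona Server 5.5 docs)
    innodb_corrupt_table_action = warn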

The relevant documentation is …

[Read more]
OurSQL Episode 73: What happened?

This week we present a year in review for the MySQL Ecosystem, including updates from Oracle's MySQL, SkySQL, Percona and MariaDB.

News:
The MySQL developer’s room at FOSDEM has almost 40 submissions and only about a dozen slots, so they need your vote to decide which sessions will be presented. Send in your votes via Twitter or e-mail; see Giuseppe's blog post for the session descriptions.

[Read more]
MySQL 5.6 Replication Enhancements – webinar replay

Global Transaction IDs - simplifying replication management

The replay of the MySQL 5.6 Replication Enhancements webinar has now been released, so you can get the latest information on all of the great new content included in the MySQL 5.6 Development Releases, as well as some features that are still being developed. You can view the replay here.

Some of the topics discussed are (a GTID configuration sketch follows the list):

  • Enhanced data integrity: Global Transaction Identifiers, Crash-Safe Slaves and Replication Event Checksums;
  • High performance: Multi-Threaded Slaves, Binlog Group Commit and Optimized Row-Based Replication;
  • Improved flexibility: Time Delayed Replication, Multi-Source …
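
For a concrete flavor of the GTID feature, here is a minimal my.cnf sketch for enabling GTID-based replication, using the option names from the MySQL 5.6 documentation (the early development milestones used different names, so treat this as illustrative):

    [mysqld]
    # GTIDs require the binary log, even on slaves
    log_bin                  = mysql-bin
    log_slave_updates        = ON
    gtid_mode                = ON
    # reject statements that cannot be logged safely with GTIDs
    enforce_gtid_consistency = ON

With this in place, a slave can be pointed at its master with CHANGE MASTER TO ... MASTER_AUTO_POSITION = 1, replacing the manual binlog file and position bookkeeping that GTIDs are designed to eliminate.
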
[Read more]
Configuring MySQL For High Number of Connections per Second

One thing I noticed during the observation was that there were roughly 2,000 new connections to MySQL per second during peak times. This is a high number by any account.

When a new connection to MySQL is made, it can go into the back_log, which effectively serves as a queue for new connections on the operating system side, allowing MySQL to handle spikes. Although MySQL connections are quite fast compared to many other databases, connection creation can still become the bottleneck. For a more in-depth discussion of what goes on during a connection, and ideas for improvement, see this post by Domas.

With the MySQL 5.5 default back_log of 50 and 2,000 connections created per second, it will take just 0.025 seconds to fill the queue completely if requests are not …
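
A minimal sketch of raising the queue follows; the value 2000 is chosen purely to match the observed connection rate. Note that back_log is not a dynamic variable, and on Linux the effective listen backlog is also capped by the kernel's net.core.somaxconn, which may need raising as well:

    [mysqld]
    # queue up to 2000 pending connections at the OS level before new
    # connection attempts are refused (requires a server restart to change)
    back_log = 2000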

[Read more]
MySQL Performance: Linux I/O

For a long time I had wanted to run some benchmark tests to better understand the surprises I've met in the past with Linux I/O performance during MySQL benchmarks. It finally happened last year, but I was only able to organize and present my results now.

My main questions were:

  • what is so different about the various I/O schedulers in Linux (cfq, noop, deadline)?.. (see the sketch after this list)
  • what is wrong or right with O_DIRECT on Linux?..
  • what is making XFS more attractive compared to EXT3/EXT4?..
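
For anyone wanting to reproduce the scheduler comparison, the active scheduler can be inspected and switched per device at runtime through sysfs (a Linux shell sketch; sda is a placeholder device name):

    # the name in brackets is the scheduler currently in effect
    cat /sys/block/sda/queue/scheduler
    # switch this device to deadline (as root; not persistent across
    # reboots, so use a boot parameter or init script for that)
    echo deadline > /sys/block/sda/queue/scheduler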


There have already been several posts in the past about the impact on MySQL performance when one or another Linux I/O layer feature is used (for example, Domas about I/O schedulers, Vadim regarding TPCC-like …

[Read more]
OSSCube adds one more Cloudera Certified Developer in its Armor

OSSCube now has one more Cloudera Certified Developer for Apache Hadoop. Rakesh Kumar has earned the CCDH, the industry's only certification for software developers on Hadoop, passing the Cloudera Certified Developer for Apache Hadoop exam after going through a rigorous training program.

Rakesh is also a MySQL certified DBA and Cluster DBA and has trained several engineers for Zend Certification Examinations.

Tags: Cloudera, Hadoop

Log Buffer #253, A Carnival of the Vanities for DBAs

A very happy New Year 2012 to all of you. These are festive and jubilant times, when people look back on the previous year and make plans for the new year and beyond. Well, this Log Buffer edition is no different. This week covers the new year posts of bloggers across the database [...]
