Testing performance of MySQL 5.6.14

First of all, I'm back! It's been a long time since I've blogged about MySQL. I've been busy (plus I can't access my old blog). Done with the excuses. Now on to the business at hand.


After attending the MySQL Connect conference and meeting up with old colleagues and acquaintances, I decided to start looking at the latest GA release (5.6.14) to see just how well it performs. At Yahoo, we have settled on using Percona server builds (MySQL 5.5 based), primarily because they work as advertised. After building the new version, I started running simple sysbench read-only point-select tests against 5.6.14 and 5.5.30. I'm going to provide more detailed results later on, but here's the short of it:

Percona 5.5.30 : 69,540 qps

MySQL 5.6.14 : 67,002 qps

8 tables, 10 million rows per table, 128 client threads against an 8-core machine with 24 GB of RAM and 4x300 GB SAS disks in RAID-5. Does not give me …
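
For context, sysbench's read-only point-select workload boils down to single-row primary-key lookups. A minimal sketch of the query shape, using sysbench's default sbtest naming (the table number and id value are randomized by the benchmark driver):

```sql
-- Shape of a sysbench point-select: one indexed single-row lookup per request.
-- Table name follows sysbench's default sbtest1..sbtest8 convention; the id
-- value here is just an illustrative constant.
SELECT c FROM sbtest1 WHERE id = 500123;
```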

[Read more]
Release Webinar - Introducing Galera 3.0: Now supporting MySQL 5.6, Global Transaction IDs and WAN

September 25, 2013 By Severalnines

 

Join this technical webinar to learn about the new features in the latest Galera 3.0 release.

You'll learn how Galera integrates with MySQL 5.6 and Global Transaction IDs to enable cross-datacenter and cloud replication over high-latency networks. The benefits are clear: a globally distributed MySQL setup across regions that delivers several-nines availability and real-time responsiveness.
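
As a rough illustration of the MySQL 5.6 GTID piece, here is a hedged sketch of pointing an asynchronous replica in another datacenter at a cluster node using GTID auto-positioning instead of binlog file/offset coordinates. Hostname and credentials are placeholders, and Galera-specific wsrep settings are omitted:

```sql
-- Assumes the donor runs MySQL 5.6 with gtid_mode=ON,
-- enforce_gtid_consistency=ON and log_slave_updates=ON in my.cnf.
CHANGE MASTER TO
  MASTER_HOST = 'galera-node1.example.com',  -- placeholder hostname
  MASTER_USER = 'repl',                      -- placeholder account
  MASTER_PASSWORD = '...',
  MASTER_AUTO_POSITION = 1;                  -- locate via GTIDs, not file/pos
START SLAVE;
```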

DATE & TIME

Europe/MEA/APAC

[Read more]

SQL to Hadoop and back again, Part 1: Basic data interchange techniques

I’ve got a new article, the first in a new three-part series, on moving data between SQL and Hadoop, covering both exporting to Hadoop and importing processed content back into an SQL store.

In this first part, we look at the basic mechanics and the considerations to weigh before you start migrating data, such as the data format, content, and export techniques.
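
As a taste of the export side, one of the simplest interchange techniques is dumping a table to a delimited flat file that Hadoop tools can ingest. A minimal sketch, with an illustrative table and output path (the article itself covers the trade-offs):

```sql
-- Write a tab-delimited snapshot that can then be copied into HDFS.
-- Table, columns, and path are hypothetical examples.
SELECT id, name, created_at
FROM customers
INTO OUTFILE '/tmp/customers.tsv'
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';
```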

Read: SQL to Hadoop and back again, Part 1: Basic data interchange techniques


How to reclaim space in InnoDB when innodb_file_per_table is ON

When innodb_file_per_table is OFF, all data is stored in the shared ibdata files. If you drop some tables or delete some data, there is no way to reclaim that unused disk space except the dump/reload method.

When innodb_file_per_table is ON, each table stores its data and indexes in its own tablespace file. However, the shared tablespace (ibdata1) can still grow; you can find more information here about why it grows and what the solutions are:

http://www.mysqlperformanceblog.com/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/
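
For completeness, with innodb_file_per_table enabled, per-table space can be reclaimed without a full dump/reload by rebuilding the table. A minimal sketch with a hypothetical table name:

```sql
-- Rebuilds the table and shrinks its .ibd file, returning freed pages
-- to the filesystem. Table name is illustrative.
OPTIMIZE TABLE mydb.mytable;
-- InnoDB implements OPTIMIZE as "recreate + analyze", equivalent to:
ALTER TABLE mydb.mytable ENGINE=InnoDB;
```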

Following the recent blog post from Miguel Angel Nieto titled “ …

[Read more]
Thanks For Attending MySQL Connect

MySQL Connect 2013 was held this past Saturday through Monday, and I would like to extend a big thank you to everyone who attended my sessions, talked with me, or otherwise took part in the conference.

I had two sessions and also took part in a Birds of a Feather session with the Community and Support teams. The slides have been uploaded to the Content Catalog, but they are not available for download from there yet. Until then, you can download them from the links below:

[Read more]
InnoDB Temporary Tables just got faster

It all started with a goal to make InnoDB temporary tables more effective. Temporary table semantics are blessed with some important characteristics that can help us simplify a lot of operations.

  • Temporary tables are not visible across connections
  • A temporary table's lifetime is limited to the connection's lifetime (unless the user explicitly drops it first), as the sketch below illustrates.
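
A quick demonstration of those two semantics from the SQL side (table name is illustrative):

```sql
-- Connection A: the table exists only for this session.
CREATE TEMPORARY TABLE session_totals (
  user_id INT NOT NULL,
  total   DECIMAL(10,2)
) ENGINE=InnoDB;

INSERT INTO session_totals VALUES (1, 99.50);
SELECT * FROM session_totals;  -- returns the row in connection A

-- Connection B: the same SELECT fails, because the table is invisible
-- to other connections:
--   ERROR 1146 (42S02): Table 'test.session_totals' doesn't exist
-- When connection A disconnects, the table is dropped automatically.
```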

What does this mean for InnoDB?

  • REDO logging can be avoided for temporary tables and related objects since temporary tables do not survive a shutdown or crash.
  • Temporary table definitions can be maintained in-memory without persisting to the disk.
  • Locking constraints can be relaxed since only one client can see these tables.
  • Change buffering can be avoided since the majority of temporary tables are short-lived.

In order to implement these changes in InnoDB, we took a somewhat different approach:

[Read more]
PalominoDB CEO Laine Campbell at Velocity NYC Conference Oct 14th

PalominoDB CEO Laine Campbell will be presenting "Using Amazon Web Services for MySQL at Scale" at Velocity NYC Conference Oct 14th 2013 at 11am.

Laine will explain the options for running MySQL at high volumes at Amazon Web Services, exploring options around database-as-a-service, hosted instances/storage, and all appropriate availability, performance and provisioning considerations, using real-world examples from Call of Duty, Obama for America and many more. Laine will show how to build highly available, manageable and performant MySQL environments that scale in AWS: how to maintain them, grow them and deal with failure. Some of the specific topics covered are:

  1. Overview of RDS and EC2 – pros, cons and usage patterns/antipatterns.
  2. Implementation choices in both offerings: instance sizing, ephemeral SSDs, EBS, provisioned IOPS and advanced techniques (RAID, mixed storage …
[Read more]
How Marketo solved key data management challenges with Continuent Tungsten

Marketo provides the leading cloud-based marketing software platform for companies of all sizes to build and sustain engaging customer relationships. Marketo's SaaS platform runs on MySQL and has faced data management challenges common to all 24x7 SaaS businesses:

  • Keeping data available regardless of DBMS failures or planned maintenance
  • Utilizing hardware optimized for multi-terabyte MySQL

Oracle's MySQL Connect 2013 conference summary

Setting aside that the event was held on a weekend, which is an inconvenience to those who have families, and that this was only Oracle's third MySQL Connect event, I would have to say that this year's Oracle MySQL Connect conference was the best one yet.

This past year, I have been mostly heads-down working at +LinkedIn, so I haven't been paying close attention to what Oracle has been doing for +…

TokuMX Hot Backup – Part 3

Last week I described TokuDB’s new Hot Backup feature. This week we are going to briefly discuss the same feature as it was added to TokuMX, our version of MongoDB.

Since the Hot Backup library is essentially a shim between MySQL and the Linux kernel, intercepting file system calls for the life of the process, it should be easy to add it to any other system, including TokuMX. Indeed, with our addition of transactions and logging to TokuMX, we can get a consistent backup of any data set at any time.
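
For a sense of the interface on the MySQL/TokuDB side, later open-source TokuBackup releases trigger a hot backup by assigning a destination directory to a session variable; the 2013-era enterprise builds may differ, so treat this as a hedged sketch:

```sql
-- Setting the variable kicks off the hot backup into the (empty) target
-- directory while the server keeps serving traffic. The path is
-- illustrative; the variable name is as in later Percona/TokuBackup
-- releases, an assumption for the plugin described in this post.
SET SESSION tokudb_backup_dir = '/backups/tokudb-2013-09-28';
```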

Unlike MySQL, where system tables use the non-transactional MyISAM storage engine, TokuMX uses internal (non-explicit) transactions for all metadata changes and regular CRUD operations on BSON data. This …

[Read more]