Sergey Glukhov (Gluh) recently wrote an interesting blog about
InnoDB secondary key improvements in MySQL 5.6. His blog isn't
aggregated on planet.mysql.com but certainly deserves some
attention.
Here it is: InnoDB, extended secondary keys.
With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees.
In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure.
This and more is discussed in the latest Guide to MySQL and …
[Read more]
***This post is old and out of date. I have just migrated to
Blogger and it is displaying as recent. Apologies***
Whenever I talk about MySQL performance, my first recommendation
is normally something along the lines of “Use the InnoDB storage
engine”, and I always get asked the same two questions: 1) Why use
InnoDB over MyISAM? and 2) Isn’t MyISAM faster? The short answers
to these questions are:
1) There’s rarely any reason not to.
2) No……….(pause) well sometimes… in most cases no.
In this post I will aim to explain my choice of InnoDB and try
to loosely define the cases where MyISAM may be better suited
for your application. I am also very aware that there are many
MySQL storage engines, but I am just going to cover the big two
here.
I will start with the reasons that InnoDB is a better fit than
MyISAM:
Data safety - I cannot stress …
Yet another article comparing two database features. This time
the two MySQL storage engines, MyISAM and InnoDB, will be
compared, so you can decide which one to choose for your project.
Some fast facts:
- InnoDB supports foreign keys
- InnoDB implements row-level locking for inserts and updates,
  while MyISAM implements table-level locking!
- InnoDB supports transactions (see the sketch below)
- MyISAM [...]
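A minimal sketch of the transactional difference, assuming a made-up accounts table; on MyISAM the same statements would each take effect immediately, with no rollback:

CREATE TABLE accounts (
    id INT PRIMARY KEY,
    balance DECIMAL(10,2) NOT NULL
) ENGINE = InnoDB;

INSERT INTO accounts VALUES (1, 100.00), (2, 50.00);

-- With InnoDB, both updates become visible together, or not at all.
START TRANSACTION;
UPDATE accounts SET balance = balance - 25.00 WHERE id = 1;
UPDATE accounts SET balance = balance + 25.00 WHERE id = 2;
COMMIT;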
One of the great things about the MassTLC unConference is the spontaneity of the ideas. In the morning I ran into an old colleague whose startup was looking at switching databases and struggling with the options. Hence, “Scalable Databases for Startups” seemed like a great topic, so I proposed it, and then was off and running full steam after lunch.
The session brought in a wide variety of firms. While there were several vendors there – Basho, Calpont, InterSystems, ParElastic, and …
[Read more]
With a handful of exceptions, few people deliberately choose to deploy MySQL on read-only media – but there are cases where being able to access InnoDB data that way comes in handy. As it happened, I had exactly this need a few months back, and the excellent InnoDB development team at Oracle has recently implemented this feature in MySQL 5.6.
First, some background. We had a need to migrate legacy systems from a 3rd party data center into new corporate data centers. These systems were redundant, but we wanted to retain access to the data for archival purposes. All went well, except one small detail: Because the machines were originally housed outside the new data center, we weren’t allowed to stand up the servers inside the new data center – except in read-only mode.
Therein was the problem – we had many GB worth of data stored in MySQL using InnoDB, and we could access the data files – …
[Read more]
I’ve noted previously that the new transportable tablespaces for InnoDB in MySQL 5.6 are a big step forward, and the newly released 5.6.9-rc makes importing tablespaces a bit easier. In previous versions, you had to have a .cfg file to import the InnoDB tablespace. That file is produced during FLUSH TABLE <tbl> FOR EXPORT operations, and contains InnoDB metadata that’s not contained in the .ibd tablespace file itself. I filed a feature request asking that the .cfg file be made optional, and Sunny implemented it:
mysql> create table tt (a INT PRIMARY KEY, b VARCHAR(10), KEY(b)) ENGINE = InnoDB;
Query OK, 0 rows affected (0.33 sec)

mysql> …
[Read more]
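For reference, the overall export/import flow looks roughly like this (a sketch based on the MySQL 5.6 documentation, with the server-side file copies shown as comments):

-- On the source server: quiesce the table so its files can be copied.
FLUSH TABLES tt FOR EXPORT;
-- (copy tt.ibd, and tt.cfg if you have it, out of the datadir here)
UNLOCK TABLES;

-- On the destination server: same table definition, then swap the tablespace in.
CREATE TABLE tt (a INT PRIMARY KEY, b VARCHAR(10), KEY(b)) ENGINE = InnoDB;
ALTER TABLE tt DISCARD TABLESPACE;
-- (copy tt.ibd into the destination datadir here; per the post, 5.6.9-rc no longer requires the .cfg)
ALTER TABLE tt IMPORT TABLESPACE;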
Hot backups are important for high availability: they can run
without blocking the application. Percona XtraBackup is a great
tool for backing up InnoDB data.
We have created a backup tool called s9s_backup that makes
XtraBackup really easy to use and is fully integrated with
ClusterControl. This means you can easily schedule backups, view
the backups you have taken, and restore them with no pain.
s9s_backup is available in the latest version of ClusterControl,
or you can download it here.
Is mysqldump totally useless
then?
No. If you would like to isolate and load only one table,
mysqldump is great, or if you want to …
This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.
MySQL before the InnoDB Plugin
Traditionally, the MySQL storage engine interface has taken a
minimalistic approach to data definition language. The only
natively supported operations were CREATE TABLE,
DROP TABLE and RENAME TABLE. Consider
the following example:
CREATE TABLE t(a INT);
INSERT INTO t VALUES (1),(2),(3);
CREATE INDEX a ON t(a);
DROP TABLE t;
The CREATE INDEX statement would be executed roughly
as follows:
CREATE TABLE temp(a INT, INDEX(a));
INSERT INTO temp SELECT * FROM t;
RENAME TABLE t TO temp2;
RENAME TABLE temp TO t;
DROP …
[Read more]
When InnoDB compresses a page it needs the result to fit into its
predetermined compressed page size (specified with
KEY_BLOCK_SIZE). When the result does not fit we call that a
compression failure. In this case InnoDB needs to split up the
page and try to compress again. Needless to say, compression
failures are bad for performance and should be minimized.
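For context, a minimal sketch of how the compressed page size is chosen when creating a table (the table name and columns are invented for illustration; in 5.6 this also assumes innodb_file_per_table=ON and innodb_file_format=Barracuda):

CREATE TABLE logs (
    id BIGINT PRIMARY KEY,
    payload VARBINARY(2048)
) ENGINE = InnoDB
  ROW_FORMAT = COMPRESSED
  KEY_BLOCK_SIZE = 8;  -- compressed pages must fit into 8KB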
Whether the result of the compression will fit largely depends on
the data being compressed, and some tables and/or indexes may
contain more compressible data than others. So it would be nice
if the compression failure rate, along with other compression
stats, could be monitored on a per-table or even per-index basis,
wouldn't it?
This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks
in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this
helpful information. It contains the following fields:
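As a hedged sketch of how this table might be queried to spot indexes with high failure rates (the column names and the innodb_cmp_per_index_enabled switch are as I recall them from the 5.6 documentation):

-- Per-index statistics are only collected while this option is ON.
SET GLOBAL innodb_cmp_per_index_enabled = ON;

SELECT database_name,
       table_name,
       index_name,
       compress_ops,
       compress_ops_ok,
       (compress_ops - compress_ops_ok) / compress_ops AS failure_rate
FROM INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX
ORDER BY failure_rate DESC;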