Displaying posts with tag: innodb
InnoDB scalability issues due to tables without primary keys

Each day there is probably some work being done to improve the performance of the InnoDB storage engine and to remove bottlenecks and scalability issues. Here is another one I wanted to highlight:

Scalability issues due to tables without primary keys

This scalability issue is caused by the use of tables without primary keys, and it typically shows itself as contention on the InnoDB dict_sys mutex. The dict_sys mutex controls access to the data dictionary and is used at various important places throughout the InnoDB code, so any contention on it has a system-wide negative effect on InnoDB. I will mention only a few of the places where it is used:

  • During operations such as opening and closing table handles
  • When accessing I_S tables
  • During undo of a freshly inserted row
  • During other data dictionary modification operations such as CREATE TABLE
  • Within the “Persistent Stats” subsystem, among other things.

Of course this list is not exhaustive but should …
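
As an aside, one way to spot InnoDB tables that lack an explicit PRIMARY KEY is a query along these lines against information_schema (a sketch only; adjust the schema filter to your environment):

SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints tc
       ON tc.table_schema = t.table_schema
      AND tc.table_name = t.table_name
      AND tc.constraint_type = 'PRIMARY KEY'
WHERE t.engine = 'InnoDB'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
  AND tc.constraint_name IS NULL;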

[Read more]
Introducing TokuMX Clustering Indexes for MongoDB

Since introducing TokuMX, we’ve discussed benefits that TokuMX has for existing MongoDB applications that require no changes. In this post, I introduce an extension we’ve made to the indexing API: clustering indexes, a tool that can tremendously improve query performance. If I were to speak to someone about clustering indexes, I think the conversation could go something like this…

What is a Clustering Index?

A clustering index is an index that stores the entire document, not just the defined key.

A common example is …

[Read more]
Making full table scan 10x faster in InnoDB

At MySQL Connect 2013, I talked about how we used MySQL 5.6 at Facebook, and explained some of the new features we added to our Facebook MySQL 5.6 source tree. In this post, I'm going to talk about how we made full table scans faster in InnoDB.

Faster full table scan in InnoDB

In general, almost all queries from applications use indexes and read very few rows (0..1 rows for primary key lookups and 0..hundreds for range scans). But sometimes we run full table scans. Typical full table scan examples are logical backups (mysqldump) and online schema changes (SELECT ... INTO OUTFILE).
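
For illustration, a dump of that kind is simply a full table scan written out to a file; the table name and file path below are made up:

SELECT *
  INTO OUTFILE '/tmp/mytable.dump'
  FIELDS TERMINATED BY '\t'
  FROM mytable;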

We take logical backups with mysqldump at Facebook. As you know, MySQL offers both physical and logical backup commands/utilities. Logical backup has some advantages over physical backup. For example:

  • Logical backup size is much smaller. …
[Read more]
Inexpensive SSDs for Database Workloads

The cost of SSDs has been dropping rapidly, and at the time of this writing, 2.5" drives have reached the 1TB capacity mark. You can actually get inexpensive drives for as little as 60 cents per GB. Even inexpensive SSDs can perform tens of thousands of IOPS and come with a 1.5M – 2M hour MTBF and a 5-year warranty: check out the Intel DC S3500 specs as an example. There is, however, one important factor you need to take into account when considering SSDs as opposed to conventional hard drives – Write Endurance.

Many of us have heard that SSDs have limits in terms of how many writes they can handle. Many, however, assume this is already accounted for in the warranty period, so if a drive claims a sequential write speed of 450MB/sec and a warranty of 5 years, we expect …
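
To put write endurance in perspective, here is a purely illustrative calculation with assumed numbers (not any particular drive's rating): a drive rated for 450TB of total writes, sustaining 20MB/sec of writes around the clock, would reach that rating in roughly 450,000,000MB / 20MB/sec ≈ 22.5 million seconds, or about 260 days, whereas the same drive averaging 2MB/sec of writes would not reach it until well past a 5-year warranty.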

[Read more]
InnoDB Temporary Tables just got faster

It all started with a goal to make InnoDB temporary tables more effective. Temporary table semantics are blessed with some important characteristics that can help us simplify a lot of operations.

  • Temporary tables are not visible across connections
  • A temporary table's lifetime is limited to the lifetime of the connection (unless the user explicitly drops it).
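
A quick illustration of these semantics (the table name and columns are made up):

-- connection 1:
CREATE TEMPORARY TABLE tmp_totals (id INT, total DECIMAL(10,2)) ENGINE=InnoDB;
INSERT INTO tmp_totals VALUES (1, 42.00);

-- connection 2: the table is not visible here
SELECT * FROM tmp_totals;   -- fails: the table does not exist in this session

-- when connection 1 disconnects, tmp_totals is dropped automatically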

What does this mean for InnoDB?

  • REDO logging can be avoided for temporary tables and related objects since temporary tables do not survive a shutdown or crash.
  • Temporary table definitions can be maintained in-memory without persisting to the disk.
  • Locking constraints can be relaxed since only one client can see these tables.
  • Change buffering can be avoided since the majority of temporary tables are short-lived.

In order to implement these changes in InnoDB, we took a somewhat different approach:

[Read more]
InnoDB 5.7 performance improvements

A quick overview of the InnoDB performance improvements for both read-only and read-write loads.

InnoDB Redundant Row Format

Introduction

This article describes the InnoDB redundant row format. If you are new to the InnoDB code base (a new developer starting to work with InnoDB), then this article is for you. I'll explain the row format by making use of a gdb session. An overview of the article is given below:

  • Create a simple table and populate a few rows.
  • Access the page that contains the inserted rows.
  • Access a couple of rows and explain their format.
  • Give a summary of the redundant row format.
  • List useful gdb commands to analyse InnoDB rows.
  • Look at a GNU Emacs Lisp function to traverse rows in an InnoDB index page.

To get the most out of this article, the reader is expected to repeat the gdb session as described here.

The Schema

Consider the following SQL statements to produce the schema:

CREATE TABLE t1 (f1 int unsigned) row_format=redundant …
[Read more]
How to move the InnoDB log sequence number (LSN) forward

This post focuses on the problem of the InnoDB log sequence number being in the future.

Preface: What is an InnoDB log sequence number?

The Log sequence number (LSN) is an important database parameter used by InnoDB in many places.
The most important use is for crash recovery and buffer pool purge control.

Internally, the InnoDB LSN counter never goes backward.
When InnoDB writes 50 bytes to the redo logs, the LSN increases by 50 bytes.
As such, we can measure the LSN in megabytes, gigabytes, and so on.
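
For reference, you can see the current LSN in the LOG section of SHOW ENGINE INNODB STATUS (the value below is just illustrative):

SHOW ENGINE INNODB STATUS\G
-- the LOG section contains a line like:
--   Log sequence number 8965280712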

Now for the problem: LSN being in the future!

It can happen when you have set innodb_force_recovery like this:

innodb_force_recovery=6

and then issue a data-affecting query.

For example, if you are dropping a corrupted table after doing a mysqldump for backup …

[Read more]
How InnoDB promotes UNIQUE constraints

The other day I was running pt-duplicate-key-checker on behalf of a customer and noticed some peculiar recommendations on an InnoDB table with an odd structure (no PRIMARY key, but multiple UNIQUE constraints). This got me thinking about how InnoDB promotes UNIQUE constraints to the role of PRIMARY KEYs. The documentation is pretty clear:

When you define a PRIMARY KEY on your table, InnoDB uses it as the clustered index. Define a primary key for each table that you create. If there is no logical unique and non-null column or set of columns, add a new auto-increment column, whose values are filled in automatically.

If you do not define a PRIMARY KEY for your table, MySQL locates the first UNIQUE index where all the key …
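
For example, on a (hypothetical) table like the following, InnoDB would promote the UNIQUE index on the NOT NULL column to serve as the clustered index:

CREATE TABLE users (
  email VARCHAR(100) NOT NULL,
  name  VARCHAR(100),
  UNIQUE KEY uk_email (email)
) ENGINE=InnoDB;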

[Read more]