InnoDB Redundant Row Format

Introduction

This article describes the InnoDB redundant row format. If you are new to the InnoDB code base (a new developer starting to work with InnoDB), then this article is for you. I'll explain the row format by making use of a gdb session. An overview of the article is given below:

  • Create a simple table and populate a few rows.
  • Access the page that contains the inserted rows.
  • Access a couple of rows and explain their format.
  • Give a summary of the redundant row format.
  • List useful gdb commands to analyse InnoDB rows.
  • Look at a GNU Emacs Lisp function to traverse rows in an InnoDB index page.

To get the most out of this article, the reader is expected to repeat the gdb session as described here.
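
As a complement to the gdb session, it can help to confirm from SQL which index page actually holds the rows you insert. The query below is only a minimal sketch, assuming a MySQL 5.6 build where the information_schema.INNODB_BUFFER_PAGE table is available; t1 is the table created in the next section:

-- After inserting a few rows into t1, ask the buffer pool which pages
-- belong to the table and how many user records each page holds.
-- The reported PAGE_NUMBER is the page you would then inspect in gdb.
SELECT space, page_number, page_type, number_records
  FROM information_schema.innodb_buffer_page
 WHERE table_name LIKE '%t1%';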

The Schema

Consider the following SQL statements to produce the schema:

CREATE TABLE t1 (f1 int unsigned) row_format=redundant …
[Read more]
How to move the InnoDB log sequence number (LSN) forward

This post focuses on the problem of the InnoDB log sequence number being in the future.

Preface: What is an InnoDB log sequence number?

The Log sequence number (LSN) is an important database parameter used by InnoDB in many places.
The most important use is for crash recovery and buffer pool purge control.

Internally, the InnoDB LSN counter never goes backward.
When InnoDB writes 50 bytes to the redo logs, the LSN increases by 50 bytes.
As such, the LSN can be counted in megabytes, gigabytes, and so on.
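
A convenient way to observe the current LSN on a running server is the LOG section of the InnoDB status output; the sketch below simply shows where to look:

-- The LOG section reports the current LSN, how far the redo log has been
-- flushed to disk, and the LSN of the last checkpoint.
SHOW ENGINE INNODB STATUS\G
-- Relevant lines in the output:
--   Log sequence number  ...
--   Log flushed up to    ...
--   Last checkpoint at   ...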

Now for the problem: LSN being in the future!

When you have set innodb_force_recovery like this:

innodb_force_recovery=6

and then issue a data-affecting query.

For example, if you are dropping a corrupted table after doing a mysqldump for backup …
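
As a rough illustration of that scenario (the table name below is made up), the sequence looks like this:

# my.cnf: server started in forced-recovery mode, as above
[mysqld]
innodb_force_recovery=6

-- With the server running in this mode, dump what you can and then issue a
-- data-affecting statement such as dropping the corrupted table:
DROP TABLE corrupted_table;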

[Read more]
How InnoDB promotes UNIQUE constraints

The other day I was running pt-duplicate-key-checker on behalf of a customer and noticed some peculiar recommendations on an InnoDB table with an odd structure (no PRIMARY key, but multiple UNIQUE constraints). This got me thinking about how InnoDB promotes UNIQUE constraints to the role of PRIMARY KEYs. The documentation is pretty clear:

[DOCS]
When you define a PRIMARY KEY on your table, InnoDB uses it as the clustered index. Define a primary key for each table that you create. If there is no logical unique and non-null column or set of columns, add a new auto-increment column, whose values are filled in automatically.

If you do not define a PRIMARY KEY for your table, MySQL locates the first UNIQUE index where all the key …
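
To make that rule concrete, here is a small hypothetical table (names invented purely for illustration) with no PRIMARY KEY but several UNIQUE constraints; following the rule quoted above, the first UNIQUE index whose columns are all NOT NULL is the one InnoDB promotes:

CREATE TABLE promo_demo (
  a INT,                -- nullable, so uq_a cannot be promoted
  b INT NOT NULL,
  c INT NOT NULL,
  UNIQUE KEY uq_a (a),
  UNIQUE KEY uq_b (b),  -- first UNIQUE index on NOT NULL columns:
  UNIQUE KEY uq_c (c)   --   InnoDB uses uq_b as the clustered index
) ENGINE=InnoDB;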

[Read more]
TokuDB vs InnoDB in timeseries INSERT benchmark

This post is a continuation of my research into TokuDB's storage engine to understand whether it is suitable for timeseries workloads.

While loading data into an empty table with LOAD DATA INFILE shows great results for TokuDB, it is more interesting to look at realistic workloads.

So this time let’s take a look at the INSERT benchmark.

What I am going to do is to insert data in 16 parallel threads into the table from the previous post:

CREATE TABLE `sensordata` (
  `ts` int(10) unsigned NOT NULL DEFAULT '0',
  `sensor_id` int(10) unsigned NOT NULL,
  `data1` double NOT NULL,
  `data2` double NOT NULL,
  `data3` double NOT NULL,
  `data4` double NOT NULL,
  `data5` double NOT …
[Read more]
Here’s my favorite secret MySQL 5.6 feature. What’s yours?

MySQL 5.6 has a great many new features, including, but certainly not limited to, a number of performance improvements. However, besides the widely talked-about features such as InnoDB support for full-text search, optimizer and performance schema improvements, and …

[Read more]
Considering TokuDB as an engine for timeseries data

I am working on a customer’s system where the requirement is to store a lot of timeseries data from different sensors.

For performance reasons we are going to use SSD, and therefore there is a list of requirements for the architecture:

  • Provide a high insertion rate
  • Provide a good compression rate to store more data on expensive SSDs
  • The engine should be SSD-friendly (fewer writes per time period to reduce SSD wear)
  • Provide a reasonable response time (within ~50 ms) on SELECT queries against hot, recently inserted data

Looking at these requirements, I actually think that TokuDB might be a good fit for this task.

There are several aspects to consider. This time I want to compare TokuDB and InnoDB on initial load time and space consumption.
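
As a rough sketch of how such a comparison can be set up (the column list is shortened and the compression settings are just one reasonable choice, not necessarily what I will end up using), the same table can be created once per engine and loaded with identical data:

-- TokuDB candidate: aggressive compression to save space on the SSD.
CREATE TABLE sensordata_tokudb (
  ts        INT UNSIGNED NOT NULL,
  sensor_id INT UNSIGNED NOT NULL,
  data1     DOUBLE NOT NULL
) ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA;

-- InnoDB candidate: compressed row format (needs the Barracuda file format
-- and innodb_file_per_table).
CREATE TABLE sensordata_innodb (
  ts        INT UNSIGNED NOT NULL,
  sensor_id INT UNSIGNED NOT NULL,
  data1     DOUBLE NOT NULL
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- After loading the same file into both tables, compare load time and the
-- on-disk size, e.g. via data_length in information_schema.TABLES.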

Let’s …

[Read more]
InnoDB Full-text Search in MySQL 5.6: Part 3, Performance

This is part 3 of a 3-part series covering the new InnoDB full-text search features in MySQL 5.6. To catch up on the previous parts, see part 1 or part 2.

Some of you may recall a few months ago that I promised a third part in my InnoDB full-text search (FTS) series, in which I’d actually take a look at the performance of InnoDB FTS in MySQL 5.6 versus traditional MyISAM FTS. I hadn’t planned on quite such a gap between part 2 and part 3, but as they say, better late than never. Recall that we have been working with two data sets, one which I call SEO (8000-keyword-stuffed web pages) and the other which I call DIR (800K directory records), and we are comparing MyISAM FTS in …
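
For readers who skipped the earlier parts, the feature under test boils down to this: as of MySQL 5.6, the same FULLTEXT definition and MATCH ... AGAINST query work on InnoDB as well as on MyISAM, which is what makes a direct comparison possible. A minimal sketch (hypothetical table) looks like this:

-- Works with ENGINE=InnoDB in 5.6; swap in ENGINE=MyISAM for the comparison.
CREATE TABLE pages (
  id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(255) NOT NULL,
  body  TEXT NOT NULL,
  FULLTEXT KEY ft_title_body (title, body)
) ENGINE=InnoDB;

SELECT id, title
  FROM pages
 WHERE MATCH(title, body) AGAINST ('some keyword' IN NATURAL LANGUAGE MODE);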

[Read more]
How InnoDB works with transactions and auto recovery

How InnoDB works with transactions:

When a transaction is completed with COMMIT, InnoDB writes those changes to the InnoDB buffer pool. After that, InnoDB runs some background operations, such as the checkpoint. The checkpoint is the most important operation, as it writes the changes to disk. Let's see how it works.

During the checkpoint phase, InnoDB writes dirty pages to the doublewrite buffer, and then writes pages from the doublewrite buffer to the actual tablespace. As pages are flushed to the tablespace, making the data changes persistent on disk, the log sequence number (LSN) stored in each page is also updated. During crash recovery, the LSN written to the page is what identifies whether a data page holds current data or not.
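
A quick way to see this mechanism on a running server is to compare the current LSN with the checkpoint LSN, and to confirm that the doublewrite buffer is enabled; a minimal sketch:

-- Redo that has not yet been checkpointed is the gap between these two lines
-- in the LOG section of the output:
--   Log sequence number  ...
--   Last checkpoint at   ...
SHOW ENGINE INNODB STATUS\G

-- The doublewrite buffer used while flushing pages is controlled by:
SHOW GLOBAL VARIABLES LIKE 'innodb_doublewrite';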

How InnoDB does crash / auto …

[Read more]
Comparing MongoDB, MySQL, and TokuMX Data Layout

A lot is said about the differences in the data between MySQL and MongoDB. Things such as "MongoDB is document based", "MySQL is relational", "InnoDB has a clustering key", etc. Some may wonder how TokuDB, our MySQL storage engine, and TokuMX, our MongoDB product, fit in with these data layouts. I could not find anything describing the differences with a simple Google search, so I figured I'd write a post explaining how things compare.

So who are the players here? With MySQL, users are likely familiar with two storage engines: MyISAM, the original default up until …

[Read more]
Redo Logging in InnoDB

Introduction

InnoDB is a general-purpose storage engine that balances high reliability and high performance. It is a transactional storage engine and is fully ACID compliant, as would be expected from any relational database. The durability guarantee provided by InnoDB is made possible by the redo logs.

This article will provide an overview of the redo log subsystem or log subsystem of InnoDB. We will look at the following details:

  • The global log system object, which provides access to important data structures and information.
  • The mini-transaction (mtr), through which all redo log records are created.
  • The global in-memory log buffer (or just log buffer), into which redo log records are written from the mini-transaction buffer. This log buffer is periodically flushed to the log file on disk. …
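
These structures also surface at the SQL level as ordinary server variables; the following sketch just shows where to find the size of the log buffer, the size of the on-disk log files, and the setting that decides how eagerly the buffer is flushed:

-- Size of the global in-memory log buffer.
SHOW GLOBAL VARIABLES LIKE 'innodb_log_buffer_size';
-- Size and number of the redo log files on disk.
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_log_files_in_group';
-- Whether the log buffer is flushed at every commit (1) or roughly once per
-- second (0 or 2).
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';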
[Read more]