Displaying posts with tag: mysql innodb
Improvements to Undo Truncation in MySQL 8.0.21

Undo Tablespaces can be truncated either implicitly or explicitly in MySQL 8.0. Both methods use the same mechanism. This mechanism could cause periodic stalls on very busy systems while an undo tablespace truncate completes. This problem has been fixed in MySQL 8.0.21.…
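
For context, both truncation paths boil down to a couple of statements (a sketch; the tablespace name u1 and its file name are examples):

SET GLOBAL innodb_undo_log_truncate = ON;          -- implicit: truncate automatically
SET GLOBAL innodb_max_undo_log_size = 1073741824;  -- size threshold that triggers it

CREATE UNDO TABLESPACE u1 ADD DATAFILE 'u1.ibu';
ALTER UNDO TABLESPACE u1 SET INACTIVE;  -- explicit: truncated once no transaction needs it
ALTER UNDO TABLESPACE u1 SET ACTIVE;    -- put it back into rotation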

MySQL : InnoDB Transparent Tablespace Encryption

From MySQL 5.7.11, encryption is supported for InnoDB (file-per-table) tablespaces. This is called Transparent Tablespace Encryption, sometimes also referred to as Encryption at Rest. This blog post aims to give the internal details of InnoDB Tablespace Encryption.
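
As a minimal sketch of how this is switched on (the table names are arbitrary): the keyring plugin must load before InnoDB initializes, after which encryption is a per-table attribute.

# my.cnf: load the keyring before InnoDB initializes
early-plugin-load = keyring_file.so

CREATE TABLE t1 (c1 INT) ENGINE=InnoDB ENCRYPTION='Y';  -- new encrypted table
ALTER TABLE t2 ENCRYPTION='Y';                          -- encrypt an existing one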

Keyring Plugin: Why, What, How?

Calculating InnoDB Buffer Pool Size for your MySQL Server

What is an InnoDB Buffer Pool?

The InnoDB buffer pool is the memory area that holds many of InnoDB's in-memory data structures: buffers, caches, indexes, and even row data. innodb_buffer_pool_size is the MySQL configuration parameter that specifies how much memory MySQL allocates to the InnoDB buffer pool. It is one of the most important settings in the MySQL configuration and should be sized based on the available system RAM.
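
For orientation, a quick sketch of inspecting the setting and of a common sizing input (the 8 GB target is only an example; online resizing needs MySQL 5.7 or later):

SELECT @@innodb_buffer_pool_size;  -- current size in bytes

-- Total InnoDB data + index footprint, a common sizing input:
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS innodb_gb
FROM information_schema.tables WHERE engine = 'InnoDB';

SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;  -- resize online (5.7+)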

In this post, we’ll walk you through two approaches of setting your InnoDB buffer pool size value, examine the pros and cons of those practices, and also propose a unique method to arrive at an optimum value based on the size of …

[Read more]
InnoDB Transparent PageIO Compression

We have released some code in a labs release that does compression at the InnoDB IO layer. Let me answer the most frequently asked question: it will work on any OS/file system that supports sparse files and has "punch hole" support. It is not specific to FusionIO. However, I've been told by the FusionIO developers that you will get two benefits from FusionIO + NVMFS: no fragmentation issues and more space savings because of a smaller file system block size. Why the block size matters I will attempt to explain next.

The high level idea is rather simple. Given a 16K page, we compress it using your favorite compression algorithm and write out only the compressed data. After writing out the data we "punch a hole" to release the unused part of the original 16K block back to the file system. Let me illustrate with an example:

[DDDDDDDDDDDDDDDD] -> Compress -> [CCCCFFFFFFFFFFFF]

D – Data
F – Free …
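
For what it's worth, when this labs work later shipped in MySQL 5.7 as transparent page compression, it was exposed as a per-table attribute (a sketch; zlib is one of the supported algorithms):

CREATE TABLE t1 (c1 INT) ENGINE=InnoDB COMPRESSION="zlib";
-- Each 16K page is compressed on write; the unused tail of the block is
-- released back to the file system with a punch hole.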

[Read more]
Presenting MySQL/InnoDB at Percona Live 2014

I will be presenting at Percona Live 2014, and I'm excited to share and discuss the latest and greatest features and improvements that we have made to MySQL/InnoDB in 5.7: great performance improvements, some exciting new compression features we are working on, GIS support, temporary table performance, and more. It is a long list. We are also always interested to hear about user issues and priorities so that we can address them and/or work them into our plan. Your feedback is very important to us; if you want to influence the direction of InnoDB development, then you need to talk to me.

Covering indexes in MySQL - revisited (with benchmark)

In the process of building a new benchmark tool for Yahoo, I needed a good "guinea pig." I think I found one: showing how much more powerful covering indexes can be with InnoDB. A covering index is one where the index itself contains all of the necessary data field(s). In other words, there is no need to access the data page!

Here's a sample table:

CREATE TABLE entity (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`entity_type` enum('Advertiser','Publisher','Network') NOT NULL DEFAULT 'Advertiser',
`managing_entity_id` int(10) unsigned DEFAULT NULL,
....
PRIMARY KEY (`id`),
KEY `ix_entity_managing_entity_id` (`managing_entity_id`),
....
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


It's a wide table with 88-odd columns and an average row length of 240 bytes.

Now to test the QPS difference between a query like this:

SELECT * FROM …
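
The benchmark queries themselves are elided above, but for illustration (a sketch; the literal value is arbitrary), a covered query against this table looks like the following, since an InnoDB secondary index implicitly carries the primary key:

SELECT id, managing_entity_id FROM entity WHERE managing_entity_id = 42;

EXPLAIN SELECT id, managing_entity_id FROM entity WHERE managing_entity_id = 42;
-- The Extra column should show "Using index": the data page is never touched.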
[Read more]
Should you be worried about STATEMENT based replication?

Earlier this month, it was announced that STATEMENT-based binary logging would be the default starting with MySQL version 5.1.29. I've always preached that backwards compatibility is key to new releases. In this case, lessons were not learned until close to the final GA date.

I would like to point out that for 90% of customer cases, STATEMENT-based replication will work fine as advertised. But there are some use cases where STATEMENT-based replication will be at best spotty (at least it is in 5.1.28).

If you primarily use InnoDB as your storage engine, you will want to pay close attention to your transaction isolation level. Statement-based replication requires at least the REPEATABLE READ level; at READ COMMITTED (or below), InnoDB refuses to do statement-based binary logging.
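
A minimal sketch of the failure mode (the exact error text is from memory, so treat it as approximate):

SET SESSION binlog_format = 'STATEMENT';
SET SESSION tx_isolation = 'READ-COMMITTED';

INSERT INTO t1 VALUES (1);
-- ERROR 1598 (HY000): Binary logging not possible. Message: Transaction
-- level 'READ-COMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT'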

Partitioning + InnoDB + STATEMENT-based binlog also has its problems. We …

[Read more]
Using INNODB_FILE_PER_TABLE

DBAs are always trying to determine the best way to manage their storage for InnoDB. The three main options include:

1. Having one shared InnoDB tablespace data file.
2. Setting up individual files per InnoDB table.
3. Setting up a shared tablespace across multiple files.

Option 1: Having one shared InnoDB tablespace data file. This is fine if you have a simple and small MySQL database. Great for new DBAs to …
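
To make the three options concrete, here is how each is typically expressed in my.cnf (file names and sizes are examples):

# Option 1: one shared system tablespace (the historical default)
innodb_file_per_table = 0

# Option 2: an individual .ibd file per InnoDB table
innodb_file_per_table = 1

# Option 3: a shared tablespace spread across multiple files
innodb_data_file_path = ibdata1:1G;ibdata2:1G:autoextend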

Reload data quickly into MySQL InnoDB tables

As DBAs who manage large numbers of database servers, we are always looking for the fastest or most efficient way to load data into the database. Some DBAs have quarterly maintenance periods where they reload data into a database to refresh the indexes.

If you primarily use InnoDB tables in your MySQL database server, then this set of tricks will help make the reload process a bit faster than just a straight dump & reload.

my.cnf configuration
innodb_flush_log_at_trx_commit = 0
innodb_support_xa = 0
skip-innodb-doublewrite
disable log-bin & log_slow_queries

Since the goal is to reload data quickly, we need to eliminate any potential bottlenecks. Setting innodb_flush_log_at_trx_commit = 0 will reduce the amount of disk I/O by avoiding a flush to disk on each commit. If you are not using XA-compliant transactions (multi-system two-phase commits) then you …
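
Alongside those my.cnf settings, a few session-level switches are commonly toggled around the reload itself (a sketch; remember to re-enable everything once the load finishes):

SET SESSION foreign_key_checks = 0;  -- skip FK validation during the load
SET SESSION unique_checks = 0;       -- defer unique-index checks
SET SESSION sql_log_bin = 0;         -- keep the reload out of the binary log

SOURCE dump.sql;  -- or LOAD DATA INFILE ...

SET SESSION foreign_key_checks = 1;
SET SESSION unique_checks = 1;
SET SESSION sql_log_bin = 1;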

[Read more]