In previous blog posts I've talked about transactions that block other
transactions but don't do anything themselves, and about some possible
solutions. In this post I will show you how to get even more information
about what is locked by a transaction.
As you might have noticed, the information_schema.innodb_locks table
doesn't show all locks. This is what the documentation says:
"The INNODB_LOCKS table contains information about each lock that an
InnoDB transaction has requested but not yet acquired, and each lock
that a transaction holds that is blocking another …
When I queried information_schema.innodb_trx (introduced in 5.1
with the InnoDB Plugin) I noticed there were a few transactions
in LOCK WAIT state.
Example:
mysql [information_schema] > select trx_id, trx_state
    -> from information_schema.innodb_trx;
+--------+-----------+
| trx_id | trx_state |
+--------+-----------+
| 7974   | LOCK WAIT |
| 7962   | RUNNING   |
+--------+-----------+
2 rows in set (0.00 sec)
Then I made a query joining a few information_schema and
performance_schema tables to find out which query was blocking my
transactions. It turned out that the blocking transaction had
trx_query=NONE. So my query was blocked by a transaction doing
nothing. That's not really helpful.
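A sketch of the kind of join that finds the blocker, using only the standard information_schema tables (the performance_schema side of the original query is left out for brevity):

SELECT r.trx_id              AS waiting_trx_id,
       r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_id              AS blocking_trx_id,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;

With an idle blocker, blocking_query comes back empty, which is exactly the unhelpful answer described above.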
Let's try to recreate the situation and see exactly what
happened. I'll use two sessions for the transactions and a third
to monitor the server.
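One minimal way to end up in this state (a sketch; the table t and its row are hypothetical):

-- session 1: take a row lock, then go idle without committing
BEGIN;
SELECT * FROM t WHERE id = 1 FOR UPDATE;

-- session 2: this statement now blocks, and innodb_trx shows LOCK WAIT
UPDATE t SET val = val + 1 WHERE id = 1;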
…
Yesterday (Oct. 22) I gave a presentation titled “MySQL 5.6 Full Text Search Throwdown.” If you missed it, you can still register to view the recording and my slides.
Thanks to everyone who attended, and especially to the folks who asked great questions. I answered as many as we had time for during the session, but here are all the questions with my complete answers:
Q: Does Solr automatically maintain its index against MySQL? Do you have to hit the Solr server with a specific query to keep the index ‘warm’?
There are several strategies for updating a Solr …
Introduction
Online DDL is a new feature in MariaDB 10.0. An online DDL statement is processed through the four tasks below, in sequence.
- InnoDB::ha_prepare_inplace_alter_table(..)
- InnoDB::ha_inplace_alter_table(..)
- InnoDB::ha_commit_inplace_alter_table(..)
- mysql_rename_table(..)
In phase 1, the InnoDB storage engine allocates a temporary memory buffer for transaction logging; row changes made during this phase are logged into it. The initial size of this buffer is sort_buffer_size, and it can grow up to innodb_online_alter_log_max_size. During phase 2, the thread processing the ALTER statement copies the old table's rows to the new, altered table. After this, MariaDB takes an exclusive lock on the target table and applies the row log buffer to the new table.
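As a sketch (the table, column, and index names here are hypothetical), the sizing variable and an online ALTER look like this:

SET GLOBAL innodb_online_alter_log_max_size = 268435456; -- 256MB ceiling for the row log
ALTER TABLE t1 ADD INDEX idx_c1 (c1), ALGORITHM=INPLACE, LOCK=NONE;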
This introduces a new, unpredictable failure case: row log buffer overflow. The MariaDB server will roll back the ALTER statement if the row log buffer …
In many write-intensive workloads on the InnoDB/XtraDB storage engines you may see a hidden and dangerous “debt” being accumulated – unpurged transaction “history” which, if not kept in check, will over time cause severe performance regressions or take all free space and cause an outage. Let’s talk about where it comes from and what you can do to avoid running into trouble.
Technical Background: InnoDB is an MVCC engine which means it keeps multiple versions of the rows in the database, and when rows are deleted or updated they are not immediately removed from the database but kept for some time – until they can be removed. For a majority of OLTP workloads they can be removed seconds after the change actually took place. In some cases though they might need to be kept for a long period of time – if there are some old transactions running in the system that might still need to look at an old database state. As of …
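One quick way to see how much purge debt has accumulated (a sketch; trx_rseg_history_len is the history list length counter exposed in information_schema.innodb_metrics as of MySQL 5.6):

SELECT name, count
FROM information_schema.innodb_metrics
WHERE name = 'trx_rseg_history_len';

The same number appears as "History list length" in the TRANSACTIONS section of SHOW ENGINE INNODB STATUS.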
In the new InnoDB lab release we support page sizes of 32k and 64k. This gives users even more choices on the page size, allowing you to further customize InnoDB for your particular workload.
There are some things worthy of note related to this new feature:
- The extent size changes when innodb_page_size is set to 32k or 64k. The extent size is 2M for 32k page sizes, and 4M for 64k page sizes (the extent size is 1M for 4k, 8k, and 16k page sizes). If we do not enlarge the extent size, we will have too many extent headers on the allocation bitmap page, and the bitmap page will overflow.
- The …
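A minimal configuration sketch (assuming the lab release takes the same option syntax as standard MySQL; the page size can only be chosen when the instance is first initialized):

[mysqld]
innodb_page_size=64k

and, after startup, verify with:

SHOW VARIABLES LIKE 'innodb_page_size';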
A few months back, Michael wrote about reconnecting orphaned *.ibd files using MySQL 5.6. I will show you the same procedure, this time for partitioned tables. Each InnoDB partition is a self-contained tablespace in its own right, so you can use the same method described in the previous post.
To begin with, I have an example table with a few orphaned partitions and we will reconnect each partition one by one to the original table.
mysql [localhost] {msandbox} (recovery) > SHOW CREATE TABLE t1 \G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  [...]
  KEY `h_date` (`h_date`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (year(h_date))
(PARTITION p0 VALUES LESS THAN (2006) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (2010) ENGINE = InnoDB,
 PARTITION …
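As a sketch of the per-partition procedure (the helper table t1_p0 is hypothetical), each orphaned partition file such as t1#P#p0.ibd can be imported into a plain table and then exchanged back into the partitioned table:

CREATE TABLE t1_p0 LIKE t1;
ALTER TABLE t1_p0 REMOVE PARTITIONING;
ALTER TABLE t1_p0 DISCARD TABLESPACE;
-- copy the orphaned t1#P#p0.ibd into the datadir as t1_p0.ibd, then:
ALTER TABLE t1_p0 IMPORT TABLESPACE;
ALTER TABLE t1 EXCHANGE PARTITION p0 WITH TABLE t1_p0;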
The new InnoDB Labs release contains the ability to create and use independent multi-table general tablespaces.
This feature will provide a way to group tables together into tablespaces at a location and filename of your choosing. Tables using row formats of Redundant, Compact, and Dynamic can be combined together into the same general tablespace. Compressed tables with the same key_block_size can also be combined together.
The SQL syntax for creating an empty general tablespace is:
CREATE TABLESPACE `tblspace_name` ADD DATAFILE 'tablespace.ibd' [FILE_BLOCK_SIZE=n];
The filename can contain an absolute path or a path relative to the data …
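A hedged usage sketch (the tablespace, table, and column names below are made up): once the tablespace exists, a table can be placed in it with the TABLESPACE clause:

CREATE TABLESPACE `ts1` ADD DATAFILE 'ts1.ibd';
CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE `ts1`;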
MySQL has information_schema.tables, which contains information such as “data_length” or “avg_row_length.” The documentation on this table, however, is quite poor, making the assumption that those fields are self-explanatory – they are not when it comes to tables that employ compression. And this is where inconsistency is born. Let's take a look at the same table containing some highly compressible data using different storage engines that support MySQL compression:
TokuDB:
mysql> select * from information_schema.tables where table_schema='test' \G
*************************** 1. row ***************************
  TABLE_CATALOG: def
   TABLE_SCHEMA: test
     TABLE_NAME: comp
     TABLE_TYPE: BASE TABLE
         ENGINE: TokuDB
        VERSION: 10
     ROW_FORMAT: tokudb_zlib
     TABLE_ROWS: 40960
 AVG_ROW_LENGTH: 10003
    DATA_LENGTH: 409722880
MAX_DATA_LENGTH: …
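To compare just the fields in question across engines, a narrower query (a sketch against the same test.comp table) keeps the output readable:

SELECT engine, row_format, table_rows, avg_row_length, data_length
FROM information_schema.tables
WHERE table_schema = 'test' AND table_name = 'comp';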
As I mentioned in my last post, where I compared the default configuration options in 5.6 and 5.7, I have been doing some testing for a particular load in several versions of MySQL. What I have been checking is different ways to load a CSV file (the same file I used for testing the compression tools) into MySQL. Seasoned MySQL DBAs and programmers probably know the answer, so you can jump over to my 5.6 versus 5.7 results. However, the first part of this post is dedicated to developers and MySQL beginners who want to know the answer to the title question, in a step-by-step fashion. I must say I also learned something, as I under- and over-estimated some of the effects of certain …
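For the impatient, the short answer to the title question is LOAD DATA; a minimal sketch (the file path, table name, and delimiters are assumptions about the CSV):

LOAD DATA LOCAL INFILE '/tmp/file.csv'
INTO TABLE t
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';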