In a previous post, I was trying to figure out the optimal way to switch from two large InnoDB tablespace files to using innodb_file_per_table, to take advantage of some of the benefits of this setting. I had one part of it solved: stop MySQL, add innodb_file_per_table to the my.cnf, restart, then perform a "no-op" alter of "ALTER TABLE t1 ENGINE=InnoDB", which causes the table to be re-created in its own .ibd file. The remaining problem was how to shrink the huge shared tablespace files to a smaller size (in my case from 10GB to 10MB) after converting all the tables.
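With many tables, the no-op alters can be generated instead of typed by hand. A minimal sketch, assuming MySQL 5.0+ (so that information_schema is available) and that the system schemas should be skipped; feed the output back into the mysql client:

SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
              '` ENGINE=InnoDB;') AS stmt
FROM information_schema.tables
WHERE engine = 'InnoDB'
  AND table_schema NOT IN ('mysql', 'information_schema');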
Someone suggested a better way:
1. Alter all InnoDB tables to MyISAM
2. Stop the server
3. Add innodb_file_per_table to my.cnf
4. Change innodb_data_file_path to the new settings (10MB tablespaces) in my.cnf (see the sketch after this list)
5. Move all InnoDB files (logs, data) to a backup directory
6. Restart MySQL
7. Alter …
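For steps 3 and 4, the my.cnf changes might look like the following minimal sketch; the ibdata1 file name and the autoextend attribute are assumptions about the existing layout, and 10M matches the target size from the post:

[mysqld]
innodb_file_per_table
# New, small shared tablespace (assumed file name ibdata1)
innodb_data_file_path = ibdata1:10M:autoextend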
Description:
The drupal_lookup_path function unnecessarily counts all rows of the url_alias table to determine whether any aliases are defined. This can be expensive on a transactional database such as MySQL with the InnoDB storage engine, or PostgreSQL, when a website has many aliases. This patch modifies the query to return at most one row, which is possible because any existing pid will be greater than 0. If no aliases are defined, it returns 0.
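The exact patch is not reproduced here, but a sketch of the kind of change described might look like this; the url_alias table and pid column come from Drupal's schema, while the query forms are assumptions based on the description above:

-- Before (as described): counts every row, which InnoDB or
-- PostgreSQL must do by scanning
SELECT COUNT(pid) FROM url_alias;

-- After (sketch): stops at the first row; any pid found is > 0,
-- and an empty result is treated as 0 by the caller
SELECT pid FROM url_alias LIMIT 1;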
Status:
This patch has not been merged into any release of Drupal.
Patch
We are going to show the effects of the new patches applied to the Percona HighPerf release. As you can see from the following graphs, there is a significant difference from the normal version when the data is bigger than the buffer pool (the right graph shows CPU usage).
The workload emulates TPC-C and has the same characteristics as DBT-2 (it is not DBT-2, but custom scripts; we will publish them eventually). There are no delays between transactions (no thinking time, no keying time), and it uses the MySQL C API and server-side prepared statements.
The server has an 8-core CPU and RAID storage (RAID10, 6 disks). The data was populated at scale factor 40WH (approximately 4GB), which is comfortably larger than the storage's data cache.
Main common settings:
innodb_buffer_pool_size = 2048M
innodb_thread_concurrency = 0 …
Over a year ago I wrote about a pretty nasty Innodb Recovery Bug. I ran into the same situation again (different system, different customer) and went to check the status of the bug... and it is still open.
You may think it is a minor issue, but in fact with a large buffer pool this bug makes the database virtually unrecoverable (if 10% progress in 2 hours qualifies as that). It is especially nasty because it is quite hard to predict. Both customers had MySQL crash recovery complete in reasonable time... most of the time, until they ran into this problem.
So what is the point? Have modest expectations about when your favorite MySQL bugs are fixed. (This is actually an Innodb one, so Innobase/Oracle is responsible for fixing it, not MySQL/Sun, but there are …
[Read more]
Following up on my Previous Post, I decided to do a little test to see how accurate the index stats created by ANALYZE TABLE are for MyISAM and Innodb.
But before we go into that, I wanted to highlight something about using ANALYZE TABLE in production, as some people seem to think I advise using it... a lot. In fact, I see more systems where ANALYZE is abused - run too frequently without much need - than systems which do not run ANALYZE frequently enough.
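For reference, running it is a one-liner (t1 is a placeholder table name); ANALYZE TABLE refreshes the index statistics that SHOW INDEX then reports in its Cardinality column:

ANALYZE TABLE t1;
-- The Cardinality column shows the stored per-index estimates
SHOW INDEX FROM t1;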
First, it is worth noting that MySQL only saves very basic cardinality information for index prefixes as index stats, and these rarely change. There are no histograms or any other skew metrics, etc. The MySQL optimizer also uses the number of rows in the table for many decisions, but this is computed live (maintained for …
[Read more]
One way to look at a database is that
- there is data, and
- there are ways to access data.
This dichotomy was coined (whether intentionally or not) by Vinay in the MySQL telecom team when discussing the MySQL Cluster vision some months ago.
Even if you typically think of MySQL Cluster as just a clustered version of the plain old MySQL server, it is actually closer to the opposite, if you consider the architecture and history of MySQL Cluster. The original cluster was just the data store, called Network DataBase, or NDB as we still familiarly know it. Then MySQL Server was integrated on top of it to provide an SQL interface. The original and "native" NDB interface is still there, though, and many prefer direct C++ or Java access to their …
[Read more]
Properties:
| Applicable To | InnoDB |
| Server Startup Option | --innodb_log_file_size=<value> |
| Scope | Global |
| Dynamic | No |
| Possible Values | Integer; range: 1M - 4G; values below 1M are adjusted to 1M |
| Default Value | 5M |
| Category | Performance, Maintenance |
Description:
This variable defines the size of each log file in a log group. When setting this variable, note that the combined size of all log files must be less than …
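A minimal my.cnf sketch (the 256M figure is purely illustrative; choose a size appropriate for your write workload and keep the combined size of all files under the limit noted above):

[mysqld]
# Illustrative values, not recommendations
innodb_log_file_size = 256M
innodb_log_files_in_group = 2

Because the variable is not dynamic, changing it requires a clean shutdown and moving the old ib_logfile* files out of the way before restarting.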
I have recently decided I would like to take advantage of the benefits of InnoDB file-per-table. Namely, the OPTIMIZE TABLE benefits: defragmentation (Grazr does a lot of writes, both INSERTs and DELETEs), .ibd files being able to shrink when optimized, etc.
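As a quick illustration (t1 is a placeholder table name): for an InnoDB table, OPTIMIZE TABLE is executed as a table rebuild, which is what allows a per-table .ibd file to shrink:

-- For InnoDB, OPTIMIZE TABLE rebuilds the table and refreshes index
-- statistics; with file-per-table, the rebuilt .ibd can return space
OPTIMIZE TABLE t1;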
So, the next question is how to convert a ton of data to this
setup. What I have tried is this:
1. Add innodb_file_per_table to my.cnf
2. Restart MySQL
3. Alter all InnoDB tables with ENGINE=InnoDB (a no-op alter)
This works great... except that my two 10GB tablespace files remain. I would like to make them smaller, but there is no way to do this.
Also, there is no trick where you back up the newly created .ibd files after the alter, move the large tablespace files to a backup directory, restart, and pray that the new, smaller tablespace files magically work with your .ibd files. That does *not* work ;)
So, dump all data and restore …