Displaying posts with tag: xfs
MySQL Performance : Benchmark kit (BMK-kit)

The following is a short HOWTO about deployment and use of the Benchmark-kit (BMK-kit). The main idea of this kit is to simplify your life in running various MySQL benchmark workloads, with less pain and fewer potential errors.

Generally it is as simple as the following:

$ bash /BMK/sb_exec/sb11-Prepare_50M_8tab-InnoDB.sh 32   # prepare data

$ for users in 1 2 4 8 16 32 64 128 256 512 1024 2048
do   
  # run OLTP_RW for 5min each load level..
  bash /BMK/sb_exec/sb11-OLTP_RW_50M_8tab-uniform-ps-trx.sh $users 300
  sleep 15
done

The latest public online version of this HOWTO is always available here: http://dimitrik.free.fr/blog/posts/mysql-perf-bmk-kit.html

Read more... (42 min remaining to read)

MySQL Performance : XFS -vs- EXT4 Story

This post was sitting in stand-by for a long time, as I was expecting the observed issues to be fixed soon. But time goes on, and the problems remain. And I'm constantly asked "why, Dimitri, are you now suggesting to use XFS, while in the past you always suggested EXT4??" -- I hope the following article will clarify the "why", and maybe motivate you to do your own evaluations to see how well things are working for you on your own systems under your own workloads..

NOTE: this will also clarify why the new Double Write did not appear in MySQL 8.0 back in 2018, as originally planned, but only recently (http://dimitrik.free.fr/blog/posts/mysql-80-perf-new-dblwr.html)

Read more... (6 min remaining to read)

LVM read performance during snapshots

For the same customer for whom I am exploring ZFS for backups, the twin server is using regular LVM and XFS. On this twin, I have set up mylvmbackup for a more conservative backup approach. I quickly found some odd behavior: the backup was taking much longer than I was expecting. It is not the first time I have seen that, but here it was obvious. So, during a backup, I recorded some metrics: bi from vmstat and the percentage of CoW space used from lvs. The CoW space is the copy-on-write area used by LVM to record modified pages as they were at the beginning of the snapshot. Upon reads, LVM must scan this list to verify that there is no newer version. Here are the other details about the backup:

  • Filesystem: 2TB, xfs
  • Snapsize: 60GB
  • Amount to backup: ~600GB
  • Backup tool: mylvmbackup
  • Compressor: pbzip2
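
For illustration, here is a minimal bash sketch of the kind of metric collection described above: it samples the bi column from vmstat and the snapshot's copy-on-write usage from lvs while the backup runs. The volume group and snapshot names (vg00/mysql_snap) and the 10-second interval are placeholders, not details from the post.

#!/bin/bash
# Sample read I/O and snapshot CoW usage for as long as the snapshot exists.
# vg00/mysql_snap is a placeholder for the snapshot created by mylvmbackup.
VG=vg00
SNAP=mysql_snap
INTERVAL=10

while lvs "$VG/$SNAP" >/dev/null 2>&1
do
  # blocks read in per second over the last interval (vmstat "bi" column)
  BI=$(vmstat "$INTERVAL" 2 | tail -1 | awk '{print $9}')
  # percentage of the CoW area already consumed by the snapshot
  # (older lvm2 releases call this field snap_percent instead of data_percent)
  COW=$(lvs --noheadings -o data_percent "$VG/$SNAP" | tr -d ' ')
  echo "$(date +%H:%M:%S) bi=$BI cow_used=${COW}%"
done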

[Read more]
Serious XFS Performance Regression in Linux Kernel 2.6.32-279

I'm not the only one to have noticed this, but I spent enough time banging my head against a wall figuring it out that I thought it important to make more people aware of it.

While trying to validate new database hardware, we were seeing some serious performance issues in production. Most MySQL benchmarks using sysbench or pt-playback couldn't reproduce it, but a simple 16-threaded sysbench fileio test on the MySQL partition showed about 1/3 of the throughput we would expect. The fact that much of the hardware was new, as well as the OS we were using (we were changing from CentOS 5.5 to Scientific Linux 6), made tracking down the cause difficult.
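
For reference, a 16-threaded sysbench fileio run of this kind looks roughly like the following sketch (sysbench 1.x syntax; the working directory, file size, test mode and runtime are illustrative assumptions, not the exact parameters of the original test):

# prepare a set of test files in a scratch directory on the MySQL partition
$ cd /var/lib/mysql-test
$ sysbench fileio --file-total-size=16G prepare

# 16-threaded random read/write test with periodic fsync
$ sysbench fileio --file-total-size=16G --file-test-mode=rndrw \
    --threads=16 --time=300 --file-fsync-freq=100 run

# remove the test files afterwards
$ sysbench fileio --file-total-size=16G cleanup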

Finally, some of our ops people working on different systems started noticing similar issues, and they uncovered the XFS issue. Sure enough -- when we took existing hardware, upgraded it to SL6 and ran the same sysbench fileio test, we immediately saw a …

[Read more]
SAN vs Local-disk :: innodb_flush_method performance benchmarks

If you’ve been tuning your MySQL database and have wondered what effect the innodb_flush_method setting has on write performance, then this information might help. I’ve recently been doing a lot of baseline load tests to show the performance differences between local disk and the new SAN we’re deploying. Since we run InnoDB for everything in production, and writes are very heavy, I decided to run comparison tests between two identical servers to find the best setting for innodb_flush_method. We have the following specs for the hardware:

  • Dell R610
  • 24-core Intel Xeon X5670 @ 2.93GHz
  • 72GB ECC RAM
  • Brocade 825 HBA
  • Local disk: RAID-10 15K SAS Ext3 (ugh)
  • SAN: Oracle 7420 with four Intel Xeon X7550 @ 2.00GHz, 512GB RAM, 2TB read cache (MLC-SSD), 36GB write cache (SLC-SSD), 3 disk shelves populated with 60x2TB 7200RPM SATA drives set up in mirrored format with striped …
[Read more]
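
To give an idea of how such an innodb_flush_method comparison can be scripted, here is a hedged sketch that restarts mysqld with each common flush method and runs a short sysbench OLTP write test against it. The config include path, credentials, sysbench parameters and systemd service name are placeholders, not the setup used in the post, and the sbtest tables are assumed to have been prepared beforehand with "sysbench oltp_write_only ... prepare".

#!/bin/bash
# Benchmark each flush method in turn; all names below are illustrative.
for METHOD in fsync O_DSYNC O_DIRECT
do
  systemctl stop mysqld
  # switch the flush method via a small include file read by my.cnf
  cat > /etc/my.cnf.d/flush_method.cnf <<EOF
[mysqld]
innodb_flush_method=${METHOD}
EOF
  systemctl start mysqld

  echo "=== innodb_flush_method=${METHOD} ==="
  sysbench oltp_write_only --mysql-user=sbtest --mysql-password=sbtest \
    --mysql-db=sbtest --tables=8 --table-size=1000000 \
    --threads=32 --time=300 run
done
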
Setting up XFS on Hardware RAID — the simple edition

There are about a gazillion FAQs and HOWTOs out there that talk about XFS configuration, RAID IO alignment, and mount point options.  I wanted to try to put some of that information together in a condensed and simplified format that will work for the majority of use cases.  This is not meant to cover every single tuning option, but rather to cover the important bases in a simple and easy to understand way.
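
As a rough illustration of what that condensed setup typically boils down to, here is a sketch for a hypothetical RAID-10 volume of 12 drives with a 256KiB stripe unit; the device name, geometry and mount point are assumptions to be replaced with your controller's actual values:

# RAID-10 over 12 drives => 6 effective data spindles (sw), 256KiB stripe unit (su)
$ mkfs.xfs -d su=256k,sw=6 /dev/sdb

# mount with noatime; XFS remembers the stripe geometry recorded at mkfs time
# (on older kernels, "nobarrier" was sometimes added as well when the RAID
#  controller has a battery-backed write cache)
$ mkdir -p /var/lib/mysql
$ mount -o noatime /dev/sdb /var/lib/mysql

# make it persistent
$ echo '/dev/sdb  /var/lib/mysql  xfs  noatime  0 0' >> /etc/fstab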

Let’s say you have a server with standard hardware RAID setup running conventional HDDs.

RAID setup

For the sake of simplicity you create one single RAID logical volume that covers all your available drives. This is the easiest setup to configure and maintain, and is the best choice for operability in the majority of normal configurations. Are there ways to squeeze more performance out of a server by dividing the logical volumes? Perhaps, but it requires a lot of fiddling and custom tuning to …

[Read more]
Lost innodb tables, xfs and binary grep

Before I start the story about the data recovery case I worked on yesterday, here's a quick tip - having a database backup does not mean you can restore from it. Always verify that your backup can be used to restore the database! If not automatically, do this manually, at least once a month. No, seriously - in most of the recovery cases I have worked on, customers did have some sort of backup, but it just wasn't working, wasn't complete, and what not. Someone set it up and never bothered to check whether it still worked after a while.
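
A minimal sketch of such a periodic restore test, assuming a logical dump and a throwaway verification instance (the host name, paths and credentials below are placeholders), could look like this:

#!/bin/bash
# Restore the newest dump into a scratch instance and sanity-check all tables.
set -e
LATEST_DUMP=$(ls -1t /backups/mysql/*.sql.gz | head -1)

gunzip -c "$LATEST_DUMP" | mysql --host=verify-db -u root -p"$VERIFY_PW"

# fail loudly if any table is corrupted or missing
mysqlcheck --host=verify-db -u root -p"$VERIFY_PW" --all-databases --check
echo "backup $LATEST_DUMP restored and checked OK"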

Anyway, this post is not really about the backups, but rather about a few interesting things I learned during the last recovery case.

First, some facts about the system and how data was lost:

  • MySQL had a dedicated partition on XFS file system
  • Server was running innodb_file_per_table
  • There was a …
[Read more]
LCA Miniconf Call for Papers: Data Storage: Databases, Filesystems, Cloud Storage, SQL and NoSQL

This miniconf aims to cover many of the current methods of data storage and retrieval and attempt to bring order to the universe. We’re aiming to cover what various systems do, what the latest developments are and what you should use for various applications.

We aim for talks from developers of, and developers using, the software in question.

Aiming for some combination of: PostgreSQL, Drizzle, MySQL, XFS, ext[34], Swift (open source cloud storage, part of OpenStack), memcached, TokyoCabinet, TDB/CTDB, CouchDB, MongoDB, Cassandra, HBase….. and more!

Call for Papers is open NOW (until 22nd October).

SetFileValidData Function (Windows) - Now with added FAIL

SetFileValidData Function (Windows)

There seem to be two options on Win32 for preallocating disk space to files.

Basically, I want an equivalent to posix_fallocate or the ever wonderful xfsctl XFS_IOC_RESVSP64 call.

The idea being to (quickly) create a large file on disk that is stored efficiently (i.e. isn’t fragmented).
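
On Linux this is a one-liner from the shell. As an illustrative sketch (the path below is a placeholder), the space can be reserved either with the fallocate utility or with xfs_io, whose falloc command gives the same kind of efficient preallocation as the XFS_IOC_RESVSP64 ioctl mentioned above:

# reserve 1GB without writing 1GB of zeros (no data I/O is generated)
$ fallocate -l 1G /data/ndb/uf1

# the XFS way via xfs_io (older xfs_io versions also expose "resvsp",
# which maps directly to the XFS_IOC_RESVSP64 ioctl)
$ xfs_io -f -c "falloc 0 1g" /data/ndb/uf1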

From SQL, you’d do something like “CREATE LOGFILE GROUP lg1 ADD UNDOFILE 'uf1' INITIAL_SIZE 1G;” and expect a 1GB file on disk. One way of getting this is calling write() (or WriteFile() on Win32) repeatedly until you’ve written a 1GB file full of zeros. This means you’re generating approximately 1GB of IO.

Except it’s worse than that: every time you extend the file, you’re going to be changing the metadata (file and free space information). If you’re lucky, you won’t be …

[Read more]