Displaying posts with tag: TokuDB
Should vegetarians open steakhouse restaurants?

"Should vegetarians open steakhouse restaurants?"

Though someone will probably give me several examples of why they should, I'll argue that they absolutely should not. How can someone who doesn't eat steak convince others to eat at their "steak-only" restaurant?

But this is something a "professional technology benchmarker" (PTB) struggles with on a regular basis. Hello, I'm Tim Callaghan, and I'm a PTB.
professional technology benchmarker, or PTB (noun): one who compares two technologies as part of their job. One of these technologies is usually the product of the PTB's employer; the other almost always is not.

In a past role I was tasked with comparing the performance of a fully in-memory database with Oracle and MySQL on a "TPC-C like" workload. At the time I was an Oracle expert working for the in-memory database company, but I had never started a single MySQL server in my life. At …

[Read more]
Can we improve the current state of benchmarketing?

I'm starting off 2015 with the following New Year's resolution: to improve the state of benchmarking. About a month ago I noticed the following tweet:

Hey @tokutek, please look at this: http://stssoft.com/products/stsdb-4-0/benchmark …. Are the benchmarks rigged or correctly done? I'm curious to know!

While I've never met Ian Campbell (@iamic), he certainly knew how to call me to action. I immediately checked out the STSsoft website, the benchmark results page, and the benchmark code itself. My first reaction …

[Read more]
Testing TokuDB’s Group Commit Algorithm Improvement

The MySQL 5.6 release introduced changes to how two-phase commit works and is managed. In particular, the commit phase of transactions to the binary log is now serialized, a behavior we identified almost immediately. TokuDB implements a group commit algorithm that had to be altered so that its group commit to the recovery log would continue to function effectively.

As part of our effort to verify the new Binary Log Group Commit functionality introduced in TokuDB 7.5.4 for Percona Server, we wanted to demonstrate the substantial increase in throughput scaling, and also to show the bottleneck caused by the poor interaction between the binary log group commit algorithm in MySQL 5.6 and the transaction commit mechanism used in TokuDB 7.5.3 for Percona Server. During our testing we noticed that throughput scaling was diminished when we turned on the binlog.
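One simple way to quantify that with/without-binlog gap during a test run is to sample commit counters over a fixed window. This is only a minimal sketch, not the harness used in the post; Com_commit is a standard MySQL status counter and the 10-second window is an arbitrary choice:

SHOW GLOBAL STATUS LIKE 'Com_commit';
-- run the concurrent-client workload for ~10 seconds, then sample again:
SHOW GLOBAL STATUS LIKE 'Com_commit';
-- (second value - first value) / 10 approximates transactions committed per second;
-- repeat the measurement with the binary log enabled and disabled to compare scaling.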

Here are the relevant system …

[Read more]
Scaling TokuDB Performance with Binlog Group Commit

TokuDB offers high throughput for write-intensive applications, and the throughput scales with the number of concurrent clients. However, when the binary log is turned on, TokuDB 7.5.2 throughput suffers. The throughput scaling problem is caused by a poor interaction between the binary log group commit algorithm in MySQL 5.6 and the way TokuDB commits transactions. TokuDB 7.5.4 for Percona Server 5.6 fixes this problem, and the result is roughly an order of magnitude increase in SysBench throughput for in-memory workloads.

MySQL uses a two-phase commit protocol to synchronize the MySQL binary log with the recovery logs of the storage engines when a transaction commits. Since fsyncs are used to ensure the durability of the data in the various logs, and fsyncs can be very slow, the fsync can easily become a bottleneck. A …

[Read more]
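The fsync-per-commit path described in the excerpt above is governed by a handful of durability settings, and reviewing them is a quick sanity check before benchmarking. The following is a hedged sketch: sync_binlog is standard MySQL, while the tokudb_* names are recalled from the TokuDB documentation rather than taken from the post, so verify them against your build:

SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('sync_binlog',              -- fsync the binary log at every commit when set to 1
   'tokudb_commit_sync',       -- fsync TokuDB's recovery log at commit when ON
   'tokudb_fsync_log_period'); -- periodic log fsync interval used when per-commit sync is relaxed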
Benchmarking Presentation at Percona Live London 2014

In a few weeks I’m presenting “Performance Benchmarking: Tips, Tricks, and Lessons Learned” at Percona Live London 2014 (November 3-4). I continue to learn lessons and improve my benchmarking capabilities, so the content is a full upgrade from my presentation at Percona Live Santa Clara in April 2013. Anyone interested in achieving and sustaining the best performance out of their software/hardware/application should attend.

Also, Tokutek is sponsoring so we’ll be available in the expo hall throughout the show.

If you are attending or in the area and want to learn more about …

[Read more]
MySQL compression: Compressed and Uncompressed data size

MySQL's information_schema.tables contains information such as "data_length" and "avg_row_length." The documentation on this table, however, is quite poor: it assumes those fields are self-explanatory, and they are not when it comes to tables that employ compression. This is where inconsistency is born. Let's take a look at the same table, containing some highly compressible data, using different storage engines that support MySQL compression:

TokuDB:

mysql> select * from information_schema.tables where table_schema='test' \G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: test
TABLE_NAME: comp
TABLE_TYPE: BASE TABLE
ENGINE: TokuDB
VERSION: 10
ROW_FORMAT: tokudb_zlib
TABLE_ROWS: 40960
AVG_ROW_LENGTH: 10003
DATA_LENGTH: 409722880
MAX_DATA_LENGTH: …
[Read more]
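A quick way to see the inconsistency the excerpt is pointing at is to compare the engine-reported DATA_LENGTH against the logical size implied by the row count and average row length. This is only a sketch of that cross-check; the schema and table names ('test'.'comp') are taken from the output above, and how the two numbers relate differs by storage engine and compression setting:

SELECT engine, row_format, table_rows, avg_row_length, data_length,
       table_rows * avg_row_length AS approx_logical_bytes
FROM information_schema.tables
WHERE table_schema = 'test' AND table_name = 'comp';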
TokuDB Read Free Replication : Details and Use Cases

The biggest innovation in TokuDB v7.5 is Read Free Replication (RFR). A few days ago I blogged a benchmark showing how much additional throughput can be achieved on a replication slave while at the same time lowering read IO operations to almost zero. The official documentation on the feature is available here.

In this second blog I want to cover the requirements for RFR, as well as some interesting use-cases for the technology.

RFR Requirements
The only requirement on the master is that …

[Read more]
More than 1000 columns – get transactional with TokuDB

Recently I encountered a situation in which a customer was forced to stay with the MyISAM engine due to a legacy application using tables with over 1000 columns. Unfortunately, InnoDB's column limit gets in the way at that point. I did not expect to hear this argument for MyISAM; it is usually about the full-text search or spatial index functionality that was missing in InnoDB, and which was introduced in MySQL 5.6 and 5.7, respectively, to let people forget about MyISAM. In this case, though, InnoDB still could not be used, so I gave TokuDB a try.

I’ve created a simple bash script to generate a SQL file with …

[Read more]
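The author's bash script is truncated above, so purely as an illustration, here is a stored-procedure sketch that builds a comparably wide table directly in SQL. This is not the script the post refers to; the table name wide_t and the 1100-column count are arbitrary choices:

DELIMITER //
CREATE PROCEDURE make_wide_table()
BEGIN
  -- Build a CREATE TABLE statement with 1100 INT columns and execute it against TokuDB.
  DECLARE i INT DEFAULT 1;
  SET @ddl = 'CREATE TABLE wide_t (id INT PRIMARY KEY';
  WHILE i <= 1100 DO
    SET @ddl = CONCAT(@ddl, ', c', i, ' INT');
    SET i = i + 1;
  END WHILE;
  SET @ddl = CONCAT(@ddl, ') ENGINE=TokuDB');
  PREPARE stmt FROM @ddl;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END//
DELIMITER ;

CALL make_wide_table();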
TokuDB v7.5 Read Free Replication : The Benchmark

New to TokuDB® v7.5 is a feature we’re calling “Read Free Replication” (RFR). RFR allows TokuDB replication slaves to process insert, update, and delete statements with almost no read IO. As a result, the slave can easily keep up with the master (no lag) while keeping all of its read IO capacity available for read-scaling your workload.

The goal of this blog is two-fold: (1) to cover why RFR is important and how RFR works and (2) to run a simple before/after benchmark showing the impact of RFR on a well known workload. Later this week I’ll post another blog showing other interesting use-cases for RFR beyond this first benchmark.

Read Free Replication: The Why and How

In MySQL, a replication slave does less work than the master because a slave does not need to execute SELECT statements, only INSERTs, UPDATEs, and DELETEs. However, a MySQL slave can struggle to keep up with the master because replication is …

[Read more]
Announcing TokuDB v7.5: Read Free Replication

Today we released TokuDB® v7.5, the latest version of Tokutek’s storage engine for MySQL and MariaDB.

I’ll be publishing two blogs next week to go into more detail about our new “Read Free Replication”, but here are high-level descriptions of the most important new features.

Read Free Replication
TokuDB replication slaves can now be configured to process the binary logs with virtually no read IO. This is accomplished via two new server parameters: one that allows skipping uniqueness checks (for inserts and updates), the other eliminating read-before-write behavior (for updates and deletes). In addition, the slave must be running in read-only mode and replication must be row-based.
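As a rough sketch of what enabling this on a slave might look like: the post does not name the two parameters, so tokudb_rpl_unique_checks and tokudb_rpl_lookup_rows below are recalled from the TokuDB documentation and should be verified, and in practice you would persist all of this in my.cnf rather than set it at runtime:

STOP SLAVE;
SET GLOBAL read_only = ON;                   -- requirement: slave runs in read-only mode
-- requirement: replication is row-based (binlog_format=ROW on the master)
SET GLOBAL tokudb_rpl_unique_checks = OFF;   -- skip uniqueness checks for inserts/updates
SET GLOBAL tokudb_rpl_lookup_rows = OFF;     -- skip read-before-write for updates/deletes
START SLAVE;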
Hot Backup Now Supports Multiple Directories (Enterprise Edition)
The original implementation of our Hot Backup functionality was only capable of …
[Read more]