Displaying posts with tag: benchmark
Netflix Data Benchmark: Benchmarking Cloud Data Stores

The Netflix member experience is offered to 83+ million global members and delivered using thousands of microservices. These services are owned by multiple teams, each with its own build and release lifecycle, and they generate a variety of data that is stored in different types of data store systems. The Cloud Database Engineering (CDE) team manages those data store systems, so we run benchmarks to validate updates to these systems, perform capacity planning, and test our cloud instances with multiple workloads and under different failure scenarios. We were also interested in a tool that could evaluate and compare new data store systems as they appear on the market or in the open-source domain, determine their performance characteristics and limitations, and gauge whether they could be used in production for relevant use cases. For these purposes, we wrote Netflix Data Benchmark …

[Read more]
tpcc-mysql benchmark tool: less random with multi-schema support

In this blog post, I’ll discuss changes I’ve made to the tpcc-mysql benchmark tool. These changes make it less random and add multi-schema support.

This post might only be interesting to performance researchers. The tpcc-mysql benchmark tool is what I use to test different hardware (for an example, see my previous post).

The first change is support for multiple schemas, rather than just one. With only one schema, the benchmark generates too much internal locking in MySQL on the same rows or the same index. Locking is …
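To make the multi-schema idea concrete, here is a minimal sketch of preparing several independent tpcc-mysql schemas so that benchmark threads spread their row and index locks across separate tables. The tpcc_load flags shown are an assumption based on the tool's usual invocation; check your build's README for the exact form.

    #!/bin/bash
    # Hedged sketch: create and load N independent tpcc-mysql schemas.
    # create_table.sql ships with tpcc-mysql; the tpcc_load flags are
    # assumptions and may differ between builds.
    SCHEMAS=4          # number of independent schemas
    WAREHOUSES=100     # warehouses per schema

    for i in $(seq 1 "$SCHEMAS"); do
      db="tpcc${i}"
      mysql -u root -e "CREATE DATABASE IF NOT EXISTS ${db}"
      mysql -u root "${db}" < create_table.sql
      ./tpcc_load -h 127.0.0.1 -d "${db}" -u root -p "" -w "$WAREHOUSES"
    done

With the schemas in place, running one tpcc_start process per schema keeps the hot rows and indexes disjoint.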

[Read more]
MySQL/docker performance report update

On Saturday I was in my favourite grocery store, standing in line and browsing the net on my phone. I read Vadim Tkachenko's blog post about Measuring Percona Server Docker CPU/network overhead, and his findings were the opposite of mine – he didn't find any measurable difference. He did, however, find a huge impact on networking, which I didn't […]

MySQL in docker or native – performance benchmarks

Back in October I wrote about possible ways of running multiple MySQL instances on the same hardware. As the months pass by, the project of splitting our database schemas into standalone instances is drawing closer, so I started to check the different options. EDIT: This post is outdated; here is the follow-up. I started […]

MariaDB 10.1 and MySQL 5.7 performance on commodity hardware

If you have read my previous blog post about MariaDB 10.1 GA performance, you have probably wondered why I didn't include any numbers for MySQL 5.7. There are two reasons: first, MySQL 5.7 wasn't GA at that time, and second, MySQL does not run stably on Power8. Today I present a comparison benchmark. […]


MySQL 5.6 Benchmarks with Haswell CPUs, SSDs and PCIe Flash


The purpose of this test is to benchmark MySQL 5.6 performance on hardware with Haswell CPUs, SSDs and PCIe Flash storage devices.



Test setup:

  • SysBench OLTP workload installed on the database machine
  • MySQL 5.6.24 distribution from Percona
  • jemalloc used for both the MySQL server and the SysBench test client
  • Charts plotted using MySQL Performance Analyzer

Methodology:

  • Data and software on the server were wiped after every test run
  • Predefined number of tables: 16 or 64
  • Predefined table size: 20M rows per table

Workloads:

  • Read Only
  • Read Write

Concurrency varies from 1 to 512, increasing incrementally.
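The methodology above implies a simple driver loop. Here is a minimal sketch of such a sweep, assuming the sysbench 0.5 Lua OLTP script; the host, credentials, run time, and doubling thread progression are placeholders, not the authors' exact settings.

    #!/bin/bash
    # Hedged sketch of the Read Only / Read Write concurrency sweep.
    OLTP=/usr/share/doc/sysbench/tests/db/oltp.lua   # assumed script path
    COMMON="--mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=sbtest \
            --oltp-tables-count=16 --oltp-table-size=20000000"

    # Prepare the tables once, then sweep thread counts for both modes.
    sysbench --test="$OLTP" $COMMON prepare

    for ro in on off; do                  # on = Read Only, off = Read Write
      for t in 1 2 4 8 16 32 64 128 256 512; do
        sysbench --test="$OLTP" $COMMON \
          --oltp-read-only="$ro" \
          --num-threads="$t" --max-time=300 --max-requests=0 run \
          > "results_ro-${ro}_threads-${t}.txt"
      done
    done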

[Read more]
InnoDB vs TokuDB in LinkBench benchmark

Previously I tested Tokutek’s Fractal Trees (TokuMX & TokuMXse) as MongoDB storage engines – today let’s look into the MySQL area.

I am going to use a modified LinkBench under a heavy I/O load.

I compared InnoDB without compression, InnoDB with 8k compression, and TokuDB with QuickLZ compression.
The uncompressed data size is 115GiB, and the cache size is 12GiB for InnoDB and 8GiB plus 4GiB of OS cache for TokuDB.

It is important to note that I used tokudb_fanout=128, which is only available in our latest Percona Server release.
I …
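For readers who want to reproduce a similar setup, here is a hedged sketch of how the compressed table configurations might be declared, assuming standard Percona Server 5.6 syntax; the database and table names (linkdb, linktable) follow LinkBench defaults but are illustrative here, not the exact benchmark scripts.

    #!/bin/bash
    # InnoDB with 8k page compression (requires the Barracuda file format):
    mysql -e "SET GLOBAL innodb_file_format=Barracuda;
              SET GLOBAL innodb_file_per_table=1;"
    mysql linkdb -e "ALTER TABLE linktable
                     ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;"

    # TokuDB with QuickLZ compression and the wider fanout from the post;
    # both variables apply to tables (re)built in this session:
    mysql linkdb -e "SET SESSION tokudb_row_format=tokudb_quicklz;
                     SET SESSION tokudb_fanout=128;
                     ALTER TABLE linktable ENGINE=TokuDB;"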

[Read more]
How to Purchase [Benchmarking] Hardware on a Budget

One of my goals at Acmebenchmarking is to make sure I'm running on hardware that is representative of real-world infrastructure, while at the same time doing it as inexpensively as possible.

To date I've been running on two custom-built "desktops" (for lack of a better term). Both have an Intel Core i7 4790K processor (quad core plus hyperthreading, 4GHz), 32GB of dual-channel RAM, and a quality SSD. They are named acmebench01 and acmebench02.

Alas, it is time to expand. MUST...PURCHASE...MORE...HARDWARE!

To maintain the inexpensive theme I tend to buy used hardware, and my goal for this purchase was to get many more cores and greater memory bandwidth than my existing machines provide. Keep in mind that used hardware is great for benchmarking (and likely for development and QA environments), but you might want to avoid it for production. For years now I've been purchasing used hardware …

[Read more]
Bad Benchmarketing and the Bar Chart

Technical conferences are flooded with visual [mis]representations of a particular product's performance, compression, cost effectiveness, micro-transactions per flux-capacitor, or whatever two-axis comparison someone dreams up. Let's be honest: benchmarketers like to believe we all suffer from innumeracy.

The Merriam-Webster dictionary defines innumeracy as follows:

innumeracy (noun): marked by an ignorance of mathematics and the scientific approach

Mark Callaghan has been a long-time advocate of explaining benchmark results, but that's not the point of the bar chart. Oh no, the bar chart exists only to catch your eye and …

[Read more]
How to benchmark MongoDB

There are generally three components to any benchmark project:

  1. Create the benchmark application
  2. Execute it
  3. Publish your results

I suspect many people want to run more benchmarks but give up, since step 2 becomes extremely time-consuming as you expand the number of configurations and scenarios.

I'm hoping that this blog post will encourage more people to dive-in and participate, as I'll be sharing the bash script I used to test the various compression options coming in the MongoDB 3.0 storage engines. It enabled me to run a few different tests against 8 different configurations, recording insertion speed and size-on-disk for each one.

If you're into this sort of thing, please read on and provide any feedback or improvements you can think of. …
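Along the lines of the script described above, here is a minimal sketch of a compression sweep for MongoDB 3.0's WiredTiger engine; the insert workload (load_test_data.js) and paths are hypothetical placeholders, and the post's actual script covered 8 configurations rather than the three shown here.

    #!/bin/bash
    # Hedged sketch: measure insert time and size-on-disk per compressor.
    DBPATH=/data/bench
    for comp in none snappy zlib; do
      rm -rf "$DBPATH" && mkdir -p "$DBPATH"
      mongod --dbpath "$DBPATH" --storageEngine wiredTiger \
             --wiredTigerCollectionBlockCompressor "$comp" \
             --fork --logpath "$DBPATH/mongod.log"

      start=$(date +%s)
      mongo bench load_test_data.js      # hypothetical insert workload
      elapsed=$(( $(date +%s) - start ))
      size=$(du -sh "$DBPATH" | cut -f1)
      echo "$comp: ${elapsed}s to load, $size on disk" >> results.txt

      mongod --dbpath "$DBPATH" --shutdown
    done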

[Read more]