Displaying posts with tag: benchmark
How to Benchmark Replication Performance in MySQL

In this blog, I will cover the important aspects you need to test when benchmarking a replication setup. MySQL has great tools that can be used to test its performance. They include:

sysbench – https://github.com/akopytov/sysbench

BMK-kit – http://dimitrik.free.fr/blog/posts/mysql-perf-bmk-kit.html

mysqlslap – https://dev.mysql.com/doc/refman/8.0/en/mysqlslap.html

LinkBench – https://github.com/facebookarchive/linkbench

I will not describe how to use them here, as you can find instructions at the links provided or on the Percona blog by browsing the tags …
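
As a minimal sketch of the idea, assuming a primary/replica pair with placeholder hostnames and credentials, sysbench can drive a write-heavy workload on the primary while you sample replication lag on the replica:

    # Prepare a test dataset on the primary (hosts/credentials are placeholders).
    sysbench oltp_write_only --mysql-host=primary.example.com \
        --mysql-user=sbtest --mysql-password=secret --mysql-db=sbtest \
        --tables=10 --table-size=1000000 prepare

    # Drive a write-heavy workload for five minutes, reporting every 10 seconds.
    sysbench oltp_write_only --mysql-host=primary.example.com \
        --mysql-user=sbtest --mysql-password=secret --mysql-db=sbtest \
        --tables=10 --table-size=1000000 --threads=16 --time=300 \
        --report-interval=10 run

    # Meanwhile, sample replication lag on the replica
    # (SHOW REPLICA STATUS on MySQL 8.0.22+; SHOW SLAVE STATUS on older versions).
    while true; do
        mysql -h replica.example.com -e 'SHOW REPLICA STATUS\G' | grep Seconds_Behind
        sleep 10
    done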

[Read more]
Announcing Vitess Arewefastyet

Benchmarking is a critical technique for delivering high performance software. The basic idea behind benchmarking is measuring and comparing the performance of one software version against another. Over the years, many benchmarking techniques have emerged, but we can broadly separate them into two categories: micro and macro benchmarks. Micro-benchmarks measure a small part of the codebase, usually by isolating a single function call and calling it repeatedly, whereas macro-benchmarks measure the performance of the codebase as a whole and run in an environment similar to what end-users experience.
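
To make the distinction concrete (Vitess is written in Go; the package path and benchmark name below are only illustrative, not part of arewefastyet itself), a micro-benchmark isolates a single function with Go's built-in benchmark runner, while a macro-benchmark drives a running system end to end:

    # Micro: repeatedly call one function via Go's built-in benchmark runner.
    go test -bench=BenchmarkParseQuery -benchtime=10s ./go/vt/sqlparser/...

    # Macro: exercise the whole system under an end-user-like workload,
    # e.g. sysbench pointed at a running cluster's endpoint.
    sysbench oltp_read_write --mysql-host=cluster.example.com \
        --tables=10 --table-size=1000000 --threads=32 --time=300 run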

Which Cloud Provider Performs Better for My MySQL Workload?

Nowadays, more and more people are considering cloud migration. The question “Which cloud provider performs better for my MySQL workload?” is very common, but it cannot always be answered easily. There are, however, ways to arrive at an answer. The same question applies when considering a move to any provider, not necessarily a cloud one or a DBaaS.

The Problem

The most reliable conclusion comes from a testing environment that is fully identical to production and can generate the same amount of traffic. In that case, the comparison is straightforward: point your testing environment at each (cloud) provider under evaluation and measure the differences in performance. But this is not always easy, as many organizations do not have such environments, and it is not always possible to replay all traffic.
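
One common way to approximate production traffic when a full replay is not possible, sketched here with placeholder hosts and paths, is to capture the complete query stream via the slow query log and summarize it for replay against each candidate provider:

    # Log every statement on production by lowering the slow-log threshold
    # to zero (watch disk usage; the log grows quickly at long_query_time=0).
    mysql -h prod.example.com \
        -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 0;"

    # Later, summarize the captured workload with pt-query-digest so the same
    # query mix can be reproduced against each provider under evaluation.
    pt-query-digest /var/lib/mysql/prod-slow.log > workload-summary.txt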

The Idea

In this …

[Read more]
Best Practices for Percona XtraDB Cluster on AWS

In this blog post, I’ll look at the performance of Percona XtraDB Cluster on AWS using different service instances, and recommend some best practices for maximizing performance.

You can use Percona XtraDB Cluster in AWS environments. We often get questions about how best to deploy it, and how to optimize both performance and spend when doing so. I decided to look into it with some benchmark testing.

For these benchmark tests, I used the following configuration:

  • Region: US East (us-east-1), availability zones b, c, and d
  • Sysbench 1.0.8
  • ProxySQL 1.4.3
  • 10 tables, 40 million records: a ~95GB dataset
  • Percona XtraDB Cluster 5.7.18
  • Amazon Linux AMI
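
As a rough sketch (not the exact commands from this post), a dataset of this shape could be prepared with sysbench through the ProxySQL endpoint; 6033 is ProxySQL's default MySQL-facing port, and the credentials are placeholders:

    # Create 10 tables of 40 million rows each (~95GB) through ProxySQL.
    sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-port=6033 \
        --mysql-user=sbtest --mysql-password=secret --mysql-db=sbtest \
        --tables=10 --table-size=40000000 prepare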

We …

[Read more]
Gh-ost benchmark against pt-online-schema-change performance

In this blog post, I will run a gh-ost benchmark against the performance of pt-online-schema-change.

When gh-ost came out, I was very excited. As MySQL ROW replication became commonplace, you could use it to track changes instead of triggers. This practice is cleaner and safer compared to Percona Toolkit’s pt-online-schema-change. Since gh-ost doesn’t need triggers, I assumed it would generate lower overhead and work faster. I frequently called it “pt-online-schema-change on steroids” in my talks. …
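
For comparison, the two approaches look roughly like this on the command line (database, table, and credentials below are placeholders; both tools also have dry-run modes):

    # Trigger-based: pt-online-schema-change installs INSERT/UPDATE/DELETE
    # triggers on the original table to keep the shadow copy in sync.
    pt-online-schema-change --alter "ADD COLUMN c1 INT" \
        h=primary.example.com,D=sbtest,t=sbtest1 --execute

    # Triggerless: gh-ost tails the ROW-format binary log instead.
    gh-ost --host=primary.example.com --user=ghost --password=secret \
        --database=sbtest --table=sbtest1 --alter="ADD COLUMN c1 INT" \
        --allow-on-master --execute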

[Read more]
ClickHouse in a General Analytical Workload (Based on a Star Schema Benchmark)

In this blog post, we’ll look at how ClickHouse performs in a general analytical workload using the star schema benchmark test.

We have mentioned ClickHouse in some recent posts (ClickHouse: New Open Source Columnar Database, Column Store Database Benchmarks: MariaDB ColumnStore vs. ClickHouse vs. Apache Spark), where it showed excellent results. ClickHouse itself seems to be an event-oriented RDBMS, as its name suggests (clicks). Its primary use, powering Yandex Metrica (a system similar to Google Analytics), also points to an event-based nature. We can also see there is …
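
For a flavor of the workload, here is Star Schema Benchmark query Q1.1 as it is commonly run against a denormalized lineorder_flat table in ClickHouse setups (the flattened table layout is an assumption, not necessarily this post's exact schema):

    # SSB Q1.1: revenue from discounted 1993 line orders.
    clickhouse-client --query "
        SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
        FROM lineorder_flat
        WHERE toYear(LO_ORDERDATE) = 1993
          AND LO_DISCOUNT BETWEEN 1 AND 3
          AND LO_QUANTITY < 25"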

[Read more]
Column Store Database Benchmarks: MariaDB ColumnStore vs. ClickHouse vs. Apache Spark

This blog shares some column store database benchmark results and compares the query performance of MariaDB ColumnStore v. 1.0.7 (based on InfiniDB), ClickHouse, and Apache Spark.

I’ve already written about ClickHouse (Column Store database).

The purpose of the benchmark is to see how these three solutions work on a single big server with many CPU cores and a large amount of RAM. All three are massively parallel (MPP) systems, so they should use many cores for SELECT queries.
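
One way to check that claim empirically, sketched here for ClickHouse with an assumed table name, is to run the same scan-heavy query pinned to one thread and then to many, and compare the reported timings:

    # Same aggregation on 1 core vs. 32 cores; the gap shows how well
    # the engine parallelizes a single SELECT.
    clickhouse-client --max_threads=1  --time --query "SELECT count() FROM lineorder_flat"
    clickhouse-client --max_threads=32 --time --query "SELECT count() FROM lineorder_flat"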

For the benchmarks, I chose …

[Read more]
Millions of Queries per Second: PostgreSQL and MySQL’s Peaceful Battle at Today’s Demanding Workloads

This blog compares how PostgreSQL and MySQL handle millions of queries per second.

Anastasia: Can open source databases cope with millions of queries per second? Many open source advocates would answer “yes.” However, assertions aren’t enough for well-grounded proof. That’s why in this blog post, we share the benchmark testing results from Alexander Korotkov (CEO of Development, Postgres Professional) and Sveta Smirnova (Principal Technical Services Engineer, Percona). The comparative research of PostgreSQL 9.6 and MySQL 5.7 performance will be especially valuable for environments with multiple databases.

The idea behind this research is to provide an honest comparison for the two popular RDBMSs. Sveta and Alexander wanted to test the most recent versions of both MySQL and PostgreSQL with the same tool, under the same challenging …
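
One tool that fits this bill is sysbench, which can drive both engines through its --db-driver option, so an identical workload can be pointed at each. A sketch with placeholder hosts and sizes (not the authors' actual configuration):

    # Identical point-select workload against MySQL ...
    sysbench oltp_point_select --db-driver=mysql --mysql-host=mysql.example.com \
        --tables=8 --table-size=10000000 --threads=128 --time=300 run

    # ... and against PostgreSQL.
    sysbench oltp_point_select --db-driver=pgsql --pgsql-host=pg.example.com \
        --tables=8 --table-size=10000000 --threads=128 --time=300 run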

[Read more]
RocksDB doesn't support large transactions very well

So I tried to do my first set of benchmarks and testing on RocksDB today, but I ran into a problem and had to file a bug:
https://github.com/facebook/mysql-5.6/issues/365

MySQL @ Facebook's RocksDB appears to store at least 2x the volume of changes in a transaction. I don't know how much space the row plus overhead occupies in each transaction, so as an approximation I'll call it 2x the raw size of the data changed in the transaction. I am also not sure how this works for updates, that is, whether both old and new row information is maintained. If old row data is maintained as well, then for a pure update workload you would need 4x the RAM for the given transactional changes. My bulk load was 12GB of raw data, so it failed, as I have only 12GB of RAM in my test system.

The workaround (as suggested in the bug) is to set two configuration …
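
Independent of whichever settings the bug suggests, a generic way to keep a transaction's change-set small is to break a bulk load into many smaller commits, sketched here with placeholder file and table names:

    # Split a large input file and commit each chunk separately, so memory
    # usage tracks the chunk size rather than the full 12GB load.
    split -l 1000000 bigdata.csv chunk_
    for f in chunk_*; do
        mysql --local-infile=1 sbtest \
            -e "LOAD DATA LOCAL INFILE '$f' INTO TABLE t1 FIELDS TERMINATED BY ','"
    done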

[Read more]