Displaying posts with tag: Benchmarks
How to Improve MySQL AWS Performance 2X Over Amazon RDS at the Same Cost

AWS is the #1 cloud provider for open-source database hosting, and the go-to cloud for MySQL deployments. As organizations continue to migrate to the cloud, it’s important to get ahead of performance issues such as high latency, low throughput, and replication lag caused by greater distances between your users and your cloud infrastructure. While many AWS users default to the managed database solution, Amazon RDS, there are alternatives that can improve your MySQL performance on AWS through advanced customization options and support for any EC2 instance type. ScaleGrid offers a compelling alternative for hosting MySQL on AWS, delivering better performance, more control, and no cloud vendor lock-in, at the same price as Amazon RDS. In this post, we compare the performance of MySQL Amazon RDS …

MySQL Master Replication Crash Safety Part #5a: making things faster without reducing durability - using better hardware

This is a follow-up post in the MySQL Master Replication Crash Safety series. In the previous posts, we explored the consequences of reducing durability on masters (different data inconsistencies after an OS crash, depending on the replication type) and the performance boost associated with this configuration (benchmark results obtained on Google Cloud Platform / GCP). The consequences are summarised in …

MySQL Master Replication Crash Safety Part #5: faster without reducing durability (under the hood)

This post is a sister post to MySQL Master Replication Crash Safety Part #5: making things faster without reducing durability. There is no introduction or conclusion to this post, only landing sections: reading this post without its context is not recommended. You should start with the main post and come back here for more details.

And this Part #5 of the series has many sub-parts. So far, …

MySQL Master Replication Crash Safety Part #4: benchmarks of high and low durability

This is a follow-up post in the MySQL Master Replication Crash Safety series. In the three previous posts, we explored the consequences of reducing durability on masters (including setting sync_binlog to a value different from 1). But so far, I have only quickly presented why a DBA would run MySQL with such a configuration. In this post, I present actual benchmark results. I also present a …
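For readers who want to reproduce the comparison, the durability profiles at stake throughout this series come down to two well-known MySQL variables. Below is a minimal sketch of the high- and low-durability settings (assumed values for illustration, not necessarily the post's exact configuration):

```sql
-- High durability (the MySQL defaults): flush the InnoDB redo log and
-- sync the binary log on every transaction commit.
SET GLOBAL sync_binlog = 1;
SET GLOBAL innodb_flush_log_at_trx_commit = 1;

-- Reduced durability: let the OS decide when to sync the binary log,
-- and flush the redo log only about once per second, trading crash
-- safety for throughput.
SET GLOBAL sync_binlog = 0;
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```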

MySQL Master Replication Crash Safety Part #4: benchmarks (under the hood)

This post is a sister post to MySQL Master Replication Crash Safety Part #4: benchmarks of high and low durability. There is no introduction or conclusion to this post, only landing sections: reading this post without its context is not recommended. You should start with the main post and come back here for more details.

Environment

My benchmark environment is composed of three VMs in …

My Slides about MySQL 8.0 Scalability & Benchmarks from #PerconaLIVE 2019 Austin, TX

Here are my slides about MySQL 8.0 Scalability & Benchmarks from Percona LIVE 2019 in Austin, TX. (And I also realized I forgot to publish my slides from the MySQL pre-FOSDEM 2019 Day about MySQL Performance Tuning, so I'm fixing that now ;-))

P.S. Percona LIVE was "just awesome"! ;-))

Rgds,
-Dimitri

Benchmark ClickHouse Database and clickhousedb_fdw

In this research, I wanted to see what kind of performance improvements could be gained by using a ClickHouse data source rather than PostgreSQL. Assuming that I would see performance advantages with ClickHouse, would those advantages be retained if I accessed ClickHouse from within PostgreSQL using a foreign data wrapper (FDW)? The FDW in question is clickhousedb_fdw, an open source project from Percona!

The database environments under scrutiny are PostgreSQL v11, clickhousedb_fdw and a ClickHouse database. Ultimately, from within PostgreSQL v11, we are going to issue various SQL queries routed through our clickhousedb_fdw to the ClickHouse database. Then we’ll see how the FDW performance compares with those same queries executed in native PostgreSQL and native ClickHouse.
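To make the setup concrete, here is a minimal sketch of how the pieces are wired together on the PostgreSQL side. The DDL uses standard foreign data wrapper statements, but the specific OPTIONS keys and the foreign table definition below are assumptions for illustration; check the clickhousedb_fdw README for the exact options it supports.

```sql
-- Load the FDW extension (it must be installed on the server first).
CREATE EXTENSION clickhousedb_fdw;

-- Point a foreign server at the ClickHouse instance. The option names
-- (host, port, dbname) are assumed here for illustration.
CREATE SERVER clickhouse_svr
    FOREIGN DATA WRAPPER clickhousedb_fdw
    OPTIONS (host '127.0.0.1', port '9000', dbname 'test');

CREATE USER MAPPING FOR CURRENT_USER SERVER clickhouse_svr;

-- A hypothetical foreign table mirroring a table living in ClickHouse.
CREATE FOREIGN TABLE hits (
    event_date date,
    user_id    bigint,
    url        text
) SERVER clickhouse_svr;

-- Queries issued in PostgreSQL are now routed to ClickHouse by the FDW.
SELECT count(*) FROM hits WHERE event_date = DATE '2019-04-01';
```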

ClickHouse Database

ClickHouse is an open source, column-oriented database management system that can achieve performance of between 100 and …

Creating Custom Sysbench Scripts

Sysbench has long been established as the de facto standard for benchmarking MySQL performance. Percona relies on it daily, and even Oracle uses it when blogging about new features in MySQL 8. Sysbench comes with several predefined benchmarking tests, written in an easy-to-understand scripting language called Lua, including oltp_read_write, oltp_point_select, tpcc, and oltp_insert. There are over ten such scripts to emulate various behaviors found in standard OLTP applications.
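As a concrete point of reference for what these predefined tests do, oltp_point_select simply hammers the server with primary-key lookups against the sbtest tables that sysbench generates, repeatedly executing a statement of this shape (the id value is an arbitrary example; the script draws ids at random):

```sql
-- The statement at the heart of oltp_point_select: a primary-key
-- lookup against sysbench's standard sbtest schema.
SELECT c FROM sbtest1 WHERE id = 42;
```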

But what if your application does not fit the pattern of traditional OLTP? How can you continue to utilize the power of load-testing, benchmarking, …

MySQL Challenge: 100k Connections

In this post, I want to explore a way to establish 100,000 connections to MySQL. Not just idle connections, but executing queries.

100,000 connections. Is that really needed for MySQL, you may ask? Although it may seem excessive, I have seen a lot of different setups in customer deployments. Some deploy an application connection pool, with 100 application servers and 1,000 connections in each pool. Some applications use a “re-connect and repeat if the query is too slow” technique, which is a terrible practice. It can lead to a snowball effect, and could establish thousands of connections to MySQL in a matter of seconds.

So now I want to set an overachieving goal and see if we can achieve it.
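For context before the setup: out of the box, MySQL allows only 151 client sessions (the max_connections default), so any attempt at this scale starts with raising server limits. A minimal sketch follows, with assumed values rather than the post's actual configuration:

```sql
-- max_connections is dynamic; its documented upper limit happens to be
-- exactly 100000.
SET GLOBAL max_connections = 100000;

-- back_log (the TCP accept queue) is read-only at runtime; in MySQL 8.0
-- it can be persisted to take effect on the next restart.
SET PERSIST_ONLY back_log = 3000;
```

Raising limits alone is not enough, of course: with the default one-thread-per-connection model, 100,000 OS threads is its own problem, which is where approaches such as a thread pool come into play.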

Setup

For this I will use the following hardware:

Bare metal server provided by packet.net, instance size: c2.medium.x86
CPU: 24 physical cores @ 2.2 GHz (1 × AMD EPYC 7401P)
Memory: 64 GB of …

Measuring Percona Server for MySQL On-Disk Decryption Overhead

Percona Server for MySQL 8.0 comes with enterprise-grade total data encryption features. However, there is always the question of how much overhead, or performance penalty, comes with data decryption. As we saw in my networking performance post, SSL under high concurrency might be problematic. Is this the case for data decryption?

To measure any overhead, I will start with a simplified read-only workload, where data is decrypted during read I/O.

During query execution, the data in memory is already decrypted, so there is no additional processing time. Decryption happens only for blocks that require a read from storage.
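For readers unfamiliar with the feature, here is a minimal sketch of what "encrypted data" means in this context: with a keyring plugin or component configured, encryption is enabled per InnoDB table, and only the on-disk pages are encrypted. The sbtest1 definition below follows sysbench's standard schema, since that is the workload being measured; treat it as an illustration rather than the post's exact setup.

```sql
-- Assumes a keyring plugin/component is already configured; without
-- one, ENCRYPTION='Y' is rejected.
CREATE TABLE sbtest1 (
    id  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    k   INT NOT NULL DEFAULT 0,
    c   CHAR(120) NOT NULL DEFAULT '',
    pad CHAR(60) NOT NULL DEFAULT ''
) ENGINE = InnoDB ENCRYPTION = 'Y';

-- An existing table can be encrypted as well; note this rebuilds it.
ALTER TABLE sbtest1 ENCRYPTION = 'Y';
```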

For the benchmark I will use the following workload:

sysbench …