Chapter 9 of Efficient MySQL Performance changed in development. Originally, it was a chapter titled “Not MySQL”, as in “how not to use MySQL.” But we (O’Reilly and I) pulled the chapter, and the current chapter 9 in print is “Other Challenges”: an important laundry list of other challenges engineers using MySQL must be aware of and address. This blog post is a sketch of the unwritten chapter 9: how not to use MySQL.
Overview

One of the best things the DevOps movement has ushered in is the concept of Infrastructure as Code (IaC). IaC lets you define your infrastructure in specially formatted files and allows you to use automation tools to create or modify your infrastructure based on those files. But did you know that you can manage your database schemas in a similar way? Atlas CLI is a command-line tool that helps manage the structure of your database by keeping a representation of the schema in a file. It can be used by itself to manage your schema changes, or as part of a CI/CD pipeline to automate the process of updating your schema based on the definition file. In this article, we'll cover the basics of using Atlas CLI to generate a schema definition file, as well as updating the schema of a PlanetScale database using the tool. To follow along, you should have the following:

- A PlanetScale account.
- The PlanetScale CLI installed and configured.
- The …
Learn how to use Atlas CLI with PlanetScale to define your database as code.
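The workflow looks roughly like this (a sketch only: the database and branch names are placeholders, and flag spellings can differ between Atlas CLI versions). You open a local proxy to your PlanetScale branch with the PlanetScale CLI, have Atlas inspect the running schema into a definition file, and later apply an edited version of that file back to the database:

pscale connect mydb main --port 3309                                                  # local proxy to the placeholder database "mydb", branch "main"
atlas schema inspect -u "mysql://root@127.0.0.1:3309/mydb" > schema.hcl               # dump the current schema as HCL
atlas schema apply -u "mysql://root@127.0.0.1:3309/mydb" --to "file://schema.hcl"     # apply edits from the file back to the database

Editing schema.hcl and re-running the apply step is what makes the schema "as code": the file, not the live database, becomes the source of truth.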
This article outlines the basic configurations for setting up and deploying Percona XtraDB Cluster 8.0 (PXC) on Amazon EC2, as well as what is new in the setup compared to Percona XtraDB Cluster 5.7.
What is Percona XtraDB Cluster an ideal fit for?
Percona XtraDB Cluster is a cost-effective, high-performance clustering solution for mission-critical data. It combines all the improvements and functionality found in MySQL 8 with Percona Server for MySQL's Enterprise features and Percona's upgraded Galera library.
A Percona XtraDB Cluster environment is an ideal fit for applications requiring five-nines (99.999%) uptime with high read …
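As a sketch of what the wsrep side of that setup looks like (IP addresses, names, and the provider path are placeholders; defaults vary by distribution and PXC release), a minimal my.cnf fragment for one node of a three-node cluster might contain:

[mysqld]
wsrep_provider=/usr/lib64/galera4/libgalera_smm.so     # Galera library shipped with PXC 8.0
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_name=pxc-node-1
wsrep_node_address=10.0.0.1
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
pxc_strict_mode=ENFORCING
pxc-encrypt-cluster-traffic=ON                         # on by default in 8.0; a notable change from 5.7

One of the bigger differences from 5.7 is that cluster traffic encryption is enabled by default in 8.0, so SSL certificates must be consistent across the nodes before SST will succeed.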
A list of conferences where you can MySQL.
The availability of MySQL HeatWave on Amazon Web Services (AWS) was announced on Monday, September 12, extending Oracle’s commitment to multi-cloud access. Many industry experts applaud this new platform support for MySQL HeatWave -- some of their quotes are included here.
Galera Load Balancer (GLB) is a scalable, performant, yet easy-to-use TCP/IP connection-balancing proxy. It is the oldest, yet still actively maintained, load balancer in the MySQL ecosystem, with a wide array of customers using it in production.
Firstly, please request the binaries by contacting sales@galeracluster.com. Once you have access to the package repository, you'll have access to RPMs. Installing the RPMs is straightforward, and you can also add the repository to your Yum configuration. This blog presumes you already have access to the binaries.
You can start it up very simply:
glbd --threads 6 127.0.0.1:3306 188.166.179.177:3306 165.22.50.152:3306 165.22.49.92:3306 …
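Once glbd is listening, the application connects to the balancer's address rather than to any backend directly. A quick sanity check (app_user is a placeholder account that exists on all backends; it assumes the mysql client is installed) is to ask each new connection which server answered it:

mysql -h 127.0.0.1 -P 3306 -u app_user -p -e "SELECT @@hostname, @@server_id"

Running the command a few times should show different backends responding, since each new TCP connection can be balanced to a different destination.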
Two weeks ago I was drawn into debugging artdb, the replication hierarchy used by our Artifactory instance.
Artifactory had overloaded the database. This was handled as an incident by optimizing a number of slow queries with some covering-index trickery, and by substantially upgrading the hardware.
Using the runway we bought, we found and partially fixed the following problems:
- Fixed: A number of very expensive reporting queries were sped up 16x to 20x using covering indexes (sketched after this list), cutting runtimes from 180s to 8s-12s.
- Fixed: We optimized our database and data size by completing the data lifecycle for several repo types, deleting old images.
- Fixed: Hardware upgrade lowered load even more, and sped up the queries even more.
- Fixed: …
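The covering-index trick from the first item, sketched with hypothetical table and column names rather than the real Artifactory schema: when a reporting query only touches a few columns, an index containing all of them lets InnoDB answer from the index alone, without visiting the clustered rows.

-- Hypothetical reporting query: downloads per repository over the last 30 days
-- SELECT repo_id, COUNT(*) FROM downloads
--   WHERE downloaded_at >= NOW() - INTERVAL 30 DAY
--   GROUP BY repo_id;

-- An index on (downloaded_at, repo_id) covers both the range filter and the grouped column,
-- so EXPLAIN reports "Using index" and the table rows are never read.
ALTER TABLE downloads ADD INDEX idx_downloads_date_repo (downloaded_at, repo_id);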