Displaying posts with tag: operations
Cut-over logic in vitess schema migrations

Vitess supports managed, non-blocking schema migrations based on VReplication, aptly named vitess migrations. Vitess migrations are powerful, revertible, and failure agnostic. They take an asynchronous approach, which is more lightweight on the database server. The asynchronous approach comes with an implementation challenge: how to cut over with minimal impact to the user/app and with no risk of data loss. In this post we take a deep dive into the cut-over logic used in vitess migrations.
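As a rough sketch of how the cut-over is driven from the user's side (statement syntax per recent Vitess releases; the exact strategy flags vary by version, so treat the names below as illustrative):

-- Submit a managed migration via vtgate and postpone the cut-over step.
SET @@ddl_strategy = 'vitess --postpone-completion';
-- Standard ALTER syntax; Vitess schedules the migration and returns a UUID.
ALTER TABLE customer ADD COLUMN note VARCHAR(255) NOT NULL DEFAULT '';
-- Track the copy phase ('<uuid>' stands for the value returned above).
SHOW VITESS_MIGRATIONS LIKE '<uuid>';
-- When ready, explicitly trigger the cut-over.
ALTER VITESS_MIGRATION '<uuid>' COMPLETE;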

Online DDL: why FOREIGN KEYs are not supported

This post explains the inherent problem of running online schema changes in MySQL on tables participating in a foreign key relationship. We'll lay down some ground rules and facts, sketch a simplified schema, and dive into an online schema change operation. Our discussion applies to gh-ost, pt-online-schema-change, and VReplication-based migrations, or any other online schema change tool that works with a shadow/ghost table, like the Facebook tools. Why Online DDL? Online schema change tools came about as workarounds to an old problem: schema migrations in MySQL were blocking, uninterruptible, aggressive on resources, and replication unfriendly.
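A minimal sketch of why the shadow-table swap and foreign keys don't mix (table and constraint names are made up for the example):

-- A child table references its parent by name.
CREATE TABLE parent (id INT PRIMARY KEY);
CREATE TABLE child (
  id INT PRIMARY KEY,
  parent_id INT,
  CONSTRAINT fk_child_parent FOREIGN KEY (parent_id) REFERENCES parent (id)
);
-- A shadow-table tool altering `parent` copies rows into e.g. `_parent_new`,
-- then swaps names:
--   RENAME TABLE parent TO _parent_old, _parent_new TO parent;
-- InnoDB makes the foreign key in `child` follow the rename, so after the
-- swap `child` references `_parent_old`, not the table the application uses.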

Online DDL in Vitess

Vitess introduces a new way to run schema migrations: non-blocking, asynchronous, scheduled online DDL. With online DDL, Vitess simplifies the schema migration process by taking ownership of the operational overhead and providing the user a simple, familiar interface: the standard ALTER TABLE statement. Let's first give some background and explain why schema migrations are such an issue in the database world, and then dive into implementation details. The relational model and the operational overhead: the relational model is one of the longest-surviving models in the software world, introduced decades ago and still widely used today.
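The user-facing surface really is just ALTER TABLE; a hedged sketch, assuming a recent Vitess (the strategy value was 'online' in earlier releases and 'vitess' in later ones, so check your version):

-- Ask vtgate to run DDL as managed, asynchronous online migrations.
SET @@ddl_strategy = 'vitess';
-- Standard syntax; instead of blocking, Vitess returns a migration UUID.
ALTER TABLE customer ADD COLUMN email VARCHAR(128) NOT NULL DEFAULT '';
-- Inspect queued, running, and completed migrations.
SHOW VITESS_MIGRATIONS;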

JFG Posted on the Percona Community Blog - A Nice Feature in MariaDB 10.3: no InnoDB Buffer Pool in Core Dumps

I just posted an article on the Percona Community Blog. You can access it by following this link:

A Nice Feature in MariaDB 10.3: no InnoDB Buffer Pool in Core Dumps

I do not know if I will stop publishing posts on my personal blog or use both; I will see how things go. In the rest of this post, I will share why I published there and how things went in the process.

So there is a Percona …

A Nice Feature in MariaDB 10.3: no InnoDB Buffer Pool in Core Dumps

MariaDB 10.3 is now generally available (10.3.7 was released GA on 2018-05-25). The article What’s New in MariaDB Server 10.3 by the MariaDB Corporation lists three key improvements in 10.3: temporal data processing, Oracle compatibility features, and purpose-built storage engines. Even though I am excited about MyRocks and curious about Spider, I am also very interested in less flashy but still very important changes that make running the database in production easier. This post describes one such improvement: no InnoDB Buffer Pool in core dumps.
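To get a feel for how much the buffer pool inflates a core dump on a given server, a quick check of its configured size (standard variable name in MariaDB and MySQL):

-- The buffer pool is usually the largest single memory structure in the
-- process, so its size roughly approximates what this feature keeps out of
-- core dumps.
SELECT @@GLOBAL.innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;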

Hidden in the …

[Read more]
Database Daily Ops Series: GTID Replication and Binary Logs Purge

This blog continues the ongoing series on daily operations and GTID replication.

In this blog, I’m going to investigate why the error below has been appearing in a particular environment I’ve been working with over the last few days:

Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log:
'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the
master has purged binary logs containing GTIDs that the slave requires.'
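
A quick way to confirm the condition this error describes, assuming GTID-based replication (standard MySQL variables; run each statement on the server named in the comment):

-- On the replica: transactions already applied.
SELECT @@GLOBAL.gtid_executed;
-- On the master: transactions whose binary logs are already gone.
SELECT @@GLOBAL.gtid_purged;
-- If the master's gtid_purged contains GTIDs missing from the replica's
-- gtid_executed, auto-positioning cannot work and the replica has to be
-- rebuilt from a backup (or the gap otherwise reconciled).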

The error provides the right message and explains what is going on. But sometimes it can be a bit tricky to solve this issue: you need additional information, discovered after some tests and reading. We try to keep Managed Services scripted, in the sense that our advice and best practices …

[Read more]
Using gh-ost for online schema changes

The pt-online-schema-change tool has been a workhorse for years, allowing MySQL DBAs to alter tables with minimal impact to the application workload, even before MySQL had native online ALTER capabilities. Although consistently grateful for the tool’s existence, I never liked the messiness of having to use and clean up triggers, and most DBAs have a horror story or two to tell about using any of the online alter methods.

When GitHub’s online schema transmogrifier (gh-ost; https://github.com/github/gh-ost) was released, then, I jumped at the chance to test it out.

Following are my testing notes. I tried out the flags available to run gh-ost from different locations in a replication setup, both functionally and under load. The load was applied via mysqlslap, using a …

[Read more]
MaxScale Binlog Server HOWTO: Operations (including Chaining)

In the Install and Configure HOWTO, we learned how to install and configure a MaxScale Binlog Server.  In this HOWTO, I will present the common operations that you might need to perform when using this software.  Those operations include:

Purging Binary Logs, Chaining Binlog Servers, Saving Binary Log Files in the Non-Default Directory, Downloading Binary Logs other than First, Listing Connected

Synchronizing MySQL where tables have triggers and foreign keys defined

On a recent consulting engagement, the PSCE team were charged with what can be considered a fairly common task: synchronising tables between master and slave in MySQL Replication. On this occasion the schema contained both foreign key constraints and triggers; this post describes how we avoided the potential problems related to such an operation.
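A first step in such an engagement is simply finding out which tables carry triggers or participate in foreign keys; a quick information_schema check (the schema name is a placeholder):

-- Tables with triggers defined.
SELECT EVENT_OBJECT_TABLE, TRIGGER_NAME
  FROM information_schema.TRIGGERS
 WHERE TRIGGER_SCHEMA = 'mydb';
-- Tables referencing another table through a foreign key.
SELECT TABLE_NAME, CONSTRAINT_NAME, REFERENCED_TABLE_NAME
  FROM information_schema.KEY_COLUMN_USAGE
 WHERE TABLE_SCHEMA = 'mydb'
   AND REFERENCED_TABLE_NAME IS NOT NULL;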

The process to synchronise tables in MySQL is to first identify the differences between tables and then execute queries that bring those tables into a consistent state. The first part of the process can be handled by the pt-table-checksum tool, which steps through the table analysing sets of rows (chunks) and recording a checksum value. Then, taking advantage of replication, the same process occurs on each of the slaves, and the checksums can be compared. Once the entire table has been processed, a second tool, pt-table-sync, can be used …

[Read more]
Full table scans and MySQL performance

High season is coming; how do you make sure that MySQL will handle the increased load? Stress tests could help, but it’s not a good idea to run them in a production environment. Instead, Select_scan, Select_full_join, and other MySQL counters can quickly give you an idea of how many queries are not performing well and could cause performance degradation as the load goes up.

Select_scan from SHOW GLOBAL STATUS indicates how many full table scans were done since the last MySQL restart. Scanning an entire table is a resource-intensive operation. It also forces MySQL to store unnecessary data in the buffer pool, wasting memory and I/O resources.
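
A minimal check along those lines (counter names as exposed by SHOW GLOBAL STATUS; sample the counters twice and diff to get a rate rather than a lifetime total):

SHOW GLOBAL STATUS WHERE Variable_name IN ('Select_scan', 'Select_full_join');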

A full scan of a tiny table is quite fast, so missing indexes can stay invisible until the load rises or the dataset grows. This can also be the case for developers who work with data sets that are too small on their dev boxes. To prevent performance issues all newly added …

[Read more]