Displaying posts with tag: gcp
MySQL Master Replication Crash Safety Part #5a: making things faster without reducing durability - using better hardware

This is a follow-up post in the MySQL Master Replication Crash Safety series.  In the previous posts, we explored the consequences of reducing durability on masters (different data inconsistencies after an OS crash depending on the replication type) and the performance boost associated with this configuration (benchmark results obtained on Google Cloud Platform / GCP).  The consequences are summarised in …

MySQL Master Replication Crash Safety Part #5: faster without reducing durability (under the hood)

This post is a sister post to MySQL Master Replication Crash Safety Part #5: making things faster without reducing durability.  There is no introduction or conclusion to this post, only landing sections: reading this post without its context is not recommended. You should start with the main post and come back here for more details.

This Part #5 of the series has many sub-parts.  So far, …

MySQL Master Replication Crash Safety Part #4: benchmarks of high and low durability

This is a follow-up post in the MySQL Master Replication Crash Safety series.  In the three previous posts, we explored the consequences of reducing durability on masters (including setting sync_binlog to a value different from 1).  But so far, I have only quickly presented why a DBA would run MySQL with such a configuration.  In this post, I present actual benchmark results.  I also present a …
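
As a quick, hedged illustration of the two configurations being compared (the exact values are my assumption, not taken from the post), the durability knobs in question are the standard MySQL variables sync_binlog and innodb_flush_log_at_trx_commit:

    # High durability (crash-safe): flush the binlog and the InnoDB redo
    # log to disk at every transaction commit.
    mysql -e "SET GLOBAL sync_binlog = 1;
              SET GLOBAL innodb_flush_log_at_trx_commit = 1;"

    # Reduced durability (faster, but an OS crash can lose transactions):
    # let the OS flush the binlog, and flush the redo log only ~once/second.
    mysql -e "SET GLOBAL sync_binlog = 0;
              SET GLOBAL innodb_flush_log_at_trx_commit = 2;"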

MySQL Master Replication Crash Safety part #4: benchmarks (under the hood)

This post is a sister post to MySQL Master Replication Crash Safety Part #4: benchmarks of high and low durability.  There is no introduction or conclusion to this post, only landing sections: reading this post without its context is not recommended. You should start with the main post and come back here for more details.

Environment

My benchmark environment is composed of three VMs in …
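
For orientation, benchmarks of this kind are commonly driven with sysbench; the invocation below is a hypothetical sketch (host, credentials, table size and thread count are my assumptions, not the post's actual parameters):

    # Prepare the test table, then run a write-heavy workload against the
    # benchmark master for 5 minutes.
    sysbench oltp_insert --mysql-host=10.0.0.2 --mysql-user=sbtest \
             --mysql-password=sbtest --tables=1 --table-size=1000000 prepare
    sysbench oltp_insert --mysql-host=10.0.0.2 --mysql-user=sbtest \
             --mysql-password=sbtest --tables=1 --table-size=1000000 \
             --threads=16 --time=300 run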

2019 Open Source Database Report: Top Databases, Public Cloud vs. On-Premise, Polyglot Persistence

Ready to transition from a commercial database to open source, and want to know which databases are most popular in 2019? Wondering whether an on-premise vs. public cloud vs. hybrid cloud infrastructure is best for your database strategy? Or, considering adding a new database to your application and want to see which combinations are most popular? We found all the answers you need at the Percona Live event last month, and broke down the insights into the following free trend reports:

[Read more]
MySQL PITR The Fastest Way With DevOps

Point-in-time recovery is a nightmare for DBAs if the MySQL clusters are self-managed. It was 10 PM; after dinner I was watching some shows on YouTube when my phone rang: a customer on the other side. Due to some bad queries, one of the main tables had been updated without a WHERE clause. Suddenly everyone joined the call, asking me to bring the data back. That day it took 6 to 8 hours to restore it. Yes, every DBA makes one or two big mistakes, and in my career I would say this was that day. So here is my MySQL PITR, the fastest way, with DevOps.

Where did I fail in this DR setup?

  • PITR starts with the last full backup + binlogs.
  • I missed adding --master-data to my backup script, so I did not know from where to start applying the binlogs (see the sketch after this list).
  • No delayed replica. I got the call within 10 minutes of the data being messed up, but all of my replicas sync in real time, so it affected all of …
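
A minimal sketch of that recovery flow, assuming mysqldump backups; the file names, positions and timestamps below are illustrative placeholders, not values from the incident:

    # 1. Full backup that records the binlog coordinates inside the dump
    #    (--master-data=2 writes them as a comment) -- the step missed above.
    mysqldump --single-transaction --master-data=2 --all-databases > full.sql

    # 2. Restore the backup, then replay the binlogs from the recorded
    #    coordinates, stopping just before the bad statement.
    mysql < full.sql
    mysqlbinlog --start-position=154 --stop-datetime="2019-06-15 21:50:00" \
                mysql-bin.000042 | mysql
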
[Read more]
MySQL With DevOps 1 - Automate Database Archive

This is my next blog series. I'm going to write about how I automated many complex tasks in MySQL with Rundeck. In my last series, I explained Rundeck basics; you can find those articles here. In this blog, I'm writing about how I automated MySQL archiving for multiple tables in one Rundeck job.

Challenge with Replication:

My MySQL setup has 1 master and 4 read replicas, and the 3rd replica is an intermediate master for Replica 4. I don't want to archive this data on Replicas 3 and 4, because these replicas are used for generating historical reports and for some internal applications.

Disable Log-Bin:

To prevent the archive from deleting data on Replicas 3 and 4, I decided to disable the binlog in my archive session. But another challenge is that the deletes then won't replicate to Replicas 1 and 2 either. So my final solution is to archive the data on the master, then execute the …
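
For reference, the session-level switch being described is sql_log_bin; a minimal sketch, with a hypothetical table and retention window:

    # Disable binary logging for this session only, so the archive deletes
    # are never written to the binlog and therefore never replicate.
    mysql -e "SET SESSION sql_log_bin = 0;
              DELETE FROM app.events
              WHERE created_at < NOW() - INTERVAL 90 DAY
              LIMIT 10000;"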

[Read more]
Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 2

In Part 1, we explained how we were going to approach the HA setup. Here we will see how to install and configure Orchestrator and ProxySQL, then do the failover testing.
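
As a taste of the ProxySQL side, backends are registered through its admin interface; a hedged sketch where the hostgroup IDs and hostnames are placeholders, not the values used in this setup:

    # ProxySQL admin listens on port 6032; hostgroup 10 = writer,
    # hostgroup 20 = readers.
    mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
      INSERT INTO mysql_servers (hostgroup_id, hostname, port)
      VALUES (10, 'mysql-master', 3306), (20, 'mysql-replica-1', 3306);
      LOAD MYSQL SERVERS TO RUNTIME;
      SAVE MYSQL SERVERS TO DISK;"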

Install and configure MySQL Replication:

We need a MySQL master with 4 read replicas, where the 4th replica has a replica of its own. We must use GTID replication, because once the master failover is done, the remaining replicas will start replicating from the new master. Without GTID this is not possible, though as an alternative Orchestrator provides Pseudo-GTID.
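
To make the GTID point concrete: with gtid_mode=ON and enforce_gtid_consistency=ON already configured, a surviving replica can be repointed to the new master without knowing any binlog coordinates. A hedged sketch, with placeholder hostnames and credentials:

    # After failover, auto-positioning replaces the explicit binlog file
    # and position that non-GTID replication would require.
    mysql -e "STOP SLAVE;
              CHANGE MASTER TO
                MASTER_HOST = 'new-master.gcp.internal',
                MASTER_USER = 'repl',
                MASTER_PASSWORD = 'repl_password',
                MASTER_AUTO_POSITION = 1;
              START SLAVE;"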

VM Details: …

[Read more]
Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 1

Recently we migrated one of our customers' infrastructure to GCP, and after the migration we published some adventures with ProxySQL, which we implemented for them.

  1. Reduce MySQL Memory Utilization With ProxySQL Multiplexing
  2. How max_prepared_stmt_count can bring down production

Now, we are going to implement an HA solution with a custom filter for failover. We have done a PoC, and this blog is about the PoC configuration. Again, the whole setup has been implemented in GCP; you can follow the same steps for AWS …
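
For orientation, the failover filter referred to here is, on my reading, Orchestrator's recovery cluster filter; the fragment below is an assumption about the shape of that configuration (the cluster alias is a placeholder), meant to be merged into /etc/orchestrator.conf.json rather than used as-is:

    # The two keys below restrict automated master recovery to the named
    # cluster while allowing intermediate-master recovery everywhere;
    # printed here for review.
    cat <<'EOF'
    {
      "RecoverMasterClusterFilters": ["my-gcp-cluster"],
      "RecoverIntermediateMasterClusterFilters": ["*"]
    }
    EOF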

[Read more]