Displaying posts with tag: Percona XtraDB Cluster
Replication Between Two Percona XtraDB Clusters, GTIDs and Schema Changes

I got this question on the “How to Avoid Pitfalls in Schema Upgrade with Percona XtraDB Cluster (PXC)” webinar and wanted to answer it in a separate post.

Will RSU have an effect on GTID consistency when replicating from a PXC cluster to another cluster?

The answer to this is: yes and no.

Galera assigns its own GTID to operations that are replicated to all nodes of the cluster. Such operations include DML (INSERT/UPDATE/DELETE) on InnoDB tables and DDL commands executed with the default TOI method. You can find more details on how GTIDs work in Percona XtraDB Cluster in this blog post.
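To make the distinction concrete, here is a minimal sketch (the schema and table names are hypothetical, not from the webinar) of the operations that fall under the default TOI behavior described above, plus the session variable that selects the alternative method discussed next:

-- DML on an InnoDB table and DDL under the default TOI method are
-- replicated to every node and receive a Galera-assigned GTID:
INSERT INTO app.t1 (id) VALUES (1);
ALTER TABLE app.t1 ADD COLUMN note VARCHAR(64);

-- The assigned GTIDs can be inspected afterwards:
SELECT @@global.gtid_executed;

-- The RSU method is selected per session:
SET SESSION wsrep_OSU_method = 'RSU';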

However, DDL commands executed with the RSU method are …

[Read more]
Updates to Percona Kubernetes Operator for Percona XtraDB Cluster

On July 21, 2020, Percona delivered an updated version of our Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) focused on easing deployment and operations management of a clustered MySQL environment. Included in the Percona Distribution for MySQL, our Operator is based on the best practices for MySQL cluster configuration and setup in Kubernetes. This update adds a variety of important new features including:

Smart Update to Safely and Reliably Upgrade your PXC Environment Automatically
We implemented a new update strategy called Smart Update. Smart Update is aware of the context of your environment and minimizes the number of failover events that need to occur to fully upgrade a …
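As a rough sketch of how this surfaces in the operator's custom resource (field names follow the operator's cr.yaml; the endpoint and schedule below are illustrative placeholders, not values from this announcement):

spec:
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com/versions
    apply: recommended          # or a specific PXC version, or "disabled"
    schedule: "0 4 * * *"       # cron expression for automatic checks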

[Read more]
Scaling the Percona Kubernetes Operator for Percona XtraDB Cluster

You got yourself a Kubernetes cluster and are now testing our Percona Kubernetes Operator for Percona XtraDB Cluster. Everything is working great, and you decide that you want to increase the number of Percona XtraDB Cluster (PXC) pods from the default of 3 to, let's say, 5 pods.

It’s just a matter of running the following command:

kubectl patch pxc cluster1 --type='json' -p='[{"op": "replace", "path": "/spec/pxc/size", "value": 5 }]'

Good, you run the command without issues and now you will have 5 pxc pods! Right? Let’s check out how the pods are being replicated:

kubectl get pods | grep pxc
cluster1-pxc-0                                     1/1     Running   0          25m
cluster1-pxc-1                                     1/1     Running   0          23m
cluster1-pxc-2                                     1/1 …
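If you prefer to keep the change in version control rather than patching the live object, the same result can be achieved (a sketch; the file path assumes the operator's default deploy/cr.yaml layout) by editing the size field in the custom resource:

spec:
  pxc:
    size: 5    # was 3; the operator scales the PXC StatefulSet to match

and re-applying it with kubectl apply -f deploy/cr.yaml.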
[Read more]
ProxySQL Behavior in the Percona Kubernetes Operator for Percona XtraDB Cluster

The Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) comes with ProxySQL as part of the deal. And to be honest, the behavior of ProxySQL is pretty much the same as in a regular non-Kubernetes deployment. So why bother to write a blog about it? Because what happens around ProxySQL in the context of the operator is actually interesting.

ProxySQL is deployed in its own Pod (which can be scaled just like the PXC Pods). Each ProxySQL Pod has its own ProxySQL container and a sidecar container. If you are curious, you can find out which node holds the Pod by running:

kubectl describe pod cluster1-proxysql-0 | grep Node:
Node: ip-192-168-37-111.ec2.internal/192.168.37.111

Log in to that node and list the running containers. You will see something like this:

[root@ip-192-168-37-111 ~]# docker ps | grep -i proxysql …
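If you would rather not log in to the node at all, a quick alternative (a sketch, reusing the Pod name from above) is to ask Kubernetes directly for the container names inside the Pod:

kubectl get pod cluster1-proxysql-0 -o jsonpath='{.spec.containers[*].name}'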
[Read more]
Achieving Consistent Read and High Availability with Percona XtraDB Cluster 8.0

In real life, there are frequent cases where a running application depends on consistent write/read operations to work correctly. This is no issue when using a single data node as a provider, but it becomes more concerning and challenging when adding nodes for high availability and/or read scaling.
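One concrete handle PXC gives you on this problem (not quoted in this excerpt, so treat it as background rather than the post's own example) is the wsrep_sync_wait session variable, which forces the node to catch up with the cluster before serving selected statement types:

-- 1 enables the causality check for read statements; see the manual
-- for the full bitmask. The table below is purely hypothetical.
SET SESSION wsrep_sync_wait = 1;
SELECT balance FROM accounts WHERE id = 42;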

In the MySQL dimension, I have already described this in my blog post Dirty Reads in High Availability Solution.

We go from the most loosely-coupled database clusters, with primary-replica async replication, to the most tightly-coupled ones, with NDB Cluster (MySQL/Oracle).

Adding components like ProxySQL to the architecture can, on one hand, help improve high availability, while on the other it can amplify and randomize the negative effect of …

[Read more]
Join ProxySQL Tech Talks with Percona on June 4th, 2020!

Long months of the pandemic lockdown have brought to life many great online events enabling the MySQL community to get together and stay informed about the very recent developments and innovations available to MySQL users. It isn’t over yet! Next Thursday, June 4th, Percona & ProxySQL are co-hosting the ProxySQL Tech Talks with Percona virtual meetup covering ProxySQL, MySQL and Percona XtraDB Cluster.

The attendees are invited to participate in the two-hour deep-dive event with plenty of time for questions and answers (we will have two 40-minute sessions + 20 minutes allocated for Q&A). Get prepared, come with your burning questions and true war stories – we’ll have our speakers answer and comment on them! And here come the speakers:

  • René Cannaò, ProxySQL author and CEO of ProxySQL …
[Read more]
Percona XtraDB Cluster 8.0 Behavior Change for pxc-encrypt-cluster-traffic

Percona has enforced stronger security in Percona XtraDB Cluster (PXC) 8, but this requires some attention during the rollout of the new server version, so let's see the why and the what.

In PXC there are two different kinds of traffic: the client-server exchange (i.e., application traffic) and the replication traffic. The latter refers to SST/IST transfers, write-sets, and the other service messages the nodes exchange.

In PXC 5.7 it is possible to activate SSL encryption by enabling the pxc-encrypt-cluster-traffic variable and following the instructions in the documentation.

In PXC 8, we chose to enable encryption by default on all replication traffic, to provide the strongest out-of-the-box security enforcement.
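For reference, a minimal my.cnf sketch of the setting (certificate paths are placeholders; in 8.0 this is effectively the default, while in 5.7 you enable it explicitly alongside the SSL material):

[mysqld]
pxc-encrypt-cluster-traffic = ON
ssl-ca   = /etc/mysql/certs/ca.pem
ssl-cert = /etc/mysql/certs/server-cert.pem
ssl-key  = /etc/mysql/certs/server-key.pem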

While this is an obvious …

[Read more]
New Feature in Percona XtraDB Cluster 8.0 – Streaming Replication

Percona XtraDB Cluster 8.0 comes with an upgraded Galera 4.0 library, which provides a new feature – streaming replication. Let’s review what it is and when it might be helpful.

Previous versions of Percona XtraDB Cluster with Galera 3.x had a limitation in how big transactions were handled.

Let's review the performance under a sysbench-tpcc workload when, in parallel, we run a big update on a table that is not even related to the tables in the primary workload.

Without Streaming Replication

Let’s run two workloads.

  1. A sysbench-tpcc workload with 1-second reporting resolution
  2. In parallel, run UPDATE oltp.sbtest SET k=k+1 LIMIT 1000000

Running update:

mysql> update sbtest1 set k=k+1 limit 1000000;
Query OK, 1000000 rows affected (34.48 sec)
Rows matched: 1000000 …
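While the big update runs, one way to watch its effect on the cluster from another session (a monitoring aside, not part of the original test script) is to check the flow-control counters and the receive queue:

SHOW GLOBAL STATUS LIKE 'wsrep_flow_control%';
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';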
[Read more]
Galera 4 Streaming Replication in Percona XtraDB Cluster 8.0

I was testing the latest Percona XtraDB Cluster 8.0 (PXC) release which has the Galera 4 plugin, and I would like to share my experiences and thoughts on the Streaming Replication feature so far.

What Is Streaming Replication, in One Sentence?

In Galera 4, a large transaction can be split into smaller fragments, and even before it is committed, these fragments are replicated to the other nodes, where certification and the apply process have already started.
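To make that one sentence concrete, here is a minimal sketch of how streaming replication is enabled for a session in Galera 4 (the fragment size is an arbitrary example value):

-- Replicate the transaction in fragments of 10,000 rows each;
-- setting wsrep_trx_fragment_size back to 0 disables streaming.
SET SESSION wsrep_trx_fragment_unit = 'rows';
SET SESSION wsrep_trx_fragment_size = 10000;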

The manual describes all the pros and cons, but let's see how it works. I have created a table with 10M rows, and I am going to run some large updates on it.

First, I ran the updates without Streaming Replication, and because it is disabled by …

[Read more]
Testing Percona XtraDB Cluster 8.0 Using Vagrant

As Alkin and Ramesh have shown us in their Testing Percona XtraDB Cluster 8.0 with DBdeployer post, it is now possible to easily deploy an environment to test the features provided by the brand new release of Percona XtraDB Cluster 8.0.

We have also worked on creating a testing environment for those who use Vagrant instead. Whether it's what you are used to working with, or you simply want a proper VM for each instance, you can use the following commands to easily deploy a three-node cluster.

Requirements

Vagrant runs on Linux, macOS, and Windows; you just need to have the packages installed. Visit …

[Read more]