We are pleased to announce the general availability of Vitess 17! Major Themes in Vitess 17: In this release of Vitess, several significant enhancements have been introduced to improve the compatibility, performance, and usability of the system. GA Announcements: The VTTablet settings connection pool feature, introduced in v15, is now enabled by default in this release. This feature simplifies the management and configuration of system settings, providing users with a more streamlined and convenient experience.
We are pleased to announce the general availability of Vitess 16! Documentation improvements: In this release the maintainer team has decided to put an emphasis on reviewing, editing, and rewriting the website documentation to be current with the code. With help from CNCF, we have also improved the search experience. We welcome feedback on the current incarnation of the docs. GA announcements: We are marking VDiff v2 as Generally Available, or production-ready, in v16.
From MySQL 8.0.21 onwards, START GROUP_REPLICATION includes new options which allow a user to specify credentials to be used for distributed recovery. You can now pass credentials when invoking START GROUP_REPLICATION instead of setting them when configuring the group_replication_recovery channel. The START GROUP_REPLICATION command now has the options:
- USER: User name.
…
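As a quick illustration of the feature the excerpt describes, the credentials can be supplied inline when starting the plugin; the user name and password below are placeholders:
-- Pass distributed-recovery credentials at start time (MySQL 8.0.21+)
-- instead of storing them on the group_replication_recovery channel
START GROUP_REPLICATION USER='rpl_user', PASSWORD='recovery_password';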
Group Replication distributed recovery is one of the key features, and until now it was restricted to a single MySQL connection point determined automatically from the mysql system variables port and host. With group_replication_recovery_endpoints we can specify through which interfaces group replication recovery can take place for a given member, so that it controls where recovery traffic flows in the network infrastructure. …
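For illustration, the variable takes a comma-separated list of host:port pairs advertised for recovery; the addresses below are placeholders:
-- Restrict distributed recovery to dedicated interfaces on this member
SET GLOBAL group_replication_recovery_endpoints = '10.0.10.21:3306,10.0.20.21:3306';
-- DEFAULT restores the previous behaviour (use the member's standard host and port)
SET GLOBAL group_replication_recovery_endpoints = 'DEFAULT';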
The MySQL NDB Cluster team works on fundamental redesigns of core parts of NDB architecture. One of these changes is the partial checkpoint algorithm. You can now take full advantage of it when building much larger clusters: NDB 8.0 can use 16 TB data memory per data node for in-memory tables.…
There are three different ways ProxySQL can direct traffic between your application and the backend MySQL services.
- Locally, on the MySQL servers.
- Between the MySQL servers and the application.
- Colocated on the application servers themselves.
Without going into too much detail, each has its own limitations. In the first form, the application needs to know about all MySQL servers at any given point in time. With the third form, especially in the age of Kubernetes, where apps can recycle or be scaled up and down easily, a large number of application servers means backend connections can increase exponentially, leading to issues.
In the second form, load balancing between a pool of ProxySQL servers is normally the challenge. Do you load balance the load balancers? While there are approaches like balancing from the application, similar to how the MongoDB drivers work, the …
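Whichever of the three placements is chosen, the backend servers themselves are registered through ProxySQL's SQL-like admin interface; a minimal sketch, where the hostgroup number and hostname are illustrative:
-- Run against the ProxySQL admin interface (port 6032 by default)
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, 'mysql-primary.example.com', 3306);
LOAD MYSQL SERVERS TO RUNTIME;  -- apply without a restart
SAVE MYSQL SERVERS TO DISK;     -- persist across restarts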
Percona announces the release of Percona XtraDB Cluster Operator 0.3.0 early access.
The Percona XtraDB Cluster Operator simplifies the deployment and management of Percona XtraDB Cluster in a Kubernetes or OpenShift environment. It extends the Kubernetes API with a new custom resource for deploying, configuring and managing the application through the whole life cycle.
You can install the Percona XtraDB Cluster Operator on Kubernetes or OpenShift. While the operator does not support all the Percona XtraDB Cluster features in this early access release, instructions on how to install and configure it …
Distributed systems are hard; I just want to echo that. In MySQL, we have quite a number of options to run highly available systems. However, real fault-tolerant systems are difficult to achieve.
Take, for example, a common use case of multi-DC replication where Orchestrator is responsible for managing the topology, while ProxySQL takes care of the routing/proxying to the correct server, as illustrated below. A rare case you might encounter is that the primary MySQL node01 on DC1 might have a blip of a couple of seconds. Because Orchestrator uses an adaptive health check (it checks not only the node itself but also consults its replicas), it can react really fast and promote the node in DC2.
Why is this problematic?
The problem occurs when node01 resolves its temporary issue. A race condition could occur within ProxySQL that could mark it back as read-write. You can increase an …
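For context, ProxySQL assigns a backend to the writer or reader hostgroup based on the read_only flag its monitor observes on that server, which is how a recovered node01 can be picked up again. A hedged sketch of a MySQL-side safeguard, not taken from the excerpt, is to keep the demoted primary read-only after it recovers:
-- What the monitor effectively observes on each backend
SELECT @@global.read_only, @@global.super_read_only;
-- Illustrative safeguard on the demoted primary: stay read-only after the
-- blip so it cannot silently be treated as writable again
SET GLOBAL super_read_only = ON;  -- also forces read_only = ON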
If you are using Galera replication, you know that schema changes may be a serious problem. With its current implementation, there is no way even a simple ALTER can be unobtrusive to live production traffic. It is a fact that with the default TOI alter method, Percona XtraDB Cluster (PXC) suspends writes in order to execute the ALTER in the same order on all nodes.
For actual data structure changes, we have to adapt to the limitations, and either plan for a maintenance window or use pt-online-schema-change, where interruptions should be very short. I suggest you be extra careful here, as normally you …
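For reference, the TOI behaviour the excerpt describes is governed by the wsrep_OSU_method variable; the built-in alternative, RSU (Rolling Schema Upgrade), is not discussed in the excerpt and is shown here only as a hedged sketch with a hypothetical table:
-- Default method: Total Order Isolation; the ALTER executes at the same
-- point in the replication stream on every node, pausing writes meanwhile
SHOW VARIABLES LIKE 'wsrep_OSU_method';
-- Rolling Schema Upgrade: apply the change locally on one node at a time;
-- the change must be compatible with both old and new schema versions
SET SESSION wsrep_OSU_method = 'RSU';
ALTER TABLE example_table ADD COLUMN note VARCHAR(64);
SET SESSION wsrep_OSU_method = 'TOI';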
Join Percona CEO Peter Zaitsev as he presents High Availability and Disaster Recovery in Amazon RDS on Wednesday, March 6th, 2019, at 11:00 AM PST (UTC-8) / 2:00 PM EST (UTC-5).
In this hour-long webinar, Peter will describe the differences between high availability (HA) and disaster recovery (DR). Afterward, he will go through scenarios detailing how each is handled manually and in Amazon RDS.
He will review the pros and cons of managing HA and DR in the traditional …