NDB Cluster was originally developed for Network DataBases in the telecom network. I worked in an EU project between 1991 and 1995 that focused on a pre-standardisation effort on UMTS, which was later standardised under the term 3G. I worked in a part of the project where we focused on simulating the network traffic in such a 3G network, and I focused my attention especially on the requirements that this created on a network database in the telecom network.
In the same time period I also dived deeply into the research literature on DBMS implementation.
The following requirements from the 3G studies emerged as the most important:
1) Class 5 Availability (less than 5 minutes of unavailability per year)
2) High Write Scalability as well as High Read Scalability
3) Predictable latency down to milliseconds
4) Efficient API …
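As a quick sanity check on requirement 1, class 5 ("five nines") availability does indeed translate to roughly five minutes of downtime per year; a back-of-the-envelope sketch:

```python
# Downtime budget implied by an availability target.
def downtime_minutes_per_year(availability: float) -> float:
    minutes_per_year = 365.25 * 24 * 60
    return (1.0 - availability) * minutes_per_year

# Class 5 availability = 99.999% uptime.
print(round(downtime_minutes_per_year(0.99999), 2))  # about 5.26 minutes/year
```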
The requirements on Class 5 availability and immediate failover had two important consequences for NDB Cluster. The first is that we wanted a fail-fast architecture: as soon as we detect any kind of inconsistency in our internal data structures, we immediately fail and rely on the failover and recovery mechanisms to make the failure almost unnoticeable. The second is that we opted for a shared nothing model where all replicas are able to take over immediately.
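The fail-fast idea can be illustrated with a minimal sketch (purely illustrative, not NDB's actual code): internal invariants are checked aggressively after every mutation, and any violation terminates the node immediately so a replica takes over, rather than letting a corrupted structure keep serving traffic.

```python
import sys

def require(invariant: bool, msg: str) -> None:
    """Fail-fast guard: crash the node immediately on any internal
    inconsistency and rely on failover/recovery, never limp along."""
    if not invariant:
        print(f"internal inconsistency: {msg} -- shutting down node", file=sys.stderr)
        sys.exit(1)  # the recovery machinery restarts the node

class HashIndex:
    """Toy in-memory index used only to show where the checks sit."""
    def __init__(self):
        self.buckets = {}
        self.count = 0

    def insert(self, key, row):
        self.buckets[key] = row
        self.count += 1
        # Verify the structure after every mutation, not lazily.
        require(self.count == len(self.buckets),
                "row count diverged from bucket map")
```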
The shared disk model requires replay of the REDO log before failover is completed, and this can be made fast, but not immediate. In addition, as one quickly understands, the shared disk model relies on an underlying shared nothing storage service. The shared disk implementation can never be more available than the underlying shared nothing storage service.
Thus it is actually possible to …
A number of developments were especially important in influencing the development of NDB Cluster. I was working at Ericsson, so when I didn't work on DBMS research I was deeply involved in prototyping the next generation of telecom switches. I was the lead architect in a project that we called AXE VM. AXE was the cash cow of Ericsson in those days. It used an in-house developed CPU called APZ. In the early 1990s I was involved in some considerations on how to develop a new generation of the APZ. However, I felt that the decided architecture didn't make use of modern ideas on CPU development. This opened up the possibility of using a commercial CPU to build a virtual machine for APZ. At the end of the 1990s, the next APZ project opted for a development based on the ideas from AXE VM. By this time, however, I had turned my full attention to the development of NDB Cluster.
…
One of the key factors of a performant MySQL database server is good memory allocation and utilization, especially when running in a production environment. But how can you determine whether MySQL's memory utilization is optimized? Is high memory utilization reasonable, or does it require fine tuning? And what if I come up against a memory leak?
Let's cover these topics and show the things you can check in MySQL to determine traces of high memory utilization.
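One rough sanity check (a rule of thumb, not an exact figure, since not every connection allocates every per-session buffer) is to estimate the theoretical maximum memory: global buffers plus per-connection buffers multiplied by max_connections. A sketch with hypothetical values; the real ones come from SHOW VARIABLES on your server:

```python
def max_memory_bytes(global_buffers: dict, per_session_buffers: dict,
                     max_connections: int) -> int:
    """Rough upper bound: global buffers + per-session buffers * max_connections."""
    return (sum(global_buffers.values())
            + sum(per_session_buffers.values()) * max_connections)

MiB = 1024 * 1024
# Hypothetical example values -- substitute your server's settings.
global_buffers = {
    "innodb_buffer_pool_size": 2048 * MiB,
    "innodb_log_buffer_size": 16 * MiB,
    "key_buffer_size": 32 * MiB,
}
per_session = {
    "sort_buffer_size": 2 * MiB,
    "join_buffer_size": 2 * MiB,
    "read_buffer_size": 1 * MiB,
    "read_rnd_buffer_size": 1 * MiB,
}
print(max_memory_bytes(global_buffers, per_session, 200) // MiB, "MiB")
```

With these made-up numbers the bound is about 3.2 GiB; the point is that raising max_connections or the per-session buffers multiplies, not adds.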
Memory Allocation in MySQL
Before we delve into the specific subject, I'll first give a brief overview of how MySQL uses memory. Memory is a significant resource for speed and …
[Read more]

Overview
The Skinny
In this blog post we explore various options for tuning MySQL traffic routing in the Tungsten Connector for better control of request distribution.
A Tungsten Cluster relies upon the Tungsten Connector to route client requests to the master node or optionally to the slaves. The Connector makes decisions about where to route requests based on a number of factors.
This blog post will focus on the Load Balancer algorithms available via configuration that allow you to adjust the routing behavior of the Connector, along with ways to debug the Connector Load Balancer’s routing decisions.
The Question
Recently, a customer asked us:
How do I know which load balancer algorithm is in use by the Connector? And how do we enable debug logging for the Connector load balancer?
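To make the idea of a load balancer algorithm concrete, here is a generic round-robin read balancer sketched in Python; this illustrates the concept only and is not the Tungsten Connector's actual implementation.

```python
import itertools

class RoundRobinBalancer:
    """Distribute read requests evenly across the available slaves."""
    def __init__(self, slaves):
        self._cycle = itertools.cycle(slaves)

    def pick(self):
        # Each call returns the next slave in a fixed rotation.
        return next(self._cycle)

lb = RoundRobinBalancer(["slave1", "slave2", "slave3"])
print([lb.pick() for _ in range(5)])
# ['slave1', 'slave2', 'slave3', 'slave1', 'slave2']
```

Other common strategies weigh each candidate by measured load or latency instead of rotating blindly; which one fits depends on how uniform your queries are.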
…
[Read more]

In the last blog post of this series, we discussed in detail how Master Key encryption works. In this post, based on what we already know about Master Key encryption, we look into how Master Key rotation works.
The idea behind Master Key rotation is that we want to generate a new Master Key and use this new Master Key to re-encrypt the tablespace key (stored in the tablespace's header).
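The essential point is that rotation re-encrypts only the small tablespace key in the header; the data pages, encrypted with the unchanged tablespace key, are never touched. A toy sketch of that step (using XOR as a stand-in cipher purely for illustration; InnoDB really uses AES):

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for AES: XOR is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

tablespace_key = b"ts-key-secret-16"
old_master_key = b"master-key-id-03"
new_master_key = b"master-key-id-04"

# The header stores the tablespace key encrypted with the master key.
header = xor_crypt(tablespace_key, old_master_key)

# Rotation: decrypt with the old master key, re-encrypt with the new one.
header = xor_crypt(xor_crypt(header, old_master_key), new_master_key)

# The tablespace key itself never changed, so no data page is rewritten.
assert xor_crypt(header, new_master_key) == tablespace_key
```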
Let’s remind ourselves what a Master Key encryption header looks like (it is located in the tablespace's header):
From the previous blog post, we know that when a server starts it goes through all encrypted tablespaces’ encryption headers. While doing so, it remembers the highest KEY_ID it read from all the encrypted tablespaces. For instance, if we have three tables with KEY_ID = 3 and one table with KEY_ID = 4, it means that …
[Read more]

This talk covers some of the challenges we sought to address by creating a Kubernetes Operator for Percona XtraDB Cluster, as well as a look into the current state of the Operator, a brief demonstration of its capabilities, and a preview of the roadmap for the remainder of the year. Find out how you can deploy a 3-node PXC cluster in under five minutes and handle providing self-service databases on the cloud in a cloud-vendor agnostic way. You’ll have the opportunity to ask the Product Manager questions and provide feedback on what challenges you’d like us to solve in the Kubernetes landscape.
Please join Percona Product Manager Tyler Duzan on Wednesday, February 26, 2020, at 1 pm EST for his webinar “Building a Kubernetes Operator for Percona XtraDB Cluster”.
…
[Read more]

We are well aware that ProxySQL is one of the most powerful SQL-aware proxies for MySQL. The ProxySQL configuration is flexible, and most of the configuration can be done with the ProxySQL client itself.
The latest ProxySQL release (2.0.9) has a few impressive features like the SQL injection engine, firewall whitelist, and config file generation. In this blog I am going to explain how to generate the ProxySQL config file using the ProxySQL client.
Why a configuration file?
- Backup solution
- Helpful for Ansible deployments in multiple environments
There are two important commands involved in the ProxySQL config file generation.
- Print the config file text in the ProxySQL client itself (like query output)
- Export the configuration to a separate file
Print the config file text in the ProxySQL client (like …
[Read more]
How to Deploy MySQL InnoDB ReplicaSet in Production
Before I talk about the deployment process of MySQL InnoDB ReplicaSet, it is important to know the details below:
- What is MySQL InnoDB ReplicaSet?
- What are the prerequisites and limitations of using MySQL InnoDB ReplicaSet?
- In what kinds of scenarios is MySQL InnoDB ReplicaSet not recommended?
- How to configure and deploy MySQL InnoDB ReplicaSet (step-by-step guide)
- How to use InnoDB ReplicaSet?
- What if the primary goes down? Are SELECT queries re-routed to another server?
- What if a secondary goes down while executing SELECT queries?
I will answer all of these questions in this blog.
What is a ReplicaSet?
MySQL InnoDB ReplicaSet is a quick and easy way to get MySQL replication (Master-Slave), making it well suited …
[Read more]
This article is written to share how to set up SSL replication between a MySQL InnoDB Cluster and a single slave node via MySQL Router.
It is for demo purposes ONLY.
The video below shows replication continuing to work across a Primary Node failover in the MySQL InnoDB Cluster. The replication switches to the new Primary Node via the MySQL Router.
https://youtu.be/R0jOMfZlF8c
The general steps are as follows.
Setup (demo only):
Virtual Machine 1
1. A working MySQL InnoDB Cluster
Virtual Machine 2
2. A working MySQL node as Slave
3. A working MySQL Router set up on the Slave node to point to the MySQL InnoDB Cluster on VM1.
The key part is to ensure the "key files" are the same on each node of the MySQL InnoDB Cluster.
For the InnoDB Cluster setup on VM1:
For example with MySQL InnoDB …