Showing entries 11 to 20 of 33
Displaying posts with tag: MySQL NDB Cluster
MySQL NDB Cluster Replication Topologies (Part – II)

In the previous blog, we set up MySQL NDB Cluster replication between one source cluster and one replica cluster. In this blog, we will discuss replication between one source and three replica clusters.

Note: Starting with MySQL version 8.0.21, the term “master” is being changed to “source” and the term “slave” to “replica”. So in this blog we will use the terms ‘source’ and ‘replica’ wherever applicable.

The main advantage of this topology is that it provides ‘local’ reads in geographically distant regions, as well as increased redundancy in case of issues.

Let’s create four MySQL NDB Clusters with the following environment, of which one will act as the ‘source’ cluster while the rest will be ‘replica’ clusters.

  • MySQL NDB Cluster version (Latest GA version)
  • 1 Management node
  • 4 Data nodes …
[Read more]
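
To make the one-source/three-replica topology concrete, here is a minimal Connector/Python sketch of my own (not from the post): it points each replica cluster's mysqld at the source cluster's binlog mysqld. Hostnames, accounts and binlog coordinates are placeholders, and the CHANGE REPLICATION SOURCE TO / START REPLICA statements assume MySQL 8.0.23 or later.

import mysql.connector

SOURCE_HOST = "source-mysqld.example.com"      # mysqld on the source cluster with log-bin enabled
REPLICA_HOSTS = ["replica1-mysqld.example.com",
                 "replica2-mysqld.example.com",
                 "replica3-mysqld.example.com"]

for host in REPLICA_HOSTS:
    conn = mysql.connector.connect(host=host, user="admin", password="secret")
    cur = conn.cursor()
    # Every replica cluster is configured identically against the same source.
    cur.execute(
        f"CHANGE REPLICATION SOURCE TO SOURCE_HOST='{SOURCE_HOST}', SOURCE_PORT=3306, "
        "SOURCE_USER='repl', SOURCE_PASSWORD='replpass', "
        "SOURCE_LOG_FILE='binlog.000001', SOURCE_LOG_POS=4"  # placeholder binlog coordinates
    )
    cur.execute("START REPLICA")
    cur.close()
    conn.close()
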
MySQL NDB Cluster Replication Topologies (Part – I)

In this blog series, we will discuss the various MySQL NDB Cluster replication topologies that are possible, with demonstrations. We will start with a simple case, i.e. one source (formerly called master) and one replica (formerly called slave).

Note: Starting with MySQL version 8.0.21, the term “master” is being changed to “source” and the term “slave” to “replica”. So in this blog we will use the terms ‘source’ and ‘replica’ wherever applicable.

Let’s create two MySQL NDB Clusters with the following environment, of which one will act as the ‘source’ cluster while the other will be the ‘replica’ cluster. For now, let’s stick to an identical environment for both clusters. Later in this blog series, we will change the environments and run replication between them.

  • MySQL NDB Cluster version (Latest GA version)
  • 1 Management node
  • 4 Data nodes …
[Read more]
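
As a companion sketch of my own for the single source/replica case (placeholder hostnames and credentials): preparing the replication account on the source cluster's mysqld and reading the binlog coordinates that the replica will need.

import mysql.connector

conn = mysql.connector.connect(host="source-mysqld.example.com",
                               user="root", password="secret")
cur = conn.cursor()
# Create an account the replica can use to connect to the source.
cur.execute("CREATE USER IF NOT EXISTS 'repl'@'%' IDENTIFIED BY 'replpass'")
cur.execute("GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%'")
# Read the current binlog file and position to use when configuring the replica.
cur.execute("SHOW MASTER STATUS")
binlog_file, binlog_pos = cur.fetchone()[:2]
print("Configure the replica with:", binlog_file, binlog_pos)
cur.close()
conn.close()
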
Research on Thread Pipelines using RonDB

 

Introduction

In my previous two blogs on this topic, I first introduced the concept of automatic thread configuration and the thread model we use in RonDB. After receiving some questions on the topic, I dived a bit deeper into explaining the RonDB thread model and its thread pipeline, and compared it to another similar concept called batch pipelines.


Since then I read up a bit more on the research in this area with a focus on implementations in other key-value stores. Some researchers argue that a model where one handles the request immediately is superior to a model using a thread pipeline.

RonDB Software …

[Read more]
Sysbench evaluation of RonDB

 

Introduction

Sysbench is a tool for benchmarking open source databases. We have integrated Sysbench into the RonDB installation, which makes it extremely easy to run benchmarks with RonDB. This paper describes the use of these benchmarks in RonDB. These benchmarks were executed with 1 cluster connection per MySQL Server, which limited the scalability to about 12 VCPUs per MySQL Server. Since we executed those benchmarks, we have increased the number of cluster connections per MySQL Server to 4, providing scalability to at least 32 VCPUs per MySQL Server.
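
Since the number of cluster connections per MySQL Server is the scaling factor discussed above, a quick way to check what a given MySQL Server is running with is shown below. This is my own sketch, assuming RonDB exposes the same ndb_cluster_connection_pool variable as MySQL NDB Cluster; connection details are placeholders.

import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = conn.cursor()
# ndb_cluster_connection_pool is set at mysqld startup (e.g. --ndb-cluster-connection-pool=4).
cur.execute("SHOW GLOBAL VARIABLES LIKE 'ndb_cluster_connection_pool'")
print(cur.fetchone())
cur.close()
conn.close()
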


As preparation for running those benchmarks, we created a RonDB cluster using the Hopsworks framework that is currently used to create …

[Read more]
Accessing MySQL NDB Cluster Database From MySQL Connector/Python

In this post, we will see how to access a database and its objects in MySQL NDB Cluster from a Connector/Python program. I assume that the reader has a basic understanding of the Python language and MySQL NDB Cluster.
Let’s create a MySQL NDB Cluster with the following environment:

  • MySQL NDB Cluster version (Latest GA version)
  • 1 Management node
  • 4 Data nodes
  • 1 Mysqld server
  • Configuration slots for up to 4 additional API nodes
  • Connector/Python version (Latest GA version)

Note: Python must be installed on the same host where we plan to install MySQL Connector/Python.
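
Before diving into the architecture, here is a minimal connection sketch (my own illustration with a placeholder host, credentials and a hypothetical ‘test’ schema) showing Connector/Python talking to the cluster through its mysqld (SQL node):

import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", port=3306,
                               user="appuser", password="secret",
                               database="test")
cur = conn.cursor()
# NDB tables are created through the mysqld just like any other table;
# only the storage engine differs.
cur.execute("CREATE TABLE IF NOT EXISTS t1 (id INT PRIMARY KEY, msg VARCHAR(64)) "
            "ENGINE=NDBCLUSTER")
cur.execute("REPLACE INTO t1 VALUES (%s, %s)", (1, "hello from Connector/Python"))
conn.commit()
cur.execute("SELECT id, msg FROM t1")
for row in cur:
    print(row)
cur.close()
conn.close()
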

MySQL NDB Cluster Architecture:
Let’s look at the MySQL NDB Cluster architecture.

[Read more]
How to do online configuration changes in MySQL NDB Cluster (Part I)

In this blog, we will discuss how to perform cluster configuration changes while the cluster is up and processing transactions (i.e. online).

In MySQL NDB Cluster, configuration data is parsed and distributed by the management server (MGMD) nodes. Users supply an input text file (commonly known as config.ini) which describes the cluster topology, resource usage limits and other parameters. The MGMD nodes parse this file, combine it with built-in defaults and serve the resulting configuration to the other node types (data nodes, API nodes) when they connect.
Reasons for changing the configuration might include:

- Increasing resource usage limits (DataMemory, IndexMemory, buffers)
- Adding new configuration parameter(s), i.e. enabling a new feature
- Removing a configuration parameter that is unsupported after a downgrade to a lower version, i.e. disabling a feature
- etc.
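
Before making any of these changes, it can help to confirm the values currently served to the nodes. Below is a small sketch of my own, assuming a cluster version recent enough to expose the ndbinfo.config_values and ndbinfo.config_params tables; connection details are placeholders.

import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root",
                               password="secret", database="ndbinfo")
cur = conn.cursor()
# config_values holds the per-node values currently in effect;
# config_params maps parameter numbers to their names.
cur.execute("""
    SELECT v.node_id, p.param_name, v.config_value
    FROM config_values v
    JOIN config_params p ON p.param_number = v.config_param
    WHERE p.param_name IN ('DataMemory', 'IndexMemory')
    ORDER BY v.node_id
""")
for node_id, name, value in cur:
    print(node_id, name, value)
cur.close()
conn.close()
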

MySQL Cluster nodes pick …

[Read more]
MySQL NDB Cluster Installation Through Docker

In this post, we will see how to set up MySQL NDB Cluster from a Docker image. I assume that the reader has a basic understanding of Docker and its terminology.

Steps to install MySQL NDB Cluster:


Let's create a MySQL NDB Cluster with the following environment:

  • MySQL NDB Cluster version (Latest GA version)
  • 1 Management Node
  • 4 Data Nodes
  • 1 Mysqld Server
  • Configuration slots for up to 4 additional API nodes
Note: Docker must be installed and running on the same host where we are planning to install MySQL NDB Cluster. Also make sure that enough resources are allocated to Docker so that we don't run into any issues later on.

Step 1: Get the MySQL NDB Cluster docker image on your host


Users can get the MySQL NDB Cluster image from the GitHub site (link). Then …

[Read more]
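
As a rough sketch of Step 1 (my own, not from the post), the image can be pulled and a container network prepared from Python via the Docker CLI. The image name 'mysql/mysql-cluster' is the cluster image published on Docker Hub and the network name is a placeholder; substitute whatever image you build from the GitHub repository if you follow the post's approach.

import subprocess

# Pull the MySQL NDB Cluster image to the local host.
subprocess.run(["docker", "pull", "mysql/mysql-cluster:latest"], check=True)

# A dedicated network lets management, data and SQL node containers find
# each other by name in the later steps; ignore the error if it already exists.
subprocess.run(["docker", "network", "create", "cluster-net"], check=False)
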
Table Partitioning In MySQL NDB Cluster And What’s New (Part IV)

What's new in NDB Cluster 8.0 (8.0.23)

With the new configuration variables introduced in NDB Cluster version 8.0.23, users now have more control over table partitioning. Below are the new config variables that can influence the table partitioning scheme:

  • PartitionsPerNode
  • ClassicFragmentation


PartitionsPerNode:


In earlier cluster versions, the default number of table partitions is the number of LDM threads running on a node multiplied by the number of data nodes in the cluster. Users cannot set arbitrary values for MaxNoOfExecutionThreads (#LDM); the resulting number of LDM threads must be less than or equal to NoOfFragmentLogParts. With cluster version 8.0.23, users can have a large number of LDM threads assigned to a data node.

The rationale is:

- Having many LDMs allows a data node to make good use of modern hardware.
- Having one …

[Read more]
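
One way to observe the effect of these new variables is to check how many partitions a newly created NDB table actually received. Below is a small sketch of mine (placeholder schema/table names and connection details) that reads the count from information_schema.

import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = conn.cursor()
# Each row in information_schema.PARTITIONS corresponds to one partition
# of the table; compare the count before and after changing PartitionsPerNode
# or ClassicFragmentation and recreating the table.
cur.execute("""
    SELECT COUNT(*) FROM information_schema.PARTITIONS
    WHERE table_schema = 'test' AND table_name = 't1'
""")
print("partitions:", cur.fetchone()[0])
cur.close()
conn.close()
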
Table partitioning in MySQL NDB Cluster and what’s new (Part III)

What's new in NDB Cluster 7.5 (contd.)

In Cluster 7.5 the READ_BACKUP and FULLY_REPLICATED table features were added. These features are both designed to improve read performance and scalability, and can be set on a per-table basis. They are fully implemented inside MySQL Cluster, and tables using them support all of the normal MySQL Cluster features: secondary unique and ordered indexes, foreign keys, disk-resident columns, replication, etc. The read performance improvements require no special effort to take advantage of; MySQL Cluster automatically chooses the most efficient way to execute reads, whether they are issued over NdbApi or via SQL, executed in MySQLD or pushed down for parallel execution in the data nodes.

READ_BACKUP:

Prior to 7.5, all Committed Read operations were routed to a primary replica of the table or index fragment to be read. The …

[Read more]
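
For reference, both features discussed in this part can be enabled per table with the NDB_TABLE comment syntax. The sketch below is my own illustration, with placeholder table names and connection details.

import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root",
                               password="secret", database="test")
cur = conn.cursor()
# READ_BACKUP routes committed reads to any replica, not only the primary one.
cur.execute("""
    CREATE TABLE IF NOT EXISTS t_rb (id INT PRIMARY KEY, v VARCHAR(32))
    ENGINE=NDBCLUSTER COMMENT='NDB_TABLE=READ_BACKUP=1'
""")
# FULLY_REPLICATED keeps a complete copy of the table on every data node,
# so reads can always be served locally.
cur.execute("""
    CREATE TABLE IF NOT EXISTS t_fr (id INT PRIMARY KEY, v VARCHAR(32))
    ENGINE=NDBCLUSTER COMMENT='NDB_TABLE=FULLY_REPLICATED=1'
""")
cur.close()
conn.close()
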
Designing a Thread Pipeline for optimal database throughput with high IPC and low CPU cache misses

There are a couple of questions about the blog post on automatic thread configuration in RonDB. Rather than providing an extensive answer in the comment section, I thought it was better to answer the questions in a separate blog. In addition, I performed a set of microbenchmarks to verify my expectations.


The first question was the following:


ScyllaDB also does something similar with execution stages for better instruction cache. Is this similar to what you're implementing for instruction cache on separating the threads?


The second question is:


I don't really get the functional separation of threads. To execute a query, your data will now go through a pipeline of …

[Read more]