Displaying posts with tag: ha
pre-FOSDEM MySQL Day 2019

For the third year in a row, we will take advantage of the large number of MySQL Engineers present at FOSDEM to organize the pre-FOSDEM MySQL Day.

The program of this 3rd edition is already on track; thank you to all the speakers who have already confirmed their participation.

Start   End     Event / Speaker / Company / Topic
Friday 1st February
09:30   10:00   MySQL Community Team: Welcome
10:00
[Read more]
MySQL InnoDB Cluster with 2 Data Centers for Disaster Recovery: howto – part 2

In the first part of this howto, I illustrated how to set up two MySQL InnoDB Clusters linked by asynchronous replication.

In that solution, I didn’t use any replication filters to ignore the replication of the InnoDB Cluster’s metadata (mysql_innodb_cluster_metadata); instead, I stored the two different clusters in the same metadata tables.

The benefit is that this allows you to back up everything from any node in either data center, it also works in MySQL 5.7, and there is no risk of messing up the replication filters.
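To make that concrete, a quick look at the shared metadata schema would show both clusters registered side by side. This is a hedged illustration only; the exact column names depend on the metadata schema version in use:

    -- Hedged illustration: with no replication filter, the metadata schema is
    -- replicated too, so both clusters appear in the same tables on every node.
    -- Column names may differ between metadata schema versions.
    SELECT cluster_id, cluster_name
      FROM mysql_innodb_cluster_metadata.clusters;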

In this blog I will show how to use replication filters to link two different clusters. This doesn’t work on …
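As a rough, hedged sketch of the filtering idea (the exact statements and channel handling in the post may differ), the asynchronous link is simply told to skip the metadata schema so each cluster keeps its own copy:

    -- Hedged sketch only: ignore the InnoDB Cluster metadata schema on the
    -- asynchronous replication link between the two clusters.
    -- CHANGE REPLICATION FILTER requires the replication SQL thread to be stopped.
    STOP SLAVE;
    CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB = (mysql_innodb_cluster_metadata);
    START SLAVE;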

[Read more]
MySQL InnoDB Cluster with 2 Data Centers for Disaster Recovery: howto

As you know, MySQL InnoDB Cluster is a High Availability solution for MySQL. However, more and more people are trying to use it as a Disaster Recovery solution with 2 data centers. Natively, this is not yet supported, but it’s already possible to realize such a setup if we agree on the following points:

  • human interaction is required in case of Disaster Recovery, which, in my own experience, is often acceptable
  • human interaction is required if the Primary-Master acting as asynchronous slave leaves its group (crash, network problem, …) or becomes secondary

These are not big constraints and it’s relatively easy to deal with them.

The Architecture

The situation is as follows:

  • 2 data centers (one active, one inactive, only used for disaster recovery)
  • 2 MySQL InnoDB Clusters (one in each DC)
  • 3 members in each …
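To make the shape of the link concrete, here is a minimal, hedged sketch of pointing the DR cluster's primary at the active cluster over a plain asynchronous channel; the hostname and credentials are illustrative placeholders, not values from the post:

    -- Hedged sketch, to be run on the primary of the DR (inactive) cluster.
    -- 'dc1-primary.example.com' and the 'async_repl' account are placeholders.
    CHANGE MASTER TO
      MASTER_HOST = 'dc1-primary.example.com',
      MASTER_USER = 'async_repl',
      MASTER_PASSWORD = '********',
      MASTER_AUTO_POSITION = 1;
    START SLAVE;

If that asynchronous slave crashes, loses its group, or is demoted to secondary, the channel has to be re-pointed by hand, which is exactly the human interaction mentioned above.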
[Read more]
No-Downtime Cluster Software Upgrades

One important way to protect your data is to keep your Continuent Clustering software up-to-date.

A standard cluster deployment uses three nodes, which allows for no-downtime upgrades along with the ability to have a fully available cluster during maintenance.

Please note that with only two database cluster nodes, there is a window of vulnerability created by leaving zero failover candidates available when the lone slave is taken down for service.

The Best Practices: Staging
Performing a No-Downtime Upgrade for a Staging Deployment

When upgrading a Staging-style deployment, all nodes are upgraded at once in parallel via the tools/tpm update command run from inside the staging directory on the staging host.

No Master switch happens, and all layers are restarted to use the new code. …

[Read more]
Picking a Deployment Method: Staging versus INI

Continuent Clustering is an extraordinarily flexible tool, with options at every layer of operation.

In this blog post, we will describe and discuss the two different methods for installing, updating and upgrading Continuent Clustering software.

When first designing a deployment, the question of installation methodology is answered by inspecting the environment and reviewing the customer’s specific needs.

Staging Deployment Methodology

All for One and One for All

Staging deployments were the original method of installing Continuent Clustering, and relied upon command-line tools to configure and install all cluster nodes at once from a central location called the staging server.

This staging server (which could be one of the cluster nodes) requires SSH access to …

[Read more]
Cluster Performance Validation via Load Testing

Your database cluster contains your most business-critical data and therefore proper performance under load is critical to business health. If response time is slow, customers (and staff) get frustrated and the business suffers a slow-down.

If the database layer is unable to keep up with demand, all applications can and will suffer slow performance as a result.

To prevent this situation, use load tests to determine the throughput as objectively as possible.

In the sample load.pl script below, increase load by increasing the thread quantity.

You could also run this against a database that already contains data without polluting it, since new test databases, named after each node’s hostname for uniqueness, are created.
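As a hedged illustration of that per-node naming idea (this is not the actual load.pl logic, and all names below are made up), each node's test traffic might land in a schema derived from its hostname:

    -- Illustrative sketch only, not the real load.pl script: each node writes
    -- into its own schema, named after its hostname, so the generated rows never
    -- collide with existing data or with the other nodes' test databases.
    CREATE DATABASE IF NOT EXISTS loadtest_db1;   -- 'db1' stands in for this node's hostname
    CREATE TABLE IF NOT EXISTS loadtest_db1.load_rows (
      id         BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      payload    VARCHAR(255) NOT NULL,
      created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    );
    INSERT INTO loadtest_db1.load_rows (payload) VALUES (REPEAT('x', 200));   -- issued from many threads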

Note: The examples in this blog post assume that a Connector is …

[Read more]
Essential Cluster Monitoring Using Nagios and NRPE

In a previous post we went into detail about how to implement Tungsten-specific checks. In this post we will focus on the other standard Nagios checks that would help keep your cluster nodes healthy.

Your database cluster contains your most business-critical data. The slave nodes must be online, healthy and in sync with the master in order to be viable failover candidates.

This means keeping a close watch on the health of the database nodes from many perspectives, from ensuring sufficient disk space to testing that replication traffic is flowing.

A robust monitoring setup is essential for cluster health and viability – if your replicator goes offline and you do not know about it, then that slave becomes effectively useless because it has stale data.

Nagios Checks
The Power of Persistence

One …

[Read more]
Global Multimaster Cluster Monitoring Using Nagios and NRPE

Your database cluster contains your most business-critical data. The slave nodes must be online, healthy and in sync with the master in order to be viable failover candidates.

This means keeping a close watch on the health of the database nodes from many perspectives, from ensuring sufficient disk space to testing that replication traffic is flowing.

A robust monitoring setup is essential for cluster health and viability – if your replicator goes offline and you do not know about it, then that slave becomes effectively useless because it has stale data.

Big Brother is Watching You!
The Power of Nagios

Even while you sleep, your servers are busy, and you simply cannot keep watch all the time. Now, more than ever, with global deployments, it is literally impossible to watch everything all the time.

Enter Nagios, your best big brother ever. As a long-time player in the monitoring market, Nagios has both …

[Read more]
Worldwide Multimaster Cluster Administration Using Tungsten Dashboard

Continuent Clustering supports true distributed multimaster clustering. In this topology, there are cross-site replicator services for each remote site. In a 3-site configuration, there are a total of 9 replication streams to manage.

Continuent Clustering also offers a graphical administration tool called the Tungsten Dashboard to help with your management burden. The GUI makes the deployment much easier to visualize and administer.

For our example, we will have a Composite Multimaster dataservice called global with three active, writable member clusters (one per site): east, west and north.

Dashboard Summary View

In the summary, collapsed view, the composite service and all member clusters are listed with associated information and controls. Note that the Type for the composite dataservice global is CompMM

[Read more]
Multi-Cloud SaaS Applications: Speed + Availability = Success!

In this blog post, we talk about how to run applications across multiple clouds (e.g. AWS, Google Cloud, Microsoft Azure) using Continuent Clustering. You want your business-critical applications to withstand node, datacenter, availability-zone or regional failures. For SaaS apps, you also want to bring data close to your application users for faster response times and a better user experience. With cross-cloud capability, Continuent also helps avoid lock-in to any particular cloud provider.

The key to success for the database layer is to be available and respond rapidly.

From both a business and operational perspective, spreading the application across cloud environments from different vendors provides significant protection against vendor-specific outages and vendor lock-in. Running on multiple platforms provides greater …

[Read more]