Displaying posts with tag: multisite
10 Reasons Why Tungsten Clustering Beats the DIY Approach for Geo-Distributed MySQL Deployments

Why does the DIY approach fail to deliver compared to the Tungsten Clustering solution for geo-distributed MySQL multi-master deployments?

Before we dive into the 10 reasons, note why commercially-supported enterprise software is less risky and in fact less costly:

  • The labor time spent building and maintaining a DIY solution costs more than a supported solution that just works.
  • There is documentation, training, and support, so your mission-critical process never depends on a single irreplaceable individual.
  1. Tungsten Clustering is a complete solution, comprising the Replicator, Manager and Connector components
    • With DIY, you must first decide the architecture, then select the individual tools to handle each layer of the topology. …
[Read more]
Using Keep-Alives To Ensure Long-Running MySQL & MariaDB Sessions Stay Connected

Overview
The Skinny

In this blog post we will discuss how to use the Tungsten Connector keep-alive feature to ensure long-running MySQL & MariaDB/Percona Server client sessions stay connected in a Tungsten Cluster.

Agenda
What’s Here?

  • Briefly explore how the Tungsten Connector works
  • Describe the Connector keep-alives – what are they and why do we use them?
  • Discuss why the keep-alive feature is not available in Bridge mode
  • Examine how to tune the keep-alive feature in the Tungsten Connector

Tungsten Connector: A Primer
A Very Brief Summary

The Tungsten Connector is an intelligent MySQL database proxy located between the clients and the database servers, providing a single connection point, while routing queries to …
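
As background for why keep-alives matter at all: MySQL and MariaDB servers silently close client sessions that sit idle for longer than wait_timeout (or interactive_timeout). A quick, generic way to see those limits on a database host is shown below; the hostname and credentials are placeholders, not values taken from this post:

  # Show the server-side idle timeouts (in seconds) that an idle session must outlive.
  mysql -h db1.example.com -u app_user -p \
    -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('wait_timeout','interactive_timeout');"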

[Read more]
The Important Role of a Tungsten Rollback Error

The Question
Recently, a customer asked us:

What is the meaning of this error message found in trepsvc.log?

2019/05/14 01:48:04.973 | mysql02.prod.example.com | [east - binlog-to-q-0] INFO pipeline.SingleThreadStageTask Performing rollback of possible partial transaction: seqno=(unavailable)

Simple Overview
The Skinny

This message is an indication that we are dropping any uncommitted or incomplete data read from the MySQL binary logs due to a pending error.

The Answer
Safety First

This error is often seen before another error and is an indication that we are rolling back anything uncommitted, for safety. On a master this is normally very little and would likely be internal transactions in the trep_commit_seqno table, for example.

As you may know, the replicator always extracts complete transactions, and so this particular message is …
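
To see exactly where the replicator resumes from after such a rollback, you can compare the last committed sequence number recorded in the catalog with the replicator's own status. This is an illustrative check, not a prescribed procedure: the service name east is borrowed from the log line above, and the schema name and field names can vary by version.

  # Last transaction committed to the target; the catalog table lives in the tungsten_<service> schema.
  mysql -u tungsten -p -e "SELECT seqno, eventid, update_timestamp FROM tungsten_east.trep_commit_seqno\G"

  # The replicator's view of the same position.
  trepctl -service east status | grep -E 'appliedLastSeqno|appliedLastEventId|state'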

[Read more]
Understanding Cross-Site Replication in a Tungsten Composite Multi-Master Cluster for MySQL, MariaDB and Percona Server

Overview
The Skinny

In this blog post we will discuss how the managed cross-site replication streams work in a Composite Multi-Master Tungsten Cluster for MySQL, MariaDB and Percona Server.

Agenda
What’s Here?

  • Briefly explore how managed cross-site replication works in a Tungsten Composite Multi-Master Cluster
  • Describe the reasons why the default design was chosen
  • Explain the pros and cons of changing the configuration
  • Examine how to change the configuration of the managed cross-site replicators

Cross-Site Replication
A Very Brief Summary

In a standard Composite Multi-Master (CMM) deployment, the managed cross-site replicators pull Transaction History Logs (THL) from every remote cluster’s current master node. …
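
To see these managed streams on a given node, you can list every replication service running there; in a Composite Multi-Master cluster each node typically runs its local service plus one cross-site sub-service per remote cluster. The service names below are examples only, not names taken from this post, and the sub-service naming pattern may differ in your installation:

  # List every replication service on this node, local and cross-site.
  trepctl services

  # Inspect one cross-site sub-service (the name shown is illustrative).
  trepctl -service east_from_west status | grep -E 'state|masterConnectUri|appliedLatency'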

[Read more]
Performance Tuning Tungsten Replication to MySQL

The Question
Recently, a customer asked us:

Why would Tungsten Replicator be slow to apply to MySQL?

The Answer
Performance Tuning 101

When you run trepctl status on a slave and see:

appliedLatency : 7332.394

it is almost always because the target database cannot keep up with the applier.

This means that we often need to look first to the database layer for the solution.
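
For example, before touching the replicator itself it is worth confirming how the target MySQL server is configured for durability and I/O, since these settings commonly dominate apply speed. A generic first-pass check might look like the following; the right values are entirely environment-dependent:

  # Settings that frequently govern how fast a slave can apply writes.
  mysql -u root -p -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
    ('innodb_flush_log_at_trx_commit','sync_binlog','innodb_buffer_pool_size','innodb_io_capacity');"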

Here are some of the things to think about when dealing with this issue:

Architecture and Environment

  • Are you on bare metal?
  • Using the cloud?
  • Dev or Prod?
  • Network speed and latency?
  • Distance the data needs to travel?
  • Network round trip times? (A quick way to measure these is shown after this list.) Is the …
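
A quick way to put numbers on the network questions above is a simple round-trip measurement from the applier host to the target database host; the hostname is a placeholder:

  # Rough round-trip time from the replicator/applier host to the target MySQL host.
  ping -c 5 db-target.example.com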

[Read more]
Troubleshooting Data Differences in a MySQL Database Cluster

Overview
The Skinny

From time to time we are asked how to check whether there are data discrepancies between Master/Slave nodes within a MySQL (or MariaDB) cluster that is managed with Tungsten Clustering. This is always a challenging task, not least because we hope and believe that our replication mechanism avoids such occurrences. That said, there can be factors outside of our control that appear to “corrupt” data, such as the inadvertent execution of DML against a slave using a root-level user account.

Tungsten Replicator, the core replication component in our Tungsten Clustering solution for MySQL (& MariaDB), is just that, a replicator – it takes transactions from the binary logs and replicates them around. The replicator isn’t a data synchronisation tool in that respect, the …
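
The excerpt stops before naming a specific tool, but as a rough, generic first check you can compare per-table checksums on each node yourself (more thorough tools such as Percona Toolkit's pt-table-checksum exist, with their own caveats in a Tungsten-managed topology). The schema and table names below are placeholders, and results are only comparable when the table definitions and row formats match on every node:

  # Run the same statement on the master and on each slave, then compare the output by hand.
  mysql -u root -p -e "CHECKSUM TABLE mydb.orders, mydb.customers;"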

[Read more]
SSH Differences Between Staging and INI Configuration Methods

The Question
Recently, a customer asked us:

If we move to using the INI configuration method instead of staging, would password-less SSH still be required?

The Answer
The answer is both “Yes” and “No”

No, for installation and updates/upgrades specifically. Since INI-based configurations force the tpm command to act upon the local host only for installs and updates/upgrades, password-less SSH is not required.

Yes, because there are certain commands that do rely upon password-less SSH to function (see the connectivity check after this list). These are:

  • tungsten_provision_slave
  • prov-sl.sh
  • multi_trepctl
  • tpm diag (pre-6.0.5)
  • tpm diag --hosts (>= 6.0.5)
  • Any tpm-based backup and restore operations that involve a remote node
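
Before relying on any of the tools listed above, it is worth confirming that password-less SSH actually works from the host where you run them to every other cluster node. The hostnames and OS user below are placeholders:

  # Non-interactive SSH test: prints ok only if key-based login works without a password prompt.
  for host in db1 db2 db3; do
    ssh -o BatchMode=yes -o ConnectTimeout=5 tungsten@$host "echo $host: ok" || echo "$host: password-less SSH NOT working"
  done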

Summary
The Wrap-Up

In …

[Read more]
How to Integrate Tungsten Clustering Monitoring Tools with PagerDuty Alerts

Overview
The Skinny

In this blog post we will discuss how to best integrate various Continuent-bundled cluster monitoring solutions with PagerDuty (pagerduty.com), a popular alerting service.

Agenda
What’s Here?

  • Briefly explore the bundled cluster monitoring tools
  • Describe the procedure for establishing alerting via PagerDuty
  • Examine some of the multiple monitoring tools included with the Continuent Tungsten Clustering software, and provide examples of how to send an email to PagerDuty from each of the tools.

Exploring the Bundled Cluster Monitoring Tools
A Brief Summary

Continuent provides multiple methods out of the box to monitor the cluster health. The most popular is the suite of Nagios/NRPE scripts (i.e. cluster-home/bin/check_tungsten_*). We also have Zabbix scripts (i.e. cluster-home/bin/zabbix_tungsten_*). Additionally, there is …
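
As a minimal sketch of the email-based integration discussed here, a cron-driven wrapper can run one of the bundled checks and mail its output to your PagerDuty integration address when it reports a problem. The script path, script name, and PagerDuty address are examples only; check your own installation for the exact check_tungsten_* scripts and their options:

  #!/bin/bash
  # Illustrative wrapper: alert PagerDuty by email if the bundled check reports a non-OK state.
  CHECK=/opt/continuent/tungsten/cluster-home/bin/check_tungsten_online   # example path and script
  PD_EMAIL="your-service-key@your-subdomain.pagerduty.com"                # placeholder integration address

  OUTPUT=$($CHECK 2>&1)
  if [ $? -ne 0 ]; then
      echo "$OUTPUT" | mail -s "Tungsten cluster check failed on $(hostname)" "$PD_EMAIL"
  fi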

[Read more]
Tungsten Clustering versus AWS RDS/MySQL

Enterprises require high availability for their business-critical applications. Even the smallest unplanned outage, or even a planned maintenance operation, can cause lost sales, reduced productivity, and eroded customer confidence. Additionally, updating and retrieving data needs to be robust enough to keep up with user demand.

Let’s take a look at how Tungsten Clustering helps enterprises keep their data available and globally scalable, and compare it to Amazon’s RDS running MySQL (RDS/MySQL).

Replicas and Failover
What does RDS do?

Having multiple copies of a database is ideal for high availability. RDS/MySQL approaches this with “Multi-AZ” deployments. The term “Multi-AZ” here is a bit confusing, as enabling this simply means a single “failover replica” will be created in a different availability zone from the primary database instance. …

[Read more]
No-Downtime Cluster Software Upgrades

One important way to protect your data is to keep your Tungsten Clustering software up-to-date.

A standard cluster deployment uses three nodes, which allows for no-downtime upgrades while keeping the cluster fully available during maintenance.

Please note that with only two database cluster nodes, taking the lone slave down for service leaves zero failover candidates available and creates a window of vulnerability.

The Best Practices: Staging
Performing a No-Downtime Upgrade for a Staging Deployment

When upgrading a Staging-style deployment, all nodes are upgraded at once in parallel via the tools/tpm update command run from inside the staging directory on the staging host.
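
For example, the general shape of that step is shown below; the staging directory path is a placeholder, and you should follow the documented upgrade procedure for your specific version:

  # Run from inside the staging directory on the staging host (path is illustrative).
  cd /opt/continuent/software/tungsten-clustering-<version>
  ./tools/tpm update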

No Master switch happens, and all layers are restarted to use the new code. …

[Read more]