Displaying posts with tag: Clustering
We’re hiring: Database Clustering Technical Sales & Support Engineer

We are growing!

Consequently, Continuent needs additional talented staff.

We are currently looking for a person who would be a good fit for our “Database Clustering Technical Sales & Support Engineer” position, in the US Pacific timezone.

If you know someone who would be a good fit for this role and our company’s culture, please send them our way.

Continuent Clustering 5.3.3/5.3.4 and Tungsten Replicator 5.3.3/5.3.4 Released

Continuent is pleased to announce that Continuent Clustering 5.3.4 and Tungsten Replicator 5.3.4 are now available!

Release 5.3.4 followed shortly after 5.3.3 to fix a specific bug in our reporting tool, tpm diag; all of the other major changes are in the 5.3.3 release.

The 5.3.3/5.3.4 releases fix a number of bugs and improve stability in several parts of the products.

Highlights common to both products:

  • Fixed an issue with LOAD DATA INFILE
  • The Replicator now outputs the name of the underlying THL file when using thl to show events (see the example after this list)
  • tpm diag has been improved in how it extracts MySQL information, and it now supports Net::SSH options
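
For example, when listing a single event by sequence number, the event header now also names the THL file on disk that holds it. A minimal illustration (the sequence number is arbitrary, and the exact output layout varies by version):

shell> thl list -seqno 42

The header printed for the event will include the backing log file, e.g. thl.data.0000000003.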

Highlights in the clustering product:

  • Tungsten Manager stability has been improved by identifying and fixing several memory leaks.
  • A number of Tungsten Connector bugs relating to bridge mode have been fixed, …
[Read more]
Watch the webinar replay: How Bluefin ensures 24/7 operation and application availability with Continuent Clustering

Watch the replay of this webinar and learn how Bluefin Payment Systems provides 24/7/365 operation and application availability for their PayConex payment gateway and Decryptx decryption-as-a-service, essential to point-of-sale (POS) solutions in retail, mobile, call centers and kiosks.

We discuss why Bluefin uses Continuent Clustering, and how Bluefin runs two co-located data centers, one cluster in each, with multimaster replication between the clusters and full failover both within and between clusters, handling 350 million records each month.

Watch this webinar replay at https://youtu.be/crlgsflH7Gw

Multi-Cloud SaaS Applications: Speed + Availability = Success!

In this blog post, we talk about how to run applications across multiple clouds (e.g. AWS, Google Cloud, Microsoft Azure) using Continuent Clustering. You want your business-critical applications to withstand node, datacenter, availability-zone or regional failures. For SaaS apps, you also want to bring data close to your application users for faster response times and a better user experience. With cross-cloud capability, Continuent also helps avoid lock-in to any particular cloud provider.

The key to success for the database layer is to be available and respond rapidly.

From both a business and operational perspective, spreading the application across cloud environments from different vendors provides significant protection against vendor-specific outages and vendor lock-in. Running on multiple platforms provides greater …

[Read more]
Global Read-Scaling using Continuent Clustering

Did you know that Continuent Clustering supports having clusters at multiple sites world-wide with either active-active or active-passive replication meshing them together?

Not only that, but we support a flexible hybrid model that allows for a blended architecture using any combination of node types. So mix-and-match your highly available database layer on bare metal, Amazon Web Services (AWS), Azure, Google Cloud, VMware, etc.

In this article we will discuss using the Active/Passive model to scale reads worldwide.

The model is simple: select one site as the Primary where all writes will happen. The rest of the sites will pull events as quickly as possible over the WAN and make the data available to all local clients. This means your application gets the best of both worlds:

  • Simple deployment with no application changes needed. All writes …
[Read more]
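
A simple way to observe this model in action is to check the replicator on one of the remote, read-only sites: the appliedLatency field in the trepctl status output shows how far behind the Primary that site is. The values below are illustrative only:

shell> trepctl status | grep -E 'appliedLatency|state'
appliedLatency         : 0.48
state                  : ONLINE

A consistently low appliedLatency means local clients at that site read data that is only fractions of a second behind the writes applied at the Primary site.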
Mastering Continuent Clustering Series: Converting a standalone cluster to a Composite Primary/DR topology using INI configuration

In this blog post, we demonstrate how to convert a single standalone cluster into a Composite Primary/DR topology running in two data centers.

Our example starting cluster has 5 nodes (1 master and 4 slaves) and uses service name alpha. Our target cluster will have 6 nodes (3 per cluster) in 2 member clusters alpha_east and alpha_west in composite service alpha.

This means that we will reuse the existing service name alpha as the name of the new composite service, and create two new service names, one for each cluster (alpha_east and alpha_west).

Below is an INI file extract example for our starting standalone cluster with 5 nodes:

[defaults]
...

[alpha]
connectors=db1,db2,db3,db4,db5
master=db1
members=db1,db2,db3,db4,db5
topology=clustered

To convert the above configuration to a Composite Primary/DR:

  1. First you must stop all services on all existing nodes:
    shell> stopall …
[Read more]
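
For reference, here is a sketch of what the final INI extract for the composite service might look like, following the naming scheme above. It assumes db6 as the additional sixth host, and the exact option names should be verified against the Continuent documentation for your version:

[defaults]
...

[alpha_east]
topology=clustered
master=db1
members=db1,db2,db3
connectors=db1,db2,db3

[alpha_west]
topology=clustered
relay=db4
members=db4,db5,db6
connectors=db4,db5,db6
relay-source=alpha_east

[alpha]
composite-datasources=alpha_east,alpha_west

In this sketch, alpha_east is the Primary cluster and alpha_west is the DR cluster relaying from it.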
Mastering Continuent Clustering Series: Tungsten and SELinux, a Case Study

In this blog post, we talk about what happened during an installation of the Tungsten Cluster into an environment where SELinux was enabled but misconfigured.

An attempt to execute `tpm install` on v5.3.2 recently failed with the below error:

ERROR >> node3_production_customer_com >> Unable to run 'sudo systemctl status mysqld.service' or the database server is not running (DatasourceBootScriptCheck) 
Update the /etc/sudoers file or disable sudo by adding --enable-sudo-access=false 

Worse, this customer reported that the same check produced only a WARNING in their Dev and Staging tests. So we checked, and systemctl itself seemed to be accessible:

shell> sudo systemctl status mysqld.service
● mysqld.service - MySQL Percona Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
Active: activating (start-post) since Tue 2018-06-19 17:46:19 BST; 1min 15s ago …
[Read more]
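
As an aside, when a check like this fails on some hosts but only warns on others, comparing the SELinux state across all nodes is a sensible first step. These are standard RHEL/CentOS commands, not Tungsten-specific tools:

shell> getenforce
Enforcing
shell> sudo setenforce 0    # switch to permissive mode temporarily, for testing only
shell> getenforce
Permissive

If the installation then succeeds in permissive mode, the underlying problem is an SELinux policy issue rather than a Tungsten configuration one.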
Mastering Continuent Clustering Series: Configuring Startup on Boot

In this blog post, we talk about how to configure automatic start at boot time for the Tungsten Clustering components.

By default, Tungsten Clustering does not start automatically on boot. To enable Tungsten Clustering to start at boot time, use the deployall script provided to create the necessary boot scripts:

shell> sudo /opt/continuent/tungsten/cluster-home/bin/deployall

To disable automatic startup at boot time, use the undeployall command:

shell> sudo /opt/continuent/tungsten/cluster-home/bin/undeployall

Multisite/Multimaster deployments specifically run separate cross-site replicators. In this case, a custom startup script must be created; otherwise the cross-site replicator will be unable to start, because it is installed in a different directory.

  1. Create a link from the Tungsten Replicator service startup script in the operating system …
[Read more]
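
As a sketch of that first step, assuming the cross-site replicator was installed under /opt/replicator (both that path and the service name mmreplicator below are illustrative, so adjust them to match your installation):

shell> sudo ln -s /opt/replicator/tungsten/tungsten-replicator/bin/replicator /etc/init.d/mmreplicator
shell> sudo chkconfig --add mmreplicator    # on systemd-only hosts, create a unit file instead

This gives the cross-site replicator its own startup entry, separate from the boot scripts that deployall creates for the main cluster components.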
How Bluefin ensures 24/7/365 operation and application availability for PayConex and Decryptx with Continuent Clustering

Join MC Brown, VP Products at Continuent, on August 8th for our new webinar on high availability and disaster recovery for MySQL, MariaDB and Percona Server with Continuent Clustering. Learn how Bluefin Payment Systems provides 24/7/365 operation and application availability for their PayConex payment gateway and Decryptx decryption-as-a-service, essential for Point-Of-Sale solutions in retail, mobile, call centers and kiosks.

We’ll discuss why Bluefin uses Continuent Clustering, and how Bluefin runs two co-located data centers, one cluster in each, with multimaster replication between the clusters and full failover both within and between clusters, handling 350 million records each month.

Tune in …

[Read more]
Mastering Continuent Clustering Series: Automatic Reconnect in the Tungsten Connector

In this blog post, we talk about the Automatic Reconnect feature in the Tungsten Connector.

Automatic reconnect enables the Connector to re-establish a connection in the event of a transient failure. Under specific circumstances, the Connector will also retry the query.

Connector automatic reconnect is enabled by default in Proxy and SmartScale modes.

To disable automatic reconnect, use the following tpm option on the command line (remove the leading hyphens inside INI files):

--connector-autoreconnect=false
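
For example, in an INI-based installation the equivalent setting lives in the service section, without the leading hyphens (the service name alpha here is illustrative):

[alpha]
...
connector-autoreconnect=false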

This feature is not available while running in Bridge mode. To disable Bridge mode, use the tpm option --connector-bridge-mode=false.

Automatic reconnect enables retries of statements under the following circumstances:

  • the Connector is not running in Bridge mode
  • the session is not inside a transaction
  • no temporary table has been created in the session
[Read more]