Read about our journey, the host of benefits for our customers, our exceptional team, and future roadmap in this Amazon Special Edition.
In this blog post, we talk about how to run applications across multiple clouds (e.g., AWS, Google Cloud, Microsoft Azure) using Continuent Clustering. You want your business-critical applications to withstand node, datacenter, availability-zone or regional failures. For SaaS apps, you also want to bring data close to your application users for faster response times and a better user experience. With cross-cloud capability, Continuent also helps avoid lock-in to any particular cloud provider.
The key to success for the database layer is to be available and respond rapidly.
From both a business and operational perspective, spreading the application across cloud environments from different vendors provides significant protection against vendor-specific outages and vendor lock-in. Running on multiple platforms provides greater …[Read more]
Did you know that Continuent Clustering supports having clusters at multiple sites world-wide with either active-active or active-passive replication meshing them together?
Not only that, but we support a flexible hybrid model that allows for a blended architecture using any combination of node types. So mix-and-match your highly available database layer on bare metal, Amazon Web Services (AWS), Azure, Google Cloud, VMware, etc.
In this article we will discuss using the Active/Passive model to scale reads worldwide.
The model is simple: select one site as the Primary where all writes will happen. The rest of the sites will pull events as quickly as possible over the WAN and make the data available to all local clients. This means your application gets the best of both worlds:
- Simple deployment with no application changes needed. All writes …
In this blog post, we demonstrate how to convert a single standalone cluster into a Composite Primary/DR topology running in two data centers.
Our example starting cluster has 5 nodes (1 master and 4 slaves) and uses service name alpha. Our target cluster will have 6 nodes (3 per cluster) in 2 member clusters alpha_east and alpha_west in composite service alpha.
This means that we will reuse the existing service name alpha as the name of the new composite service, and create two new service names, one for each cluster (alpha_east and alpha_west).
Below is an INI file extract example for our starting standalone cluster with 5 nodes:
[defaults]
...

[alpha]
connectors=db1,db2,db3,db4,db5
master=db1
members=db1,db2,db3,db4,db5
topology=clustered
To convert the above configuration to a Composite Primary/DR:
- First you must stop all services on all existing nodes:
shell> stopall …
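To make the target layout concrete, below is a sketch of what the converted INI file could look like. The node names, connector lists and the `composite-datasources` option are illustrative assumptions based on the service names described above; consult the documentation for your Tungsten version before using them.

```ini
[defaults]
...

; East cluster (assumed nodes db1-db3)
[alpha_east]
connectors=db1,db2,db3
master=db1
members=db1,db2,db3
topology=clustered

; West cluster (assumed nodes db4-db6)
[alpha_west]
connectors=db4,db5,db6
master=db4
members=db4,db5,db6
topology=clustered

; Composite service reusing the original name alpha
[alpha]
composite-datasources=alpha_east,alpha_west
```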
In this blog post, we talk about what happened during an installation of the Tungsten Cluster into an environment with SELinux running and mis-configured.
An attempt to execute `tpm install` on v5.3.2 recently failed with the below error:
ERROR >> node3_production_customer_com >> Unable to run 'sudo systemctl status mysqld.service' or the database server is not running (DatasourceBootScriptCheck) Update the /etc/sudoers file or disable sudo by adding --enable-sudo-access=false
Worse, this customer reported that this appeared as a WARNING only in the Dev and Staging tests. So we checked, and we were indeed able to run the command:
shell> sudo systemctl status mysqld.service
● mysqld.service - MySQL Percona Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: activating (start-post) since Tue 2018-06-19 17:46:19 BST; 1min 15s ago
…[Read more]
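Since the root cause in this story was a mis-configured SELinux, a quick first check on each node is the current SELinux mode. This is a minimal sketch assuming a RHEL-family host; hosts without SELinux tooling are treated as effectively disabled.

```shell
# Minimal sketch: report the current SELinux mode (assumes a RHEL-family host).
# 'getenforce' prints Enforcing, Permissive, or Disabled when SELinux tooling exists.
if command -v getenforce >/dev/null 2>&1; then
  mode=$(getenforce)
else
  mode="Disabled (SELinux tooling not installed)"
fi
echo "SELinux mode: ${mode}"
```

Running this on Dev, Staging and Production side by side makes an environment drift like the one above visible immediately.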
In this blog post, we talk about how to configure automatic start at boot time for the Tungsten Clustering components.
By default, Tungsten Clustering does not start automatically at boot. To enable Tungsten Clustering to start at boot time, use the provided deployall script to create the necessary boot scripts:
shell> sudo /opt/continuent/tungsten/cluster-home/bin/deployall
To disable automatic startup at boot time, use the undeployall script:
shell> sudo /opt/continuent/tungsten/cluster-home/bin/undeployall
For Multisite/Multimaster deployments specifically, there are separate cross-site replicators running. In this case, a custom startup script must be created; otherwise the replicator will be unable to start, because it is configured in a different directory.
- Create a link from the Tungsten Replicator service startup script in the operating system …
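The linking step could be sketched as below. The install directory, init-script path and service name here are all assumptions for illustration (the cross-site replicator is often installed in its own directory); the commands are wrapped in a dry-run helper that only prints what would be executed, so adjust the paths and swap in real execution for your environment.

```shell
# Dry-run sketch: link the cross-site replicator's startup script into the OS
# init system. REPLICATOR_HOME and the service name 'treplicator' are assumed.
REPLICATOR_HOME=/opt/replicator/tungsten    # assumed install directory
run() { echo "+ $*"; }                      # swap 'echo' for real execution when ready
run sudo ln -s "$REPLICATOR_HOME/cluster-home/bin/replicator" /etc/init.d/treplicator
run sudo chkconfig --add treplicator
```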
In this blog post, we talk about the Automatic Reconnect feature in the Tungsten Connector.
Automatic reconnect enables the Connector to re-establish a connection in the event of a transient failure. Under specific circumstances, the Connector will also retry the query.
Connector automatic reconnect is enabled by default in Proxy and SmartScale modes.
To disable automatic reconnect, use the appropriate tpm command option on the command line (remove the leading hyphens when placing the option inside INI files).
This feature is not available while running in Bridge mode. To use automatic reconnect, first disable Bridge mode via the corresponding tpm command option.
Automatic reconnect enables retries of statements under the following circumstances:
- not in bridge mode
- not inside a transaction
- no temp table has been created …
In this blog post, we talk about how existing client connections are handled by the Tungsten Connector when a manual master role switch is invoked and how to adjust that behavior.
When a graceful switch is invoked via cctrl or the Tungsten Dashboard, by default the Connector will wait for five (5) seconds to allow in-flight activities to complete before forcibly disconnecting all active connections from the application side, no matter what type of query was in use.
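For context, a graceful switch from within cctrl might look like the session below; the service name alpha and target node db2 are hypothetical examples, not taken from a real deployment.

```
shell> cctrl
cctrl> use alpha
cctrl> switch to db2
```

Once the switch completes, the five-second grace period described above will already have been applied to any in-flight connections.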
If connections still exist after the timeout interval, they are forcibly closed, and the application will get back an error.
This configuration setting ONLY applies to a manual switch. During a failover caused by loss of MySQL availability, there is no wait and all connections are force-closed immediately.
This timeout is adjusted via the tpm option …[Read more]
In this blog post, we talk about the basic function and features of the Tungsten Connector.
The Tungsten Connector is an intelligent MySQL proxy that provides key high-availability and read-scaling features. This includes the ability to route MySQL queries by inspecting them in-flight.
The most important function of the Connector is failover handling. When the cluster detects a failed master because the MySQL server port is no longer reachable, the Connectors are signaled and traffic is re-routed to the newly-elected Master node.
Next is the ability to route MySQL queries based on various factors. In the default Bridge mode, traffic is routed at the TCP layer, and read-only queries must be directed to a different port (normally 3306 for writes and 3307 for reads).
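In Bridge mode, the read/write split is therefore entirely port-based, which an application exercises simply by choosing the port. The hostname below is a placeholder for your Connector host, and the queries are illustrative:

```
shell> mysql -h connector-host -P 3306 -e "UPDATE ..."        # writes go to the master port
shell> mysql -h connector-host -P 3307 -e "SELECT @@hostname" # reads may land on a slave
```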
There are additional modes, Proxy/Direct and …[Read more]
In this blog post, we talk about how query connections are handled by the Tungsten Connector, especially read-only connections.
There are multiple ways to configure session handling in the Connector. The three main modes are Bridge, Proxy/Direct and Proxy/SmartScale.
In Bridge mode, the data source to connect to is chosen ONCE for the lifetime of the connection, which means that the selection of a different node will only happen if a NEW connection is opened through the Connector.
So if your application reuses its connections (for example, when using connection pooling), all traffic sent through a given session will continue to land on the originally selected read slave.
The key difference is in how the slave latency checking is handled: …[Read more]