Displaying posts with tag: active-active
7 Galera Cluster presentations at Percona Live Santa Clara, April 18-21. Meet us there!

Once again Galera Cluster is a popular topic at Percona Live: there are 7 Galera Cluster presentations at Percona Live 2016 in Santa Clara, April 18-21. That count includes several presentations on Percona XtraDB Cluster, which is based on Codership's Galera Cluster. Codership's co-founder and Galera developer Seppo Jaakola talks about Schema Upgrades in Galera Cluster, 20 April 01:00 PM – 01:50 PM @ Ballroom A.

Codership will also have a booth at the conference. Meet the developers and experts of Galera Cluster and visit our booth 507!

Tuesday 19 April

Orchestrating Percona XtraDB Cluster with Kubernetes (Raghavendra Prabhu)  03:50 PM – 04:40 PM @ Ballroom D

Percona XtraDB Cluster Reference Architecture 2016 (Jay …

[Read more]
Codership Galera Cluster Webinar USA – Migrating from Master-Slave MySQL Replication to Multi-Master Galera Cluster – April 12th

AGENDA:

In this webinar, we will discuss the practical aspects of migrating a database setup based on traditional asynchronous replication to multi-master Galera Cluster. We will discuss the benefits Galera provides and how traditional replication settings, architecture and practices can be converted to Galera Cluster. We will show the steps that are needed to perform the migration with limited or no downtime, and will demonstrate the procedure in practice using a live database with an actual workload.
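
As a rough illustration of where such a conversion typically ends up, each node's my.cnf gains a small set of wsrep settings. The fragment below is only a minimal sketch; the cluster name, node addresses and SST method are placeholder assumptions, not webinar material:

    [mysqld]
    # Galera requires row-based binlogging and InnoDB tables
    binlog_format=ROW
    default_storage_engine=InnoDB
    innodb_autoinc_lock_mode=2

    # Galera provider and cluster membership (addresses are hypothetical)
    wsrep_on=ON
    wsrep_provider=/usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name=example_cluster
    wsrep_cluster_address=gcomm://10.0.0.1,10.0.0.2,10.0.0.3
    wsrep_sst_method=rsync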

Galera Cluster is trusted by thousands of users. Galera Cluster powers Percona XtraDB Cluster and MariaDB Enterprise Cluster. This is a webinar presented by Codership, the developers and experts of Galera Cluster.

DATE AND TIME:  Tue, Apr 12, 2016 10:00 AM – 11:00 AM PDT (Pacific time)

PRESENTERS: Philip Stoev and Sakari Keskitalo, Codership

[Read more]
Codership Galera Cluster Webinar EMEA – Migrating from Master-Slave MySQL Replication to Multi-Master Galera Cluster – April 12th

AGENDA:

In this webinar, we will discuss the practical aspects of migrating a database setup based on traditional asynchronous replication to multi-master Galera Cluster. We will discuss the benefits Galera provides and how traditional replication settings, architecture and practices can be converted to Galera Cluster. We will show the steps that are needed to perform the migration with limited or no downtime, and will demonstrate the procedure in practice using a live database with an actual workload.

Galera Cluster is trusted by thousands of users. Galera Cluster powers Percona XtraDB Cluster and MariaDB Enterprise Cluster. This is a webinar presented by Codership, the developers and experts of Galera Cluster.

DATE AND TIME:  Tue, Apr 12, 2016 11:00 AM – 12:00 PM EEST (Eastern European Time)

PRESENTERS: Philip Stoev and Sakari Keskitalo, Codership

[Read more]
10 reasons active-active is hard and how to solve it

Read the original article at 10 reasons active-active is hard and how to solve it

Multi-master replication provides redundant copies of your most important business assets. What's more, it allows applications to scale out, which is perfect for cloud hosting solutions like Amazon Web Services. But when you decide you need to scale your write capacity, you may be considering an active-active setup. This is dangerous, messy and prone to failure. [...]

For more articles like these, go to Sean Hull's Scalable Startups

Related posts:

  1. Why does MySQL replication fail?
[Read more]
The CAP theorem and MySQL Cluster

tl;dr: A single MySQL Cluster prioritises Consistency during network partition events. Asynchronously replicating MySQL Clusters prioritise Availability during network partition events.

I was recently asked about the relationship between MySQL Cluster and the CAP theorem. The CAP theorem is often described as a 'pick two out of three' problem, such as choosing from good, cheap, fast: you can have any two, but you can't have all three. For CAP the three qualities are 'Consistency', 'Availability' and 'Partition tolerance'. CAP states that in a system with data replicated over a network, only two of these three qualities can be maintained at once, so which two does MySQL Cluster provide?

Standard 'my interpretation of CAP' section

Everyone who discusses CAP likes to rehash it, and I'm no exception. …

[Read more]
Eventual Consistency in MySQL Cluster - implementation part 3

As promised, this is the final post in a series looking at eventual consistency with MySQL Cluster asynchronous replication. This time I'll describe the transaction dependency tracking used with NDB$EPOCH_TRANS and review some of the implementation properties.

Transaction based conflict handling with NDB$EPOCH_TRANS

NDB$EPOCH_TRANS is almost exactly the same as NDB$EPOCH, except that when a conflict is detected on a row, the whole user transaction which made the conflicting row change is marked as conflicting, along with any dependent transactions. All of these rejected row operations are then handled using inserts to an exceptions table and realignment …
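
For readers who want to see what this looks like in practice: conflict functions are selected per table via the mysql.ndb_replication table, and rejected operations land in a companion exceptions table. The following is only a sketch, assuming a hypothetical table test.t1 with an integer primary key pk, and following the ndb_replication and exceptions-table layouts described in the MySQL Cluster documentation:

    -- Choose NDB$EPOCH_TRANS() for test.t1 (server_id 0 = all servers;
    -- binlog_type 7 logs full row images, with updates logged as updates)
    INSERT INTO mysql.ndb_replication
        (db, table_name, server_id, binlog_type, conflict_fn)
    VALUES
        ('test', 't1', 0, 7, 'NDB$EPOCH_TRANS()');

    -- Companion exceptions table receiving rejected row operations
    CREATE TABLE test.t1$EX (
        server_id        INT UNSIGNED NOT NULL,
        master_server_id INT UNSIGNED NOT NULL,
        master_epoch     BIGINT UNSIGNED NOT NULL,
        count            INT UNSIGNED NOT NULL,
        pk               INT NOT NULL,  -- t1's primary key column(s)
        PRIMARY KEY (server_id, master_server_id, master_epoch, count)
    ) ENGINE=NDB;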

[Read more]
Eventual consistency in MySQL Cluster - implementation part 2

In previous posts I described how row conflicts are detected using epochs. In this post I describe how they are handled.

Row based conflict handling with NDB$EPOCH

Once a row conflict is detected, as well as rejecting the row change, row based conflict handling in the Slave will:

  • Increment conflict counters
  • Optionally insert a row into an exceptions table

For NDB$EPOCH, conflict detection and handling operates on one Cluster in an Active-Active pair designated as the Primary. When a Slave MySQLD attached to the Primary Cluster detects a conflict between data stored in the Primary and a replicated event …
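
Those counters can be watched from any SQL client attached to the Primary's slave MySQLD; a minimal check (status variable names as documented for MySQL Cluster, e.g. Ndb_conflict_fn_epoch counting rows rejected by NDB$EPOCH) is simply:

    SHOW GLOBAL STATUS LIKE 'Ndb_conflict%';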

[Read more]
Eventual consistency in MySQL Cluster - implementation part 1

The last post described MySQL Cluster epochs and why they provide a good basis for conflict detection, with a few enhancements required. This post describes the enhancements.

The following four mechanisms are required to implement conflict detection via epochs:

  1. Slaves should 'reflect' information about replicated epochs they have applied
    Applied epoch numbers should be included in the Slave Binlog events returning to the originating cluster, in a Binlog position corresponding to the commit time of the replicated epoch …
[Read more]
Eventual Consistency in MySQL Cluster - using epochs

Before getting to the details of how eventual consistency is implemented, we need to look at epochs. Ndb Cluster maintains an internal distributed logical clock known as the epoch, represented as a 64-bit number. This epoch serves a number of internal functions, and is atomically advanced across all data nodes.

Epochs and consistent distributed state

Ndb is a parallel database, with multiple internal transaction coordinator components starting, executing and committing transactions against rows stored in different data nodes. Concurrent transactions only interact where they attempt to lock the same row. This design minimises unnecessary system-wide …
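
To get a concrete feel for epoch values: the epochs a replication slave has applied are recorded in the mysql.ndb_apply_status table, and the 64-bit number is conventionally read as two 32-bit halves. A small diagnostic query along these lines (a sketch, assuming a standard replication setup) might be:

    SELECT server_id,
           epoch,
           epoch >> 32         AS epoch_hi,
           epoch & 0xFFFFFFFF  AS epoch_lo
    FROM   mysql.ndb_apply_status;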

[Read more]
Eventual Consistency - detecting conflicts

In my previous posts I introduced two new conflict detection functions, NDB$EPOCH and NDB$EPOCH_TRANS, without explaining how these functions actually detect conflicts. To simplify the explanation, I'll initially consider two circularly replicating MySQL Servers, A and B, rather than two replicating Clusters, but the principles are the same.

Commit ordering

Avoiding conflicts requires that data is only modified on one Server at a time. …

[Read more]