Planet MySQL
Showing entries 1 to 9

Displaying posts with tag: active-active (reset)

10 reasons active-active is hard and how to solve it

Read the original article at 10 reasons active-active is hard and how to solve it

Multi-master replication provides redundant copies of your most important business assets. What's more, it allows applications to scale out, which is perfect for cloud hosting solutions like Amazon Web Services. But when you decide you need to scale your write capacity, you may be considering an active-active setup. This is dangerous, messy, and prone to failure. [...]

For more articles like these go to Sean Hull's Scalable Startups

Related posts:
  • Why does MySQL replication fail?

      [Read more...]

    The CAP theorem and MySQL Cluster

    tl;dr: A single MySQL Cluster prioritises Consistency in network partition events. Asynchronously replicating MySQL Clusters prioritise Availability in network partition events.


    I was recently asked about the relationship between MySQL Cluster and the CAP theorem. The CAP theorem is often described as a pick-two-out-of-three problem, like choosing among good, cheap, and fast: you can have any two, but you can't have all three. For CAP the three qualities are 'Consistency', 'Availability' and 'Partition tolerance'. CAP states that in a system with data replicated over a network, only two of these three qualities can be maintained at once, so which two does MySQL Cluster provide?

    Standard 'my interpretation of CAP' section

    Everyone who discusses CAP likes to rehash

      [Read more...]
    Eventual Consistency in MySQL Cluster - implementation part 3

    As promised, this is the final post in a series looking at eventual consistency with MySQL Cluster asynchronous replication. This time I'll describe the transaction dependency tracking used with NDB$EPOCH_TRANS and review some of the implementation properties.

    Transaction-based conflict handling with NDB$EPOCH_TRANS

    NDB$EPOCH_TRANS is almost exactly the same as NDB$EPOCH, except that when a conflict is detected on a row, the whole user transaction which made the conflicting row change is marked as conflicting, along with any dependent transactions. All of these rejected row operations are then handled using

      [Read more...]
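The dependency tracking described in this excerpt (marking the conflicting transaction as rejected, plus any transactions that transitively depend on it) can be sketched as a simple graph walk. This is a hypothetical Python illustration, not MySQL Cluster's actual implementation; the transaction IDs and the `dependents` map are invented for the example:

```python
from collections import deque

def mark_conflicting_transactions(conflicting_tx, dependents):
    """Return the set of transactions to reject: the transaction that
    made the conflicting row change, plus (transitively) every
    transaction that depends on it."""
    rejected = {conflicting_tx}
    queue = deque([conflicting_tx])
    while queue:
        tx = queue.popleft()
        # Walk to every transaction that depends on this one.
        for dep in dependents.get(tx, ()):
            if dep not in rejected:
                rejected.add(dep)
                queue.append(dep)
    return rejected
```

So if T2 depends on T1 and T3 depends on T2, a conflict in T1 rejects all three, which mirrors the "along with any dependent transactions" behaviour described above.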
    Eventual consistency in MySQL Cluster - implementation part 2

    In previous posts I described how row conflicts are detected using epochs. In this post I describe how they are handled.

    Row-based conflict handling with NDB$EPOCH

    Once a row conflict is detected, as well as rejecting the row change, row-based conflict handling in the Slave will:
    • Increment conflict counters
    • Optionally insert a row into an exceptions table
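The two handling steps above can be sketched in Python. This is a hypothetical illustration only; the function and field names are invented and do not reflect MySQL Cluster's actual code:

```python
def handle_row_conflict(row_op, counters, exceptions_table=None):
    """Reject a conflicting row change: bump the conflict counter and,
    if an exceptions table is configured, record the rejected operation.
    Returns False to indicate the row change was not applied."""
    counters["conflicts_detected"] = counters.get("conflicts_detected", 0) + 1
    if exceptions_table is not None:
        exceptions_table.append({
            "key": row_op["key"],        # primary key of the conflicting row
            "op_type": row_op["op_type"] # e.g. "UPDATE" or "DELETE"
        })
    return False
```

The optional exceptions table gives the application a record of every rejected change, so it can reconcile or alert on conflicts after the fact.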
    For NDB$EPOCH, conflict detection and handling operates on one Cluster in an Active-Active pair designated as the Primary. When a Slave MySQLD attached to the Primary Cluster detects a conflict between data stored in the

      [Read more...]
    Eventual consistency in MySQL Cluster - implementation part 1

    The last post described MySQL Cluster epochs and why they provide a good basis for conflict detection, with a few enhancements required. This post describes the enhancements.

    The following four mechanisms are required to implement conflict detection via epochs:
  • Slaves should 'reflect' information about replicated epochs they have applied
    Applied epoch numbers should be included in the Slave Binlog events returning to the originating cluster, in a Binlog position corresponding to the commit time of the

      [Read more...]
    Eventual Consistency in MySQL Cluster - using epochs

    Before getting to the details of how eventual consistency is implemented, we need to look at epochs. Ndb Cluster maintains an internal distributed logical clock known as the epoch, represented as a 64 bit number. This epoch serves a number of internal functions, and is atomically advanced across all data nodes.
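As a toy illustration of how a monotonically advancing epoch can be used to decide whether a remote change conflicts, consider the sketch below. This is a hypothetical simplification; real NDB$EPOCH conflict detection uses per-row epoch metadata and epochs reflected back through the Slave's binlog, as described in the related posts here:

```python
def is_conflict(row_last_modified_epoch, max_reflected_epoch):
    """A replicated row change from the other cluster conflicts if the
    local row was last modified in an epoch the other cluster cannot
    yet have seen, i.e. an epoch newer than the latest epoch it has
    reflected back to us."""
    return row_last_modified_epoch > max_reflected_epoch
```

Because the epoch is advanced atomically across all data nodes, comparing two epoch numbers gives a consistent before/after ordering for the whole cluster.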

    Epochs and consistent distributed state

    Ndb is a parallel database, with multiple internal transaction coordinator components starting, executing and committing transactions against rows stored in different data nodes. Concurrent transactions only interact where they attempt to lock the same row. This

      [Read more...]
    Eventual Consistency - detecting conflicts

    In my previous posts I introduced two new conflict detection functions, NDB$EPOCH and NDB$EPOCH_TRANS, without explaining how these functions actually detect conflicts. To simplify the explanation I'll initially consider two circularly replicating MySQL Servers, A and B, rather than two replicating Clusters, but the principles are the same.

    Commit ordering

    Avoiding conflicts requires that data is only modified on one Server at

      [Read more...]
    Eventual consistency with transactions

    In my last post I described the motivation for the new NDB$EPOCH conflict detection function in MySQL Cluster. This function detects when a row has been concurrently updated on two asynchronously replicating MySQL Cluster databases, and takes steps to keep the databases in alignment.

    With NDB$EPOCH, conflicts are detected and handled at row granularity, as opposed to column granularity, as this is the granularity of the epoch metadata used to detect conflicts. Dealing

      [Read more...]
    Eventual consistency with MySQL

    tl;dr: New 'automatic' optimistic conflict detection functions are available, giving the best of both optimistic and pessimistic replication on the same data.

    MySQL replication supports a number of topologies, and one of the most interesting is an active-active, or master-master topology, where two or more Servers accept read and write traffic, with asynchronous replication between them.

    This topology has a number of attractions, including:
    • Potentially higher availability
    • Potentially low impact on read/write latency
    • Service availability insensitive to replication

      [Read more...]

    Planet MySQL © 1995, 2014, Oracle Corporation and/or its affiliates   Legal Policies | Your Privacy Rights | Terms of Use

    Content reproduced on this site is the property of the respective copyright holders. It is not reviewed in advance by Oracle and does not necessarily represent the opinion of Oracle or any other party.