Intro
Oracle is widely used to support back-end systems. On the
other hand, MySQL is the "go-to" data management solution for the
web-facing part of many businesses. If you have both Oracle
and MySQL in-house, you may already have the need to share
data between them. In this article I'll describe software
that my colleagues and I have been working on to move data from
Oracle to MySQL in real-time without costing an arm and a leg.
Tungsten to the Rescue!
The latest Tungsten Replicator has many features, most of which are open source,
but the most recent one is particularly exciting to me - thanks to
the development done by my colleague Stephane Giron in the …
In my previous post I covered the shard-disk paradigm's pros
and cons, but the conclusion is that it cannot really
qualify as a scale-out solution when it comes to massive OLTP,
big data, high session counts, and a mixture of reads and
writes.
Read/write splitting is achieved when numerous
replicated database servers are used for reads. This way the
system can scale to cope with an increase in concurrent load. This
approach qualifies as a scale-out solution because it
allows expansion beyond the boundaries of one database: the database
machines are shared-nothing, and each can be added as a slave to the
replication "group" when required.
And, in fact, read/write …
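To make the read/write-splitting idea concrete, here is a minimal sketch in Python. The server names and the routing rule are illustrative, not taken from any particular product: writes go to the master, reads are spread across the shared-nothing slaves, and new slaves can be attached to the "group" at any time.

```python
import random

class ReadWriteRouter:
    """Route writes to the master and spread reads across replicas.

    A minimal sketch: the "servers" here are just labels; a real
    router would hold connection pools from a MySQL client library.
    """

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = list(slaves)

    def route(self, sql):
        # Writes (and anything ambiguous) must go to the master;
        # reads can be spread across the shared-nothing slaves.
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb == "SELECT":
            return random.choice(self.slaves)
        return self.master

    def add_slave(self, slave):
        # Scale out: attach another replica to the "group".
        self.slaves.append(slave)
```

Adding a slave requires no change to the application: the router simply has one more target for reads, which is what lets the setup expand beyond the boundaries of one database.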
From a Tumblr engineering blog post:
Tumblr is one of the largest users of MySQL on the web. At present, our data set consists of over 60 billion relational rows, adding up to 21 terabytes of unique relational data. Managing over 200 dedicated database servers can be a bit of a handful, so naturally we engineered some creative solutions to help automate our common processes.
Today, we’re happy to announce the open source release of Jetpants, Tumblr’s in-house toolchain for managing huge MySQL database topologies. Jetpants offers a command suite for easily cloning replicas, rebalancing shards, and performing master …
Once MySQL is deployed inside a datacenter environment (i.e. forms a cloud ;-), a major feature in it becomes replication. It is used to maintain hot copies, standby copies, and read-only copies, to invalidate external systems, to replicate to external systems, etc. If this functionality is broken, the datacenter is broken - components are no longer synchronized, invalidations are not done, data is not consistent.
From a performance perspective, replication not working properly results in unusable slaves, so load cannot be spread. This results in higher load on the other machines, including the master (especially on the master, if the environment needs stronger consistency guarantees).
Judging by the importance of replication in MySQL deployments, it should attract as much performance engineering as InnoDB and other critical pieces. Though slave replication performance is being increased in 5.6, the master side is not (well, group commit may help a bit, but not as much).
…
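A monitoring script along the lines the author describes might classify slaves before adding them to the read pool. The function below is a hedged sketch: the dict keys mirror columns of MySQL's SHOW SLAVE STATUS output, and the 30-second lag threshold is an arbitrary example value.

```python
def slave_health(status):
    """Classify a row from SHOW SLAVE STATUS (passed as a dict).

    A minimal sketch of the checks a monitoring script might run;
    the thresholds and return labels are illustrative.
    """
    if (status.get("Slave_IO_Running") != "Yes"
            or status.get("Slave_SQL_Running") != "Yes"):
        return "broken"    # replication threads stopped: slave unusable
    lag = status.get("Seconds_Behind_Master")
    if lag is None:
        return "broken"    # NULL lag also indicates replication is not running
    if lag > 30:
        return "lagging"   # stale reads: pull it out of the read pool
    return "healthy"
```

A "lagging" or "broken" slave would be removed from the read pool, which is exactly how broken replication turns into higher load on the remaining machines.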
MySQL and Continuent Tungsten at Constant Contact - How We Architected Our Replication Strategy
Thursday, June 14th
10:00 am PDT / 1:00 pm EDT
19:00 CEST / 18:00 BST
Reserve your seat!
Constant Contact is a provider of marketing services for over 500,000 small businesses and organizations worldwide, helping them to drive engagement and build relationships with current and prospective customers. As the
With the release of MySQL 5.6, binary log group commit is included, a feature focused on improving the performance of a server when the binary log is enabled. In short, binary log group commit improves performance by grouping several writes to the binary log instead of writing them one by one; I will digress a little on how transactions are logged to the binary log before going into the details of the problem and the implementation. First, though, let's look at what you do to turn it on.
Nothing.
Well... we actually have a few options to tweak it, but nothing is
required to turn it on. It even works for existing engines since
we did not have to extend the handlerton interface to implement
the binary log group commit. However, InnoDB has some
optimizations to take advantage of the binary log group commit
implementation.
- binlog_order_commits={0|1} - This is a …
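To illustrate, here is a minimal my.cnf sketch (the values are examples, not recommendations): enabling the binary log is the only prerequisite, and binlog_order_commits is one of the tweakable options.

```
[mysqld]
# Enabling the binary log is the only prerequisite; group commit
# itself needs no switch.
log-bin = mysql-bin

# Durability setting: fsync the binary log at every (group) commit.
sync_binlog = 1

# One of the tweakable options: commit transactions to the storage
# engine in the same order they were written to the binary log.
binlog_order_commits = 1
```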
As described in the first article of this series, Tungsten
Replicator can replicate data from MySQL to Vertica in real-time.
We use a new batch loading feature that applies
transactions to data warehouses in very large blocks using COPY
or LOAD DATA INFILE commands. This second and concluding
article walks through the details of setting up and testing MySQL
to Vertica replication.
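As an illustration of the batch-loading idea (the table and file names here are hypothetical, not from the article), a block of buffered row changes might be applied to Vertica with a single COPY statement rather than row-by-row inserts:

```sql
-- Hypothetical example: load one block of staged changes in a single
-- bulk operation instead of issuing one INSERT per row.
COPY stage_orders (id, amount, op_type)
FROM '/tmp/batch-000123.csv' DELIMITER ',' DIRECT;
```

Applying thousands of rows in one bulk command is what keeps the load on the data warehouse low even at high replication volumes.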
To keep the article reasonably short, I assume that readers are
conversant with MySQL, Tungsten, and Vertica. Basic
replication setup is not hard if you follow all the steps
described here, but of course there are variations in every
setup. For more information on Tungsten, check out the
Tungsten Replicator project at code.google.com
as well as …
Real-time analytics allow companies to react rapidly to changing
business conditions. Online ad services process
click-through data to maximize ad impressions. Retailers
analyze sales patterns to identify micro-trends and move
inventory to meet them. The common theme is speed: moving
lots of information without delay from operational systems to
fast data warehouses that can feed reports back to users as
quickly as possible.
Real-time data publishing is a classic example of a big
data replication problem. In this two-part article
I will describe recent work on Tungsten Replicator to move data out of MySQL
into Vertica at high speed with minimal load on
DBMS servers. This feature …
The best (and truly the only) MySQL multi-master, multi-site solution on the market gets even better! Continuent is happy to announce the immediate availability of Continuent Tungsten 1.5.
The new Continuent Tungsten 1.5 allows you to build multi-site, disaster recovery (DR) and multi-master solutions with ease:
Multi-Master Operations - Tungsten can support your multi-master operations today by linking
On Wednesday May 16th, we ran a webinar to provide an overview of all of the new replication features and enhancements that are previewed in the MySQL 5.6 Development Release – including Global Transaction IDs, auto-failover and self-healing, multi-threaded, crash-safe slaves and more.
Collectively, these new capabilities enable MySQL users to scale for next generation web and cloud applications.
Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into how these new features are implemented. So I thought it would be useful to post those below, for the benefit of those unable to attend the live webinar (note, you can listen to the On-Demand replay which is available now).
Before getting to the Q&A, there are a couple of other resources that may be useful to …
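For readers who want to try the Global Transaction ID feature mentioned above, a minimal my.cnf sketch for a MySQL 5.6 development release might look like the following. Option names should be checked against the exact 5.6 build, since they changed during the development cycle.

```
[mysqld]
log-bin = mysql-bin

# Global Transaction IDs (5.6 development releases)
gtid-mode = ON
log-slave-updates = 1
enforce-gtid-consistency = 1
```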