A useful overview of options, syntax and tools that have been deprecated or removed for the upcoming MySQL 5.7 release.
Some applications, particularly those written with a single-node database server in mind, attempt to read a value immediately after inserting it into the database, without making those operations part of a single transaction. A read/write-splitting proxy, or a connection pool combined with a load balancer, can direct each operation to a different database node.
Since Galera allows, for performance reasons, a very small amount of “slave lag”, the node that is processing the read may not yet have applied the write. It can return stale data, causing an application that did not expect that to misbehave or produce an error.
Through the mechanism of flow control, slave lag is kept to a minimum, but additionally Galera provides the causal wait facility for those queries that must always see the most up-to-date …[Read more]
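For readers who want to try this, causal waits are enabled per session through the wsrep_sync_wait variable (older releases use wsrep_causal_reads instead). A minimal sketch; the table and column names are made up for illustration:

```sql
-- Ask Galera to wait until this node has applied all causally
-- preceding writes before serving reads in this session.
-- Bit 1 covers READ statements; older releases use
-- wsrep_causal_reads=ON instead.
SET SESSION wsrep_sync_wait = 1;

-- This SELECT now sees the row a client just wrote through
-- any other node of the cluster.
SELECT balance FROM accounts WHERE id = 42;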
Dear MySQL users,
The MySQL developer tools team announces 6.3.4 as the GA release of
MySQL Workbench 6.3.
For the full list of changes in this revision, visit
For discussion, join the MySQL Workbench Forums:
Download MySQL Workbench 6.3.4 GA now for Windows, Mac OS X,
Oracle Linux 6 and 7, Fedora 21 and Fedora 22, Ubuntu 14.04, Ubuntu
14.10, and Ubuntu 15.04, or as sources, from:
The MySQL 5.7.7 JSON lab release has been getting a lot of attention. At a recent conference, I was cornered by a developer who wanted to jump in with both feet by running this release on his laptop on the flight home. However, the developer was not sure how to begin.
1. Download the MySQL JSON release from http://labs.mysql.com/. You will get the choice of a Linux binary or source code. Please grab the binary if you are using Linux, and un-gzip/untar the download.
2. Shut down the currently running version of MySQL. I was lucky in this case that the developer was using a recent copy of Ubuntu.
3. Change directory to the ~/Downloads/mysql-5.7.7-labs-json-linux-el6-x86_64 directory.
4. sudo ./bin/mysqld_safe --user=mysql &
5. ./bin/mysql -u root -p, then provide the …[Read more]
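Once connected, a quick smoke test confirms the release's JSON functions are available. A hedged sketch: in this labs build the functions are reported to carry a JSN_ prefix (they became JSON_* in later 5.7 milestones), so adjust the names if your build differs:

```sql
-- Validate and extract from a JSON document; the JSN_-prefixed
-- names are the ones shipped in the 5.7.7 labs release.
SELECT JSN_VALID('{"name": "test"}');            -- 1 for valid JSON
SELECT JSN_EXTRACT('{"name": "test"}', '$.name');
```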
This is a little quiz (could be a discussion). I know what we tend to prefer (and why), but we’re interested in hearing additional and other opinions!
Given the way MySQL/MariaDB is architected, what would you prefer to see in a new server, more cores or higher clock speed? (presuming other factors such as CPU caches and memory access speed are identical).
For example, you might have a choice between
- 2x 2.4GHz 6 core, or
- 2x 3.0GHz 4 core
which option would you pick for a (dedicated) MySQL/MariaDB server, and why?
And, do you regard the “total speed” (N cores * GHz) as relevant in the decision process? If so, when and to what degree?
Recently I reviewed a simple web application whose problem was moving the “read” count of news items from the main table to another table in MySQL. The idea is to separate the counting of “read”s from the base news table. One way to accomplish this in MySQL is to create a new “read” table, then add the necessary code to the news admin panel to insert id, read, and date into this new “read” table whenever a new article is added. For test purposes, however, I decided to move this functionality to MongoDB. The overall task is: the same data must be in MySQL, the counting logic must live in MongoDB, and the data must be synced from MongoDB back to MySQL. Any programming language will do, but Python is an easy choice, using the official mysql-connector-python and pymongo. First you must create an empty “read” table in MySQL, insert all the necessary data from the base table into “read”, and there should be an after-insert trigger for …[Read more]
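The sync step can be sketched without a live server by modeling the MongoDB collection and the MySQL “read” table as in-memory structures. The field names (news_id, reads) and the helper below are illustrative assumptions, not from the original post; the real version would read documents via pymongo and upsert rows via mysql-connector-python:

```python
# Minimal model of the MongoDB -> MySQL sync step.
# mongo_reads stands in for documents in a hypothetical MongoDB
# "read" collection; mysql_read stands in for rows already in the
# MySQL "read" table, keyed by news id.

def rows_to_sync(mongo_reads, mysql_read):
    """Return (news_id, read_count) pairs whose MongoDB count is
    newer than what MySQL currently holds."""
    out = []
    for doc in mongo_reads:
        news_id = doc["news_id"]
        if mysql_read.get(news_id, 0) < doc["reads"]:
            out.append((news_id, doc["reads"]))
    return out

mongo_reads = [{"news_id": 1, "reads": 42}, {"news_id": 2, "reads": 7}]
mysql_read = {1: 40, 2: 7}
print(rows_to_sync(mongo_reads, mysql_read))  # -> [(1, 42)]
```

Each returned pair would then be written back with an INSERT … ON DUPLICATE KEY UPDATE against the “read” table.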
In an earlier post titled Using Perl to send tweets stored in a MySQL database to twitter, I showed you a way to use MySQL to store tweets, and then use Perl to automatically send your tweets to twitter.
In this post, we will look at automatically sending a “thank you” to people who retweet your tweets – and we will be using Perl and MySQL again.
Just like in the first post, you will need to register your application with twitter via apps.twitter.com, and obtain the following:
- consumer_key
- consumer_secret
- access_token
- access_token_secret
One caveat: twitter has a rate limit on how often you may connect with your application – depending upon what you are trying to do. See …[Read more]
By Erkan Yanar
In the previous article of this series, we described how to run a multi-node Galera Cluster on a single Docker host.
In this article, we will describe how to deploy Galera Cluster over multiple Docker hosts.
By design, Docker containers are reachable using port-forwarded TCP ports only, even if the containers have IP addresses. So we will set up port forwarding for all TCP ports that are required for Galera to operate.
The following TCP ports are used by Galera:
- 3306 - MySQL client port
- 4567 - Galera Cluster replication port
- 4568 - IST port
- 4444 - SST port
Before we start, we need to stop enforcing AppArmor for Docker:
$ aa-complain /etc/apparmor.d/docker
Building a multi-node cluster using the default ports
Building a multi-node cluster using the default ports is not complicated. Besides mapping the ports 1:1, we also need to set …[Read more]
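As a sketch of that 1:1 mapping, a node could be started like this; the image name, container name, and cluster address are placeholders, not from the original post:

```
# Hypothetical example: one Galera node per Docker host, publishing
# the four Galera ports 1:1 so nodes on other hosts can reach it.
docker run -d --name galera-node1 \
  -p 3306:3306 -p 4567:4567 -p 4568:4568 -p 4444:4444 \
  -e WSREP_CLUSTER_ADDRESS=gcomm://host1,host2,host3 \
  my/galera
```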
This is in continuation of the previous post; if you have not read
that, I would advise you to go through it before continuing.
I discussed three use cases in my last post; here I will explain them in detail.
- Extracting data from a binary log file:
The binary log file is full of information, but what if you want only selected info, for example:
- printing the timestamp for every event in the binary log file.
- extracting the event types of all the events occurring in the binary log file.
And any other such data from an event in real time. Below is some sample code which shows how to do that step by step.
- Connect to the available transport
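Both of the extractions above (timestamps and event types) come straight out of the event header. As an illustration, assuming the standard 19-byte v4 binlog event header layout, a header can be decoded like this; the parse_header helper is made up for this sketch and is not part of the MySQL Binlog Events API:

```python
import struct
from datetime import datetime, timezone

# v4 binlog event header: timestamp, type_code, server_id,
# event_size, next_log_pos, flags -- 19 bytes, little-endian.
HEADER = struct.Struct("<IBIIIH")

def parse_header(raw):
    ts, type_code, server_id, size, log_pos, flags = HEADER.unpack(raw)
    return {"timestamp": datetime.fromtimestamp(ts, tz=timezone.utc),
            "type_code": type_code, "server_id": server_id,
            "event_size": size, "next_log_pos": log_pos, "flags": flags}

# Synthetic header for illustration: a QUERY_EVENT (type code 2).
raw = HEADER.pack(1430000000, 2, 1, 100, 223, 0)
hdr = parse_header(raw)
print(hdr["timestamp"], hdr["type_code"])
```

In a real application the 19 raw bytes would come from the TCP or file transport rather than being packed by hand.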
MySQL replication is among the top features of MySQL. In
replication, data is replicated from one MySQL Server (also known
as the Master) to another MySQL Server (also known as the Slave). MySQL
Binlog Events is a set of libraries which work on top of
replication and open the door to a myriad of use cases, such as
extracting data from binary log files, building applications to
support heterogeneous replication, filtering events from binary
log files, and much more.
All this in real time.
I have already defined what MySQL Binlog Events is. To deliver on any of the above use cases, you first need to read the event from the binary log. This can be done using two types of transports:
1) TCP transport: Here your application will connect to an online MySQL Server and receive events as and when they occur.
2) File transport: As the name suggests, the application will connect to an …[Read more]