MySQL for Visual Studio 1.1.3 introduced a new feature: the MySQL Data Export tool. This tool allows users to create a dump of an existing MySQL database. This video is a quick tutorial on how to use the tool to create a MySQL export script inside Visual Studio.
I have just implemented MASTER_GTID_WAIT() in MariaDB 10.0. This can be used to give a very elegant solution to the problem of stale reads in replication read scale-out, without incurring the overheads normally associated with synchronous replication techniques. This idea came up recently in a discussion with Stephane Varoqui, and is similar to the concept of a Lamport logical clock described in this Wikipedia article.
I wanted to describe this here, hoping to encourage people to test it and maybe start using it, as it is a simple but very neat idea, actually.
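MASTER_GTID_WAIT() itself is a server-side SQL function, so the real thing runs inside MariaDB. Purely as an illustration of the underlying idea (a client blocks its read until the replica has applied the position it saw on the master), here is a toy Python sketch using a condition variable; the `Replica` class and integer "positions" are stand-ins, not the actual GTID machinery:

```python
import threading

class Replica:
    """Toy model of a replica applying a monotonically increasing
    stream of positions (a stand-in for GTIDs)."""
    def __init__(self):
        self._applied = 0
        self._cond = threading.Condition()

    def apply(self, pos):
        # Replication thread: record that everything up to `pos` is applied.
        with self._cond:
            self._applied = max(self._applied, pos)
            self._cond.notify_all()

    def wait_for(self, pos, timeout=None):
        # Analogue of MASTER_GTID_WAIT(pos): block until `pos` is applied,
        # or give up after `timeout` seconds. Returns True on success.
        with self._cond:
            return self._cond.wait_for(lambda: self._applied >= pos, timeout)

replica = Replica()

def replication_stream():
    for pos in range(1, 6):
        replica.apply(pos)

t = threading.Thread(target=replication_stream)
t.start()
# Client: after writing at position 5 on the master, wait on the
# replica before reading, so the read cannot be stale.
assert replica.wait_for(5, timeout=2.0)
t.join()
print("read is now guaranteed to see position 5")
```

The point of the pattern is that only readers who care about freshness pay a (usually tiny) wait; writers and other readers proceed at full asynchronous-replication speed.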
A very typical use of MariaDB/MySQL …
[Read more]
Thomas Nielsen and I recently presented a webinar explaining the latest developments in managing MySQL Cluster. In case you weren't able to attend (or want to refresh your memory), the webinar replay and charts are now available.
As a reminder, this webinar covered what’s new in MySQL Cluster Manager 1.3 which recently went GA.
By their very nature, clustered environments involve more effort and resources to administer than standalone systems, and this holds true for MySQL Cluster, the database designed for web-scale throughput with carrier-grade …
[Read more]

The MariaDB project is pleased to announce the immediate availability of MariaDB 10.0.8. This is a Release Candidate release.
See the Release Notes and Changelog for detailed information on this release and the What is MariaDB 10.0? page in the MariaDB Knowledge Base for general information about the MariaDB 10.0 series.
[Read more]

In this post, I'm going to briefly cover the signs that you're doing multi-tenancy wrong. Some of these practices are entrenched in software: there are gems in Ruby on Rails, for instance, that use the first anti-pattern to achieve multi-tenancy. Listen, you can drive a car with a flat tire and you can eat yogurt with a fork. People have made these solutions work, but there's a better way.
Creating tables or schemas per customer
If you find yourself running DDL (Create Table…) for each new company or user that you add to your system, you're most likely committing a pretty big anti-pattern. Now every time you update the table definition or need to update data across all tables, you'll have to use a script to generate the SQL for you. Those updates will take longer and be much more prone to failure.
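The usual alternative to table-per-customer is a single shared table with a tenant id column that every query filters on. As a minimal sketch of that design (using Python's built-in sqlite3 as a stand-in for MySQL; the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE invoices (
        id        INTEGER PRIMARY KEY,
        tenant_id INTEGER NOT NULL,   -- one column instead of one table per customer
        amount    REAL NOT NULL
    )
""")
# Index the tenant column so per-tenant queries stay fast.
conn.execute("CREATE INDEX idx_invoices_tenant ON invoices (tenant_id)")

rows = [(1, 100.0), (1, 250.0), (2, 75.0)]
conn.executemany(
    "INSERT INTO invoices (tenant_id, amount) VALUES (?, ?)", rows
)

# Every query is scoped by tenant_id; a schema change is a single
# ALTER TABLE, not one DDL statement per customer.
total = conn.execute(
    "SELECT SUM(amount) FROM invoices WHERE tenant_id = ?", (1,)
).fetchone()[0]
print(total)  # 350.0
```

With this layout, a cross-tenant data fix or a new column is one statement, and adding a customer is an INSERT rather than DDL.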
If you’re doing this for performance reasons, you have two options in most database systems to …
[Read more]

If you happen to work with personal data, chances are you are subject to SOX (Sarbanes-Oxley) whether you like it or not.
One of the worst aspects of this is that if you want to be able to analyse your data and you replicate out to another host, you have to find a way of anonymizing the information. There are of course lots of ways of doing this, but if you are replicating the data, why not anonymize it during the replication?
Of the many cool features in Tungsten Replicator, one of my favorites is filtering. This allows you to process the stream of changes that are coming from the data extracted from the master and perform operations on it. We use it a lot in the replicator for ignoring tables, schemas and columns, and for ensuring that we have the correct information within the THL.
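Tungsten Replicator filters are written against the replicator's own API (typically in JavaScript), so the snippet below is not that API. It is a language-neutral sketch of the transform such a filter would apply to each row: deterministic, keyed hashing of the sensitive columns, so the same input always maps to the same token (joins and GROUP BYs still line up) but the original value cannot be recovered without the key. The function names and column list are illustrative:

```python
import hashlib
import hmac

SECRET = b"replication-filter-key"  # illustrative key, not a Tungsten setting

def anonymize(value: str) -> str:
    """Deterministically replace a sensitive value with an opaque token.

    HMAC keeps the mapping stable across rows and runs while being
    irreversible without the key."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

def filter_row(row: dict, sensitive: set) -> dict:
    # Stand-in for the per-row hook a replication filter would expose:
    # hash the sensitive columns, pass everything else through untouched.
    return {k: anonymize(v) if k in sensitive else v for k, v in row.items()}

row = {"id": 42, "email": "alice@example.com", "country": "DE"}
print(filter_row(row, {"email"}))
```

Because the transform happens in the replication stream, the analytics replica never holds the raw personal data at all.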
Given this, let’s use it to anonymize the data as it is being replicated so that we don’t need to post-process it for analysis, and …
[Read more]

MySQL Cluster is a highly resilient and scalable database platform designed to deliver 99.999% availability with features such as self-healing and online operations, and capable of performing over 100,000,000 updates per minute. The full feature set includes development and management platforms alongside monitoring and administration tools, all backed by Oracle Premier Lifetime Support.
To learn more about MySQL Cluster, consider taking the MySQL Cluster training. Events already on the schedule for this 3-day instructor-led course include:
Location | Date | Delivery …
One of the common tasks requested by our support customers is to optimize slow queries. We normally ask for the table structure(s), the problematic query and sample data to be able to reproduce the problem and resolve it by modifying the query, table structure, or global/session variables. Sometimes, we are given access to the server to test the queries on their live or test environment. But, more often than not, customers will not be able to provide us access to their servers or sample data due to security and data privacy reasons. Hence, we need to generate the test data ourselves.
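When no sample data can be shared, a few lines of script are often enough to fabricate rows once you know the column types. As a hedged sketch (the `accounts` table, its columns, and the value ranges are all made up for illustration), seeding the random generator keeps the data reproducible between runs:

```python
import random
import string

random.seed(42)  # fixed seed so the generated data is reproducible

def random_string(n):
    """A random lowercase identifier of length n."""
    return "".join(random.choices(string.ascii_lowercase, k=n))

def generate_rows(count):
    """Yield (id, name, balance) tuples suitable for a bulk INSERT."""
    for i in range(1, count + 1):
        yield (i, random_string(8), round(random.uniform(0, 1000), 2))

rows = list(generate_rows(3))
values = ",\n".join("(%d, '%s', %.2f)" % r for r in rows)
print("INSERT INTO accounts (id, name, balance) VALUES\n%s;" % values)
```

Scaling `count` up and piping the output into the mysql client gives a quick way to reproduce a slow query against realistic data volumes.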
A convenient way of generating test data is to visit http://generatedata.com, which provides a web form where you can define the columns and their corresponding data types and turn them into test data. The website is capable of …
[Read more]
Mysqldump is a fantastic tool for backing up and restoring small and medium sized MySQL tables and databases quickly. However, when databases surge into the multi-terabyte range, restoring from logical backups is inefficient. It can take a significant amount of time to insert a hundred million plus rows into a single table, even with very fast I/O. Programs like MySQL Enterprise Backup and Percona XtraBackup allow non-blocking binary copies of your InnoDB tables to be taken while the server is online and processing requests. XtraBackup also has an export feature that allows InnoDB file-per-table tablespaces to be detached from the shared table space and imported into a completely different MySQL instance. The necessary steps to export and import InnoDB tables are in the XtraBackup documentation …
MySQL 5.5.36 was recently released (it is the latest MySQL 5.5 GA release), and is available for download here:
http://dev.mysql.com/downloads/mysql/5.5.html
I was reading through the changelogs to review the changes and fixes, and to summarize, I must say this release is mostly uneventful.
There was one new feature added (for building, so not really applicable to everyone), and only 17 bugs fixed.
The new feature is this:
- CMake now supports a -DTMPDIR=dir_name option to specify the default tmpdir value. If unspecified, the value defaults to P_tmpdir in <stdio.h>. Thanks to Honza Horak for the patch. (Bug #68338, Bug #16316074)
Out of the 17 bugs, there was only 1 I thought worth mentioning (because it is a wrong results bug):
- COUNT(DISTINCT) sometimes produced an incorrect result when the last read row …