In the theoretical part of this series, we have seen
the basics of monitoring. In that article, though, we barely
mentioned the new tools available in MySQL 5.7 and MariaDB 10.
Let's start with something that has the potential to dramatically
change replication as we know it.
Crash-safe tables and Global transaction identifiers in MySQL 5.6 and 5.7
Global transaction identifiers (GTID) are a feature that
has been on my wish list for a long time, since the time I was
working with the MySQL team. By the time I left Oracle, this
feature was not even in the plans.
When MySQL 5.6 was first disclosed, the biggest improvement for
replication was the introduction of crash-safe tables (see
Status persistence in …
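As a quick reference (not spelled out in this excerpt), both features can be verified on a running 5.6 or 5.7 server with a couple of queries; a minimal sketch:

-- TABLE here means replication positions are stored in the crash-safe
-- InnoDB tables mysql.slave_master_info and mysql.slave_relay_log_info
SHOW GLOBAL VARIABLES LIKE '%_info_repository';

-- GTID-based replication is active when this reports ON
SHOW GLOBAL VARIABLES LIKE 'gtid_mode';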
Replication is the process that transfers data from an active
master to a slave server, which reproduces the data stream to
achieve, as best as possible, a faithful copy of the data in the
master.
To check replication health, you may start by
sampling the service, i.e. committing a
sentinel value on the master and retrieving it
from the slave.
Sentinel data: Tap tap… Is this thing on?
If you want to make sure that replication is working, the easiest
test is to use replication itself to see whether data is being copied
across from the master to the slaves. The method is simple (a minimal
sketch follows the list):
- Make sure that the data you want to see is NOT already present on the master or on the slave. If you skip this step, you may think that replication is working when in fact it is not.
- Either create a table …
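A minimal sketch of such a check, assuming a throw-away sentinel table (the table name and schema here are hypothetical, not from the original post):

-- On the master: create the sentinel table and commit a fresh value
CREATE TABLE IF NOT EXISTS test.replication_sentinel (
  id INT PRIMARY KEY,
  ts TIMESTAMP
);
REPLACE INTO test.replication_sentinel VALUES (1, NOW());

-- On the slave, after allowing for replication lag:
SELECT * FROM test.replication_sentinel WHERE id = 1;

If the slave returns the timestamp you just wrote on the master, data is flowing; if not, replication is broken or lagging.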
On Tuesday Steaphan Greene announced that all currently-pushed changes have now been rebased onto the newly-released upstream MySQL-5.6.24.
The new branch at this point is available on GitHub.
Our .deb and .rpm builds are available in the PSCE repo as well as being browsable at http://repo.psce.com/download/webscalesql/
Instructions for using the repo are available for Debian …
[Read more]
The default_tmp_storage_engine
variable was introduced
in 5.6.3, allowing configuration of the default engine for
temporary tables. This seems to be, as I
commented before, a step in the direction of making MyISAM an optional engine. In 5.7,
a separate tablespace is being created to hold
those tables in order to reduce their performance penalty
(those tables do not need to be redone if the server crashes, so
extra writes are avoided).
However, I have seen many people assuming that because
default_tmp_storage_engine
has the value “InnoDB”,
all temporary tables are created in InnoDB format in
5.6. This is not true: first, because implicit
temporary tables are still being created in memory using …
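As a quick illustration of the distinction, here is a sketch (the temporary table name is hypothetical):

-- The variable only governs explicit temporary tables
SHOW GLOBAL VARIABLES LIKE 'default_tmp_storage_engine';

CREATE TEMPORARY TABLE t_scratch (id INT);
SHOW CREATE TABLE t_scratch;   -- reports ENGINE=InnoDB on a default 5.6/5.7 install

-- Implicit (internal) temporary tables built by the optimizer follow
-- different rules and may still be created in memory or on disk
-- regardless of this setting.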
A long time ago I wrote a blog post saying that we had started using
Semi-Synchronous replication in production at Facebook. We are
running our own MySQL facebook-5.6 branch, which adds many replication
(including Semisync) patches. Some of them were also merged into
WebScaleSQL.
Recently I heard from people in the community that they were
suffering from Semisync performance issues in official MySQL 5.6.
I took some time to review our previous MySQL bug reports and
realized that some important bugs were either still "verified" or
inefficiently fixed. The two bug reports with the biggest impact were …
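For context, semi-synchronous replication in stock MySQL 5.6 (as opposed to the facebook-5.6 patches) is provided by plugins; a minimal sketch of enabling and observing it:

-- On the master
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;

-- On each slave
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;

-- Watch for fallbacks to asynchronous mode and ACK wait times
SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_%';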
I have been working with MySQL for some time, and it has changed significantly from what I was using in 5.0 to what we have now in 5.6. One of the biggest handicaps we've had in the past is not being able to see what MySQL is doing or why. MySQL 5.5 introduced us … Continue reading pstop – a top-like program for MySQL (based on performance_schema)
When I queried information_schema.innodb_trx (introduced in 5.1
with the InnoDB Plugin) I noticed there were a few transactions
in LOCK WAIT state.
Example:
mysql [information_schema] > select trx_id,trx_state
-> from information_schema.innodb_trx;
+--------+-----------+
| trx_id | trx_state |
+--------+-----------+
| 7974 | LOCK WAIT |
| 7962 | RUNNING |
+--------+-----------+
2 rows in set (0.00 sec)
Then I made a query to join a few information_schema and
performance_schema tables to find out which query was blocking my
transactions. It turned out that the blocking transaction had a
trx_query=NONE. So my query was blocked by a transaction that was
doing nothing. That's not really helpful.
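For reference, a sketch of this kind of diagnostic join on 5.6, using the stock information_schema lock tables plus performance_schema to recover the blocker's last statement (the exact query used in the original post is not shown in the excerpt):

select w.requesting_trx_id,
       r.trx_query           as waiting_query,
       w.blocking_trx_id,
       b.trx_query           as blocking_query,
       b.trx_mysql_thread_id as blocking_connection,
       esc.sql_text          as blocker_last_statement
from information_schema.innodb_lock_waits w
join information_schema.innodb_trx r on r.trx_id = w.requesting_trx_id
join information_schema.innodb_trx b on b.trx_id = w.blocking_trx_id
left join performance_schema.threads th
       on th.processlist_id = b.trx_mysql_thread_id
left join performance_schema.events_statements_current esc
       on esc.thread_id = th.thread_id;

When the blocking transaction is idle between statements, its trx_query is empty, but events_statements_current can often still reveal the last statement it executed (provided statement instrumentation is enabled).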
Let's try to recreate the situation and see exactly what
happened. I'll use two sessions for the transactions and a third
to monitor the server.
…
As I mentioned in my last post, where I compared the default configuration options in 5.6 and 5.7, I have been doing some testing for a particular load in several versions of MySQL. What I have been checking is different ways to load a CSV file (the same file I used for testing the compression tools) into MySQL. For those seasoned MySQL DBAs and programmers, you probably know the answer, so you can jump over to my 5.6 versus 5.7 results. However, the first part of this post is dedicated to developers and MySQL beginners who want to know the answer to the title question, in a step-by-step fashion. I must say I also learned something, as I under- and over-estimated some of the effects of certain …
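For readers who just want the basic mechanism, the usual tool is LOAD DATA INFILE; a sketch with a hypothetical table and file path (the post's actual file and timings are not reproduced here):

CREATE TABLE IF NOT EXISTS test.csv_import (
  id INT PRIMARY KEY,
  payload VARCHAR(255)
) ENGINE=InnoDB;

-- LOCAL requires local_infile to be enabled on both client and server
LOAD DATA LOCAL INFILE '/tmp/data.csv'
INTO TABLE test.csv_import
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;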
[Read more]
With the upcoming release of MySQL 5.7 I begin to see a problem which I think needs attention, at least for 5.8 or whatever comes next. The GA release cycle is too long, being about 2 years, and that means 3 years between upgrades in a production environment. More people use MySQL and the data … Continue reading Making MySQL Better More Quickly
When you're testing out a new version of MySQL in a
non-production environment there is a temptation to go wild and
turn on all kinds of new features. Especially if you're
reading the changelogs or the manual and scanning through
options. You want to start with the most reasonable set of
defaults, right? Maybe you're even doing benchmarks to
optimize performance using all the new bells and whistles.
Resist the temptation! If your goal is to upgrade your
production environment, then what you really want is to isolate
changes. You want to perform the upgrade with as little
impact as possible. Then you can start turning on
features or making changes one by one.
Why? Anytime you're doing a major upgrade to something as
fundamental as your core RDBMS, there are many ways things can go
wrong. Performance regressions & incompatible changes,
client/server incompatibilities …