As we collect and analyze an ever-increasing amount of data, data storage products with alternate file structures have become popular, such as LSMs (log-structured merge trees), which are implemented in Cassandra and HBase, as well as the fractal indexes found in TokuDB/MX, an alternate file structure plugin for MySQL and [...]
Someone asked me the other day about some of the more obscure metrics available from a database server’s internals, and wanted access to those. As it happens, we have a feature that lets you see every metric on your systems—yes, every metric, which is typically many thousands, sometimes millions.
First, though, let’s look at what this user was examining. We ship VividCortex with a prebuilt set of templates for graphing popular systems, to reduce time-to-insight as much as possible. (No more Google searching for good graph templates and fighting to get them installed!) They look like this:
There’s quite a variety of templates, dozens for some kinds of servers that have a lot of metrics to expose. But these prebuilt templates aren’t the best solution in some cases. For example, some of the charts have scores of metrics, and it can be hard to read them, especially when some values are large and some are small:
…[Read more]

The MySQL 5.7.8 Release Candidate was released August 3rd. But before you upgrade, PLEASE be sure to read how to upgrade from 5.6 to 5.7.
Yes, you need to make a backup (or three or four).
Be sure to run mysql_upgrade after starting the 5.7 binary. There are some changes to tables that must be made and this is the way to do it.
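The command itself is a one-liner; a minimal sketch, assuming a local server and root credentials:

mysql_upgrade -u root -p    # examines all tables and upgrades the system tables if needed
# The manual recommends restarting mysqld afterwards so the changes take effect.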
The upgrade docs offer several upgrade scenarios.
Also take time to read the MySQL 5.7 Release Notes! This is not only a list of new goodies; it also warns you to …
[Read more]

Many of us have been there in the past: you get an alert telling you that replication has stopped because of an error, you dig into it, and you find an error for an update event that is trying to update a non-existent row, or a duplicate key error because the row ID for some INSERT already exists.
Even with the server set to read only (and not using the new super_read_only variable from MySQL 5.7.8), these problems can still happen – how many of you have seen over-zealous ops trying to “quickly fix” some problem only to royally screw up your data integrity?
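As an aside (not part of the original post), on 5.7.8 and later the new variable can be turned on like this; it blocks writes even from accounts with the SUPER privilege and implicitly enables read_only as well:

mysql> SET GLOBAL super_read_only = ON;
mysql> SHOW GLOBAL VARIABLES LIKE '%read_only%';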
The question then becomes – “who or what is making changes on my replica that shouldn’t be?!?”.
The only way to find this out in the past, and still “the conventional wisdom” (I just saw it recommended …
[Read more]

The MySQL 5.7 Release Notes for version 5.7.8 are out. Besides the new JSON data type, there is also a new tool, called mysqlpump, which offers the following features:
- Parallel processing of databases, and of objects within databases, to speed up the dump process
- Better control over which databases and database objects (tables, views, stored programs, user accounts) to dump
- Dumping of user …
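A quick illustrative invocation of mysqlpump (my own sketch, not from the release notes; the database names are made up):

mysqlpump --default-parallelism=4 \
          --include-databases=world,test \
          --users \
          -u root -p > dump.sql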
Here’s a way to detect the SQL query causing a lock or a session to fail, and also to identify the OS pid if need be (no rocket science, by the way). It’s just one way; I’m sure there are many others, so please feel free to suggest alternatives.
So, we’re using MCM, and have created a MySQL Cluster as mentioned in the cluster intro session (in Spanish, I’m afraid), using Cluster 7.4.6, which comes with MySQL 5.6.24.
With the environment up and running, set up a schema, load some data, and run a few queries:
mysql> create database world;
mysql> use world;
Database changed
mysql> source world_ndb.sql
(world_ndb.sql, as you might guess, is the world_innodb tables script, with a little adjustment as to which storage engine to be used.)
Once created, let’s lock things up in Cluster:
mysql -uroot -h127.0.0.1 -P3306
mysql> use test;
…[Read more]
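The rest of the walkthrough is cut off above, so here is one generic way (not necessarily the one used in the post) to tie a blocking session back to its client’s OS pid: note the client port shown in the Host column of the process list, then look that port up with lsof.

mysql> SHOW PROCESSLIST;    -- the Host column shows e.g. localhost:45678 for TCP clients
sudo lsof -i :45678         # lists the processes using that port, including the client's pid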
Today’s episode is all about Valgrind – from the pros to the cons, from the why to the how! This episode will be of interest to anyone who works, or wants to work, with Valgrind on a regular or semi-regular basis.
- Pros/Why
- Cons
- How
- Using the latest version
# Either install the distribution's Valgrind package...
sudo [yum/apt-get] install valgrind
#OR#
# ...remove it and build the latest release from source:
sudo [yum/apt-get] remove valgrind
sudo [yum/apt-get] install bzip2 glibc*
wget http://valgrind.org/downloads/valgrind-3.10.1.tar.bz2
tar -xf valgrind-3.10.1.tar.bz2; cd valgrind-3.10.1
./configure; make; sudo make install
valgrind --version    # This should now read 3.10.1

- VGDB

(cd ./mysql-test)    # run the next command from the mysql-test directory
./lib/v1/mysql-test-run.pl --start-and-exit --valgrind --valgrind-option="--leak-check=yes" \
  --valgrind-option="--vgdb=yes" …
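The run command is truncated above. For completeness, a hedged sketch of what the VGDB step usually looks like, based on standard Valgrind gdbserver usage rather than anything taken from the post (the mysqld path is hypothetical):

# Once mysqld is running under valgrind with --vgdb=yes, attach gdb via the vgdb relay:
gdb /path/to/mysqld
(gdb) target remote | vgdb    # connects gdb to the Valgrind gdbserver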
Execute the “SHOW ENGINE INNODB STATUS;” query, analyze the output of the “TRANSACTIONS” section, and find the problematic queries.
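For example (the \G terminator just makes the output easier to read):

mysql> SHOW ENGINE INNODB STATUS\G

In the output, the TRANSACTIONS section lists each active transaction, the locks it holds or is waiting for, and the statement it is running; a LATEST DETECTED DEADLOCK section, when present, shows the two statements involved in the most recent deadlock.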
The next step is how to avoid this problem, and here is a solution:
Deadlocks are a classic problem in transactional databases, but they are not dangerous unless they are so frequent that you cannot run certain transactions at all. Normally, you must write your applications so that they are always prepared to re-issue a transaction if it gets rolled back because of a deadlock.
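As a rough illustration of the first point below, here is a shell sketch of my own (hypothetical table and credentials; for simplicity it re-issues on any failure, not only deadlock error 1213):

for attempt in 1 2 3; do
    # Re-run the whole transaction; stop retrying as soon as it commits.
    mysql -u root -p world -e "START TRANSACTION;
        UPDATE City SET Population = Population + 1 WHERE ID = 1;
        COMMIT;" && break
    sleep 1    # brief back-off before retrying
done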
- Always be prepared to re-issue a transaction if it fails due to deadlock. Deadlocks are not dangerous. Just try again.
- Keep transactions small and short in duration to make them less prone to collision.
Reference: …
[Read more]

The message mentions that you can try to repair it. Also, if you look at the actual FILEPATH you get, you can find out more:
- If it is something like /tmp/#sql_actor_return, it means that MySQL needed to create a temporary table because of the query size, that it stored it in /tmp, and that there is not enough space in your /tmp for that temporary table. Increase the tmp_table_size variable and the size of the /tmp/ partition.
- If it contains the name of an actual table instead, it means that this table is very likely corrupted and you should repair it using REPAIR TABLE.
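Hedged examples of those two fixes (the table name is illustrative only; note that in-memory temporary tables are also capped by max_heap_table_size, so raise both):

mysql> REPAIR TABLE actor;
mysql> SET GLOBAL tmp_table_size      = 64*1024*1024;
mysql> SET GLOBAL max_heap_table_size = 64*1024*1024;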