Showing entries 9951 to 9960 of 44885
« 10 Newer Entries | 10 Older Entries »
VividCortex Live Demo

Have you been considering VividCortex but putting off a free trial? Are you a user who wants to get the most out of the product? Are you a huge fan who simply wants to hear the brains behind the product give you the inside scoop?

Join us on Tuesday, September 15 at 2 p.m. ET (6 p.m. GMT) for a live walk-through of VividCortex on production systems from our CEO, Baron Schwartz. He will use the product to demonstrate the following:

  • Using our top queries feature to quickly find queries that need attention
  • Finding and diagnosing stalls with Adaptive Fault Detection
  • Comparing queries before and after code deployments, to find workload changes
  • Understanding how your databases and queries are driving CPU, I/O and other operating system work
[Read more]
MYSQL_HISTFILE and .mysql_history

The MySQL manual says:
"mysql Logging
On Unix, the mysql client logs statements executed interactively to a history file. By default, this file is named .mysql_history in your home directory. To specify a different file, set the value of the MYSQL_HISTFILE environment variable."
The trouble with that is: it doesn't tell you what you need to know. So I'll tell you.

Heritage

The history-file concept that MySQL and MariaDB are following is indeed "on Unix" and specifically is like the GNU History Library. There is a dependence on external libraries, Readline or …
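To make the environment variable concrete, here is a small sketch (the file path is just an example; pointing the variable at /dev/null suppresses history logging entirely):

```shell
# Keep interactive mysql history in a custom location (example path):
export MYSQL_HISTFILE="$HOME/.mysql_history_work"

# Or suppress history altogether by sending it to /dev/null:
export MYSQL_HISTFILE=/dev/null

echo "mysql will log history to: $MYSQL_HISTFILE"
```

Either assignment must be made in the shell before the mysql client starts, since the client reads the variable at startup.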

[Read more]
Is MySQL/InnoDB good for high-volume inserts?

As we collect and analyze an ever-increasing amount of data, data storage products with alternate file structures have become popular, such as LSM trees (log-structured merge trees), which are implemented in Cassandra and HBase, as well as fractal indexes, found in TokuDB/MX, an alternate file-structure plugin for MySQL and [...]

Inspecting All The Metrics With VividCortex

Someone asked me the other day about some of the more obscure metrics available from a database server’s internals, and wanted access to those. As it happens, we have a feature that lets you see every metric on your systems—yes, every metric, which is typically many thousands, sometimes millions.

First, though, let’s look at what this user was examining. We ship VividCortex with a prebuilt set of templates for graphing popular systems, to reduce time-to-insight as much as possible. (No more Google searching for good graph templates and fighting to get them installed!) They look like this:

There’s quite a variety of templates, dozens for some kinds of servers that have a lot of metrics to expose. But these prebuilt templates aren’t the best solution in some cases. For example, some of the charts will have scores of metrics, and it can be hard to see them sometimes, especially when some are large and some are small:

[Read more]
MySQL 5.6 to 5.7 Upgrade Warning

The MySQL 5.7.8 Release Candidate was released August 3rd. But before you upgrade, PLEASE be sure to read how to upgrade from 5.6 to 5.7.

Yes, you need to make a backup (or three or four).

Be sure to run mysql_upgrade after starting the 5.7 binary. There are some changes to tables that must be made, and this is the way to do it.
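In practice that step looks something like the session below (the connection options are illustrative, not a prescription for your setup):

    # after the 5.7 server binary is up and running:
    shell> mysql_upgrade -u root -p
    # then restart the server so any repaired/altered tables are reloaded:
    shell> mysqladmin -u root -p shutdown
    shell> mysqld_safe &

mysql_upgrade checks all tables for incompatibilities with the new version and updates the system tables in the mysql schema.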

The upgrade docs offer several upgrade scenarios.

Also take time to read the MySQL 5.7 Release Notes! This is not only a list of new goodies but it warns you to …

[Read more]
Quickly tell who is writing to a MySQL replica

Many of us have been there in the past: you get an alert telling you that replication has stopped because of an error. You dig into it and find that you're getting an error for an update event that is trying to update a non-existent row, or a duplicate-key error because the row ID for some INSERT already exists.

Even with the server set to read only (and not using the new super_read_only variable from MySQL 5.7.8), these problems can still happen – how many of you have seen over-zealous ops trying to “quickly fix” some problem only to royally screw up your data integrity?
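For reference, on 5.7.8 and later the new variable can be enabled alongside the classic one; a quick session sketch:

    mysql> SET GLOBAL read_only = ON;
    mysql> SET GLOBAL super_read_only = ON;
    mysql> SHOW VARIABLES LIKE '%read_only%';

Unlike plain read_only, super_read_only also blocks writes from users with the SUPER privilege, which closes exactly the "over-zealous ops" loophole described above.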

The question then becomes – “who or what is making changes on my replica that shouldn’t be?!?”.

The only way to find this out in the past, and still “the conventional wisdom” (I just saw it recommended …

[Read more]
mysqlpump — A Database Backup Program

The MySQL 5.7 Release Notes for version 5.7.8 are out. Besides the new JSON data type, there is also a new tool, called mysqlpump, which offers the following features:

  • Parallel processing of databases, and of objects within databases, to speed up the dump process
  • Better control over which databases and database objects (tables, views, stored programs, user accounts) to dump
  • Dumping of user
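As a rough sketch of what an invocation might look like (the database names and option values here are illustrative; check the manual for the full option list):

    shell> mysqlpump -u root -p \
             --default-parallelism=4 \
             --parallel-schemas=4:db1,db2 \
             --result-file=dump.sql

--default-parallelism sets the thread count per parallel queue, while --parallel-schemas assigns specific schemas their own queue.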

NDB 7.4 & SYS schema: When getting locks, detecting the guilty SQL & o.s.pid.

Here’s a way to detect the SQL query causing a lock or a session to fail, and also to identify the o.s. pid if need be (btw, no rocket science). It's "a" way; I’m sure there are many others, so feel free to suggest alternatives, please.

So, we’re using MCM, and have created a MySQL Cluster as described in the cluster intro session (in Spanish, I’m afraid), using 7.4.6, which comes with MySQL 5.6.24.

With the env up and running, set up a schema, some data and run a few queries:

mysql> create database world;
mysql> use world;
Database changed
mysql> source world_ndb.sql

(world_ndb.sql, as you might guess, is the world_innodb tables script, with a little adjustment as to which storage engine is used.)
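That adjustment can be made with a one-liner; the printf below is a stand-in for the real world_innodb.sql script, just to make the example self-contained:

```shell
# Stand-in for one table definition from the real world_innodb.sql script:
printf 'CREATE TABLE city (ID int) ENGINE=InnoDB;\n' > world_innodb.sql

# Swap the storage engine on every table to produce the NDB version:
sed -e 's/ENGINE=InnoDB/ENGINE=NDBCLUSTER/g' world_innodb.sql > world_ndb.sql

grep ENGINE world_ndb.sql
```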

Once created, let’s lock things up in Cluster:

mysql -uroot -h127.0.0.1 -P3306
mysql> use test; …
[Read more]