Showing entries 111 to 120 of 142
Displaying posts with tag: tips
Hint: throttling xtrabackup

Using xtrabackup to copy files can really saturate your disks, which is why we made the special option --throttle=rate to limit the rate of IO operations per second. But it only works when you make a local copy.
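For a local copy that looks something like this (the backup directory here is just an example; the value is IO operations per second):

innobackupex --throttle=100 /data/backups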
What about streaming backups? Even if you just copy to a remote box with

innobackupex --stream=tar | ssh remotebox "tar xfi -"

the reads may be so intensive that your mysqld feels the impact, the slave falls behind, and so on.

For this there is a nice small utility: pv.

With pv you run:
innobackupex --stream=tar | pv -q -L10m | ssh remotebox "tar xfi -"

and it will limit the channel rate to 10 MB per second.

Entry posted by Vadim | No comment


[Read more]
What time 18446744073709550.000 means

Sometimes when you do profiling you can see a number like this in timestamps. Periodically our patches produced Query_time: 18446744073709550.000 in the slow.log file (well, it was fixed recently, but it still appears in other places).

I first faced this problem several years ago, when 2-core AMD Opteron systems had just appeared and I noticed sysbench going crazy, showing query execution times like 18446744073709550.000.

Obviously this is an unsigned integer produced by subtracting a bigger number from a smaller one.
But how can that happen if we use:

clock_gettime(CLOCK_THREAD_CPUTIME_ID, &tp);   /* take start_time from tp */
...
query_execution
...
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &tp);   /* take end_time from tp */

total_time = end_time - start_time;

How can we get a query executed in negative time? It would be too good to be true :).
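For illustration, here is a minimal C sketch (the timestamps are hypothetical, in microseconds) of what happens when end_time comes out slightly smaller than start_time and the values are unsigned 64-bit integers:

#include <stdio.h>

int main(void) {
    unsigned long long start = 1000005;  /* hypothetical start timestamp */
    unsigned long long end   = 1000000;  /* end < start: the clock stepped backwards */

    /* unsigned arithmetic cannot go negative; it wraps around 2^64 */
    unsigned long long total = end - start;

    printf("%llu\n", total);  /* prints 18446744073709551611, just below 2^64 */
    return 0;
}

Scale a number of that magnitude into seconds or milliseconds and you get exactly the kind of 18446744073709550.000 figure shown above.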

After some research I found the reason, and it is …

[Read more]
Why MySQL’s binlog-do-db option is dangerous

I see a lot of people filtering replication with binlog-do-db, binlog-ignore-db, replicate-do-db, and replicate-ignore-db. Although there are uses for these options, they are dangerous, and in my opinion they are overused. In many cases there's a safer alternative.

The danger is simple: they don't work the way you think they do. Consider the following scenario: you set binlog-ignore-db to "garbage" so data in the garbage database (which doesn't exist on the slave) isn't replicated. (I'll come back to this in a second, so if you already see the problem, don't rush to the comment form.)
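On the master, that setup is just a single line in my.cnf (a sketch of the scenario described above):

[mysqld]
binlog-ignore-db = garbage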

Now you do the following:

$ mysql
mysql> delete from garbage.junk;
mysql> use garbage;
mysql> update production.users set disabled = 1 where user …
[Read more]
How to reduce Internet traffic when connecting to remote host?

Recently we received a letter from a customer who was unsatisfied with the amount of Internet traffic that dbForge Studio for MySQL produced when working with a remote MySQL server over a VPN connection.

Here are two tips on how you can reduce your traffic in version 3.50.

First tip: Turn off code completion

The SQL editor's code completion feature requires all object names from all databases to be queried from the server. This is no problem when working with a MySQL server on the local network, but it takes time over a remote connection. To stop loading completions from the database, go to the Tools->Options->Text Editor->Code Completion options tab and turn off the 'Load completions from database' option.

Second tip: Use compression

Version 3.50 of dbForge Studio for MySQL contains a database connection option named 'Use …

[Read more]
Btw xtrabackup is not only backup..

It is an obvious thing for me, but I just figured out it may not be common understanding. Xtrabackup can also be used (not only can be, but we actually use it this way) to clone one slave to another, or just to set up a new slave from the master. And it is done in an almost non-blocking way (true for InnoDB setups) for the cloned server. Here is the command:

innobackupex-1.5.1 --stream=tar /tmp/ --slave-info | ssh user@DESTSERVER "tar xfi - -C /DESTDIR"

When it has finished, on the destination server you run:

innobackupex-1.5.1 --apply-log --use-memory=2G /DESTDIR

And you have a ready database directory; just copy my.cnf from the original …
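Since --slave-info was passed above, the backup directory should also contain the binlog coordinates (xtrabackup_slave_info when cloning from a slave, xtrabackup_binlog_info when cloning from the master), so the last step on the new slave is roughly this - host, user and coordinates below are placeholders:

mysql> CHANGE MASTER TO MASTER_HOST='SRCHOST', MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=12345;
mysql> START SLAVE;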

[Read more]
Interesting Programmer Links

Really interesting read about how to examine what’s stored in memcached.

Peep uses ptrace to freeze a running memcached server, dump the internal key metadata, and return the server to a running state. If you have a good host ejection mechanism in your client, such as in the Twitter libmemcached builds, you won’t even have to change the production server pool. The instance is not restarted, and no data is lost.

Quick look at MySQL 5.4

Highlights include scalability improvements, subquery optimizations and join improvements, improved stored procedure management, out parameters in prepared statements, and new information schema additions.

[Read more]
How to decrease InnoDB shutdown times

Sometimes a MySQL server running InnoDB takes a long time to shut down. The usual culprit is flushing dirty pages from the buffer pool. These are pages that have been modified in memory, but not on disk.

If you kill the server before it finishes this process, it will just go through the recovery phase on startup, which can be even slower in stock InnoDB than the shutdown process, for a variety of reasons.

One way to decrease the shutdown time is to pre-flush the dirty pages, like this:

mysql> set global innodb_max_dirty_pages_pct = 0;
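While the flush runs, one simple way to watch the dirty-page count drop toward zero (a hedged aside, not necessarily the command the post continues with) is:

mysql> show global status like 'Innodb_buffer_pool_pages_dirty';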

Now run the following command:

[Read more]
PROCEDURE ANALYSE

Quite a common task during schema review is to find the optimal data type for a column value - for example, a column is defined as INT, but is that really needed, or maybe SMALLINT or even TINYINT will do instead? Does it contain any NULLs, or can it be defined NOT NULL, which reduces the space needed and speeds up processing in most cases?

These and similar tasks are often done with a bunch of ad-hoc SQL queries, while MySQL actually has a native feature to perform this task - PROCEDURE ANALYSE.
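The syntax is simply appended to an ordinary SELECT; the table and column names below are made up, and the optional arguments (maximum number of distinct values to track, and memory to use for tracking them) can be omitted:

mysql> SELECT user_id, status FROM users PROCEDURE ANALYSE();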

Here is a sample run of PROCEDURE ANALYSE on the Drupal schema of the Percona web site. The output contains a row for each column in the table, but I've omitted all but a few rows.

mysql> SELECT * FROM node …
[Read more]
MySQL random freezes could be the query cache

I feel like I've been seeing this a lot lately.

occasionally, seemingly innocuous selects take unacceptably long.

Or

Over the past few weeks, we've been having bizarre outages during which everything seems to grind to a halt... and then fixes itself within 5 minutes. We've got plenty of memory, we're not running into swap, and we can't find any queries that would seem to trigger outages: just tons of simple session requests all hung up for no obvious reason.

Problems like this are always hard to debug. If it happens twice a week for 5 minutes at a time, your chances of getting someone logged onto the machine to watch it in action are pretty slim. And of course, when they do look at it, they see nothing wrong on the surface; it takes some very clever, very fast work with OS-level debugging and tracing utilities to really prove what's happening.
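If you suspect the query cache specifically, a quick first check before reaching for the tracing tools is to see whether it is enabled at all and how it is behaving (standard MySQL status commands, shown here only as a sketch):

mysql> show variables like 'query_cache%';
mysql> show global status like 'Qcache%';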

The two cases mentioned above were caused by …

[Read more]
Converting Character Sets

The web is going the way of utf8.  Drizzle has chosen it as the default character set, most back-ends to websites use it to store text data, and those who are still using latin1 have begun to migrate their databases to utf8.  Googling for "mysql convert charset to utf8" results in a plethora of sites, each with a slightly different approach, and each broken in some respect.  I'll outline those approaches here and show why they don't work, and then present a script that can generically be used to convert a database (or set of tables) to a target character set and collation.

Approach #1:

ALTER TABLE `t1` CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

Take the following table as an example of why this approach will not work:

[Read more]