Beyond great cache hit ratio

I recently worked with an application that has a great memcached hit ratio - over 99% - yet still has an average page response time of over 500ms. The reason? Each page issues hundreds of memcached gets, and even though each one takes only about 0.4ms, together they add hundreds of milliseconds to the total response time.

A great memcached hit ratio is good, but even more than that you should target eliminating requests altogether. Hit rate is a poor indicator to begin with. Imagine an application that issues 90 memcached gets (hits) to retrieve some data, plus 10 more requests that result in misses and cause MySQL queries. The hit ratio is 90%. Now imagine you find a way to store the data that required those 90 requests as a single object. You now have 1 request (a hit) and 10 misses, which drops your hit rate to under 10% (1 hit out of 11 requests), but performance will likely be a lot better.
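
To make the idea concrete, here is a minimal Python sketch, assuming the python-memcached client and hypothetical key names:

import memcache  # assumption: the python-memcached client is installed

mc = memcache.Client(["127.0.0.1:11211"])

# Naive: one round trip per item -- 90 hits still cost 90 network calls.
def get_items_one_by_one(ids):
    return {i: mc.get("item:%d" % i) for i in ids}

# Better: cache the whole working set under a single (hypothetical) key.
def get_items_as_one_object(cache_key, rebuild):
    items = mc.get(cache_key)      # one get instead of ~90
    if items is None:              # a single miss triggers one rebuild
        items = rebuild()          # e.g. one MySQL query for all the rows
        mc.set(cache_key, items, time=300)
    return items

Even a multi-get (get_multi) helps, since it collapses the round trips into one request, but storing one larger object goes furthest.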

The answer in many cases is to use larger objects for the …

[Read more]
mk-query-digest uses less memory

Daniel changed mk-query-digest to use much less memory. It parsed and aggregated a 2GB MySQL slow query log file in a few dozen megabytes of memory for me yesterday. Thanks to Facebook for sponsoring this work.

Related posts:

  1. mk-query-digest now supports Postgres logs
  2. Learn about mk-query-digest at PgEast 2010
  3. Slides from my session on mk-query-digest at PgEast 2010
[Read more]
A workaround for the performance problems of TEMPTABLE views

MySQL supports two different algorithms for views: the MERGE algorithm and the TEMPTABLE algorithm. These two algorithms differ greatly. A view which uses the MERGE algorithm can merge filter conditions into the view query itself, which gives it significant performance advantages over TEMPTABLE views. A view which uses the TEMPTABLE algorithm has to compute the entire contents of the view before any filter is applied. Computing the entire contents requires a temporary table, and many more rows may be accessed than would have been had the filter been merged into the view query.

As long as a view avoids aggregation, DISTINCT, GROUP BY, ORDER BY, etc., it can use the MERGE algorithm. Unfortunately, this means that views of any significant complexity will almost always use the TEMPTABLE algorithm.
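
A quick way to observe the difference (a sketch, assuming the MySQLdb driver and a hypothetical table t with an indexed id column):

import MySQLdb  # connection parameters here are assumptions

conn = MySQLdb.connect(host="localhost", user="root", db="test")
cur = conn.cursor()

# MERGE: the outer WHERE clause is pushed into the underlying query.
cur.execute("""CREATE OR REPLACE ALGORITHM = MERGE VIEW v_merge AS
               SELECT id, name FROM t""")

# GROUP BY forces materialization: the view is computed before filtering.
cur.execute("""CREATE OR REPLACE ALGORITHM = TEMPTABLE VIEW v_temp AS
               SELECT name, COUNT(*) AS cnt FROM t GROUP BY name""")

# EXPLAIN shows it: v_merge can use an index on t.id, while a query
# against v_temp first builds a temporary table from all of t.
cur.execute("EXPLAIN SELECT * FROM v_merge WHERE id = 1")
print(cur.fetchall())
cur.execute("EXPLAIN SELECT * FROM v_temp WHERE name = 'foo'")
print(cur.fetchall())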

This blog post demonstrates a workaround that allows the convenience of complex views, including those with aggregation (that is views …

[Read more]
Pacemaker, please meet NDB Cluster, or using Pacemaker/Heartbeat to start an NDB Cluster

Customers have always asked me to make NDB Cluster start automatically when the servers boot. For those who know NDB Cluster, making it start automatically is tricky. I know at least two sets of scripts to manage NDB startup: ndb-initializer and the configurator from Johan at www.severalnines.com. If all the nodes come up at about the same time, it is not too bad, but what if one of the critical nodes takes much longer to start because of an fsck on a large ext3 partition? Then a startup script becomes a nightmare. And what if the box on which the script is supposed to run doesn't start at all? That's a lot of cases to handle.

Since all aspects of HA interest me, I was recently reading the …

[Read more]
When should I NOT use InfiniDB?

It's great to see such good interest in InfiniDB, but now that we are seeing increasing numbers of users, we're also seeing people trying to use InfiniDB in places where they shouldn't. The end result is a bad experience for them and a bad impression of InfiniDB, which is a shame. InfiniDB is somewhat like MySQL Cluster: in the right use cases, you can't beat it; in the wrong use cases, it beats you.

So I thought I would quickly speak to when you should NOT use InfiniDB in hope...

How to log Access denied information in the MySQL error log

MySQL's support for AUDITING is really poor.


Some new features should arrive in 5.5 or later, in which we will finally see the AUDIT interface implemented.

But ... who knows what will really happen, and who knows whether it will work or not.

So in the meantime, if you want at least some information printed out, you can use this simple patch.


What you need to do is modify the file mysqld.cc in the sql directory.


In sql/mysqld.cc, look for the function int my_message_sql(uint error, const char *str, myf MyFlags).

Change the code in the function to the one below, but also check that it does not introduce new bugs ;-).

int my_message_sql(uint error, const char *str, myf MyFlags)
{
  /* Every error raised by the server passes through this function,
     which makes it a convenient hook for also writing Access denied
     events (error == ER_ACCESS_DENIED_ERROR) to the error log. */
  THD *thd;
  DBUG_ENTER("my_message_sql");

[Read more]
Sphinx As MySQL Storage Engine (SphinxSE)

Sphinx is a great full-text search engine for MySQL. Installing the Sphinx daemon is straightforward, as you can compile it from source or use a .deb/.rpm package, but SphinxSE is a little trickier since it needs to be installed as a plugin on a running MySQL server. So if you use Debian or CentOS and install MySQL from a .deb or .rpm package, this is how you do it.
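
The core step looks something like this (a sketch, assuming the MySQLdb driver and that the plugin library is named ha_sphinx.so, which can vary by build):

import MySQLdb  # connection parameters here are assumptions

conn = MySQLdb.connect(host="localhost", user="root", passwd="secret")
cur = conn.cursor()

# Register SphinxSE on the running server; the shared library must
# already be in the server's plugin directory (see 'plugin_dir').
cur.execute("INSTALL PLUGIN sphinx SONAME 'ha_sphinx.so'")

# Verify that the SPHINX engine is now available.
cur.execute("SHOW ENGINES")
for row in cur.fetchall():
    print(row)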

MySQL ‘max_allowed_packet’ fix

I was trying to import some data into my local MySQL (Mac) and ran into the following error: #1153 – Got a packet bigger than ‘max_allowed_packet’ bytes. I was storing some images in a LongBlob column, which was causing this… don’t even bother saying it ;-).

In short, you need to locate your MySQL my.cnf config file and add or change the max_allowed_packet setting:

[mysqld]
max_allowed_packet = 50M

At the shell, you can type “mysqladmin” and scroll up a bit to see what the default location options are for my.cnf.
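
If you cannot restart the server right away, the variable can also be raised at runtime (a sketch, assuming the MySQLdb driver and a user with the SUPER privilege; the change only applies to new connections and is lost on restart):

import MySQLdb  # connection parameters here are assumptions

conn = MySQLdb.connect(host="localhost", user="root", passwd="secret")
cur = conn.cursor()

# 50M, matching the my.cnf setting above; requires the SUPER privilege.
cur.execute("SET GLOBAL max_allowed_packet = %d" % (50 * 1024 * 1024))

# New connections will see the new value.
cur.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
print(cur.fetchall())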

In my case, I tried using MySQL Workbench (awesome tool, by the way) to change this parameter. It changed it, but it appeared MySQL wasn’t reading Workbench’s location for my.cnf (as circled at the bottom of the screen-shot). I’m not sure why my.cnf wasn’t in a standard directory like “/etc” or why that location was …

[Read more]
The MySQL Track at Kaleidoscope is set!

The MySQL track at Kaleidoscope in Washington, DC, June 28-July 1st, is set! Here is the schedule; Lincoln VI is the MySQL track room.

Ronald has done a super job and spent a ton of hours in the past several weeks coordinating this effort. Work has kept me much busier than normal, but I have lent some time to the coordination as well. It is a credit mostly to Ronald that we have been able to plan an entire 19-session conference track, complete with confirming speakers, in less than a month. (You may notice the schedule does not have all 19 sessions filled; we are just waiting for some more speakers to confirm details.)

Whether …

[Read more]
How to log all MySQL queries without stressing the MySQL server with the general log

It is possible to capture all the queries sent to MySQL using tcpdump.

This is done by capturing the network traffic. Since it uses the libpcap library, we need root privileges.

Because tcpdump cannot capture queries sent over the socket file, be sure that the requests you want to track do not use it.

Having clarified this limitation, let us assume that all the applications running locally access MySQL via 127.0.0.1 on the standard port 3306.

 

Capture the queries to a file named all_query_tcpdump.log by executing:

tcpdump -i lo port 3306 -s 2000 -w all_query_tcpdump.log

Please note that we capture the first 2000 bytes of each packet (-s 2000), which should be fine in most cases.

 

Having collected the data, we now have to analyze it.
For that we will use …
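
Whatever tool the analysis ends up using, a quick way to peek at the captured queries is a few lines of Python (a sketch, assuming scapy is installed; it ignores queries that span multiple TCP segments):

from scapy.all import rdpcap, TCP  # assumption: scapy is installed

# Read the capture written by tcpdump -w. MySQL client packets carry
# a 4-byte header followed by a command byte; 0x03 is COM_QUERY.
for pkt in rdpcap("all_query_tcpdump.log"):
    if TCP in pkt and pkt[TCP].dport == 3306:
        payload = bytes(pkt[TCP].payload)
        if len(payload) > 5 and payload[4] == 0x03:  # COM_QUERY
            print(payload[5:].decode("utf-8", "replace"))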

[Read more]