Showing entries 36081 to 36090 of 44922
« 10 Newer Entries | 10 Older Entries »
MySQL 5.1 partitions in practice

This article explains how to test the performance of a large database with MySQL 5.1, showing the advantages of using partitions.

The test database uses data published by the US Bureau of Transportation Statistics. Currently, the data consists of ~113 million records (7.5 GB of data + 5.2 GB of index).

MySQL Conference Registration Open

The MySQL Conference & Expo web site for next year is now live. Although the program is not completely finalized, we've got some of the basic information up, and you can now register. The preliminary schedule of sessions and tutorials has been posted. In the coming weeks, expect more info on keynotes, as well as the …

[Read more]
MySQL Query Profiling Tools, part 0: Ma'atkit Query Profiler

Today I’ve been checking out a new client environment. My mission is to figure out, cold, some of the characteristics of the queries being run, and in particular whether they’re “good” or “bad”. High on my list of “tools I really want to check out” has been Ma’atkit’s Query Profiler. They’re very different tools. Ma’atkit’s query [...]

What's New In The Upcoming 5.0.11 Release

Apart from more than 60 bug fixes, the upcoming MySQL Workbench 5.0.11 release will contain a few major improvements with respect to the last release two weeks ago.

  • Partitioning settings are now fully supported during reverse engineering of SQL scripts and live databases, and during CREATE / ALTER generation for synchronization. We had a preliminary implementation, but it has been replaced by full parser support.
  • Addition of a standard INSERT grid input. Instead of having to type in full INSERT statements, initial/test data can now be entered using a data grid.
  • Improved formatting of generated SQL output.
  • Improved GRT Shell console, in preparation for the upcoming tutorials on scripting and plugin-writing possibilities.

The show-stopper bug that was holding back the release is now fixed. We will run detailed tests tomorrow and, if nothing else comes up, will upload to the …

[Read more]
How to Analyze Slow Query Logs in a Production Database

I have 25 GB of data, and I keep the parsed slow query logs in my database. How am I going to analyze these slow query logs? Sure, these logs can tell you the top 10 slow queries, but is that their only use?

What if I migrate my database to new hardware? How do I compare the slow query logs from the two machines? The problem is that this is a live system, so the query volumes are not the same; it is like comparing apples and oranges.

After a cup of coffee, I realized that I would just do the following:

1. Get the high-level average count of slow queries and the average query time for one week before and after the migration.
2. Drill down into the average count of slow queries and the average query time on a server-by-server basis.
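The two steps above can be sketched in Python, assuming the parsed slow-log rows are available as `(server, timestamp, query_time)` tuples (the row format, function name, and sample data here are hypothetical, not the author's actual schema):

```python
from collections import defaultdict
from datetime import datetime

def summarize(rows, start, end):
    """Per-server average slow-query count per day and mean query time
    for rows whose timestamp falls inside [start, end).
    Assumed row format: (server, timestamp, query_time_seconds)."""
    per_server = defaultdict(list)
    for server, ts, qtime in rows:
        if start <= ts < end:
            per_server[server].append(qtime)
    days = max((end - start).days, 1)
    return {
        server: {
            "slow_per_day": len(times) / days,
            "avg_query_time": sum(times) / len(times),
        }
        for server, times in per_server.items()
    }

# Made-up sample rows; in practice these would come from the table
# holding the parsed slow query log.
rows = [
    ("db1", datetime(2007, 11, 1, 10), 4.0),
    ("db1", datetime(2007, 11, 2, 11), 6.0),
    ("db2", datetime(2007, 11, 3, 9), 2.0),
]
stats = summarize(rows, datetime(2007, 11, 1), datetime(2007, 11, 8))
print(stats["db1"])  # db1: 2 slow queries over 7 days, mean 5.0 s
```

Running the same summary over the week before and the week after the migration gives directly comparable per-server numbers, sidestepping the apples-and-oranges problem of raw query counts.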

MySQL 5.0.51 uploaded to Debian

Yesterday evening I uploaded MySQL 5.0.51-1 to Debian unstable. Unfortunately, it took a bit longer than expected to prepare the updated package, mainly because 5.0.51 was not MySQL's best release, and I had to backport and include some fixes from 5.0.52 and 5.0.54 to get it working (see the changelog for details). I then had to wait a few more days until 5.0.45-5 got into testing before I could finally upload 5.0.51-1 to unstable.


The most important changes are:

  • The test suite is now enabled during the build process. This takes some time, but it ensures that everything is fine with the build. I already found two gcc 4.2.x-related bugs in 5.0.51, which were fixed before 5.0.51-1 was uploaded to Debian.
  • Manpages re-added; MySQL AB has put them under the GPL, so we could include …
[Read more]
Commodity Hardware (Disk and the Performance Myth)

I have been out at several client sites over the last three to four months, and the one thing I see time and time again is the proliferation of commodity hardware. Not only are smaller servers popping up all over the place, but these servers are getting more and more powerful. You can no longer order hardware without getting dual- or quad-core machines capable of running a 64-bit OS. Machines built two or three years ago are nothing more than eBay fodder now, items that end up collecting dust in a back room somewhere. Look at the evolution we have gone through. Since 2005, look at what has become affordable in the commodity space: dual-core AMD and Intel parts, quad-core Intel, AMD Opterons, 64-bit processing, and a slew of chipsets and manufacturing processes in between. Cache on these processors has jumped from 512 KB to 8 MB in some cases. Not only has the processor seen marked improvements; so has the architecture on the motherboard. The …

[Read more]
Fixing column encoding mess in MySQL

Just had an interesting issue: an encoding mess in a column containing non-ASCII (Russian) text. The solution was not immediately obvious, so I decided it's worth sharing.

The column (actually, the whole table) was created with DEFAULT CHARSET cp1251, and most of the data was indeed in the proper cp1251 national encoding. However, because the web application failed to set the connection encoding properly, some of the rows were actually in UTF-8. That needed to be fixed.

Simply using CONVERT(column USING xxx) did not work, because MySQL treated the source data as if it were in cp1251. One obvious solution would be a throwaway PHP script that would SET NAMES cp1251, pull the offending rows (they'd come out as UTF-8), iconv() them to proper cp1251, and UPDATE them with the new values.

However, it's possible to fix the issue within MySQL itself. The trick is to tell it to treat the string coming from the table as binary, and then do the charset …
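The byte-level logic behind that trick can be illustrated outside MySQL. A minimal Python sketch (the sample string is made up; Python's codecs stand in for MySQL's binary cast and charset conversion):

```python
# A Russian string the application wrote as UTF-8 bytes into a column
# that MySQL believes holds cp1251.
original = "привет"
stored_bytes = original.encode("utf-8")  # what actually sits on disk

# Reading the column normally: the bytes get decoded as cp1251,
# producing mojibake.
mojibake = stored_bytes.decode("cp1251")

# The fix mirrors the in-MySQL trick: drop back to the raw bytes
# (the "treat it as binary" step), then decode them with the charset
# they are really in (UTF-8).
recovered = mojibake.encode("cp1251").decode("utf-8")
print(recovered)  # → привет
```

The key insight in both Python and MySQL is that no real conversion is needed; the bytes are already valid UTF-8, and only the label attached to them has to change.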

[Read more]
WordCamp Melbourne: making money with Wordpress, bbPress, caching, Wordpress Sandbox

On November 17, 2007, I went to the inaugural WordCamp Melbourne. It was a truly interesting event: crowded, filled to the brim, and held at the pretty amazing Watermark Bar in Docklands. It was a really warm day, and the only complaint would be that Watermark put us in a glass dome with only two fans and no air-conditioning; quite the greenhouse we were in! Food and drink were good, as were the talks in general. I took some notes and am placing them online now (late, but better than never). Note that the event was sold out, so kudos to James Farmer for organising it.

Making money with Wordpress - Darren Rowse

- exclusive content, for a paid area (like forums?)
- textlinkads/paid links/paid reviews - nasties from Google, so this can be an …

[Read more]
Multi-delete in libmemcached: it is all about the pipelining...

So what happens when you pipeline a bunch of data into a socket?

Much better performance:

Testing mdelete_generate 1.542 [ ok ]
Testing delete_generate 5.720 [ ok ]

The theory behind all of the refactoring I have done for the next version of libmemcached is to allow me to pipeline more data into a socket. In other words, operations that can be packed into a single TCP payload should be. This allows me to make much better use of the network.
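The idea can be sketched with memcached's text protocol, where a delete is the line `delete <key>\r\n`. A Python illustration of packing many commands into one payload (this is not libmemcached's actual code, and the keys are made up):

```python
def naive_deletes(keys):
    """One buffer per key: each one costs a separate write to the
    socket, and typically a separate TCP round trip."""
    return [f"delete {key}\r\n".encode() for key in keys]

def pipelined_deletes(keys):
    """Pack every delete command into a single payload, so the whole
    batch can go out in as few TCP segments as possible."""
    return b"".join(f"delete {key}\r\n".encode() for key in keys)

keys = ["user:1", "user:2", "user:3"]
payload = pipelined_deletes(keys)
print(payload)
# One buffer holding three commands instead of three separate writes.
```

The roughly 3-4x speedups in the benchmark numbers above come from exactly this: the per-round-trip latency is paid once per batch instead of once per key.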

I've had support for this in get since nearly the beginning:

Testing get_read 6.848 [ ok ]
Testing mget_read 2.786 [ ok ]

It has always made a big difference in performance. Delete is just the first operator; next on the list are all of the storage operators (set, add, cas, and replace).

BTW, the key to making the multi-delete operations work? Refactoring.

The more time I spend on making …

[Read more]