What are the skills of a MySQL DBA? Here are my 2 cents.
1. A solid understanding of database systems, including design and
how data is processed.
2. A scripting language you are handy with, such as Perl, Python,
or even shell.
3. MySQL Backup, High Availability and Performance Tuning.
4. The SQL language.
Just finished reading this:
http://dev.mysql.com/tech-resources/articles/testing-partitions-large-db.html
His comments on partitioning affecting Archive are dead on. If you can keep it from
scanning, then it works better (though when all engines have to
scan, Archive is normally a big factor of improvement over the
other engines).
Archive's compression cannot do much with a single-column table
like the one he is using; it takes too much effort for that (and
while there are optimizations for this, I would rather just write
a column-oriented store some day to solve these sorts of
problems).
It does make me wonder how those tests would come out with
6.0. He has a multi-processor box, and in 6.0 Archive uses
multiple threads for IO to …
This article explains how to test the performance of a large database with MySQL 5.1, showing the advantages of using partitions.
The test database uses data published by the US Bureau of Transportation Statistics. Currently, the data consists of ~ 113 million records (7.5 GB data + 5.2 GB index).
The MySQL Conference & Expo web site for next year is now live. Although the program is not completely finalized, we've got some of the basic information up and you can now register. The preliminary schedule of sessions and tutorials has been posted. In the coming weeks expect more info on keynotes, as well as the …
Today I’ve been checking out a new client environment. My mission is to figure out (cold) some of the characteristics of the queries being run, and particularly whether they’re “good” or “bad”. In my arsenal of “tools I really want to check out” has been Maatkit’s Query Profiler. They’re very different tools. Maatkit’s query [...]
Apart from more than 60 bug fixes, the upcoming MySQL Workbench 5.0.11 release will contain a few major improvements since the last release two weeks ago.
- The partitioning settings are now fully supported during reverse engineering of SQL scripts and live databases, as well as during CREATE / ALTER generation for synchronization. We had a preliminary implementation, but this has been replaced by full parser support.
- Addition of a standard insert grid input. Instead of having to type in full INSERT statements, initial/test data can now be entered using a data grid.
- Improved formatting of generated SQL output.
- Improved GRT Shell console. This is in preparation for the upcoming tutorials on scripting and plugin-writing possibilities.
The show-stopper bug that was holding back the release is now fixed. We will run detailed tests tomorrow and, if nothing else comes up, will upload to the …
I have 25 GB of data, and I keep the parsed slow query logs in my
database. How am I going to analyze these slow query logs? Sure,
these logs can tell you the top 10 slow queries, but is that
their only use?
What if I migrate my database to new hardware? How do I compare
the slow query logs from the two machines? The problem is that this
is a live system, so the query volumes are not the same; it is
like comparing apples and oranges.
After a cup of coffee, I realized that I would just do the
following:
1. Get the high-level average count of slow queries and average
query time for one week before and after the migration.
2. Drill down into the average count of slow queries and average
query time on a server-by-server basis.
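The two steps above can be sketched in a few lines of Python, assuming the parsed slow-log rows carry a server name, a before/after period marker, and a query time (the row layout and values here are hypothetical, just to show the aggregation):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical parsed slow-log rows: (server, period, query_time_seconds),
# where period marks the week "before" or "after" the migration.
rows = [
    ("db1", "before", 2.4), ("db1", "before", 3.1),
    ("db1", "after", 1.2),
    ("db2", "before", 5.0),
    ("db2", "after", 0.9), ("db2", "after", 1.1),
]

def summarize(rows):
    """Slow-query count and average query time per (server, period)."""
    buckets = defaultdict(list)
    for server, period, qtime in rows:
        buckets[(server, period)].append(qtime)
    return {key: (len(times), mean(times)) for key, times in buckets.items()}

for (server, period), (count, avg_time) in sorted(summarize(rows).items()):
    print(f"{server} {period}: {count} slow queries, avg {avg_time:.2f}s")
```

The same aggregation works at the high level (drop the server key) or per server, which covers both steps.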
Yesterday evening I uploaded MySQL 5.0.51-1 to Debian unstable.
Unfortunately it took a bit longer than expected to prepare the
updated package, mainly because 5.0.51 was not MySQL's best
release, and I had to backport and include some fixes from 5.0.52
and 5.0.54 to get it working (see the changelog for details). Then I had to wait a few
more days until 5.0.45-5 got into testing before I could finally
upload 5.0.51-1 to unstable.
The most important changes are:
- The test suite is now enabled during the build process, which takes some time but ensures that everything is fine with the build. I had already found two gcc 4.2.x-related bugs in 5.0.51, which were fixed before 5.0.51-1 was uploaded to Debian.
- Manpages re-added; MySQL AB has put them under the GPL, so we could include …
I have been out on several client sites over the last 3-4 months, and the one thing that I see time and time again is the proliferation of commodity hardware. Not only are smaller servers popping up all over the place, but these servers are getting more and more powerful. Now you cannot order hardware without getting dual- or quad-core machines with the capability to run a 64-bit OS. The machines built 2 or 3 years ago are nothing more than eBay fodder now, items that end up collecting dust in a back room somewhere. Look at the evolution we have gone through. Since 2005, look at what has become affordable in the commodity space: we have got dual-core AMD & Intel, quad-core Intel, AMD Opterons, 64-bit processing, and a slew of chipsets and manufacturing processes in between. Cache on these processors has jumped from 512KB to 8MB in some cases. Not only has the processor seen marked improvements, so has the architecture on the motherboard. The …
Just had an interesting issue with an encoding mess in a column containing non-ASCII (Russian) text. The solution was not immediately obvious, so I decided it was worth sharing.
The column (actually the whole table) was created with DEFAULT CHARSET cp1251, and most of the data was indeed in the proper cp1251 national encoding. However, because the web application failed to properly set the encoding, some of the rows were actually in UTF-8. That needed to be fixed.
Simply using CONVERT(column USING xxx) did not work because MySQL treated the source data as if it were in cp1251. One obvious solution would be to write a throwaway PHP script that would SET NAMES cp1251, pull the offending rows (they'd come out in UTF-8), iconv() them to proper cp1251, and UPDATE them with the new values.
However, it is possible to fix the issue within MySQL. The trick is to tell it to treat the string coming from the table as binary, and then do charset …
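The byte-level logic behind that trick can be sketched outside MySQL. This Python round-trip (using a made-up sample word, not data from the actual incident) shows why dropping to raw bytes and re-decoding recovers the text:

```python
# A Russian word encoded as UTF-8, but sitting in a column that
# MySQL believes is cp1251 (the situation described above).
raw = "тест".encode("utf-8")

# What comes back when those bytes are interpreted as cp1251:
mojibake = raw.decode("cp1251")  # looks like garbage, e.g. 'С‚РµСЃС‚'

# The fix mirrors the SQL trick: go back to the raw bytes (binary),
# then decode them with the charset they were really written in.
fixed = mojibake.encode("cp1251").decode("utf-8")
print(fixed)  # the original word again
```

The same two-step conversion, expressed in SQL via CAST to BINARY and CONVERT ... USING, is what makes the in-database fix work.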