I had this idea around MySQL 5.4. It is meant to scale better on
multi-core machines than, say, standard 5.1, and this seems to have
been proven: 5.4 is still in beta, but testing has shown that its
scalability is definitively better.
What I wanted to know was how well 5.4 would work on a lower-spec
box compared to 5.1. One reason for wanting to test this is that I
am currently in my summer house and have no multi-core 64-bit
machines around. All I have in terms of Linux boxes is an old Dell
laptop with Gentoo on it, so I will keep the setup as similar as
possible between 5.1 and 5.4.
So, I download the 5.4 sources, as there are no 32-bit Linux
binaries yet, and do a simple configure and make. And… I fail. Out
of the box, some of the inline optimizations in InnoDB (the config I
built with was max-no-ndb) will trip up gcc when using -O3
optimization (OK, I admit, I'm not 100% sure that this is the
cause, but it sure looks …
If you’ve attended just one of my recent talks, either at the UC, LOSUG or MySQL University, you should know that MySQL 5.1.30 will be in the next official drop of OpenSolaris.
In fact, you can find MySQL 5.1 in the current pre-release builds; I just downloaded build 111 of the future 2009.06 release.
Key things about the new MySQL 5.1 in OpenSolaris:
- Contains the set of DTrace probes that also exists in MySQL 5.4 (see the DTrace documentation)
- As with 5.0, we have SMF integration, so you can start, stop, monitor and change some of the core configuration through SMF
- Directory layout is similar to 5.0, with a version specific directory (/usr/mysql/5.1), and the two can coexist if you want to handle a migration from 5.0 to 5.1
To install MySQL 5.1, use the pkg tool to …
For anybody interested in trying out our new MySQL Workbench 5.2 Alpha2 I have prepared a short Quick-Tour that will show you the most important steps to successfully use WB to query your databases.
Manage Your Connections
MySQL Workbench 5.2 introduces a new Home Page that makes it very easy to access all your Database Connections and EER Models. It features the Workbench Central Panel, the Database Connections Panel and the Model Files Panel.
In order to be able to connect to your MySQL server you have to create a new Database Connection so MySQL Workbench knows about your server instance. Follow the steps shown below to create your first Database Connection.
Connecting to and Working with the MySQL Server
Once you have created your …
A couple of weeks ago I wrote a Lua script for MySQL Proxy that
transforms the Proxy into a key=>value lookup dictionary.
But I couldn't just stop there, so I decided to add replication
to it :).
The basic idea is that you have one proxy that acts as a master
and can handle all write/read operations. Once it receives a write
query, it sends that query to the slave proxy instances, and after
the last slave gets the query, the master returns a confirmation
to the MySQL client.
And of course, you send your read queries to the slave proxy
instances.
Show me the code.
It is available on the …
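The real script is Lua running inside MySQL Proxy; until you fetch it, the fan-out idea above can be sketched in Python. The class names and the in-memory key=>value store here are my own illustration, not the actual code:

```python
# Sketch (assumptions, not the actual Lua script) of the master/slave
# proxy idea: writes fan out to every slave instance and are confirmed
# to the client only after the last slave has applied them; reads go
# to a slave.

class SlaveProxy:
    """Stands in for a slave proxy instance holding the key=>value dictionary."""
    def __init__(self):
        self.store = {}

    def apply_write(self, key, value):
        self.store[key] = value

    def read(self, key):
        return self.store.get(key)

class MasterProxy:
    """Stands in for the master proxy that routes all write operations."""
    def __init__(self, slaves):
        self.slaves = slaves

    def write(self, key, value):
        # Send the write to each slave proxy in turn; the confirmation
        # is returned only after the last slave has received it.
        for slave in self.slaves:
            slave.apply_write(key, value)
        return "OK"

slaves = [SlaveProxy(), SlaveProxy()]
master = MasterProxy(slaves)
master.write("answer", "42")
print(slaves[1].read("answer"))  # 42 -- the write reached the last slave
```

Note that in this scheme the master blocks until every slave has the write, trading write latency for read-your-writes consistency on the slaves.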
I have to confess I'm kind of a wannabe hacker. I think of myself as a developer, yet in practice I always end up being a customer-facing person like a Sales Engineer, a Trainer, or basically anything where you do more talking than coding. But there is this tiny little Drupal module, footnotes, that I have actually been the proud maintainer of for several years now.
dbForge Studio for MySQL has Database Designer functionality that allows you to build and view the structure of your database using diagrams. You can read more about it in the Getting Started with Database Designer article.
Database Designer can also save a database diagram as an image.
Saving a Database Diagram as an Image
To save a database diagram as an image, perform the following steps:
1. Right-click on the diagram and select “Export to image…”.
Figure 1: Export to an Image
2. Choose the image format in which you want to save your diagram.
…
We've been looking at high concurrency level issues with Drizzle and MySQL. Jay pointed me to this article on the concurrency issues due to shared cache lines, and I decided to run some tests of my own. The results were dramatic, and anyone who is writing multi-threaded code needs to be aware of current CPU cache line sizes and how to optimize around them.
I ran my tests on two 16-core Intel machines, one with a 64-byte cache line and one with a 128-byte cache line. First off, how did I find these values?
one:~$ cat /proc/cpuinfo | grep cache_alignment
cache_alignment : 64
...
two:~$ cat /proc/cpuinfo | grep cache_alignment
cache_alignment : 128
...
You will see one line for each CPU. If you are not familiar with /proc/cpuinfo, take a closer look at the full output. It's a nice …
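If you'd rather pull the value out programmatically than eyeball the grep output, here is a small Python sketch that parses the cache_alignment field, one value per CPU, assuming the standard "key : value" layout of /proc/cpuinfo shown above:

```python
# Parse the cache_alignment field out of /proc/cpuinfo text, one value
# per CPU. A sketch assuming the "key : value" line layout of Linux's
# /proc/cpuinfo; on a real box, read open("/proc/cpuinfo").read().

def cache_alignments(cpuinfo_text):
    values = []
    for line in cpuinfo_text.splitlines():
        if line.startswith("cache_alignment"):
            # Everything after the colon is the per-CPU byte value.
            values.append(int(line.split(":")[1]))
    return values

sample = ("processor\t: 0\ncache_alignment\t: 64\n"
          "processor\t: 1\ncache_alignment\t: 64\n")
print(cache_alignments(sample))  # [64, 64]
```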
A while ago Arjen Lentz blogged about transient MySQL errors that can occur when using a transactional storage engine like, say, InnoDB.
Since I'm a fan of the reliability and automated recovery that InnoDB provides, I use it for all the Drupals that I host. However, on a very busy site, this may lead to deadlocks. These in turn lead to users seeing errors, which is something I'd like to avoid. Especially if the error could be prevented.
What I've done to make use of these error codes is change the _db_query() function in includes/database.mysql.inc and wrap the call to mysql_query() in a loop.
The function now checks the returned error code and if the code indicates a transient error, it will try to rerun the query after sleeping for 50 milliseconds. It will try each query up to three …
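The actual patch wraps mysql_query() inside Drupal's _db_query() in PHP; the same retry loop can be sketched in Python. The error codes below are MySQL's usual transient candidates (1213 deadlock, 1205 lock wait timeout), and the flaky_query function is a hypothetical stand-in, not Drupal code:

```python
import time

# Transient MySQL error codes worth retrying:
# 1213 = ER_LOCK_DEADLOCK, 1205 = ER_LOCK_WAIT_TIMEOUT.
TRANSIENT_ERRORS = {1205, 1213}
MAX_ATTEMPTS = 3
RETRY_SLEEP = 0.05  # 50 milliseconds between attempts

def run_with_retry(query_fn, query):
    """Run query_fn(query) -> (result, errno), retrying transient errors."""
    for attempt in range(MAX_ATTEMPTS):
        result, errno = query_fn(query)
        if errno not in TRANSIENT_ERRORS:
            return result, errno  # success, or a permanent error
        if attempt < MAX_ATTEMPTS - 1:
            time.sleep(RETRY_SLEEP)
    return result, errno  # still transient after the last attempt

# Hypothetical query function: deadlocks once, then succeeds.
calls = {"n": 0}
def flaky_query(q):
    calls["n"] += 1
    return (None, 1213) if calls["n"] == 1 else ("row", 0)

print(run_with_retry(flaky_query, "SELECT 1"))  # ('row', 0)
```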
Would you like to find out how to build a continuous ETL process
integrating source systems, a MySQL data warehouse, and the
Mondrian OLAP engine?
I'm going to be hosting a webinar tomorrow describing how to do
this using SQLstream. (Basically a repeat of the webinar I gave
at the MySQL conference this year, but many of you missed
it.)
Join me and Damian Black, CEO of SQLstream, on the webinar at
11am PDT/2pm EDT tomorrow, Wednesday 27th May. To register for
the webinar, visit https://www2.gotomeeting.com/register/668399275.