I've worked on DTrace probes for a while now. It's
a really interesting tool. I've worked on MySQL Cluster
code since 1996 but this is the most advanced tool
I've used to see exactly what's going on inside the
MySQL Server and the data nodes.
I'm still at an early stage of using these DTrace probes, and there is still some work to do before they are publishable, but one can already see very clearly what is going on inside the processes in real time.
My first discovery was that a CPU percentage reported as 1% in prstat on Solaris can actually mean the process is using 64% of a single CPU thread: prstat reports the percentage of the machine's total CPU resources (here, a machine with 64 hardware threads), which is different from what I'm used to from top.
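The conversion between prstat's whole-machine percentage and per-thread utilization is simple arithmetic; a minimal sketch, assuming a 64-hardware-thread machine as in the observation above:

```python
# prstat on Solaris reports CPU as a percentage of ALL hardware threads,
# unlike top's per-CPU convention. On a 64-thread machine, 1% of the
# total therefore corresponds to 64% of a single thread.
def prstat_to_per_thread(prstat_pct, hw_threads):
    """Convert prstat's whole-machine CPU% to percent of one thread."""
    return prstat_pct * hw_threads

print(prstat_to_per_thread(1.0, 64))  # 64.0
```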
The benchmark I'm analysing is the same DBT2 I've used in
a fairly long line of analysis on MySQL Cluster performance
over the last 2 years. This …
Log Buffer, the weekly review of database blogs, is 100 editions (and almost two years) old today! Lewis Cunningham has returned to LB to publish The Big 100th edition of LB on An Expert's Guide to Oracle Technology.
No speech, but I would like to thank Log Buffer’s readers and especially all of Log Buffer’s editors for making LB a worthwhile and fun stop in the database “blogosphere”. It’s very easy to see why LB editors are successful in what they do — they are consistently enthusiastic, diligent, and adaptable. And I enjoy working with them.
Okay, okay — I can hear the orchestra starting to play me off, so …
According to the manual, FLUSH LOGS is supposed to:
Closes and reopens all log files. If binary logging is enabled, the sequence number of the binary log file is incremented by one relative to the previous file. On Unix, this is the same thing as sending a SIGHUP signal to the mysqld server (except on some Mac OS X 10.3 versions where mysqld ignores SIGHUP and SIGQUIT).
If the server is writing error output to a named file (for example, if it was started with the --log-error option), FLUSH LOGS causes it to rename the current error log file with a suffix of -old and create a new empty log file. No renaming occurs if the server is not writing to a named file (for example, if it is writing errors to the console).
There is a bug, however: when the error log is written to a non-default path, FLUSH LOGS does not actually behave as specified …
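To see concretely what the manual promises (and what the bug breaks), here is a small Python sketch of the documented rename-and-recreate behavior for a named error log. This only mimics the documented semantics, it is not what mysqld itself does internally, and the helper name is my own:

```python
import os

def flush_error_log(log_path):
    """Mimic the documented FLUSH LOGS behavior for a named error log:
    rename the current file with an -old suffix, then create a new
    empty log file at the original path."""
    if os.path.exists(log_path):
        os.replace(log_path, log_path + "-old")
    open(log_path, "w").close()  # new empty error log
```

Per the bug described above, mysqld does not carry out this rename as specified when the error log lives at a non-default path.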
I've been doing some sales calls to prospects and customers in the Midwest over the last week or so. I like to do this periodically to make sure I'm not just drinking the open source Kool-Aid, but really hearing from customers, prospects, and the sales reps in the field. One advantage of MySQL being part of Sun is that we are now able to get appointments with CIOs and CTOs of Fortune 500 companies more readily than before. These are large accounts that have significant scale and expertise in IT, but are usually more conservative than most of the west coast...
As you can see in the MySQL Workbench Edition feature grid, Live Schema Synchronization is a Standard Edition feature only. But that does not mean you cannot get the same functionality in the OSS Edition in an offline scenario, which is even preferable in some cases.
- Create an SQL CREATE script from your model
  You might already have the SQL CREATE script if you started your model with an import of an existing schema. If you started designing your model from scratch inside Workbench, you are going to export an SQL CREATE script anyway, in order to create the initial schema on the database server.
- Update your Workbench model
  At this point your database is already running. But as we all know, you always have to make changes to your first design. Make the necessary changes to the model.
- Export …
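The offline workflow above boils down to comparing two generated CREATE scripts. Workbench's synchronization does this properly, but as a rough illustration (not Workbench's actual logic, and the schema here is invented), even a plain textual diff of the old and new scripts shows what changed:

```python
import difflib

# Two versions of a generated CREATE script (hypothetical schema).
old_script = """CREATE TABLE customer (
  id INT PRIMARY KEY,
  name VARCHAR(100)
);"""

new_script = """CREATE TABLE customer (
  id INT PRIMARY KEY,
  name VARCHAR(100),
  email VARCHAR(255)
);"""

# Unified diff of the two scripts: added/removed lines hint at the
# ALTER statements you need to apply to the live schema.
diff = list(difflib.unified_diff(
    old_script.splitlines(), new_script.splitlines(), lineterm=""))
print("\n".join(diff))
```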
How To Repair MySQL Replication
If you have set up MySQL replication, you probably know this problem: sometimes invalid MySQL queries cause replication to stop working. In this short guide I explain how you can repair replication on the MySQL slave without having to set it up from scratch again.
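The usual skip-one-event repair sequence can be sketched in Python; the `conn` object stands in for any DB-API-style connection to the slave (the helper name and driver are my own, not part of the original guide):

```python
# Statements to skip a single invalid event on a MySQL slave.
# SQL_SLAVE_SKIP_COUNTER tells the slave SQL thread to ignore the
# next N events from the relay log before resuming.
REPAIR_STATEMENTS = (
    "STOP SLAVE",
    "SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1",
    "START SLAVE",
)

def skip_bad_event(conn):
    """Run the standard skip-one-event repair sequence on a slave."""
    cur = conn.cursor()
    for stmt in REPAIR_STATEMENTS:
        cur.execute(stmt)
```

Only skip an event you have actually inspected (for example via SHOW SLAVE STATUS); skipping blindly can leave the slave's data inconsistent with the master.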
While this blog is co-authored by the whole MySQL Telecom team, many members in or around the team also write personal blogs, which you will find very useful. So please follow me on a tour of the absolute top MySQL Cluster blogs in the world:
Johan Andersson is the MySQL Cluster Principal Consultant and has been with MySQL Cluster since the Ericsson days. He travels around the world to our most demanding customers and shares his guru advice. Rumor has it that recently, on a training gig, the students made him sign their MySQL t-shirts; can you get closer to living like a rock star than that? Occasionally he also shares some great tips and status info on his blog. Right now, for instance, you can find a set of handy scripts to manage all of your MySQL Cluster nodes from one command line; definitely recommended to try!
…
MySQL 5.1 is nearing release, with the current release candidate being 5.1.24.
The most important new feature, in my eyes, is the new partitioning capability. When I get some time, I will write up a more complete post on my experiences so far with 5.1 partitioning, but I am going to try to keep the turnover on posts a bit higher, and post smaller things on here more regularly.
Partitioning has the potential to make large tables in MySQL manageable once again. This is music to the ears of anyone who has had the misfortune of learning, the hard way, about MyISAM's often painfully slow “Repair by keycache” loading and repairing of large tables with unique keys. Add that to MyISAM's propensity for table corruption, especially with large tables, and you have a ticking time bomb on many pre-5.1 servers out there. If you are lucky, you can repair a …
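As a taste of what 5.1 partitioning looks like, here is a sketch of a RANGE-partitioned table held as a DDL string; the table and column names are invented for illustration, not from any real schema:

```python
# DDL sketch: a log table split by year, so maintenance and repair can
# touch one partition's files at a time instead of one huge table.
PARTITIONED_DDL = """
CREATE TABLE access_log (
  id BIGINT NOT NULL,
  logged_at DATETIME NOT NULL
) ENGINE=MyISAM
PARTITION BY RANGE (YEAR(logged_at)) (
  PARTITION p2006 VALUES LESS THAN (2007),
  PARTITION p2007 VALUES LESS THAN (2008),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
"""
print(PARTITIONED_DDL)
```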
When backing up MySQL databases, most people compress them, which can make good sense in terms of backup and recovery speed as well as the space needed, or can be a serious bottleneck, depending on the circumstances and the approach used.
First I should mention that this question mainly arises for medium and large databases: for databases below 100GB in size, compression performance is usually not a problem (though the backup's impact on server performance may well be).
We also assume the backup is done at the physical level (a cold backup, slave backup, InnoDB Hot Backup, or snapshot backup), as this is the only practical approach at this point for databases of a decent size.
There are two important compression questions to decide for your backups: where to do the compression (on the source or the target server, if you back up over the network) and which compression software to use.
Compression on the source server is the most typical approach, and it is great, …
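The speed-versus-size trade-off behind that choice of compression software is easy to feel even without a real backup; a tiny sketch using zlib's compression levels (standing in for gzip's -1 … -9) on synthetic data:

```python
import zlib

# Repetitive sample standing in for a dump file; real backup data
# compresses differently, but the level trade-off has the same shape.
data = b"INSERT INTO t VALUES (42, 'abcdef');\n" * 20000

fast = zlib.compress(data, 1)   # fastest, biggest output
small = zlib.compress(data, 9)  # slowest, smallest output

print(len(data), len(fast), len(small))
```

On the source server, a fast low level reduces the CPU stolen from the running database; on the target server, a slower high level can be worth it for the storage savings.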