Hello ... do you have any estimate for a GA release of MySQL Cluster 7.2? Thanks.
We organized our first OTN Developer Day: MySQL earlier this year in Santa Clara, CA, and the result far exceeded our expectations. Before we kicked off the seminar at 9am, the attendees had already taken every single seat in the beautiful auditorium on Oracle's Santa Clara campus, and there was still a line in front of the registration desk. We recruited MySQL experts from several teams to present, and had great questions and discussions along the way and at the end. I personally received many positive comments saying that the seminar …
The first two Percona Live MySQL conferences in San Francisco and New York were sold-to-capacity events with the bonus of great evening parties. Percona Live’s huge popularity convinced us to take the conference series on the road to London, England. We also decided to broaden learning, content, networking, presentation and marketing opportunities for attendees and sponsors by extending the conference to two days!
Percona Live London takes place October 24-25. We will have one day of tutorials and one day of sessions, and we added exhibit space for sponsors. Percona Live London tickets are offered at a discounted early-bird registration rate until Sept 19th, for those who want to save money. (You do like saving money, right?)
…
It’s a busy summer at Pythian, with our continuing wave of speaking sessions at upcoming community and regional industry events.
Coming to a city near you, watch for Pythian presenting hot Oracle and Microsoft SQL Server database topics:
IN CANADA:
Oracle Technology Days – Montreal
August 9, 2011 – 8:30am – 1pm, Hilton Montreal Bonaventure
- 11:00 am – Join Marc Fielding for Mixed Workload Management for Oracle Exadata
- More Pythian Oracle Exadata resources
- View the Evite
- Register/Inscrire
Oracle Technology Days – Toronto
August 25, …
There are many ways of improving response times for users. Some people spend a lot of time, energy, and money trying to make the application respond as fast as possible at the moment the user makes the request.
Those people may miss an opportunity to do some or all of the processing the application needs at a different point in time. In other words, if you preprocess your data ahead of time, you can reduce the time it takes to complete a request.
Allow me to give you three examples of what I mean:
1) There is a sales report that your managers would like to see on their fancy new dashboards. The query for this report takes 45 minutes to run and may disrupt other work the database server needs to do. You decide to run this report at 3am, when there is very little happening on the database server, and save the results to a separate table (a sketch of this approach follows below). When the dashboard …
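To make example 1 concrete, here is a minimal sketch of the idea using the MySQL event scheduler (a cron job issuing the same statement would work just as well). The table and column names are hypothetical, not taken from the original post:

-- Summary table the dashboard reads from; rebuilt off-hours.
CREATE TABLE IF NOT EXISTS sales_report_daily (
  report_date DATE NOT NULL,
  region      VARCHAR(64) NOT NULL,
  total_sales DECIMAL(12,2) NOT NULL,
  PRIMARY KEY (report_date, region)
);

-- Nightly refresh at 3am; requires the event scheduler to be enabled
-- (SET GLOBAL event_scheduler = ON).
CREATE EVENT IF NOT EXISTS refresh_sales_report
ON SCHEDULE EVERY 1 DAY STARTS '2011-08-02 03:00:00'
DO
  REPLACE INTO sales_report_daily (report_date, region, total_sales)
  SELECT DATE(order_ts), region, SUM(amount)
  FROM orders
  WHERE order_ts >= CURRENT_DATE - INTERVAL 1 DAY
    AND order_ts <  CURRENT_DATE
  GROUP BY DATE(order_ts), region;

The dashboard then queries sales_report_daily directly, so the 45-minute aggregation never runs during business hours.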
I am happy to announce that the first MariaDB book is released!
The book is called MariaDB Crash Course and is written by Ben Forta, who also wrote the MySQL Crash Course book.
Quoting the book description:
"This book will teach you all you need to know to be immediately productive with MySQL. By working through 30 highly focused hands-on lessons, your MySQL Crash Course will be both easier and more effective than you'd have thought possible"
This is great news for new users of SQL and of MariaDB, as it makes it easier for them to get things going quickly!
You can find a link to this book and other recommended MariaDB / MySQL books …
There has been a lot of chatter the past week about Apple replacing MySQL with Postgres in the new OS X Lion Server [U.S. | England | New Zealand ]. Most of it seems to tie things back to Oracle's new stewardship over the MySQL project, with a lot of that stemming from what I would call FUD from the EnterpriseDB folks regarding doom and gloom about the way Oracle might handle the project in the future. Not that the FUD is entirely unwarranted; while Oracle has done a pretty decent job with MySQL so far, looking at what Oracle has done to projects like OpenSolaris certainly would make one queasy. And …
About a month ago I needed to compare tens of thousands of tables in hundreds of databases between a few different servers. The obvious choice was, mk-table-checksum! The only problem was that the tool needs to know the minimum and maximum value of the column by which each table is to be subdivided into chunks and checksummed. This select min(col), max(col) from table locks all write operations on the table, and on a big table that meant downtime.
Looking at the source, it was clear we could make mk-table-checksum run the select min(col), max(col) from table on the read-only slave and use the values to checksum the master.
The change was a subtle modification to the get_range_statistics function, adding:
# Connection details for the read-only slave; adjust host/port/credentials as needed.
my $cxn_string_dc =
    "DBI:mysql:;host=slavehost;port=3306;mysql_read_default_group=client";
my $user = 'user';
my $pass = 'password';
# Connect to the slave; the MIN()/MAX() range statistics are fetched here
# instead of on the master.
my $dbh_slave = DBI->connect($cxn_string_dc, $user, $pass); …
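For illustration only (these are not mk-table-checksum's actual statements), the idea is roughly: fetch the cheap range statistics from the slave, then checksum the master chunk by chunk within that range. The table, columns, and chunk size below are hypothetical:

-- On the read-only slave: the range query that would otherwise block writes
-- on the master.
SELECT MIN(id) AS min_id, MAX(id) AS max_id FROM orders;

-- On the master: checksum one chunk at a time within that range,
-- e.g. min_id = 1, max_id = 1000000, chunk size 10000.
SELECT COUNT(*) AS cnt,
       BIT_XOR(CRC32(CONCAT_WS('#', id, customer_id, amount))) AS crc
FROM orders
WHERE id >= 1 AND id < 10001;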
Abstract: In this article we have a look at the compression options of common zipping tools and their impact on the size of the compressed files and the compression time. Further, we look at the new parallel zip tools, which make use of several cores.
Start with a backup first
From time to time I get into the situation where I have to compress some database files. This usually happens when I have to do some recovery work on a customer's system. Our rule number 1 before starting a recovery is: do a file system backup first before starting!
This is sometimes difficult to explain to a customer, especially if it is a critical system and time is running (and money is being lost).
It also happens that there is not enough space available on the disks (in an ideal world I like to have a bit more than 50% of free space on the disks). Up to now I have used the best compression method. This comes …
I'm posting this here since it has been useful for me, and the blog is a nice place to keep public notes.
If you have database servers with multiple application servers connected to them, you often need to see things like which hosts are connected, how many connections each has, and as which users. Using SHOW PROCESSLIST doesn't work that well, since it gives you a row for each connection.
What we want is an output similar to this:
+-----------------+-----------------+----------+
| host_short      | users           | count(*) |
+-----------------+-----------------+----------+
| slave1          | repl            |        1 |
| slave2          | repl            |        1 |
| localhost       | event_scheduler |        1 |
| 111.111.222.111 | root, foo       |        2 |
| 111.111.222.222 | appuser, bar    |        3 |
| 111.111.222.333 | appuser, moshe  |        9 | …
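The query that produced this output is not included in the excerpt above, but as a rough illustration of the idea, a minimal sketch against information_schema.PROCESSLIST (available in MySQL 5.1 and later) might look like this; the column aliases are chosen to match the output shown:

-- Strip the client port from host, group connections per host,
-- and list the distinct users connected from each one.
SELECT SUBSTRING_INDEX(host, ':', 1) AS host_short,
       GROUP_CONCAT(DISTINCT user ORDER BY user SEPARATOR ', ') AS users,
       COUNT(*) AS `count(*)`
FROM information_schema.PROCESSLIST
GROUP BY host_short
ORDER BY COUNT(*);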