While hunting for a bug today, I was looking at the set_current_time() call in memcached and noticed this bit of code:
new_time = (rel_time_t) (time(0) - stats.started);
What is the issue? Time!
No two time calls cost the same.
The difference?
gettimeofday() 14.558
time() 14.664
clock_gettime() 13.958
All of these were compared in a loop. The obvious winner is
clock_gettime(), though it is Linux-specific, so an ifdef is needed
so that other platforms can fall back to gettimeofday().
I suspect it can be made to be faster :)
Evgen Potemkin is a developer on the query optimizer team at MySQL; we've been working together for about 3 years now. So, it was a bit unexpected to read in the PostgreSQL news that his CONNECT BY patch had made it into PostgreSQL. No, we don't have people coding for two DBMSes at the same time (yet?); the patch was developed years before Evgen joined MySQL.
One of the questions raised by the prospect of becoming a part of Sun was whether and how we would handle being part of a company that develops another DBMS as well. I suppose this patch news is a good start.
UPDATE
It turns out the patch is not in the PostgreSQL tree after all.
It is just that people at www.postgresql.at (which is NOT just a
different name for www.postgresql.org; that was my error)
merged the patch into an 8.3 version of PostgreSQL and made the …
I've finally found time to put together a couple of wiki pages describing the work we're doing on subquery optimizations in MySQL 6.0:
- The Subquery_Works page has an overview of what we've done so far, what we're doing now, and our future plans.
- The 6.0 Subquery Optimization Cheatsheet has a short, easy-to-read description of what has been released in MySQL 6.0.3. That should be enough if you just want to get the new version and see whether your subquery is, or can be made, faster. If you want to know more details, the nearest occasion is my MySQL University session on February 28.
- I've done a quick preliminary assessment of the impact of new optimizations. The …
As documented, the FLOAT datatype does not guarantee precise
storage of decimal values. But where the non-precision is
apparent can be a little confusing - take the following
example:
mysql> CREATE TABLE my_table (a FLOAT);
Query OK, 0 rows affected (0.25 sec)
mysql> INSERT INTO my_table (a) VALUES ('2.2');
Query OK, 1 row affected (0.08 sec)
mysql> SELECT * FROM my_table WHERE a = 2.2;
Empty set (0.05 sec)
mysql> SELECT a, IFNULL(a,0) FROM my_table;
+------+-----------------+
| a    | IFNULL(a,0)     |
+------+-----------------+
|  2.2 | 2.2000000476837 |
+------+-----------------+
1 row in set (0.00 sec)
mysql> SELECT a, a+0 FROM my_table;
+------+-----------------+
| a    | a+0             |
+------+-----------------+
|  2.2 | 2.2000000476837 |
+------+-----------------+
1 row in set (0.00 sec)
Need precision? Try DECIMAL.
Today I attended a WebEx demo of MySQL Enterprise Monitor from MySQL. Since many of us at Yahoo! were interested in learning about this tool, we arranged the demo with MySQL, who were kind enough to host the event.
I have always thought a dedicated monitoring and alerting system was missing from the MySQL product line, and I can see that this tool is heading in the right direction to capture the market. Currently it monitors all server variables and errors and identifies critical conditions up front to avoid a disaster. The new features in the pipeline for the coming months also seem promising (an upgrade advisor, load balancing, a query analyzer, and a connection manager).
As this is not like a single node …
MySQL AB today launched the MySQL Authorized Hosting Partner Program, a new partner offering specifically designed for top-tier hosting companies and Managed Hosting Providers (MHPs). MySQL Authorized Hosting Partners get access to MySQL Enterprise -- MySQL AB's premium, commercial-grade software and services -- to affordably deliver the high database availability and performance required by the growing number of online, on-demand, Web-hosted applications.
For more information on the program's benefits and service levels, please visit this page.
Apparently the role of community manager is one of the coolest jobs around. Jono Bacon is Ubuntu's, and Jeff Waugh did an awesome job before him. Jay Pipes does a nice job for MySQL. Dawn Foster is the community manager who "powers" community managers at Jive Software, which makes wicked-cool collaborative software.
Glyn penned an article about the proliferation of community managers. In his article he mentions …
Matt Asay asks why Asia doesn’t contribute more to open source. It’s an interesting question, and the responses are equally interesting. There are a few in the comments to Matt’s piece, for example, pointing out Asia’s contribution to projects such as Ruby on Rails, not to mention Andrew Tridgell’s crucial involvement in the Samba project.
Those are the exception rather than the rule, however. There are also a few clues about the comparative lack of open source contribution from Asia in the ZDNet article that prompted the question in the first place.
“Harish Pillay, open source evangelist with Red Hat Asia-Pacific, acknowledges …
As mentioned earlier, I am testing out SSD disk performance on a 4-core machine with 6GB of memory. I spent last week comparing the drive to a standard 10K RPM SATA Raptor drive (EXT3 file system right now). As noted here and elsewhere, the performance of these drives really shines with a specific workload, but they are not for everyone out there. The random write performance of these drives leaves a great deal to be desired, while their read performance is outstanding:
Above you can see that when we perform 10K random reads with 0 writes we peak at about 5200 IOPS vs the 161 IOPS on a standard SATA drive. When we flip the IO to all writes we end up getting around 100 IOPS out of the SSD drive. Not many sites are 100% reads, so some sort of mixed IO load is expected. Here you can see how the number of IO’s per second varies under different workloads:
One of the cooler inventions in recent versions of MySQL is having the slow log and general log available as plain-text files as well as CSV-engine tables directly within the mysql database.
So now you want to analyze the slow query log using SQL, but (a) you don’t want to lock up the table while you do your work, and (b) you suspect that your work may involve adding indexes, dropping columns and such during the analysis.
So you decide to make a copy of the table.
Except that you can’t.
mysql> CREATE TABLE slow SELECT * FROM mysql.slow_log;
ERROR 1553 (HY000): You can't use locks with log tables.
As all the data is stored as a plain CSV …