In this blog post I will try to answer some of the most common
questions I have heard during the last week:
A. Can MySQL be killed?
1. The easiest way to kill MySQL would be to stop selling licenses or to make their prices 'really high'.
2. Another scenario is that the development resources are drastically reduced in some important areas. People would then stop believing in the future of MySQL, which would slowly kill the product, especially if the present license stays in place. (Remember that most of the development of the core of MySQL is done by the developers at Sun, not by a large community.)
B. "But anyone can fork it!"
One can fork a GPL project (i.e. the code), but one can't easily
duplicate the economic infrastructure around it.
MySQL is not an end-user application but an infrastructure project that sits quite deep in the system stack. Most of the …
I’ve just released version 1.1.3 of the Cacti templates I wrote for MySQL. This is a bug-fix release only, and affects only ss_get_mysql_stats.php. To upgrade from the previous release, upgrade ss_get_mysql_stats.php. Don’t forget to save and restore your configuration options, if any. (Note that there is a feature to help with this: you can keep configuration options in ss_get_mysql_stats.php.cnf to avoid making them in ss_get_mysql_stats.php.)
Next up: actual template changes! More graphs!
The changelog follows.
2009-10-24: version 1.1.3
* This is a bug-fix release only, and contains no template changes.
* To upgrade from the previous release, upgrade ss_get_mysql_stats.php.
* MySQL 5.1 broke backwards compatibility with table_cache (issue 63); see the sketch after the changelog.
* Added a version number to the script (partial fix for issue …
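On the table_cache item above: MySQL 5.1 renamed the variable to table_open_cache, so a monitoring script that matches the old name silently gets no data. A minimal illustration of the rename (the server-side variable names are the official ones; how the script works around this internally isn't shown here):

-- MySQL 5.0 exposes the open-tables cache size as table_cache:
SHOW GLOBAL VARIABLES LIKE 'table_cache';

-- MySQL 5.1 renamed it, so the old pattern returns an empty result set:
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';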
The MySQL 5.0 and MySQL/MariaDB 5.1 source code is now also available through Launchpad. If you were waiting for a version for 5.1 and are ok with building the plugin from source, now you can!
The repo contains a subdir for examples; we’re hoping many people will contribute little snippets and scripts to import and use interesting datasets. To give you a hint: with graph capabilities you are able to deal with RDF data sources. You just need to transform the XML to, say, CSV, import it into a suitable structure, and copy the edge information across to an OQGRAPH table.
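To make that last step concrete, here is a minimal sketch using the OQGRAPH v2 table layout; the rdf_edges and imported_triples names and their columns are hypothetical, and in v2 the latch column selects the algorithm (latch=1 is a Dijkstra shortest-path search):

-- Edge store using the OQGRAPH engine (v2-style schema):
CREATE TABLE rdf_edges (
    latch   SMALLINT UNSIGNED NULL,
    origid  BIGINT   UNSIGNED NULL,
    destid  BIGINT   UNSIGNED NULL,
    weight  DOUBLE            NULL,
    seq     BIGINT   UNSIGNED NULL,
    linkid  BIGINT   UNSIGNED NULL,
    KEY (latch, origid, destid) USING HASH,
    KEY (latch, destid, origid) USING HASH
) ENGINE=OQGRAPH;

-- Copy the edge information across from the imported CSV data:
INSERT INTO rdf_edges (origid, destid)
SELECT subject_id, object_id FROM imported_triples;

-- Shortest path between two nodes:
SELECT * FROM rdf_edges WHERE latch = 1 AND origid = 1 AND destid = 42;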
Roland Bouman’s tree-of-life (which uses XSLT stylesheets) is a good example of that approach, and it was the first entry in the examples tree, including an SQL dump of the base dataset (it was CC-NC licensed) so you don’t …
Yesterday we released the press release titled "Italy's CASPUR Relies on MySQL Enterprise to Support its Scientific Research" and I want to spend a few lines commenting on it.
First of all, it was a pleasant surprise for me to see how pervasive MySQL in particular, and Open Source software in general, is in scientific research. I particularly appreciated a quote from CASPUR:
CASPUR selected MySQL™ because it is a top database choice for the Bioinformatics industry, preferred by both application developers and the computational biology research community, thanks to its simplicity and high performance.
One of the reasons they chose MySQL answers a question commonly asked by our user base. The reason is high performance, and the concern is about MySQL …
Sun Microsystems, Inc. today announced that CASPUR, a non-profit consortium of Italian universities focused on scientific supercomputing and innovative technologies, has subscribed to Sun's MySQL Enterprise™ database service.
Issue 634 made me wonder how the various mk-table-sync algorithms (Chunk, Nibble, GroupBy and Stream) perform when faced with a small number of rows. So I ran some quick, basic benchmarks.
I used three tables, each with integer primary keys, having 109, 600 and 16k+ rows. I did two runs for each of the four algorithms: the first run used an empty destination table so all rows from the source had to be synced; the second run used an already synced destination table so all rows had to be checked but none were synced. I ran Perl with DProf to get simple wallclock and user time measurements.
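For reference, the setup boils down to something like this (table and column names are hypothetical; the real tables just need an integer primary key and 109, 600 or 16k+ rows):

CREATE TABLE src (
    id  INT NOT NULL PRIMARY KEY,
    val INT
) ENGINE=InnoDB;

CREATE TABLE dst LIKE src;

-- Run 1: dst stays empty, so every source row must be synced.
-- Run 2: pre-populate dst, so every row is checked but none are synced:
INSERT INTO dst SELECT * FROM src;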
Here are the results for the first run:
When the table is really small (109 rows), …
Hello. I'm Ryan Mack, a new member of the Facebook MySQL team. My first order of business is evaluating MySQL 5.1 and the new InnoDB plugin. So far things look very promising, but I came across one issue worth sharing.
Setting up two replicas of the same master, one running 5.0.84 and one running 5.1.38+1.0.4, showed the 5.1 server writing about 2x as much to disk and having a little trouble keeping up with the master. Mark helped identify the insert buffer as the likely culprit. SHOW INNODB STATUS showed 5.1 had only 10 pages in the insert buffer and a 1:1 insert-to-merge ratio, while 5.0 had over 16k pages and was getting a 4:1 reduction in merges. Merging 4x as many pages into the secondary indexes was definitely the problem.
In 5.0 the number of merges performed per background IO loop was hardcoded to 5% of 100 IOPS. 5.1 has made this 5% of a variable number of IOPS, configured with the innodb_io_capacity variable. The …
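The post is cut off here, but the knob is easy to experiment with. Since 5.0 merged 5% of a hardcoded 100 IOPS per loop, setting the 5.1 variable back to 100 should reproduce the old merge rate; a sketch, assuming the plugin lets you change it at runtime (otherwise set it in my.cnf):

-- Watch the insert buffer: the Ibuf line in the
-- INSERT BUFFER AND ADAPTIVE HASH INDEX section shows size and merges.
SHOW ENGINE INNODB STATUS\G

-- Match 5.0's hardcoded rate (the plugin's default is higher):
SET GLOBAL innodb_io_capacity = 100;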
Due to some issues with our hosting provider, the demo server will be offline until otherwise noted.
From Stack Overflow:
When I run an SQL command like the one below, it takes more than 15 seconds:
SELECT * FROM news WHERE cat_id = 4 ORDER BY id DESC LIMIT 150000, 10
EXPLAIN shows that it's using where and the index on (cat_id, id). LIMIT 20, 10 on the same query takes only several milliseconds.
This task can be reformulated like this: take the last 150,010 rows in id order and return the first 10 of them.
It means that though we only need 10 records, we still need to count off the first 150,000.
The table has an index which keeps the records ordered. This allows us not to use a filesort. …
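The article is truncated here, but a widely used rewrite for this pattern (not necessarily the one the article goes on to derive) is a deferred join: let the inner query count off the 150,000 rows using only the (cat_id, id) index, then look up full rows for just the final 10 ids:

SELECT n.*
FROM (
    -- Covered by the (cat_id, id) index: no table rows are touched
    -- while skipping over the first 150,000 index entries.
    SELECT id
    FROM news
    WHERE cat_id = 4
    ORDER BY id DESC
    LIMIT 150000, 10
) AS last10
JOIN news AS n ON n.id = last10.id
ORDER BY n.id DESC;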