If you have followed, or have tried to follow, my different
attempts at getting Key-Value Store performance, in this case
represented by MongoDB, out of MySQL on a single machine with all
data in RAM, you know I have not been very successful so far. But
many smart people, way smarter than yours truly, have been giving
me suggestions for things to try to get MySQL closer to the
performance of MongoDB. MongoDB did some 110 k row reads per
second, whereas MySQL was at best reading some 43 k rows per
second (using the HEAP / MEMORY storage engine) and 46 k row
reads per second (using NDB and without CLIENT_COMPRESS). Note
that not all combinations have been tested, so it is reasonably
safe to assume that using the HEAP / MEMORY storage engine
without CLIENT_COMPRESS would be even faster than the 43 k rows
per second measured with CLIENT_COMPRESS.
As I could see that the CPU load on mysqld was very high, and as
everything is in memory and hence there …
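The row-read figures above come from repeated single-row lookups. As a rough illustration of how such a throughput number can be measured, here is a minimal, hypothetical harness; the in-memory dict is a stand-in for the actual MySQL or MongoDB client calls, which are not shown:

```python
import time

def bench_point_reads(store, keys, seconds=0.2):
    """Loop over single-key lookups and return reads per second."""
    reads = 0
    n = len(keys)
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        _ = store[keys[reads % n]]  # one point read, like SELECT ... WHERE id = ?
        reads += 1
    return reads / seconds

# Stand-in data store: 100 k rows keyed by integer id.
store = {i: "payload-%d" % i for i in range(100_000)}
rate = bench_point_reads(store, list(store.keys()))
print("approx. %d reads/second" % rate)
```

A real harness would of course issue the lookups through the client library under test, and the client overhead is exactly where the MySQL and MongoDB numbers diverge.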
Besides the voices of Lata Mangeshkar, Rahat Fateh Ali Khan and Celine Dion, the thing which warms my heart to no end is the blogs written by various bloggers covering Oracle, SQL Server, and MySQL related technologies. This Log Buffer Edition is brimming with those chords in Log Buffer #283. Enjoy !!! Oracle: Angela Poth [...]
At the end of September, the MySQL Connect 2012 conference will be held as part of Oracle OpenWorld in San Francisco. MySQL Connect is a two-day event that allows attendees to focus on MySQL at technical depth, with presentations by and interaction with many of the MySQL developers, engineers and other knowledgeable staff. There is also a range of international speakers to bring broader knowledge to the presentations.
I am presenting a Hands-On Lab on Sunday 30th September, 16:15 - 17:15, entitled HOL10474 - MySQL Security: Authentication and Auditing. The session goes through an introduction to the plugin API and how it can help expand the capabilities of MySQL. Since it is a hands-on lab, …
Recent MySQL versions (first the chaotic series of releases that preceded 5.5 – 5.2, 6.0 and 5.4 – and now 5.6) add new 'character sets' to MySQL. But little of it is useful.
Let us take it from the beginning: before 4.1, MySQL supported a wide range of single-byte character sets: regional ones ('latin1'/Western, 'latin2'/Central-European, Arabic etc.) as well as strictly national ones (Hebrew, 'armsci'/Armenian, 'tis60'/Thai etc.), and also a few multibyte *non-Unicode* character sets for Chinese, Korean and Japanese. 4.1 added support for Unicode to MySQL with the UTF8 and UCS2 charsets. Since then UTF16 and UTF8MB4 (useful to a limited number of users) and also UTF32 have been added, and in early 5.6 UTF16LE (and these are not useful at all).
What does it mean? Let's start with a Wikipedia quote. …
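As a side note on why UTF8MB4 exists at all: MySQL's original utf8 stores at most three bytes per character, so it cannot hold characters outside the Basic Multilingual Plane. A short sketch in plain Python (ordinary UTF-8 encoding, not MySQL itself) shows the byte lengths involved:

```python
# MySQL's old 'utf8' caps characters at 3 bytes; 'utf8mb4' allows the
# full 4-byte UTF-8 range. Plain Python UTF-8 encoding shows why the
# difference matters.
bmp_char = "\u20ac"         # the euro sign, inside the Basic Multilingual Plane
astral_char = "\U0001F600"  # an emoji, outside the BMP

print(len(bmp_char.encode("utf-8")))     # 3 bytes: fits in MySQL utf8
print(len(astral_char.encode("utf-8")))  # 4 bytes: needs utf8mb4
```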
I promised to still post some general comments about the MySQL ecosystem, to conclude my outlook on the State of the MySQL forks and Drizzle. I will do this now in the form of answering questions I got in the comments, on Twitter, and some that I just made up myself.
With two of the bigger Open Source projects I care about talking about certification programs, the questions pop up again ...
Should we certify ourselves?
So let me tell you about my experiences in getting Open Source related certifications ..
Over a decade ago (2001), when RedHat was still RedHat and not yet Fedora, the company I was working for was about to partner with RedHat and needed to get a number of people certified for that.
So I took the challenge: I bored myself to death during a 4-day
RedHat fast track training and set out to do the exam the next
day. Obviously I scored pretty well given my years of experience
in the subject. Back then I was told that I held the second-best
European record on the exam, the record itself being held by
another colleague (hey Ico). Our CTO, however, was not amused
when I told him that I could have scored better but I didn't
bother running a
chkconfig smb on
since I …
I'm using a very small MariaDB instance as a datastore for my
YouLess energy monitor and my own mail server (Postfix,
Roundcube). It's a virtual machine from a commercial VPS
provider.
All data fits in memory and the overhead of running with
performance_schema on is not an issue.
While I was reading a blog post about performance_schema by Mark
Leith I wanted to see what P_S could tell me about my own
server.
This is the output from the first query:
mysql> select * from file_summary_by_event_name order by count_read desc,count_write desc limit 10;
+--------------------------------------+------------+-------------+--------------------------+---------------------------+
| EVENT_NAME | …
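The output itself is truncated above, but the sorting the query performs is easy to show. Here is the same ordering logic in plain Python on a few made-up sample rows (the event names follow performance_schema naming, but the counts are illustrative, not real server output):

```python
# ORDER BY count_read DESC, count_write DESC LIMIT 10, on sample rows.
rows = [
    {"EVENT_NAME": "wait/io/file/innodb/innodb_data_file", "COUNT_READ": 520, "COUNT_WRITE": 210},
    {"EVENT_NAME": "wait/io/file/sql/binlog",              "COUNT_READ": 0,   "COUNT_WRITE": 340},
    {"EVENT_NAME": "wait/io/file/innodb/innodb_log_file",  "COUNT_READ": 6,   "COUNT_WRITE": 890},
]

top = sorted(rows, key=lambda r: (-r["COUNT_READ"], -r["COUNT_WRITE"]))[:10]
for r in top:
    print(r["EVENT_NAME"], r["COUNT_READ"], r["COUNT_WRITE"])
```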
I presented a webinar this week to give an overview of several Full Text Search solutions and compare their performance. Even if you missed the webinar, you can register for it, and you’ll be emailed a link to the recording.
During my webinar, a number of attendees asked some good questions. Here are their questions and my answers.
Adrian B. commented:
Q: Would’ve been a good idea to retrieve the same number of rows on each benchmark (I noticed 100 rows on SQL and 20 on Sphinx). Also Sphinx does the relevance sorting by default, adding relevance sorting to the MySQL queries would make them even slower, I’m sure.
Indeed, the result set of 20 rows from SphinxQL queries is …
MySQL data rules the cloud, but recent experience shows us that there's no substitute for maintaining copies of data across availability zones when it comes to Amazon Web Services (AWS) data resilience.
In this video (recording of our 8/23/12 webcast), we survey technologies for maintaining real-time copies of your data and the pros & cons of each. We conclude with a live demonstration of a …
Need to copy a database from one server to another and make certain that the two are identical? The previous blog entry was a quick intro to mysqldbcopy from the MySQL Utilities. This time we use mysqldbcompare to double-check the database we just copied. This is a very quick way to copy a database from a master to a slave, or from production to a test server.
$ mysqldbcopy --force --source=root@10.0.0.18
--destination=root@localhost davestuff:davestuff
# Source on 10.0.0.18: ... connected.
# Destination on localhost: ... connected.
# Copying database davestuff renamed as davestuff
# Copying TABLE davestuff.a
# Copying GRANTS from davestuff
# Copying data for TABLE davestuff.a
#...done.
$ mysqldbcompare -a --server1=root@10.0.0.18
--server2=root@localhost …
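Conceptually, mysqldbcompare checks that objects and data match on both servers; one common way to do that kind of consistency check is to checksum each table's rows in a deterministic order. A toy sketch of that idea in plain Python (the table data here is made up, and this is an illustration of the principle, not the tool's actual algorithm):

```python
import hashlib

def table_checksum(rows):
    """Hash a table's rows in a deterministic (sorted) order."""
    h = hashlib.sha256()
    for row in sorted(rows):
        h.update(repr(row).encode("utf-8"))
    return h.hexdigest()

# Made-up copies of table davestuff.a on "server1" and "server2".
server1_a = [(1, "alpha"), (2, "beta")]
server2_a = [(2, "beta"), (1, "alpha")]  # same data, different physical row order

same = table_checksum(server1_a) == table_checksum(server2_a)
print("davestuff.a consistent:", same)  # physical row order does not matter
```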