I have patched up a MariaDB version with JSON support, just for
the fun of it. This is not the best version of MariaDB around,
as I am not a MariaDB developer by any means, and although I
have played with the MySQL and MariaDB sources before, I have
never attempted to know them in detail. So although this works,
it is probably full of memory leaks, errors and bad code (my
additions, that is).
That said, it actually works. Surprise! As a simple prototype
meant to show off a specific feature, that is.
So, this is what I have: I grabbed the MariaDB 10.0.2 sources
and worked from there. To support the JSON specifics, I
included the Jansson C library for JSON. So far so good; then I
wanted a JSON datatype, which was the first step. Adding a new
datatype to MariaDB / MySQL is a pretty major undertaking
though, so I decided to try something different: I decided to
kidnap …
I had to type the location and name of my identity file and a long username@hostname string every time I wanted to SSH into my Amazon EC2 instance for administration. I found an easy solution that I am sharing with you here.
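The usual fix is a Host entry in ~/.ssh/config; the alias, host name, user and key path below are illustrative examples, not values from the original post:

```
Host ec2box
    HostName ec2-12-34-56-78.compute-1.amazonaws.com
    User ec2-user
    IdentityFile ~/.ssh/my-ec2-key.pem
```

With that in place, `ssh ec2box` is all that is needed.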
Here is a fix for the MySQL/TokuDB/MariaDB bug I reported earlier today. I think this fix is correct (it is only one line) but I don’t delve into the storage engine very often (and particularly not into ha_partition.cc) so I think it would be good to wait for Oracle (or Percona, MariaDB, or Tokutek) to validate that it is correct before using it.
diff -u ha_partition.cc /tmp/ha_partition.cc
--- ha_partition.cc	2013-04-05 05:27:18.000000000 -0700
+++ /tmp/ha_partition.cc	2013-05-27 02:45:01.680676228 -0700
@@ -6455,9 +6455,11 @@
 void ha_partition::get_dynamic_partition_info(PARTITION_STATS *stat_info,
                                               uint part_id)
 {
+  handler *file= m_file[part_id];
   DBUG_ASSERT(bitmap_is_set(&(m_part_info->read_partitions), part_id));
-  file->info(HA_STATUS_CONST | HA_STATUS_TIME | HA_STATUS_VARIABLE |
+
+  file->info(HA_STATUS_CONST | HA_STATUS_TIME | HA_STATUS_VARIABLE |
              HA_STATUS_VARIABLE_EXTRA | HA_STATUS_NO_LOCK);
   stat_info->records= …
Recently I happened to set up a new MySQL instance with my tools – a standard MySQL 5.1+ and xtrabackup setup, plus last-hotbackup.tar.gz. To restore from the backup we used the xtrabackup binaries…
The post xtrabackup_51: not found & no ‘mysqld’ group in MySQL options first appeared on Change Is Inevitable.
I recently wrote about non-deterministic queries in the replication stream. That’s resolved by using either MIXED or ROW based replication rather than STATEMENT based.
Another thing that’s not fully handled by STATEMENT based replication is temporary tables. Imagine the following:
- Master: CREATE TEMPORARY TABLE rpltmpbreak (i INT);
- Wait for slave to replicate this statement, then stop and start mysqld (not just STOP/START SLAVE)
- Master: INSERT INTO rpltmpbreak VALUES (1);
- Slave: SHOW SLAVE STATUS \G
If for any reason a slave server shuts down and restarts after the temp table creation, replication will break because the temporary table will no longer exist on the restarted slave server. It’s obvious when you think about it, but nevertheless it’s quite …
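One way to reduce the risk, sketched below, is to check the Slave_open_temp_tables status variable before restarting mysqld on a slave. In practice the status line would come from something like `mysql -e "SHOW GLOBAL STATUS LIKE 'Slave_open_temp_tables'"`; here it is a hard-coded sample so the snippet is self-contained:

```shell
# Sample SHOW GLOBAL STATUS line (tab-separated name and value).
# In a real check, replace this with the output of the mysql client.
status_line=$(printf 'Slave_open_temp_tables\t0\n')

# Extract the counter value (second field).
count=$(echo "$status_line" | awk '{print $2}')

if [ "$count" -eq 0 ]; then
  echo "safe to restart mysqld"
else
  echo "still $count open temp table(s); wait before restarting"
fi
```

If the counter is non-zero, waiting (or failing over the writes) before the restart avoids breaking replication.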
Scaling up a workload to many cores on a single host
Here are results for Shard-Query 2.0 Beta 1* on the Star Schema Benchmark at scale factor 10. In the comparison below the “single threaded” response times for InnoDB are the response times reported in my previous test which did not use Shard-Query.
Shard-Query configuration
Shard-Query has been configured to use a single host. The Shard-Query configuration repository is stored on the host. Gearman is also running on the host, as are the Gearman workers. In short, only one host is involved in the testing.
The …
As you probably know, Tungsten Replicator can replicate data from MySQL to MongoDB. The installation is relatively simple and, once done, replication works very well. There was a bug in the installation procedure recently, and as I was testing that the breakage had been fixed, I wanted to share the experience of getting started with this replication.
Step 1: install a MySQL server
For this exercise, we will use a MySQL sandbox running MySQL 5.5.31.
We download the binaries from dev.mysql.com and install a sandbox, making sure that it is configured as a master and that it uses row-based replication.
$ mkdir -p $HOME/opt/mysql
$ cd ~/downloads
$ wget …
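The master-side requirements mentioned in step 1 boil down to a my.cnf fragment along these lines (the server-id value and binlog base name are arbitrary examples, not taken from the original post):

```ini
[mysqld]
server-id     = 1
log-bin       = mysql-bin
binlog_format = ROW
```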
Shard-Query examines INFORMATION_SCHEMA.PARTITIONS to determine if a table is partitioned. When a table is partitioned, Shard-Query creates multiple background queries, each limited to a single partition. Unfortunately, it seems that examining INFORMATION_SCHEMA.PARTITIONS causes query plans to change after the view is accessed.
I have reported bug 69179 to MySQL AB, er, Oracle Corporation (old habits die hard).
Be careful: if you have automated tools (like schema management GUI tools), make sure they don’t examine INFORMATION_SCHEMA.PARTITIONS, or you may get bad plans until you analyze your tables or restart the database, even if you are using persistent stats.
I can only get the bug to happen when a WHERE clause is issued that limits access to a single partition. It may be that the per partition statistics …
As you might know, I'm a big fan of JSON. One big reason is
that I believe that JSON is closer to most developers' view of
data, whereas the relational, SQL-based model is closer to the
view of someone working with the data itself or with the
infrastructure. What I mean here is that neither view is wrong,
but they are different.
So, given that, can we merge the object / JSON world with the
relational model? Well, not with JSON, but Hibernate does it
quite well. This is one of my objections to the NoSQL world:
the data model is closely linked to the application at hand,
and less so to the data itself and to other applications.
Things such as accounts, privileges, accounting data, orders
and many other things are global, and are not connected to one
specific application, but in many NoSQL applications this is
what they end up being.
And there are not that many good solutions. How can I easily
explore data in a NoSQL …
Backups are one of the most important parts of any MySQL deployment, and nowadays there are a number of tools to choose from, depending on how your organization implements them. The purpose of this post is to enumerate the main tools, plus some helpers that make backing up and testing/restoring your backups more convenient. By no means is this a complete list; I'm sure I am missing some, so feel free to add them through the comments.
The Core Tools
- mysqldump – is a logical backup tool for MySQL. It creates plain-text files with SQL statements which you can import directly back into the server. Some would say mysqldump is not really a backup tool, as you cannot get a consistent backup without disrupting operations while the server is running. I'd say this is just a limitation; if your dataset is small …
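As a sketch of the kind of invocation the bullet above describes, here is a typical consistent mysqldump command for mostly-InnoDB data; the database name is a placeholder, and the snippet only constructs and prints the command rather than running it against a server:

```shell
# Typical flags for a consistent logical backup of InnoDB tables:
#   --single-transaction  consistent snapshot without locking tables
#   --routines --triggers include stored routines and triggers
#   --master-data=2       record the binlog position as a comment
# "mydb" is a placeholder database name.
backup_cmd="mysqldump --single-transaction --routines --triggers --master-data=2 mydb"

# In a real run you would redirect the output to a file:
#   $backup_cmd > mydb.sql
echo "$backup_cmd"
```

Note that --single-transaction only gives a consistent snapshot for transactional tables; MyISAM tables still need locking for a truly consistent dump.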