Short talk on MariaDB at Linuxtag 2011

If you happen to be around at this year's LinuxTag 2011 in Berlin/Germany, you are invited to attend my short talk on MariaDB as a drop-in replacement for MySQL. The talk focuses on differences between MySQL Community Edition and MariaDB … Read more →

Shard-Query EC2 images available

Infobright and InnoDB AMI images are now available

There are now demonstration AMI images for Shard-Query. Each image comes pre-loaded with the data used in the previous Shard-Query blog post. The data in each image is split into 20 “shards”. This blog post will refer to an EC2 instance as a node from here on out. Shard-Query is very flexible in its configuration, so you can use this sample database to spread processing over up to 20 nodes.

The Infobright Community Edition (ICE) images are available in 32 and 64 bit varieties. Due to memory requirements, the InnoDB versions are only available on 64 bit instances. MySQL will fail to start on a micro instance; simply decrease the values in the /etc/my.cnf file if you really want to try micro instances.
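
If you do go that route, the memory-related settings are the ones to shrink. A rough illustration only (the values below are placeholders, not tested recommendations for these images):

[mysqld]
# illustrative values only - sized for a micro instance's very limited RAM
innodb_buffer_pool_size = 64M
innodb_log_buffer_size  = 4M
key_buffer_size         = 8M
max_connections         = 20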

*EDIT*
The storage worker currently logs too much …

[Read more]
MySQL 5.6 — InnoDB and Memcached

One of the more exciting new features in MySQL 5.6 is the InnoDB to Memcached interface. Basically memcached runs as a daemon plugin and can bypass the SQL optimizer and parser for NoSQL access.

The first step is to download the new MySQL 5.6 with the InnoDB-Memcache preview (sorry, Linux only at this time) and to install memcached.

Second, run the provided configuration script (mysql < scripts/innodb_memcached_config.sql from the shell). This does a lot of the work needed to get things running out of the box, and one of the links below details what is happening behind the scenes when you run the script. Third, load the plugin: mysql> install plugin daemon_memcached soname "libmemcached.so"; Fourth, to make sure we can see recently inserted data, you will need to set the transaction isolation level: mysql> set session TRANSACTION ISOLATION LEVEL read uncommitted;
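
Put together, the sequence looks roughly like this (a sketch only; the login options, port, and the telnet check are illustrative and not taken from the post):

# 1) From the shell: load the supplied configuration script
$ mysql -u root -p < scripts/innodb_memcached_config.sql

# 2) From the mysql client: load the daemon plugin
mysql> install plugin daemon_memcached soname "libmemcached.so";

# 3) Make freshly inserted rows visible to memcached reads
mysql> set session transaction isolation level read uncommitted;

# 4) The embedded memcached should now answer on the standard memcached
#    port (11211 by default), e.g. for a quick check with telnet:
$ telnet 127.0.0.1 11211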

[Read more]
[MySQL][Spider][VP]Spider-2.25 VP-0.14 released

I'm pleased to announce the release of Spider storage engine version 2.25(beta) and Vertical Partitioning storage engine version 0.14(beta).
Spider is a Storage Engine for database sharding.
http://spiderformysql.com/
Vertical Partitioning is a Storage Engine for vertically partitioning a table.
http://launchpad.net/vpformysql

The main changes in this version are as follows.
Spider
- Add table parameters "skip_default_condition" and "direct_order_limit".
- Add server parameters "spider_skip_default_condition" and "spider_direct_order_limit".
  "direct_order_limit" improves the performance of some SQL statements that use "order by" and "limit".
- Add UDF "spider_flush_table_mon_cache".
  "spider_flush_table_mon_cache" is used for reflecting changing of …

[Read more]
4 performance fixes to MySQL on large servers

Yesterday I posted results from some MySQL benchmarks I had been doing on a large server. In this post I'd like to list 4 important fixes that were done to avoid bad performance:

read more

Introducing our Percona Live speakers

We have mostly finalized the Percona Live schedule at this point, and I thought I’d take a few minutes to introduce who’s going to be speaking and what they’ll cover. A brief explanation first: we’ve personally recruited the speakers, which is why it has been a slow process to finalize and get abstracts on the web. Sometimes you know someone’s a dynamite speaker and you discuss over the phone, and then it takes a long time to get a title and abstract from them. In many cases the better they are the busier they are, so this is expected.

Let me introduce just a few of the great speakers we have lined up for this event: Brendan Gregg, Dr. John Busch, and Vladimir Fedorkov.

Brendan Gregg

Brendan Gregg is the crazy guy who likes to scream at a chassis …

[Read more]
A General Purpose Dynamic Cursor - Part 2 of 3

Permalink: http://bit.ly/RcRieg



Refer to part 1 for the rationale behind the code, or skip to part 3 for a working example and for how to debug the stored procedure.

Important: The SP will create a table named `dynamic_cursor`. Make sure this table does not exist in the database where you will be storing the procedure. Here's the 1st iteration of a general purpose dynamic cursor:

DELIMITER $$
DROP PROCEDURE IF EXISTS `dynamicCursor` $$
CREATE DEFINER=`root`@`localhost` PROCEDURE `dynamicCursor`(
IN selectStmt TEXT,
IN whatAction VARCHAR(255),
INOUT …
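
The excerpt cuts off above. For context, here is a minimal sketch of the general technique such a procedure relies on (my own illustration, not the author's code; `some_table` and its columns are made up): DECLARE CURSOR cannot take dynamic SQL, so the arbitrary SELECT is first materialized into the `dynamic_cursor` table via a prepared statement, and an ordinary cursor is then declared over that table.

-- sketch only: materialize an arbitrary SELECT into `dynamic_cursor`
SET @stmt = CONCAT('CREATE TABLE dynamic_cursor AS ', 'SELECT id, name FROM some_table');
PREPARE ps FROM @stmt;
EXECUTE ps;
DEALLOCATE PREPARE ps;
-- a static cursor can now walk the result set, e.g. inside the procedure:
--   DECLARE cur CURSOR FOR SELECT * FROM dynamic_cursor;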
[Read more]
Innodb Caching (part 2)

A few weeks ago I wrote about Innodb Caching, with the main idea that you may need more cache than you think, because Innodb caches data in pages, not rows, and so the whole page needs to be in memory even if you need only one row from it. I created a simple benchmark which shows a worst-case scenario by picking a random set of primary key values from a sysbench table and reading them over and over again.

This time I decided to “zoom in” on the point where the drop in results happens: a 2x increase in the number of rows per step hides a lot of detail, so I'm starting with a number of rows at which everything was still in cache for all runs and increasing the number of rows being tested by 20% per step. I'm trying the standard Innodb page size, a 4KB page size, as well as the 16K page size compressed to 4K. The data in this case compresses perfectly (all pages …
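
The exact query is not shown in the excerpt, but the reads described are presumably simple primary key lookups against the standard sysbench table, along the lines of:

-- illustrative only: sbtest is the standard sysbench table, id its primary key
SELECT c FROM sbtest WHERE id = 1234567;
-- or a batch of random PK values, read over and over again
SELECT c FROM sbtest WHERE id IN (12, 34007, 998321);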

[Read more]
Running spotlight from your Mac terminal window

A colleague just showed me this most excellent little command you can add to your .profile on the Mac to do Spotlight-indexed searches from the command line. Very nice.

function slocate() {
    # 'w' = word-based match, 'c' = case-insensitive (Spotlight query modifiers)
    mdfind "kMDItemDisplayName == '$@'wc";
}

config[master]% time slocate my.cnf
/private/etc/my.cnf
/opt/local/var/macports/sources/rsync.macports.org/release/ports/databases/mysql4/files/my.cnf
real 0m0.018s
user 0m0.006s
sys 0m0.006s

Replication Issues: Never purge logs before the slave has caught up!

A few days ago one of our customers contacted us to report a problem with one of their replication servers.

This server was reporting this issue:

Last_Error: Could not parse relay log event entry. The possible reasons are: the master’s binary log is corrupted (you can check this by running ‘mysqlbinlog’ on the binary log), the slave’s relay log is corrupted (you can check this by running ‘mysqlbinlog’ on the relay log), a network problem, or a bug in the master’s or slave’s MySQL code. If you want to check the master’s binary log or slave’s relay log, you will be able to know their names by issuing ‘SHOW SLAVE STATUS’ on this slave.

After a brief investigation we found that the customer had deleted some binary logs from the master and relay logs from the slave to free up space, since they were running low on disk space.

The customer asked us to get the slave working again without affecting production …
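
The excerpt ends before the fix, but a common way to recover from corrupted relay logs, provided the binary logs the slave still needs have not been purged from the master, is to re-point replication at the last executed master position and let the slave rebuild its relay logs (a sketch of the usual approach, not necessarily what was done for this customer):

STOP SLAVE;
-- file and position are placeholders: read Relay_Master_Log_File and
-- Exec_Master_Log_Pos from SHOW SLAVE STATUS on the slave
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4567;
START SLAVE;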

[Read more]