This Thursday (February 11th, 14:00 UTC), Seppo Jaakola & Alex Yurchenko will talk about MySQL Galera Multi-Master Replication. Galera provides synchronous multi-master replication and uses a certification-based replication method for replicating transaction write sets in a DBMS cluster. The replication method requires close cooperation with database transaction processing, and the DBMS must support a specific replication API to be compatible with Galera. Codership has integrated Galera replication into the InnoDB storage engine, and the resulting MySQL/Galera cluster product was published as a production-ready GA release in December 2009. The MySQL/Galera release 0.7 is available on the Codership and …
This blog clearly explains how to configure the MySQL sample database (sakila) with GlassFish. Although the instructions use a specific database, they should work for other databases (such as Oracle, JavaDB, PostgreSQL, and others) as well. The second half of the blog provides specific syntax for the Oracle sample database.
- Download the sakila sample database and unzip the archive.
- Install the database as described here - basically, load and run "sakila-schema.sql" and "sakila-data.sql" extracted from the archive.
- Create a new MySQL user account using MySQL CLI Admin and assign the privileges
- …
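The steps above can be sketched from a shell. The commands below assume a local MySQL server and root access, and the account name, password, and privilege grant are hypothetical placeholders, not values from the original post:

```shell
# Load the sakila schema and data extracted from the archive
mysql -u root -p < sakila-schema.sql
mysql -u root -p < sakila-data.sql

# Create an application account for GlassFish and grant it access
# ('glassfish' / 'secret' are placeholder credentials)
mysql -u root -p -e "CREATE USER 'glassfish'@'localhost' IDENTIFIED BY 'secret'; GRANT ALL PRIVILEGES ON sakila.* TO 'glassfish'@'localhost';"
```

GlassFish's JDBC connection pool would then be pointed at the sakila database with the new credentials.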
Today I reached 109k Queries per Second. I was quite impressed by
it.
Some background on the situation.
I developed some stored procedures to process some rather large
tables we had in our database.
I managed to get the stored procedures to be very efficient and
quick.
I then wanted to test it out and tried to overload the server to
see how much it could take.
Normally, the server would do around 1k QPS at best with these
kinds of tasks. I have recently been able to tweak it to 20k QPS.
But today, for some reason, the cache managed to get itself into
the right position and produced this result.
The Server:
A 4+ year old Dell server, with SAS drives, 1 quad-core CPU and
16 GB of memory.
Database:
MySQL 5.0.48 - with MyISAM tables only
…
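For anyone wanting to watch a QPS figure like this in real time, one common approach (not from the original post) is mysqladmin's relative extended-status output, which prints per-second deltas of the server's counters:

```shell
# -r/--relative with -i 1 prints per-second differences;
# the Questions counter approximates total queries per second.
mysqladmin -u root -p -r -i 1 extended-status | grep -E 'Questions|Com_select'
```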
Yes, MySQL has transactions if you use InnoDB or NDB Cluster, for example. Using these transactional storage engines, you'll have to commit (or roll back) your inserts, deletes or updates.
I've seen it a few times now: people surprised that no data is going into the tables. It's not such a silly problem, in the end. If you are used to the defaults in MySQL, you don't have to commit anything, since it is done automatically for you.
Take the Python Database Interfaces for MySQL. PEP-249 says that, by default, …
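The commit surprise is easy to reproduce with any PEP-249 driver. The sketch below uses Python's built-in sqlite3 module rather than a MySQL driver (an assumption made here so the example is self-contained); it follows the same PEP-249 default, where DML statements implicitly open a transaction that is lost unless you commit:

```python
import sqlite3

# PEP-249 drivers default to autocommit off: INSERT/UPDATE/DELETE
# implicitly open a transaction that must be committed explicitly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")

conn.execute("INSERT INTO t VALUES (1)")
conn.rollback()  # the uncommitted insert is discarded
after_rollback = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # 0

conn.execute("INSERT INTO t VALUES (1)")
conn.commit()    # now the row is durable
after_commit = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]   # 1
```

With MySQL's command-line client the default is the opposite (autocommit on), which is exactly why people switching to a driver are caught out.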
I explored two interesting topics today while learning more about Postgres.
Partial page writes
PostgreSQL’s partial page write protection is configured by the following setting, which defaults to “on”:
full_page_writes (boolean)
When this parameter is on, the PostgreSQL server writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint… Storing the full page image guarantees that the page can be correctly restored, but at a price in increasing the amount of data that must be written to WAL. (Because WAL replay always starts from a checkpoint, it is sufficient to do this during the first change of each page after a checkpoint. Therefore, one way to reduce the cost of full-page writes is to increase the checkpoint interval parameters.)
Trying to reduce the cost of full-page writes by increasing the checkpoint interval highlights a trade-off. …
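In postgresql.conf the relevant knobs look like this on recent PostgreSQL releases (older versions spaced checkpoints with checkpoint_segments instead of max_wal_size); the values are illustrative, not recommendations:

```
full_page_writes = on        # default: write a full page image on the first
                             # modification of a page after each checkpoint
checkpoint_timeout = 15min   # longer interval => fewer full-page images,
                             # but longer crash recovery
max_wal_size = 2GB           # checkpoints are also triggered by WAL volume
```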
News Monday!
Matt Asay to join Canonical as COO
This took me a bit by surprise at first. I don't find myself
often agreeing with Matt. Most of what he tends to write and
argue for is what I have referred to in the past as
"crippleware". Canonical has recently taken to opening up their
platform. I've been a strong advocate for Launchpad; it is a
great service, and I love that they opened it up. When it comes
to infrastructure software of the size of LP, I don't believe
that many others will ever install it. Slash, GForge, and the
LiveJournal software are examples of infrastructure software that
approach or outweigh the LP codebase in size. They have rarely
been successfully deployed by others.
The advantage in the Launchpad software being open source is the …
We have been using the tpcc-mysql benchmark for a long time, and
there are many results published on our blog, but that's just a
single workload. That's why we are looking into different
benchmarks, and one of them is TPC-E. Yasufumi made some efforts
to make TPC-E work with MySQL, and we are making it available for
public consideration.
You can download it from our Launchpad Percona-tools project:
bzr branch lp:~percona-dev/perconatools/tpcemysql
Important DISCLAIMER:
By using this package you agree to the TPC-E License Agreement,
which in plain words means:
- You can't name results as "TPC Benchmark Results"
- You can't compare results with results published on http://www.tpc.org/ and you can't pretend the …
If you have multiple database servers with strange names, or if you have to hop over multiple machines to connect to any mysql database server, then you know what a pain it can be to administer such a setup. Thanks to some scripting, you can automate such tasks as follows:
Create an expect script:
/path/to/sshmysql.exp
#!/usr/bin/expect -f
#script by darren cassar
#mysqlpreacher.com
set machine [lindex $argv 0]
set timeout -1
spawn ssh username@$machine
match_max 100000
expect -exact "assword: "
send -- "password\r"
send -- "sudo -k; sudo su - mysql\r"
expect -exact "sudo -k; sudo su - mysql"
expect -exact "assword:"
send -- "password\r"
interact
# you should change the word password in 'send -- "password\r"' to your login password
# if you have the same password for each …