I feel like I am reaching a good mastery of Xen. The real magic left? How the main process communicates while you move a running server from one host to another. That part... I'm not quite sure what is going on there.
virt-clone is an awesome little command that I just discovered (oops!). The absurdity of what I was doing before I figured that out was more than a little crazy.
I need to figure out how to back up the Xen images. My base OS images are all 8 gigs apiece. This means:
- S3: slow... very slow to transfer these images.
- Revision control: I am wondering how well Mercurial or Bazaar would cope if I started tossing 8 gig images into them. No possibility for deltas. I can re-clone to drop history when I need to (though I suspect that the new bzr feature to lose history might work). Distributed …
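Since whole-image revision control gives no deltas, the alternative would be chunk-level comparison along the lines of rsync. Here is a minimal sketch of the idea (the chunk size and helper names are my own invention, not from any real backup tool): hash fixed-size chunks of an image and transfer only the chunks whose hashes changed.

```python
import hashlib

CHUNK = 4096  # illustrative chunk size; real tools use larger blocks and rolling hashes

def chunk_hashes(data: bytes):
    """Hash fixed-size chunks of an image so two versions can be compared."""
    return [hashlib.sha1(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def changed_chunks(old: bytes, new: bytes):
    """Indexes of chunks that differ between two same-size image versions."""
    old_h, new_h = chunk_hashes(old), chunk_hashes(new)
    return [i for i, (a, b) in enumerate(zip(old_h, new_h)) if a != b]

if __name__ == "__main__":
    image_v1 = bytes(CHUNK * 8)             # stand-in for a disk image
    image_v2 = bytearray(image_v1)
    image_v2[CHUNK * 3] = 0xFF              # a one-byte change inside chunk 3
    print(changed_chunks(image_v1, bytes(image_v2)))  # → [3]
```

With 8 gig images and a handful of dirty blocks per day, uploading only the changed chunks to S3 would be far cheaper than shipping the full image each time.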
Last night at the MySQL NY Meetup we continued on from a very successful July presentation on “Practical Performance Tips & Tricks”. I must admit that after speaking and standing all day at the MySQL DBA Bootcamp for the Oracle DBA it was a stretch, and we didn’t cover all the material as planned, but the evening was still very productive for everybody. Links are here for my August Presentation and July Presentation.
Thanks to Marc and the team from LogicWorks for again sponsoring our NY Meetup Event. We don’t get the beer and food any other way.
As a consultant …
[Read more]
Andrae, Jacinta and I spent some hours this afternoon going over
the proposals and making the final selections. Previously we (and
other volunteers) reviewed and commented on proposals based on
the volunteer's subject matter expertise, knowledge of the
speaker's subject and speaking ability, and so on.
Anyway, Andrae now has the magic pile with everything decided,
and we'll have the conference system send out notifications in
the coming week. If you made multiple proposals, you'll be
notified for each individually - so acceptance or rejection of
one says nothing about any other proposals you made... oh, and we
shifted the paper submission schedule, of course - you will have
time to prepare!
Thanks for your patience. It's going to be a great conference.
I'm scratching my head trying to write a stress-test/benchmark
tool for a LAMP (php) application. Here's what I want to do, does
something exist already?
* The tool should be able to put the application in "record mode". When I hit the record button, the contents of the MySQL database are written to disk.
* Any URLs accessed are then logged (along with any
POST/GET/COOKIE data sent). When logged, they need to know what
thread they belong to (so that key actions can be replayed
chronologically).
* When I hit "stop recording", the tool outputs a bunch of shell scripts that just have curl commands in them. I can then make configuration changes and replay these shell scripts.
The idea is that each thread is one shell script, and I can replay the scripts concurrently to test how they compete for database resources etc. Then I make a small change, and run the test again.
…
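No such tool jumps out at me either, but the replay half is easy to sketch. Here is a minimal generator (the log format and function name are my own assumptions, not an existing tool): it reads a tab-separated request log of `thread_id, method, url, body` lines and emits one curl shell script per thread, which can then be launched concurrently.

```python
import csv
from collections import defaultdict
from pathlib import Path

def write_replay_scripts(log_path: str, out_dir: str):
    """Group logged requests by thread and emit one curl script per thread.

    Assumed log format (tab-separated): thread_id, method, url, body.
    """
    threads = defaultdict(list)
    with open(log_path, newline="") as f:
        for thread_id, method, url, body in csv.reader(f, delimiter="\t"):
            threads[thread_id].append((method, url, body))

    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for thread_id, requests in threads.items():
        lines = ["#!/bin/sh"]
        for method, url, body in requests:
            if method == "POST":
                lines.append(f"curl -s -d '{body}' '{url}'")
            else:
                lines.append(f"curl -s '{url}'")
        script = out / f"thread_{thread_id}.sh"
        script.write_text("\n".join(lines) + "\n")
        script.chmod(0o755)
```

Each generated script can then be run in the background (`sh thread_1.sh &`) so the threads compete for database resources just as the original sessions did.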
I'll be giving a condensed version of my Next Generation Data Storage with CouchDb talk next Sunday (26.08.2007) at FrOSCon. The fine folks of the PHP Usergroup Dortmund got assigned a room to present all things PHP over the weekend, including a set of presentations. That is where I'll be talking; it's not in the main presentation track of the conference.
Condensed, eh? Yeah, in Dortmund and Zurich before, I had plenty of time to talk and …
[Read more]
This week I presented two one-day free seminars, “MySQL DBA Bootcamp for the Oracle DBA”, in New York and San Francisco. Both were very successful days, providing an opportunity to speak to seasoned enterprise professionals.
One question I was asked was “As an Oracle DBA, how can I become a MySQL DBA, what do I do, where do I start?”
Here are my references and recommendations that have zero cost to get started.
- Read the MySQL Documentation Reference Manual.
- Download MySQL install and use it.
- The MySQL Developer Zone is a great source of articles, information and references.
- …
We have all done it in the past, and most people reading this will (admit to | or lie about) still doing it, but everybody should start making an effort to improve MySQL security: on your MySQL installations, including the one on your laptop, and in the presentations people read.
I spotted a reference article on Planet MySQL this evening, and even without looking at the details, the syntax presented typifies two basic and fundamental MySQL security 101 issues.
1. Always, always, always have a password for a MySQL account, especially for the ‘root’ user.
2. Don’t use the ‘root’ user unless you really have to. The SUPER privilege is just that: SUPER. It grants many things you really don’t want every person with access to have. In a larger environment …
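Both points are a few statements to fix. As a sketch (the account name, password placeholders and database name here are hypothetical; the syntax matches the MySQL 5.0-era releases current at the time): set a root password, and create a least-privilege account for the application instead of handing out SUPER.

```sql
-- 1. Give root a password
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('a-strong-password');

-- 2. Create a limited account for the application: no SUPER, no GRANT OPTION
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'another-strong-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'localhost';
```

The application then connects as 'appuser' and simply cannot perform the server-wide operations that SUPER allows.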
In case you live in the dark ages (that is, before RSS) and haven’t heard, MySQL Camp II is next week at Polytechnic University in Brooklyn, NY. Sign up and head over there, slackers!
I will be there to talk about Proven Scaling, HiveDB, DorsalSource, and much more! Send me a note if you’d like to meet up or talk about something specific. I will also have ample Proven Scaling bottle openers …
[Read more]
One of my favourite topics in MySQL performance talks is the ambiguous advice on what size your transactions should be. The basic advice is:
Running InnoDB in autocommit mode, or with short transactions, will cause many more fsync() calls, which will reduce your write performance.
It seems that if I run entirely transaction-less the import speed
of a test I wrote is:
real 0m31.222s user 0m2.111s sys 0m1.070s
real 0m30.318s user 0m2.111s sys 0m1.070s
real 0m31.744s user 0m2.108s sys 0m1.078s
If I run transactionally, committing after approximately every 10 queries, the time is awesomely better:
real 0m12.154s user 0m1.771s sys 0m0.869s
real 0m11.976s user 0m1.773s sys 0m0.874s
real 0m12.827s user 0m1.768s sys 0m0.872s
I tried hacking my code to commit even less frequently, and I can
get …
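The effect is easy to reproduce outside MySQL too. Here is a small sketch using SQLite rather than InnoDB (an assumption on my part, chosen because the Python `sqlite3` module needs no server; the commit-per-statement versus commit-per-batch cost is the same idea): time an insert loop that commits every row against one that commits every N rows.

```python
import sqlite3
import tempfile
import time

def timed_insert(rows, batch_size):
    """Insert `rows` rows, committing every `batch_size` statements.

    Returns elapsed seconds; fewer commits means fewer journal syncs.
    """
    with tempfile.NamedTemporaryFile(suffix=".db") as f:
        # isolation_level=None puts us in charge of BEGIN/COMMIT explicitly
        conn = sqlite3.connect(f.name, isolation_level=None)
        conn.execute("PRAGMA synchronous = FULL")  # sync to disk on every commit
        conn.execute("CREATE TABLE t (i INTEGER)")
        start = time.perf_counter()
        conn.execute("BEGIN")
        for i in range(rows):
            conn.execute("INSERT INTO t VALUES (?)", (i,))
            if (i + 1) % batch_size == 0:
                conn.execute("COMMIT")
                conn.execute("BEGIN")
        conn.execute("COMMIT")
        elapsed = time.perf_counter() - start
        conn.close()
        return elapsed

if __name__ == "__main__":
    print("commit every row:", timed_insert(1000, 1))
    print("commit every 100:", timed_insert(1000, 100))
```

The same-shaped gap as the timings above shows up: batching commits collapses the per-statement sync cost.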
I took the same table as I used for the MySQL Group By Performance Tests to see how fast MySQL can sort 1,000,000 rows, or rather return the top 10 rows from the sorted result set, which is the most typical way sorting is used in practice.
A full table scan of the table completes in 0.22 seconds, giving us about 4.5 million rows/sec. Obviously we can't get a sorted result set faster than that.
I placed temporary sort files on tmpfs (/dev/shm) to avoid disk IO as a variable as my data set fits in memory anyway and decided to experiment with sort_buffer_size variable.
The minimum value for sort_buffer_size is 32K which gives us the following speed:
PLAIN TEXT SQL:
mysql> SELECT * FROM gt ORDER BY i DESC LIMIT 10; …
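In principle, returning the top 10 of a million rows never requires a full sort: a bounded heap over a single scan is enough, which is the optimization newer MySQL versions apply to ORDER BY ... LIMIT via a priority queue in filesort. A minimal sketch of the idea in Python (the function name is mine):

```python
import heapq
import random

def top_n_desc(rows, n):
    """Top n values in descending order using a bounded min-heap
    (one pass, O(rows * log n)) instead of sorting everything."""
    return heapq.nlargest(n, rows)

if __name__ == "__main__":
    random.seed(42)
    data = [random.randrange(10**6) for _ in range(1_000_000)]
    print(top_n_desc(data, 10))
```

Because the heap never holds more than n entries, the work per row is tiny compared with a full sort, which is why a LIMIT query over a million rows can come so close to raw scan speed.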