Recent Tips and News on Java EE 6 & GlassFish:
• How to configure GlassFish 3 + ColdFusion 9 and IIS7 (Benjamin Wong)
MySQL keeps many different files; some contain real data, some contain metadata. Which ones are important? Which can you throw away?
This is my attempt to create a quick reference of all the files used by MySQL: what's in them, what you can do if they are missing, and what you can do with them.
When I was working for Dell doing Linux support, my first words to a customer were “DO YOU HAVE A COMPLETE AND VERIFIED BACKUP?” Make one now before you think about doing anything I suggest here.
You should always try to manage your data through a MySQL client. If things have gone very bad, this may not be possible. MySQL may not start. If your file system gets corrupted, you may have missing files. Sometimes people create other files in the MySQL directory (BAD). This should help you understand what is safe to remove.
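For orientation, here is a rough sketch of what a data directory often contains; the path, host name, and database/table names below are assumptions, and the exact file set varies by version, storage engine, and configuration:
ls /var/lib/mysql
# ibdata1  ib_logfile0  ib_logfile1       <- InnoDB system tablespace and redo logs
# mysql-bin.000001  mysql-bin.index       <- binary logs (if enabled)
# myhost.err  myhost.pid                  <- error log and pid file
# mydb/db.opt                             <- per-database character set / collation
# mydb/t1.frm  mydb/t1.MYD  mydb/t1.MYI   <- table definition plus MyISAM data and index files
# mydb/t2.frm  mydb/t2.ibd                <- table definition plus per-table InnoDB tablespace
#                                            (only when innodb_file_per_table is enabled)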
Before you try to work with one of these files, make sure you have the file permissions set …
[Read more]
WHAT TIME IS IT?
This post started with a simple question: “Does the function NOW() get executed every time a row is examined?” According to the manual, “Functions that return the current date or time each are evaluated only once per query … multiple references to a function such as NOW() … produce the same result. … (However,) as of MySQL 5.0.12, SYSDATE() returns the time at which it executes.”
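A quick way to see the difference, as a sketch from the command line (SLEEP(2) just stretches the statement out so the timestamps can diverge):
# NOW() is evaluated once per statement, so t1 and t2 are identical
# even though the statement takes about two seconds to run:
mysql -e "SELECT NOW() AS t1, SLEEP(2) AS pause, NOW() AS t2"

# SYSDATE() is evaluated at the moment it runs, so t2 comes out
# roughly two seconds later than t1:
mysql -e "SELECT SYSDATE() AS t1, SLEEP(2) AS pause, SYSDATE() AS t2"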
Inspired by Baron's earlier post, here is a piece of advice I hear quite frequently:
"If you enable innodb_file_per_table, each table is it's own .ibd file. You can then relocate the heavy hit tables to a different location and create symlinks to the original location."
There are a few things wrong with this advice:
A question that came up during the MySQL track at the UKOUG
conference in Birmingham was "Can I exclude only a few databases
from mysqldump? Let's say that I have 50 databases, and I want to
dump all of them, except a few."
As many know, mysqldump has an option to ignore specific tables.
So if you have 1,000 tables in a database, you can tell
mysqldump to dump all the tables except a few.
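For example (the database and table names here are just placeholders):
# --ignore-table takes db_name.table_name and can be given more than once
mysqldump --ignore-table=employees.salaries \
          --ignore-table=employees.titles \
          employees > employees_minus_two_tables.sql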
There is no corresponding option to exclude one or more
databases.
However, if you know your command line tools, the solution is
easy:
First, we get the list of all databases:
mysql -B -N -e 'show databases'
information_schema
employees
five
four
mysql
one
performance_schema
six
test
three
two
-B forces batch mode (tab-separated output, with no box drawn around the
data), while -N suppresses the column headers.
Now, let's say that we want to exclude databases four, …
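The excerpt is cut off here, but the general shape of the solution is to filter the list and feed it to mysqldump; in this sketch the databases being skipped (four and five, plus the schema databases) are only an assumption:
# list all databases, drop the unwanted ones, and pass the rest to mysqldump
mysql -B -N -e 'show databases' \
  | grep -Ev '^(information_schema|performance_schema|four|five)$' \
  | xargs mysqldump --databases > all_but_a_few.sql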
Before I start a story about the data recovery case I worked on yesterday, here's a quick tip - having a database backup does not mean you can restore from it. Always verify your backup can be used to restore the database! If not automatically, do this manually, at least once a month. No, seriously - in most of the recovery cases I worked on, customers did have some sort of backup, but it just wasn't working, wasn't complete, or what not. Someone set it up and never bothered to check whether it still worked after a while.
Anyway, this post is not really about the backups but rather about a few interesting things I learned during the last recovery case.
First, some facts about the system and how data was lost:
I have a 5G mysqldump which takes 30 minutes to restore from backup. That means that when the database reaches 50G, it should take 30x10=5 hours to restore. Right? Wrong.
Mysqldump recovery time is not linear. Bigger tables, or tables with more indexes, will always take more time to restore.
If I restore from a raw backup (LVM snapshot, xtrabackup, innodb hot backup), it is very easy to model how much longer recovery time will take:
Backup is 80G
Copy is at 70MB/s.
10G is already complete.
= ((80-10) * 1024)/70/60 = ~17 minutes
I can tell progress with mysqldump by monitoring the rate at which show global status like 'Handler_write'; increases and comparing it to my knowledge of roughly how many rows are in each table. But progress != a magic number like "17 minutes". Not unless I do a lot of complex modeling.
I am …
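A rough sketch of the Handler_write check described above (the 30-second interval is arbitrary, and credentials are assumed to come from ~/.my.cnf):
# Sample the row-insert counter periodically; the delta between samples,
# compared against known table row counts, gives a crude progress estimate.
watch -n 30 "mysql -e \"SHOW GLOBAL STATUS LIKE 'Handler_write'\""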
[Read more]
The title is SEO bait – you can’t do it. We’ve seen a few recurring patterns that try to achieve something similar – and I thought I would share with you my favorite two:
Option #1: Use a table to insert into, and grab the insert_id:
CREATE TABLE option1 (id int not null primary key auto_increment) engine=innodb;
# each insert does one operation to get the value:
INSERT INTO option1 VALUES (NULL); # $connection->insert_id();
Option #2: Use a table with just one row:
CREATE TABLE option2 (id int not null primary key) engine=innodb;
INSERT INTO option2 VALUES (1); # start from 1
# each insert does two operations to get the value:
UPDATE option2 SET id=@id:=id+1;
SELECT @id;
So which is better? I don’t think it’s that easy to tell at first glance, since option 2 does look more elegant – but if the next value is fetched as part of a transaction – I can see a potential …
[Read more]
Recent Tips and News on Java EE 6 & GlassFish:
GlassFish
• An Eclipse / GlassFish / Java EE 6 Tutorial
SQL Antipatterns, by Bill Karwin
I remember that when I finished reading The Lord Of The Rings, I
felt a pang of disappointment. "What? Already finished? What am I
going to read now? What can give me the same pleasure and sense
of accomplishment that these wonderful pages have given
me?"
That's how I felt when I came to the last page of SQL Antipatterns. And, no, Bill Karwin doesn't
tell imaginary tales from a fictitious world. This book is full
of very real and very practical advice, but all the material is
presented with such grace and verve that I could not put it down
until the very end. I read it cover to cover in just a few hours,
and I savored every page.
What is this “Antipatterns” thing, anyway? The title may deceive a casual
bookshop browser into believing that it's about some
philosophical database theory. Digging further, you realize …