I wanted to point out something that might not be obvious from
the name: MySQL Parallel Dump can be used as a generic wrapper to
discover tables and databases, and fork off worker processes to
do something to them in parallel. That "something" can easily be
invoking mysqlcheck -- or any other program. This
makes it really easy for you to do multi-threaded
whatever-you-need-to-do on MySQL tables. Here's how.
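The same idea can be sketched in a few lines of Python: discover your tables, then fan out workers that each invoke an arbitrary command per table. This is only an illustrative sketch, not the actual tool; the `run_per_table` helper and the command template are made up for the example.

```python
# Illustrative sketch (not mysql-parallel-dump itself): run a command
# once per table, several at a time, the way the wrapper described
# above forks off worker processes.
import shlex
import subprocess
from multiprocessing.dummy import Pool  # thread pool; workers just wait on subprocesses

def run_per_table(tables, cmd_template, workers=4):
    """tables: list of (db, table) pairs.
    cmd_template: a command with {db}/{table} placeholders, e.g.
    "mysqlcheck {db} {table}" (hypothetical usage)."""
    def worker(pair):
        db, table = pair
        cmd = shlex.split(cmd_template.format(db=db, table=table))
        return subprocess.run(cmd, capture_output=True, text=True).stdout
    with Pool(workers) as pool:
        return pool.map(worker, tables)
```

Swapping the command template is all it takes to turn a parallel dump into parallel `mysqlcheck`, or parallel anything else.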
Basically: if you need a job (or hate your current one), have time to commit, and are a professional, or just disciplined and care about every little thing you do, or if you simply want to work with me:
Geneva Data, an Internet security company, is
looking for a PHP developer to work on a unique project in San
Antonio, Texas.
We're open to full-time, part-time, contract, consulting or
project work. We just want the most innovative local PHP
programmer available (with experience).
“Experience” means you can show us proof of your work … whether you have been in the workforce for 6 months or 60 years.
“Innovative” means that you've never encountered a problem that you couldn't solve. We appreciate individuals who experiment with new technologies on personal projects. Creativity is a plus with us.
- MySQL and/or Linux proficiency is a further plus.
- Experience …
[Read more]MySQL Parallel Dump can now dump a single table simultaneously into many files of a user-specified size. This not only helps speed dumps, but it paves the way for much more efficient parallel restores. Read on for the details.
Bob Zurek of EnterpriseDB posted a blog entry today titled, "We slammed into a brick wall with MySQL". If you read his blog entry, the information he is referencing is in this press release, FortiusOne Migrates GeoCommons Intelligent Mapping Website to EnterpriseDB Advanced Server.
If you read that press release, it says:
“We slammed into a brick wall with MySQL,” said Chris Ingrassia, chief technology officer, FortiusOne. “As an example, MySQL’s rather limited and incomplete spatial support dramatically impacted performance. We were looking for an affordable database solution, but we required enterprise-class features and performance that MySQL simply couldn’t deliver. Plus, philosophically we want to support …
[Read more]While writing up a review on a database tool I discovered today, I was inspired to spark a discussion about database GUIs in general. The value of GUI tools for administering database systems like MySQL has been a topic of much debate. ...
Cluster
This blog entry describes how to install MySQL clusters on
Solaris.
A MySQL cluster consists of three separate types of nodes:
- SQL nodes
- Storage nodes
- Management nodes
The SQL nodes are the nodes that applications can connect to.
Internally SQL nodes connect to storage nodes to process
the queries and return the result set to the end client.
The storage nodes are controlled by management nodes. They do
most of the work in processing the queries.
Management nodes manage the entire cluster: they start and stop
the data and SQL nodes and manage backups.
Let's start with the simplest installation, where all the nodes
of the cluster are on the same box. Of course this is not how you
would do a typical MySQL cluster installation...but this is just
to get a feel of what is involved in MySQL cluster …
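For an all-on-one-box test cluster like this, the management node's config.ini might look roughly like the following. The host names and data directories here are illustrative, and `NoOfReplicas=2` assumes two data nodes:

```
# Illustrative config.ini for a single-host test cluster
[ndbd default]
NoOfReplicas=2                      # two copies of the data, so two data nodes

[ndb_mgmd]                          # management node
HostName=localhost
DataDir=/var/lib/mysql-cluster

[ndbd]                              # first storage (data) node
HostName=localhost
DataDir=/var/lib/mysql-cluster/ndbd1

[ndbd]                              # second storage (data) node
HostName=localhost
DataDir=/var/lib/mysql-cluster/ndbd2

[mysqld]                            # SQL node
HostName=localhost
```

In a real deployment each node type would live on its own host, and you would change the `HostName` entries accordingly.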
http://www.allthingsdistributed.com/2007/10/amazons_dynamo.html
"Most of these services only store and retrieve data by primary
key and do not require the complex querying and management
functionality offered by an RDBMS. This excess functionality
requires expensive hardware and highly skilled personnel for its
operation, making it a very inefficient solution. In addition,
the available replication technologies are limited and typically
choose consistency over availability."
1) Most web work is primary-key lookups.
2) It's not transactional.
3) Availability is more important than lost data.
When I joined MySQL four years ago, there was quite a lot of debate about product management. We didn't actually have any product managers and the view in Engineering was "we don't need 'em." The rationale was that we were so far behind in implementing features requested by customers that there was no need to have another opinion. "We already know exactly what we need to do" or "The Community tells us what we should focus on" were typical responses. It took me a while to convince people that product management could add value in helping to prioritize things and... READ MORE
The hiatus is over; I am reasonably settled at my new home. Therefore, HackMySQL.com is back to normal operation.
Do you speak French? I can speak French, but not well. If you like, we can try to converse in French. Thank you for your patience.
In previous MySQL 6.0 alphas, the new Falcon engine didn’t handle ‘large’ transactions (meaning lots of rows inserted at one time) very well. You typically had to fall back to looping through the data with various commit points to get all the data inserted in a timely fashion.
The Falcon team should get some good kudos for putting out the latest alpha release that has much improved handling of large transactions. Below are just a few examples of large inserts on a Fedora Core box with a single CPU. Falcon was given a 200MB record cache size and InnoDB got a comparable 200MB buffer pool size.
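The old commit-point workaround mentioned above can be sketched like this; `chunked_insert` and the chunk size are hypothetical names for the illustration, and `conn` is any DB-API connection:

```python
# Hedged sketch of the commit-point workaround: insert rows in
# chunks, committing after each chunk instead of in one huge
# transaction that the storage engine may handle poorly.
def chunked_insert(conn, sql, rows, chunk_size=10000):
    cur = conn.cursor()
    for i in range(0, len(rows), chunk_size):
        cur.executemany(sql, rows[i:i + chunk_size])
        conn.commit()  # commit point: keeps each transaction small
```

With the newer Falcon alpha this kind of batching should no longer be necessary for speed, which is exactly what the numbers below show.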
mysql> show create table t_m\G
*************************** 1. row ***************************
Table: t_m
Create Table: CREATE TABLE `t_m` (
`client_transaction_id` int(11) NOT NULL DEFAULT '0',
`client_id` int(11) NOT NULL DEFAULT '0',
`investment_id` int(11) NOT NULL DEFAULT '0',
`action` varchar(10) NOT NULL,
`price` …[Read more]