At first I thought it would be an easy case of the master and slaves being out of sync, with row-based replication failing because it could not find the row, but then I noticed all machines were actually still running statement-based replication. As far as I could recall, we had switched to statement-based replication to circumvent another issue that had already been solved months ago, but for some reason we never switched back to row-based replication.
A simple SHOW SLAVE STATUS revealed something similar to this:
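For orientation, these are the fields typically worth inspecting in that output (illustrative, not the actual output from this incident):

SHOW SLAVE STATUS\G
-- Fields to check: Slave_IO_Running, Slave_SQL_Running,
-- Last_Errno, Last_Error, Seconds_Behind_Master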
Question asked by a student: is there a way to limit a table’s quota on disk? Say, limit a table to 2GB, after which it will refuse to grow? Note that the requirement is that rows are never DELETEd. The table must simply refuse to be updated once it reaches a certain size.
There is no built-in way to limit a table’s quota on disk. The first thing to observe is that MySQL itself has nothing to do with this: it is entirely up to the storage engine to provide such functionality. The storage engine is the one that handles data storage: how table data and keys are stored on disk. Just consider the differences between MyISAM’s .MYD & .MYI files, InnoDB’s shared tablespace ibdata1, and InnoDB’s file-per-table .ibd files.
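While the engine won’t enforce a quota, a table’s footprint can at least be monitored from information_schema; a minimal sketch (schema and table names are placeholders):

-- Approximate on-disk size of a table; for InnoDB these figures are estimates
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = 'mydb' AND table_name = 'mytable';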
The only engine I know of that has a quota is the MEMORY engine: it[Read more...]
Here are a few thoughts on the current status and further possibilities of Facebook’s Online Schema Change (OSC) tool. I’ve had these thoughts for months now, pondering improvements to oak-online-alter-table, but haven’t gotten around to implementing them, nor even to writing them down. Better late than never.
The tool has some limitations. Some cannot be lifted, some could. Quoting from the announcement and looking at the code, I add a few comments. I conclude with a general opinion on the tool’s abilities.
In posts on June 30 and July 6, I explained how implementing the commands “replace into” and “insert ignore” with TokuDB’s fractal tree data structures can be two orders of magnitude faster than implementing them with B-trees. Towards the end of each post, I hinted that there are some caveats that complicate the story a little. On July 21st I explained one caveat, secondary keys, and on August 3rd, Rich explained another caveat. In this[Read more...]
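For readers unfamiliar with the two statements, a minimal illustration of their semantics (the table and columns are hypothetical):

-- REPLACE INTO: on a duplicate key, the old row is deleted and the new one inserted
REPLACE INTO t (id, val) VALUES (1, 'new');
-- INSERT IGNORE: on a duplicate key, the new row is silently discarded
INSERT IGNORE INTO t (id, val) VALUES (1, 'new');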
If you like to keep your DDL backed up in a source management tool like svn or cvs, and want to do it individually for stored procedures, events, triggers, tables and such rather than keeping a single file, you can easily do so using the commands below. You could even include --skip-dump-date or --skip-comments and use the output to compare DDL daily, checking for alterations and thus making sure you are aware of any DDL changes done on the database.
mysqldump -u$user -p$password -h$hostname -P$port --no-create-info --no-data --no-create-db --skip-opt $dbname > "$path"/"$dbname"_triggers_"$date".sql
mysqldump -u$user -p$password
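Along the same lines, similar invocations could capture routines, events and plain table DDL into their own files (a sketch; the shell variables are assumed to be defined as above):

# Stored procedures and functions only
mysqldump -u$user -p$password -h$hostname -P$port --routines --skip-triggers --no-create-info --no-data --no-create-db --skip-opt $dbname > "$path"/"$dbname"_routines_"$date".sql
# Events only
mysqldump -u$user -p$password -h$hostname -P$port --events --skip-triggers --no-create-info --no-data --no-create-db --skip-opt $dbname > "$path"/"$dbname"_events_"$date".sql
# Table DDL only (no data, no triggers)
mysqldump -u$user -p$password -h$hostname -P$port --skip-triggers --no-data --no-create-db --skip-opt $dbname > "$path"/"$dbname"_tables_"$date".sql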
By now you know that there is a MySQL Track during next week’s ODTUG Kaleidoscope in Washington, DC. Ronald Bradford and I organized the schedule at the last minute (Ronald did a lot of the work!). It was difficult to fill a schedule with 19 sessions that are either 1 hour or 1.5 hours long, and to do it I ended up with three presentations.
At each presentation I will be giving away a copy of The MySQL Administrator’s Bible, so be sure to show up! All MySQL track sessions are in Maryland C, and all times are Eastern.
On Monday, June 28th from 4 pm – 5:30 pm I will be presenting[Read more...]
The answer is yes – kind of.
Stored procedures are implemented in the MySQL Server and can be used regardless of the storage engine backing a specific table. One inference from this is that they won’t work when accessing the Cluster database directly through the NDB API.
This leads to the question of whether or not that limitation actually restricts what you can achieve. This article gives a brief introduction to stored procedures and looks at how the same results can be achieved using the NDB API.
Stored procedures provide a rudimentary way of implementing functionality within the database (rather than in the application code). They are implemented by the database designer and have the ability to perform computations as well as make changes to the data in the database. A typical use of stored procedures would be to control all[Read more...]
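To anchor the term, here is a minimal stored procedure (the table and column names are illustrative, not from the article):

DELIMITER //
CREATE PROCEDURE raise_salary (IN emp_id INT, IN pct DECIMAL(5,2))
BEGIN
  -- apply a percentage raise to a single employee row
  UPDATE employees SET salary = salary * (1 + pct / 100) WHERE id = emp_id;
END //
DELIMITER ;
-- CALL raise_salary(42, 5.00);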
The previous two parts have looked at some solutions offered by triggers. Let’s now look at some wished-for trigger solutions, which are currently unavailable because of trigger limitations.
Triggers are slow
The overhead of adding triggers is usually a deal breaker. But I would like to believe speed will improve in time!
Triggers cannot act on the same table which activated them.
A thing I would like to do is have a rotating table. A log table is a perfect example: I only want to store logs up to 7 days back, or up to 1M rows. ON INSERT, (or[Read more...]
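Since a trigger cannot DELETE from the very table that fired it, one workaround available today is the event scheduler rather than a trigger; a sketch with hypothetical table and column names:

-- Requires the event scheduler: SET GLOBAL event_scheduler = ON;
CREATE EVENT purge_old_logs
  ON SCHEDULE EVERY 1 HOUR
  DO DELETE FROM log_table WHERE created_at < NOW() - INTERVAL 7 DAY;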
In Triggers Use Case Compilation, Part I, I demonstrated some trigger use scenarios.
We continue our examples of triggers usage.
Consider the City table: each city belongs to a certain country. Some questions we may be interested in are:
Answering any of these questions is an easy SQL exercise. But aggregation is required, and a full table scan (or a full index scan, if we’re lucky) is essentially part of any execution plan. What if we can’t pay the price for these queries? What if we need immediate, or near immediate[Read more...]
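One way to avoid paying that price on every query is to let a trigger maintain a summary table at write time; a sketch assuming the familiar world sample schema, where City carries a CountryCode column (CountryStats is a hypothetical summary table):

CREATE TABLE CountryStats (
  CountryCode CHAR(3) PRIMARY KEY,
  num_cities INT NOT NULL DEFAULT 0
);

CREATE TRIGGER city_ai AFTER INSERT ON City
FOR EACH ROW
  -- keep a running count of cities per country
  INSERT INTO CountryStats (CountryCode, num_cities)
  VALUES (NEW.CountryCode, 1)
  ON DUPLICATE KEY UPDATE num_cities = num_cities + 1;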
I’ve run into quite a few triggers on production systems lately. In previous posts, I’ve written about problems solved with triggers. So here’s a compilation of some solutions based on triggers, and some problems which are not (yet?) solvable due to current trigger limitations.
Triggers can be used to:
Triggers are not fast. In fact, they can add quite an overhead if misused. Some of the triggers presented here are known to work on real life production systems, though, and work well. But make sure you benchmark before embarking on extensive application changes.
I’ll be using[Read more...]
Web applications face constant exploitation attempts. Those with a user base must keep their users’ private data, well… private.
While the MySQL security model allows restricting users’ access to databases, tables and even columns, it has no built-in feature for restricting access to rows within a given table.
One cannot allow a user to update only rows 0 through 99 while restricting that user from updating rows 100 to 199. Such restrictions are usually managed at the application level, by adding the necessary “… AND filtering_column = some_value …” to queries.
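One common workaround is to grant access to a view that filters the rows, rather than to the base table itself; a minimal sketch with hypothetical names:

-- Expose only the 'emea' rows; grant privileges on the view, not on customers
CREATE SQL SECURITY DEFINER VIEW emea_customers AS
  SELECT * FROM customers WHERE region = 'emea'
  WITH CHECK OPTION;
GRANT SELECT, UPDATE ON mydb.emea_customers TO 'emea_user'@'%';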
Many web applications have the notion of an ‘admin’ account, or several such accounts, which provide greater control over the application. The ‘admin’ account is one at which many attacks are targeted. One such attack is an attempt to[Read more...]
There’s a lot of buzz around memcached. memcached is widely used, and has clients for many programming languages and platforms. TangentOrg have developed a memcached client in the form of MySQL UDFs (User Defined Functions).
I wish to discuss the memcached functions for MySQL: if and how they should be used.
Disclaimer: I do not work with memcached functions for MySQL on a production system; all that is written here reflects my opinion on how things should be done.
With memcached functions for MySQL, we can do the following:
SELECT memc_set('mykey', 'The answer is 42');
SELECT memc_get('mykey');
(See my previous post on how to[Read more...]
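One detail worth noting: before memc_set/memc_get will return anything useful, the UDFs have to be pointed at a memcached instance; the same UDF package ships a memc_servers_set call for that (the address below is illustrative):

SELECT memc_servers_set('127.0.0.1:11211');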