MySQL Camp: Using Memcached as a Configuration Database

I was talking to Dathan today at MySQL Camp and he pointed out that one could use memcached as a cheap configuration database. Instead of just using it as a cache, you would over-allocate the amount of memory needed so that entries would never be evicted.

For example, if you have 10M of configuration data, just allocate 64M; that should give you plenty of headroom.

Then you could have your clients poll at regular intervals; since memcached is fast, frequent polling effectively pushes changes out at a fairly steady rate.

Of course, you'd have to use multiple memcached servers, each with a full snapshot of the configuration data. That way, if one server crashes, you don't lose any data.

You could even have a reference install update settings via crontab, so that edits to the files on one image are automatically pushed out to the entire cluster.
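To make the idea concrete, here is a minimal sketch of what the client side could look like in Python, using pymemcache as one possible client library; the server list, key name, value, and polling interval are all hypothetical:

```python
import time
from pymemcache.client.base import Client

# Hypothetical servers, each holding a full snapshot of the configuration data.
SERVERS = [("cfg1.example.com", 11211), ("cfg2.example.com", 11211)]
clients = [Client(server) for server in SERVERS]

def publish_config(key, value):
    """Write a config entry to every server, so one crash never loses data."""
    for client in clients:
        client.set(key, value, expire=0)  # expire=0: the entry never expires

def read_config(key):
    """Read from the first server that has the key, falling back to the others."""
    for client in clients:
        value = client.get(key)
        if value is not None:
            return value
    return None

if __name__ == "__main__":
    publish_config("db.read_host", b"replica-3.example.com")
    while True:  # poll at a regular interval; changes show up on the next pass
        print(read_config("db.read_host"))
        time.sleep(30)  # hypothetical polling interval
```

Because the servers are over-allocated relative to the data (64M for 10M, as above), these entries should never be evicted.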

IDEA: Hierarchy of caches for high performance AND high capacity memcached

(note: see below for updates)

This is an idea I've been kicking around for a while and wanted some feedback. Memcached does an amazing job as it is but there's always room for improvement.

Two areas where memcached could be improved are local acceleration and large-capacity support (in the terabyte range).

I believe this could be done through a "hierarchy of caches", with a local in-process cache buffering the normal memcached, and a disk-based memcached backed by Berkeley DB providing large capacity.

The infrastructure would look like this:


in-process memcached -> normal memcached -> disk memcached
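A rough sketch of the read path this hierarchy implies might look like the following, using a plain dict to stand in for the in-process tier; the TieredCache class, the server addresses, and the local size cap are all hypothetical, and pymemcache is just one possible client library:

```python
from pymemcache.client.base import Client

class TieredCache:
    """Hypothetical read path: in-process cache -> normal memcached -> disk memcached."""

    def __init__(self, memcached_addr, disk_memcached_addr, local_max_items=10_000):
        self.local = {}                          # in-process tier: no network, no serialization
        self.local_max_items = local_max_items   # keep the local footprint small
        self.ram = Client(memcached_addr)        # normal memcached cluster
        self.disk = Client(disk_memcached_addr)  # large-capacity tier, e.g. backed by Berkeley DB

    def get(self, key):
        # 1. In-process: cheapest, avoids the network entirely.
        if key in self.local:
            return self.local[key]
        # 2. Normal memcached: one network round trip.
        value = self.ram.get(key)
        if value is None:
            # 3. Disk-based memcached: slower, but terabyte-range capacity.
            value = self.disk.get(key)
            if value is not None:
                self.ram.set(key, value)  # promote the hit back into RAM
        if value is not None and len(self.local) < self.local_max_items:
            self.local[key] = value       # populate the local tier for next time
        return value
```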

The in-process memcached would not be configured to access a larger memcached cluster. Clients would not use the network to get() objects and it would only take up a small amount of memory on the local machine. Objects would not serialize themselves before …

[Read more]
Memcached or when 70% is Full

I've been looking at a problem with our memory cache system at work for the last week, and it finally clicked into place.

At work I developed this really awesome benchmarking library which exports statistics from various subsystems in our robot. It works really well - too well sometimes.

It's been telling me that one of my key caches is only functioning at 20% efficiency.

This is a memcached cache, which means it's broken out into many small servers running 256M or 512M of memory (really, whatever they can spare).

Each of these servers was reporting that it was only 70% full, so I don't need to add any more memory, right?

Wrong. I was looking at the number of stored bytes versus the total maximum number of bytes allocated to memcached.
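To compare those two numbers (along with the hit rate the benchmarking library was complaining about), you can read them straight out of memcached's stats output. A minimal sketch of that check, assuming a single server on the default local port; bytes, limit_maxbytes, get_hits and get_misses are standard memcached stat counters:

```python
import socket

def memcached_stats(host="127.0.0.1", port=11211):
    """Fetch the raw 'stats' output from a memcached server as a dict of strings."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"stats\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            data += sock.recv(4096)
    stats = {}
    for line in data.decode().splitlines():
        if line.startswith("STAT "):
            _, name, value = line.split(" ", 2)
            stats[name] = value
    return stats

stats = memcached_stats()
fill = int(stats["bytes"]) / int(stats["limit_maxbytes"])
hits, misses = int(stats["get_hits"]), int(stats["get_misses"])
hit_rate = hits / (hits + misses) if hits + misses else 0.0
print(f"{fill:.0%} full, {hit_rate:.0%} hit rate")  # e.g. "70% full, 20% hit rate"
```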

The problem is …

[Read more]