An article I wrote about the automation system I worked on at
Facebook for MySQL database provisioning was posted to the
Facebook Engineering blog.
It covers, in fairly intimate detail, a system called "Windex"
that we use to provision and re-provision our MySQL databases at
Facebook. This system provisioned the new Facebook datacenter in
Luleå, Sweden, with very little human effort, saving us a great
deal of time.
So, if you're curious about what has been taking up my time for
the last year and more, or if you're just curious about how
Facebook does things, go check it out.
If you need to automate backups, you might wonder about the different techniques available to you.
When it comes to scheduling backups using built-in features of
MySQL, you have two main options:
- Either run mysqldump (or mysqlbackup if you have an Enterprise licence) from an operating system scheduler, for example in Linux using "cron" or in Windows using the "Task Scheduler". This is the most commonly used option.
- Alternatively, use the Event Scheduler to perform a series of SELECT ... INTO OUTFILE ... commands, one for each table you need to back up. This is a less commonly used option, but you might still find it useful.
Scheduling mysqlbackup with cron
mysqldump is a client program, so when you run it, you run it from a shell script, or at a terminal, rather than inside a MySQL statement. The following statement backs up the sakila …
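To make the cron approach concrete, here is a minimal Python sketch that builds the mysqldump command line a scheduled job could invoke. The user name, backup directory, and option choices are illustrative assumptions, not taken from the original post.

```python
import datetime
import shlex

def mysqldump_command(database, user="backup", backup_dir="/var/backups/mysql"):
    """Build the mysqldump invocation a cron job could run.

    The user, directory, and options here are assumptions for illustration.
    """
    stamp = datetime.date.today().isoformat()
    outfile = f"{backup_dir}/{database}-{stamp}.sql"
    cmd = [
        "mysqldump",
        f"--user={user}",
        "--single-transaction",  # consistent snapshot for InnoDB tables
        database,
    ]
    return cmd, outfile

cmd, outfile = mysqldump_command("sakila")
print(" ".join(shlex.quote(part) for part in cmd), ">", outfile)
```

A crontab entry would then simply run the printed command at the chosen time, e.g. `0 2 * * *` for a daily 02:00 backup.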
[Read more]
In my ongoing efforts to migrate my fun side projects and coding experiments from SVN to Git, I’ve come across some of my favorite Python-based apps – which are all available in their respective repos on BitBucket, as follows:
- What it does: it’s an IRC bot that takes commands and does your bidding on whichever remote server the bot is installed on.
- How it does it: the bot runs on whatever server you install it on; it connects to the IRC server and channel you configured it with, waits for you to give it commands, executes them, and returns the output to your IRC chat window.
- …
A Pattern for a Newly Hired DBA?
I don’t think this experience is unique. It has been shared repeatedly among those starting a job as a DBA (database administrator) at a new company, especially when the organization has never had a dedicated DBA. The conversation usually goes something like this: – “Welcome aboard <insert name here>! Here [...] …
[Read more]
After reading a blog post about MySQL Tuning scripts I thought
about the possibility of a fully Automatic MySQL Tuner.
This is how it would work:
A daemon would connect to your database server and fetch status
variables, just like mysqltuner and similar tools do. The daemon
could then decide that a parameter needs to be adjusted, run
"SET GLOBAL …", and write an /etc/mysql/autotuner.cf file, which
would be included from your my.cnf.
It should have a min/max setting for each option and some
thresholds.
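That min/max clamping could be sketched like this; the bounds and the single tuned variable are made-up examples, and a real daemon would derive its suggestions from the fetched status variables.

```python
# Hypothetical per-option bounds; a real tool would read these from config.
LIMITS = {
    "innodb_buffer_pool_size": (128 * 2**20, 8 * 2**30),  # 128 MiB .. 8 GiB
}

def tune(variable, suggested):
    """Clamp a suggested value into the option's [min, max] range and
    return the statement the daemon would run (and persist to autotuner.cf)."""
    lo, hi = LIMITS[variable]
    value = max(lo, min(hi, suggested))
    return f"SET GLOBAL {variable} = {value};"

# A suggestion above the cap gets clamped down to the configured max:
print(tune("innodb_buffer_pool_size", 2**40))
```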
Why?
- Not everyone is a DBA
- It could do better than the default settings in most cases. Luckily, many defaults were updated in 5.6.
- You're not using my-huge.cf, are you?
- It could help when there are changing workloads
- It might be sufficient for …
Stockholm - October 4th 2011
Severalnines, provider of automation and
management software for easily usable, highly available and
auto-scalable cloud database platforms, today announces the
latest release of its flagship product ClusterControl™ for MySQL Replication.
Introducing ClusterControl™ for MySQL Replication v.1.1.9
ClusterControl™ for MySQL Replication enables
customers to Deploy, Manage, Monitor and Scale a clustered
database platform based on standard MySQL Replication.
Developers and DBAs now have access to all of the features of
Severalnines' flagship product ClusterControl™ specifically
adapted to MySQL Replication.
…
It’s been some time now that we’ve been talking about devops, the pushing together of application development and application deployment via IT operations, in the enterprise. To keep up to speed on the trend, 451 CAOS attended PuppetConf, a conference for the Puppet Labs community of IT administrators, developers and industry leaders around the open source Puppet server configuration and automation software. One thing seems clear: given the talk about agile development and operations, cloud computing, business and culture, our definition of devops continues to be accurate.
Another consistent part of devops that also emerged at PuppetConf last week was the way it tends to introduce additional stakeholders beyond software developers and IT …
[Read more]
Modern internet infrastructure is complex. Components and services are prone to failure. Resiliency involves building redundancy, best practices and processes into your architecture so that you can bend without breaking.
- Migrating to cloud service providers
- Rearchitecting and refactoring applications to scale
- Scaling the database tier - MySQL and Oracle
- Building redundancy into every layer
- Deploying object caches - memcache
- Deploying page caches - varnish
- Migrating to InnoDB - a transactional storage engine
- Infrastructure design
- Infrastructure automation
- Disaster Recovery
- Business Continuity with cloud deployments
Call or Skype us in New York City +1-212-533-6828
I’ve been playing around with some quick system automation scripts that are handy to use when you don’t want or need to set up a chef or puppet action. I like to keep all of my hostnames and login details in a MySQL database (a CMDB, actually), but for this example we’ll just use a couple of nested lists. This script executes commands in parallel across the hosts you choose in the menu system via the “pdsh” command, so make sure you have that installed before running. Alternatively, you can change the command call to use ssh instead of pdsh for serialized execution, but that’s not as fun or fast. With some customizations here and there you can expand this to run parallelized jobs for simplifying daily work in database administration, usage reporting, log file parsing, or other system automation as you see fit. Here’s the code. Comments welcome as always!
#!/usr/bin/env python
## NAME: menu_parallel_execution.py
## DATE: …
[Read more]
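Since the script itself is truncated here, a rough sketch of the core idea: build the pdsh command (or a serial ssh fallback) from a nested host list. The group names and host names are placeholders, not from the original.

```python
# Nested lists standing in for the CMDB lookup described in the post.
HOST_GROUPS = [
    ["web", ["web1.example.com", "web2.example.com"]],
    ["db",  ["db1.example.com"]],
]

def pdsh_command(hosts, remote_cmd):
    """Parallel execution across all hosts in one pdsh call (-w takes a
    comma-separated host list)."""
    return ["pdsh", "-w", ",".join(hosts), remote_cmd]

def ssh_commands(hosts, remote_cmd):
    """Serial fallback: one ssh invocation per host."""
    return [["ssh", host, remote_cmd] for host in hosts]

group_name, hosts = HOST_GROUPS[0]
print(pdsh_command(hosts, "uptime"))
```

Each list returned here would be passed to something like `subprocess.run` once the menu selection is made.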
Lately I’ve had to do some environment load testing, so I wrote this quick script. It can be modified as needed, but the basic idea is that it spawns $x threads (--threads) and then sends two connections (or however many you want, with --per-connection=) per thread to the URL (--url=). You can have it wait a configurable time between connections as well (--wait=).
The URL is appended with a 32-character randomized string so that any database/caching on the backend of the site isn’t serving data from a warm cache. You can hunt down the string length of 32 and change it to whatever you want. Feel free to change and use as needed, just keep my info at top.
#!/usr/bin/python
## DATE: 2010-10-26
## AUTHOR: Matt Reid
## MAIL: mreid@kontrollsoft.com
## SITE: http://kontrollsoft.com
## LICENSE: BSD http://www.opensource.org/licenses/bsd-license.php
…[Read more]
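The cache-busting suffix described above could look something like this; the `r=` query-parameter name is an assumption, as the truncated script doesn't show how the suffix is attached.

```python
import random
import string

def bust_cache(url, length=32):
    """Append a random suffix so back-end caches never see the same URL
    twice. The 'r' parameter name is illustrative, not from the script."""
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(random.choice(alphabet) for _ in range(length))
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}r={suffix}"

print(bust_cache("http://example.com/page"))
```

Each worker thread would call this before every request, so no two requests hit the same cache key.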