I have blogged about prepared statements a few times; they are what most people rely on (too much) for SQL injection protection. I say too much because they do not fully protect code against SQL injection attacks, and they come with a lot of performance-hurting baggage. To sum up: prepared statements do not handle all aspects of dynamic SQL creation, they add network I/O and memory overhead, and they tend to generate less optimal query plans. Some of these issues can be solved with client-side emulation, but that brings its own share of issues, and I have to agree with Bill and not Brian that …
[Read more]
I’ve recently updated my CentOS 5 x86_64 RPMs for bacula-5.0.2.
Hope they are useful to you.
I love WordPress. Really, there isn’t a better blogging platform around. It’s that good. And I’ve been using it to self-host my blog for the past year or so. For a while, I hosted it using IIS6 on WHS v1. That was a real pain, as it took several days to find a URL rewriting solution that worked with IIS6. For the past few months I’ve been hosting it on my Windows 7-based media center. Since that runs IIS7, URL rewriting was easier using the standard URL Rewrite module.
A few days ago I decided to test out WHS “Vail”. After installing it, I wanted to move my blog onto it so I downloaded the Microsoft Web Platform Installer. It promptly let me know that it couldn’t find any products in my selected language. Huh? This is what drives people crazy about Windows software. Crap just doesn’t make sense sometimes.
I googled and found several links …
[Read more]
OK, you found the problem SQL statement that was affecting your server’s performance; now, where did it originate?
The new MySQL Enterprise Plugins for Connector/J and Connector/NET send query statistics, including the source location for each query, directly to the MySQL Enterprise Monitor.
Figure 1 is a screenshot of the new source location feature.
Figure 1. Source Location
Figure 2 shows the standard query statistics, which are collected in the query analyzer. In both cases, the statistics are gathered by the MySQL Connector and the Plugin, not MySQL proxy.
Figure 2. Query Analyzer
If you’re a MySQL Enterprise customer, you can …
[Read more]
MySQL Dump Using Linux CRON Job
If you are a database administrator who would like to automate your tasks, here is a simple and very basic task that can be automated.
MySQL database dumps are a very basic task every administrator performs; no matter how simple it sounds, a dump is most useful in failure scenarios, so you will have to perform this task very often.
It is easy to miss taking dumps as a daily routine, so an alternative is to schedule them to run automatically. This lets you concentrate on other tasks that might need more attention.
There are several ways to dump a database: many utilities and tools can do it, and many of them also let you schedule dumps through a GUI.
Follow the steps below to automate your MySQL dump.
First, create a .sh file with these entries,
…
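The post’s actual script is cut off above, but here is a minimal sketch of what such a .sh file might contain. The backup directory, user name, and password placeholder are assumptions for illustration, not the author’s values:

```shell
#!/bin/sh
# Hypothetical daily-dump script: adjust the directory and credentials for your setup.
BACKUP_DIR="$HOME/mysql-backups"
STAMP=$(date +%Y%m%d)          # e.g. 20100611, keeps one file per day
mkdir -p "$BACKUP_DIR"
if command -v mysqldump >/dev/null 2>&1; then
    # --single-transaction takes a consistent snapshot for InnoDB tables
    mysqldump --all-databases --single-transaction \
        -u backup_user -p'YOUR_PASSWORD' > "$BACKUP_DIR/all-databases-$STAMP.sql"
else
    echo "mysqldump not found; install the MySQL client tools" >&2
fi
```

After making the file executable with `chmod +x`, a crontab entry such as `0 2 * * * /path/to/mysql-dump.sh` would run it every night at 02:00.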
[Read more]
Current database market share figures are hard to come by, so here's my take on this. Included is a discussion of how we came up with these figures.
For the impatient, here's the table I put together today:
Rank | Database | Results
1 | Access | 47,000,000
2 | Oracle | 24,900,000
3 | MySQL | 16,765,000
4 | SQL Server | 12,320,000
If you follow PBXT development you may have noticed a number of
different versions of the engine have been mentioned in various
talks and blogs.
There is actually a consistent strategy behind all this, which I
would like to explain here.
PBXT 1.0 - Current: 1.0.11-3 Pre-GA
Launchpad: lp:pbxt
This is the current PBXT production release. It is stable in all
tests and environments in which it is currently in use.
The 1.0.11 version of the engine is available in MariaDB 5.1.47.
PBXT 1.1 - Stability: RC
Launchpad: lp:pbxt/1.1
PBXT 1.1 implements memory resident (MR) tables. These tables can
be used for fast, concurrent access to …
Today, I was looking for a quick way to see HTTP response codes of a bunch of urls. Naturally, I turned to the curl command, which I would usually use like this:
curl -IL "URL"
This command sends a HEAD request (-I), follows through all redirects (-L), and displays some useful information at the end. Most of the time it's ideal:
curl -IL "http://www.google.com"
HTTP/1.1 200 OK
Date: Fri, 11 Jun 2010 03:58:55 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Server: gws
X-XSS-Protection: 1; mode=block
Transfer-Encoding: chunked
However, the server I was curling didn't explicitly support HEAD requests. Additionally, I was really only interested in the HTTP status codes and not in the rest of the output. This meant I had to change my strategy and issue GET requests, ignoring the HTML output completely.
Curl manual to the rescue. A few …
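The exact command the author settled on is truncated above; one common approach from the curl manual uses the -w write-out option to print only the status code (a sketch, with the URL just an example):

```shell
# Hypothetical helper: print only the HTTP status code for a URL.
# -s silences progress output, -o /dev/null discards the body,
# -L follows redirects, and -w '%{http_code}\n' writes just the final code.
check_status() {
    curl -sL -o /dev/null -w '%{http_code}\n' "$1"
}

# Usage (needs network access): check_status "http://www.google.com"
```

Unlike -I, this issues a GET, so it also works against servers that reject HEAD requests.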
[Read more]
I just wrote a large post on the reasons for excessive InnoDB main tablespace growth, and I thought it would make sense to explain briefly why purge is so often not a problem at all, yet seemingly out of nowhere the purge thread can become unable to keep up, the undo tablespace explodes, and performance drops. Here is what happens.
When you have a typical OLTP system with small transactions, your UNDO space is small and it fits in the buffer pool. In fact, most of the changes do not need to go to disk at all: the undo space is allocated, used, and freed without ever needing to go to disk.
Now, when you have a spike in writes or long-running transactions that increase your undo space size, it may be evicted from the buffer pool and stored on disk. This is when problems often start …
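A quick way to watch for this condition (a sketch, assuming the standard mysql command-line client is installed and configured) is InnoDB's history list length, which grows when the purge thread falls behind:

```shell
# Hypothetical monitoring helper: extract the purge backlog from InnoDB status.
# A steadily growing "History list length" means purge is not keeping up,
# and the undo space is likely growing along with it.
history_list_length() {
    mysql -e 'SHOW ENGINE INNODB STATUS\G' | grep 'History list length'
}

# Usage (needs a running MySQL server): history_list_length
```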
[Read more]