Over the past four articles I've demonstrated cases where a denormalized data model is faster than a normalized one, but often not by much, and a case where a normalized data model was a bit faster than a denormalized one. My general conclusion was that with today's optimizers one should target a normalized data model and then denormalize where it makes sense, even for reporting. I'm not as much of a fan of the star schema, a heavily denormalized data model popularized by Ralph Kimball, as I used to be. Star schemas are costly to build and maintain, and the time spent creating them can often be better spent on more productive optimizations, such as the creation of summary tables and better indexing; a quick sketch of a summary table follows below. I'm not saying that denormalization doesn't make sense in some cases, just that it doesn't make sense in all cases.
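To make the summary-table point concrete, here is a minimal sketch. It assumes a hypothetical normalized `sales` fact table with `sale_date`, `product_id`, and `amount` columns; the table and column names are illustrative, not taken from the earlier articles.

```sql
-- A summary table that pre-aggregates daily sales per product.
-- Reports scan this compact table instead of rolling up the raw
-- fact table on every query.
CREATE TABLE sales_daily_summary (
  sale_date    DATE          NOT NULL,
  product_id   INT           NOT NULL,
  total_amount DECIMAL(12,2) NOT NULL,
  sale_count   INT           NOT NULL,
  PRIMARY KEY (sale_date, product_id)
);

-- Populate (or periodically refresh) from the normalized base table:
INSERT INTO sales_daily_summary (sale_date, product_id, total_amount, sale_count)
SELECT sale_date, product_id, SUM(amount), COUNT(*)
FROM sales
GROUP BY sale_date, product_id;
```

The base tables stay normalized; only the derived aggregate is materialized, which is usually far cheaper to build and keep current than a full star schema.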
Time to move on to the topic of this article: partitioning.
There are many good …