Here is a list of the tools I find most useful when doing
performance audits.
Please note, I am writing this mainly for myself: I sometimes
end up hunting for these commands in my other blog post about
mastering indexing, so keeping them in one place should save me
time. A few things have also changed over the years.
Regular Slow Log Report:

pt-query-digest slow_query.log > slow.txt
All Queries (that use indexes) for a certain table:

pt-query-digest slow_query.log --filter '($event->{fingerprint} =~ m/^(?:select|update|delete)/) && ($event->{arg} =~ m/mytable /)' --limit=100% > mytable.txt
Longest Running Select Queries - the most painful queries, with the response-time % right next to them:

pt-query-digest slow_query.log --filter '($event->{fingerprint} =~ m/^(?:select)/)' --order-by=Query_time:max > select.txt
Filter the slow log for all Select queries for a certain table:

pt-query-digest slow_query.log --filter '($event->{fingerprint} =~ m/^(?:select)/) && ($event->{arg} =~ m/mytable /)' --no-report --output=slowlog > mytable.log
Find unused indexes from the pre-filtered table's log:

pt-index-usage mytable.log --host 127.0.0.1 --tables mytable > mytable_indexes.txt
Find Top 15 Largest tables on a server (use with caution) - (from www.mysqlperformanceblog.com):

-----------------------------
delimiter $$
create procedure dba.largest()
begin
  SELECT CONCAT(TABLE_SCHEMA, '.', TABLE_NAME) AS TABLE_NAME,
         CONCAT(ROUND(TABLE_ROWS / 1000000, 2), 'M') ROWS,
         CONCAT(ROUND(DATA_LENGTH / (1024 * 1024 * 1024), 2), 'G') DATA,
         CONCAT(ROUND(INDEX_LENGTH / (1024 * 1024 * 1024), 2), 'G') IDX,
         CONCAT(ROUND((DATA_LENGTH + INDEX_LENGTH) / (1024 * 1024 * 1024), 2), 'G') TOTAL_SIZE,
         ROUND(INDEX_LENGTH / DATA_LENGTH, 2) IDXFRAC
  FROM INFORMATION_SCHEMA.TABLES
  ORDER BY DATA_LENGTH + INDEX_LENGTH DESC
  LIMIT 15;
end $$
delimiter ;
-----------------------------

Call it with: CALL dba.largest();
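The 'G' figures the procedure prints are just byte counts divided by 1024^3 and rounded to two decimals. The same conversion can be checked from the shell; the byte counts below are made-up examples, not real table sizes:

```shell
# Reproduce the procedure's byte-to-gigabyte rounding with awk.
# Byte counts are made-up examples for illustration.
awk 'BEGIN {
  printf "%.2fG\n", 5368709120 / (1024 * 1024 * 1024)   # exactly 5 GiB
  printf "%.2fG\n", 1610612736 / (1024 * 1024 * 1024)   # 1.5 GiB
}'
# prints:
# 5.00G
# 1.50G
```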
ToDo: Run the filtered log against a test database with the Percona Toolkit
log player (pt-log-player) to test index improvements.
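A rough sketch of what that replay could look like, based on my memory of the pt-log-player interface from Percona Toolkit 2.1 (the --split/--play flags, the sessions directory, and the DSN are assumptions to verify against the docs; the tool was dropped from the toolkit in 2.2, so an older release may be needed):

```shell
# Sketch only - flags recalled from the Toolkit 2.1 docs, verify before use.
# 1. Split the filtered log into per-connection session files.
pt-log-player --split Thread_id --base-dir ./sessions mytable.log

# 2. Replay the sessions against a test server (DSN is an example).
pt-log-player --play ./sessions h=127.0.0.1
```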
Dec 11, 2014