
Displaying posts with tag: information_schema

Fun with Bugs #30 - quick review of my reports in February, 2014
+0 Vote Up -0Vote Down
I've got only one comment on my previous post about a deadlock, and it was more of a hint based on a different use case than a real explanation. So far nobody wants the free beer... Maybe this is even good, as I am not going to the conference, and the BOF I've submitted will be supervised by my colleague Przemysław Malkowski. But you still have an entire month until the conference to get a chance at a beer from him (we'll arrange this somehow).

In the meantime I'd like to review the bug reports for the MySQL server (a few) and the fine manual (many) that I submitted in February 2014. 22 in total, one

  [Read more...]
innodb_stats_on_metadata and slow queries on INFORMATION_SCHEMA
+2 Vote Up -0Vote Down

INFORMATION_SCHEMA is usually the place to go when you want to get facts about a system (How many tables do we have? What are the 10 largest tables? What are the data size and index size for table t? etc.). However, it is also quite common that such queries are very slow and create lots of I/O load. Here is a tip to avoid these hassles: set innodb_stats_on_metadata to OFF.
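
For reference (not part of the original excerpt): innodb_stats_on_metadata is a dynamic global variable, so it can be checked and turned off without a restart:

SHOW GLOBAL VARIABLES LIKE 'innodb_stats_on_metadata';
SET GLOBAL innodb_stats_on_metadata = OFF;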

This is a topic we already talked about, but given the number of systems suffering from INFORMATION_SCHEMA slowness, I think it is good to bring innodb_stats_on_metadata back on the table.

The problem

Let’s look at a system I’ve seen recently: MySQL 5.5, working set fitting

  [Read more...]
Persistent statistics and partitions
+1 Vote Up -0Vote Down
Today I was studying for the MySQL 5.6 exams, in particular for these two items:
  • Create and utilize table partitioning
  • Obtain MySQL metadata from INFORMATION_SCHEMA tables

The first step is to create a table and partition it by hash.
mysql> CREATE TABLE pfoo (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255))
-> PARTITION BY HASH(id) PARTITIONS 4;
Query OK, 0 rows affected (0.04 sec)

mysql> INSERT INTO pfoo(name) VALUES('test01'),('test02'),('test03'),('test04'),
-> ('test05'),('test06'),('test07'),('test08'),('test09'),('test10'),('test11');
Query OK, 11 rows affected (0.00 sec)
Records: 11 Duplicates: 0 Warnings: 0

mysql> SELECT * FROM pfoo;
+----+--------+
| id | name   |
+----+--------+
|  4 | test04 |
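
Not shown in this excerpt, but a natural follow-up for the second exam topic is to look at the partition metadata in INFORMATION_SCHEMA (note that TABLE_ROWS is only an estimate for InnoDB):

-- row distribution across the four hash partitions of pfoo
SELECT PARTITION_NAME, PARTITION_METHOD, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'pfoo';
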
  [Read more...]
common_schema & openark-kit in the media: #DBHangOps, OurSQL
+0 Vote Up -0Vote Down

#DBHangOps

I had the pleasure of joining @DBHangOps today to speak about common_schema and openark-kit. What was meant to be a 15-minute session turned into 50 -- sorry, people, I don't talk as much at home, but when it comes to my pet projects...

I also realized I had been missing out on a great event: DBHangOps is a hangout where you can chat and discuss MySQL & related technologies with friends and colleagues, with whom you typically only meet at conferences. I will certainly want to attend future events.

Thanks to John Cesario and Geoffrey Anderson who invited me to talk, and to the friends and familiar faces who attended; I was happy to talk about my work, and very interested in

  [Read more...]
Calculating the InnoDB free space - part 2
+0 Vote Up -0Vote Down
This is part 2; you can find part 1 here.

So in part 1 we learned how to calculate the free space within InnoDB. But unfortunately that won't always work perfectly.

The first issue: the DATA_FREE column in the INFORMATION_SCHEMA.TABLES table will not show a sum of the free space of each partition. This means that if you have innodb_file_per_table disabled and are using partitioning then you must divide DATA_FREE by the number of partitions.
This is Bug #36312.

Example:
mysql> SELECT CONCAT(T.TABLE_SCHEMA,'.',T.TABLE_NAME) AS TABLE_NAME,
-> P.PARTITION_NAME AS PART,IBT.SPACE,IBD.PATH,T.DATA_FREE AS T_DATA_FREE,
-> P.DATA_FREE AS P_DATA_FREE FROM

  [Read more...]
Calculating the InnoDB free space
+2 Vote Up -0Vote Down
Recently someone asked me if it's possible to find the total free space within InnoDB. I thought this would be very easy as the INFORMATION_SCHEMA.TABLES table has a DATA_FREE column. So we could just use SELECT SUM(DATA_FREE) FROM INFORMATION_SCHEMA.TABLES, couldn't we?

So what does the DATA_FREE column tell us? It tells us the free space within InnoDB for that particular table. A table can share a tablespace with multiple other tables.

The tablespace which is used by a table depends on whether innodb_file_per_table was enabled during table creation and/or at the last time the table was rebuilt (e.g. by OPTIMIZE TABLE).

If innodb_file_per_table was always disabled then this query probably reports the correct free space:
SELECT DATA_FREE FROM INFORMATION_SCHEMA.TABLES WHERE ENGINE='InnoDB' LIMIT 1;

This is

  [Read more...]
MySQL 5.5's new features
+0 Vote Up -0Vote Down
The recently released MySQL 5.6 gets a lot of attention, but for those who are still on 5.5 there is also good news: There are two new features in 5.5.

The first feature is that there are more INFORMATION_SCHEMA tables for InnoDB. This means that it's possible to 'see' what's in the buffer pool. It also makes it possible to get more information about the LRU list.
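
Not in the excerpt, but as a rough illustration of what the new tables allow (be aware that this query is itself heavyweight on a large buffer pool):

-- which tables currently occupy the most pages in the buffer pool
SELECT TABLE_NAME, COUNT(*) AS pages, SUM(DATA_SIZE) AS data_bytes
FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE
GROUP BY TABLE_NAME
ORDER BY pages DESC
LIMIT 10;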

From the 5.5.28 changelog:
InnoDB: Certain information_schema tables originally introduced in MySQL 5.6 are now also available in MySQL 5.5 and MySQL 5.1: INNODB_BUFFER_PAGE, INNODB_BUFFER_PAGE_LRU, and INNODB_BUFFER_POOL_STATS. (Bug #13113026)

This is in the "Bugs Fixed" section instead of the "Functionality Added or Changed" section, which is a bit weird in my opinion.

The second feature is a

  [Read more...]
Hierarchical data in INFORMATION_SCHEMA and derivatives
+5 Vote Up -0Vote Down

Just how often do you encounter hierarchical data? Consider a table with some parent-child relation, like this classic employee table:

CREATE TABLE employee (
  employee_id INT UNSIGNED PRIMARY KEY,
  employee_name VARCHAR(100),
  manager_id INT UNSIGNED,
  CONSTRAINT `employee_manager_fk` FOREIGN KEY (manager_id) REFERENCES employee (employee_id)
) engine=innodb
;
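
As a quick illustration of the parent-child relation (my own example, not from the post), a self-join resolves each employee's manager:

SELECT e.employee_name, m.employee_name AS manager_name
FROM employee AS e
LEFT JOIN employee AS m ON m.employee_id = e.manager_id;

And the sample data: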
+-------------+---------------+------------+
| employee_id | employee_name | manager_id |
+-------------+---------------+------------+
|           1 | Rachel        |       NULL |
|           2 | John          |          1 |
|           3 | Stan          |          1 |
|           4 | Naomi         |          2 |
  [Read more...]
Killing InnoDB idle transactions
+0 Vote Up -0Vote Down

The issue of terminating long-idle, open InnoDB transactions has been discussed by many recently. I wish to add my share, by proposing a quick and clean solution via common_schema.

common_schema 1.2 provides the innodb_transactions view, which relies on INNODB_TRX - one of the InnoDB Plugin views in INFORMATION_SCHEMA - as well as on PROCESSLIST, and so is able to determine with certainty that a transaction has been idle for a long time.
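
A minimal sketch of the idea (assuming the view's idle-time column is called trx_idle_seconds; check the common_schema documentation for the exact column names):

SELECT * FROM common_schema.innodb_transactions WHERE trx_idle_seconds >= 60;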

innodb_transactions offers us a sql_kill_query column, which produces a 'KILL QUERY 12345' type of value. So we

  [Read more...]
Analysing WHERE-clauses in INFORMATION_SCHEMA table implementations
Employee +1 Vote Up -0Vote Down

The MySQL Server has a quite simple interface for plugins to create tables inside INFORMATION_SCHEMA. A minimal plugin for creating a table with nothing but a counter might look like this:

static int counter_fill_table(THD *thd, TABLE_LIST *tables, Item *cond)
{
  TABLE *table= tables->table;   /* the I_S table we are asked to fill */
  ulonglong value= 0;

  while (1)
  {
    table->field[0]->store(value++, true);
    /* emit the row; note that without inspecting 'cond' (the WHERE clause)
       this loop has no reason to ever stop */
    if (schema_table_store_record(thd, table))
      return 1;
  }

  return 0;
}

static ST_FIELD_INFO counter_table_fields[]=
{
  {"COUNT", 20, MYSQL_TYPE_LONGLONG, 0, MY_I_S_UNSIGNED, 0, 0},
  {0, 0, MYSQL_TYPE_NULL, 0, 0, 0, 0}
};

static int counter_table_init(void *ptr)
{
  ST_SCHEMA_TABLE *schema_table= (ST_SCHEMA_TABLE*)ptr;

  schema_table->fields_info= counter_table_fields;
  schema_table->fill_table= counter_fill_table;
  return 0;
}

static struct st_mysql_information_schema counter_table_info =
{
  [Read more...]
How common_schema split()s tables - internals
+1 Vote Up -0Vote Down

This post exposes some of the internals, and the SQL behind QueryScript's split. common_schema/QueryScript 1.1 introduces the split statement, which auto-breaks a "large" query (one which operates on large tables as a whole or without keys) into smaller queries, and executes them in sequence.

This makes for easier transactions, fewer locks held, and potentially (depending on the user) more idle time released back to the database. split has similar concepts to oak-chunk-update and

  [Read more...]
Table split(...) for the masses
+1 Vote Up -0Vote Down

(pun intended)

common_schema's new split statement (see release announcement) auto-splits complex queries over large tables into smaller ones: instead of issuing one huge query, split breaks one's query into smaller queries, each working on a different set of rows (a chunk).

Thus, it is possible to avoid holding locks for long times, allowing for smaller transactions. It also makes for breathing space for the RDBMS, at times boosting operation speed, and at times prolonging operation time at will.

In this post I show how split exposes itself to the user, should the user wish so.
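
A heavily hedged sketch of what such a QueryScript snippet looks like (consult the common_schema documentation for the exact split syntax; the UPDATE below is only a placeholder):

call common_schema.run("
  split (UPDATE sakila.rental SET return_date = return_date)
    pass;
");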

split can

  [Read more...]
Performance improvements for big INFORMATION_SCHEMA tables
Employee +2 Vote Up -0Vote Down
A short while after I fixed the legacy bug that prevented temporary MyISAM tables from using the dynamic record format, I got an email from Davi Arnaut @ Twitter. It turned out that Twitter needed to fix the very same problem, but for the case when INFORMATION_SCHEMA temporary tables use MyISAM.

In short, INFORMATION_SCHEMA tables provide access to database metadata. Despite their name, they are more like views than tables: when you query them, relevant data is gathered from the dictionary and other server internals, not from tables. The gathered data is stored in a temporary table (memory or MyISAM depending on size) and then returned to the user.
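
A simple way to observe this materialization (standard status counters, not something from the post): Created_tmp_tables increments when an INFORMATION_SCHEMA table is queried.

SHOW SESSION STATUS LIKE 'Created_tmp%tables';
SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES;
-- running the SHOW again now reports a higher Created_tmp_tables value
SHOW SESSION STATUS LIKE 'Created_tmp%tables';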

The reason Davi emailed me was to let me know that he had further improved the fix for

  [Read more...]
Implementing mysqldump --ignore-database
Employee +4 Vote Up -0Vote Down

Ronald Bradford and Giuseppe Maxia (hey guys!) wrote about different ways to ignore a database when using mysqldump --all-databases over the past couple of days.

Whilst the solutions are interesting, I wondered why not attack it from the proper approach, and add the option to mysqldump itself? Honestly, the patch is trivial, and doing anything against INFORMATION_SCHEMA with lots of databases and tables... well, let's just say group_concat_max_len is the least of your worries...

15 minutes later I had a working solution:

To my surprise, I also

  [Read more...]
A few hacks to simulate mysqldump --ignore-database
+4 Vote Up -0Vote Down

A few days ago, Ronald Bradford asked for a mysqldump --ignore-database option.

As a workaround, he proposes:
mysqldump --databases `mysql --skip-column-names \
-e "SELECT GROUP_CONCAT(schema_name SEPARATOR ' ') \
FROM information_schema.schemata WHERE schema_name \
NOT IN ('mysql','performance_schema','information_schema');" \
` > /mysql/backup/rds2.sql

It's a clever solution, but unfortunately it only works if you have a handful of schemas. If your database happens to have several dozen (or hundreds, or thousands) of schemas (which is where you need this option most), then the output will be truncated to the length of group_concat_max_len (1024 by default).
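
As an aside (not from the post), the usual way around that particular truncation is to raise the limit for the session before running the query:

SET SESSION group_concat_max_len = 1024 * 1024;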

There are two

  [Read more...]
Auto caching INFORMATION_SCHEMA tables: seeking input
+1 Vote Up -0Vote Down

The short version

I have it all working. It's kind of magic. But there are issues, and I'm not sure it should even exist, and am looking for input.

The long version

In Auto caching tables I presented a hack which allows getting cached or fresh results via simple SELECT queries.

The drive for the above hack was INFORMATION_SCHEMA tables. There are two major problems with INFORMATION_SCHEMA:

  • Queries on schema-oriented tables such as TABLES, COLUMNS, STATISTICS, etc. are heavyweight. How heavyweight? Enough to lock down your database. Enough, in some cases, to crash it.
  • The data is always generated on-the-fly, as you request it. Query

  [Read more...]
INFORMATION_SCHEMA Optimizations: still crashing my servers
+0 Vote Up -0Vote Down

[Update: need to take more breaks: now NOT crashing my servers! See clarifications below]

INFORMATION_SCHEMA Optimizations are meant to make your INFORMATION_SCHEMA queries lighter and safer.

For example, if you're going to query the COLUMNS table for just the columns of a single table, then the following:

SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA='sakila' AND TABLE_NAME='rental'

makes for an optimization: specifying a literal on TABLE_SCHEMA avoids scanning the directories of other schemata. Specifying a literal on TABLE_NAME avoids checking up on other tables. So it's a

  [Read more...]
More MySQL foreach()
+4 Vote Up -0Vote Down

In my previous post I've shown several generic use cases for foreach(), a new scripting functionality introduced in common_schema.

In this part I present DBA's handy syntax for schema and table operations and maintenance.

Confession: while I love INFORMATION_SCHEMA's power, I just hate writing queries against it. It's just so much typing! Just getting the list of tables in a schema makes for this heavy duty query:

SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='sakila' AND TABLE_TYPE='BASE TABLE';

When a join is involved this really becomes a nightmare. I think it's

  [Read more...]
Reading results of SHOW statements, on server side
+5 Vote Up -0Vote Down

SHOW statements are show stoppers on the server side. While clients can get a SHOW statement as a result set just as any normal SELECT, this is not the case on the server side.

On the server side, that is, from within MySQL itself, one cannot:

SELECT `Database` FROM (SHOW DATABASES);

One cannot:

DECLARE show_cursor CURSOR FOR SHOW TABLES;

One cannot:

SHOW TABLES INTO OUTFILE '/tmp/my_file.txt';

So it is impossible to get the results with a query; impossible to get the results from a stored routine; impossible to get the results by file reading...
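
(An aside, not from the post: a few SHOW statements do have INFORMATION_SCHEMA counterparts that can be read on the server side, for example:)

-- server-side equivalent of SHOW DATABASES
SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA;
-- server-side equivalent of SHOW TABLES
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = DATABASE();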

Bwahaha! A hack!

For some SHOW

  [Read more...]
Advanced InnoDB Deadlock Troubleshooting – What SHOW INNODB STATUS Doesn’t Tell You, and What Diagnostics You Should be Looking At
+5 Vote Up -0Vote Down

One common cause for deadlocks when using InnoDB tables is the existence of foreign key constraints and the shared locks (S-lock) they acquire on referenced rows.

The reason I want to discuss them though is because they are often a bit tricky to diagnose, especially if you are only looking at the SHOW ENGINE INNODB STATUS output (which might be a bit counter-intuitive since one would expect it to contain this info).
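
(Context, not from the excerpt: on MySQL 5.5+ with the InnoDB Plugin, the INFORMATION_SCHEMA transaction and lock tables are the usual companions to SHOW ENGINE INNODB STATUS when digging into locking problems:)

SELECT * FROM INFORMATION_SCHEMA.INNODB_TRX;        -- open transactions
SELECT * FROM INFORMATION_SCHEMA.INNODB_LOCKS;      -- locks currently held or requested
SELECT * FROM INFORMATION_SCHEMA.INNODB_LOCK_WAITS; -- which transaction waits on which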

Let me show a deadlock error to illustrate (below is from SHOW ENGINE INNODB STATUS\g):

------------------------
LATEST DETECTED DEADLOCK
------------------------
111109 20:10:03
*** (1) TRANSACTION:
TRANSACTION 65839, ACTIVE 19 sec, OS thread id 4264 starting index read
mysql tables in use 1, locked 1
LOCK WAIT 6 lock struct(s), heap size 1024, 3 row lock(s), undo log entries 1
MySQL thread id 3, query id 74 localhost 127.0.0.1 root Updating
UPDATE parent SET age=age+1
  [Read more...]
Monitoring Related OpenWorld Talks
Employee_Team +0 Vote Up -0Vote Down

I gave two monitoring related talks at OpenWorld, thanks to all that came along!

Both were monitoring related, the first an introduction to MySQL Enterprise Monitor, and the second a look at some of the new instrumentation that is getting developed, primarily within the MySQL 5.6 release.

If you'd like to get a quick overview of how MySQL Enterprise Monitor works, then take a look through the "Getting to Know MySQL Enterprise Monitor" talk. This gives you a high level view of how the different pieces fit together, and then each of the important factors within the user interface:

Getting to Know MySQL Enterprise Monitor

And if you are interested in seeing how the instrumentation and monitoring

  [Read more...]
Finding tables without primary keys
+6 Vote Up -0Vote Down

I was checking a third party server, and I needed to find out if there were tables without primary keys. This is important to know, not only because the lack of primary keys affects performance and data accuracy in general, but also because with row-based replication performance can degrade beyond belief when updating tables without primary keys.

Anyway, I did not remember off the bat any method to get this information from a server with thousands of tables, and thus I went to find a solution on my own. My first instinct called for using the COLUMNS table from the INFORMATION_SCHEMA, and so I came up with this query, where I sum the number of columns that are inside either a PRIMARY or UNIQUE key and filter only the ones where such sum is zero (i.e. no primary or unique

  [Read more...]
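
(A rough sketch of the approach described above, not necessarily the author's exact query:)

SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS
GROUP BY TABLE_SCHEMA, TABLE_NAME
HAVING SUM(COLUMN_KEY IN ('PRI','UNI')) = 0;
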
MySQL Global status difference using single query
+1 Vote Up -0Vote Down

I have just read MySQL Global status difference using MySQL procedures / functions, by Andres Karlsson. I have commented, but realized I did not provide a direct answer. In the comment, I suggested checking out a solution based on views, found in common_schema. But the solution in common_schema is split into two views, due to the fact that views cannot handle derived table subqueries.

Well, here's a single query to do that: it checks GLOBAL_STATUS twice, 10 seconds apart in the following sample. It uses SLEEP() to actually wait between the

  [Read more...]
Announcing common_schema: common views & routines for MySQL
+1 Vote Up -0Vote Down

Today I have released common_schema, a utility schema for MySQL which includes many views and functions, and is aimed to be installed on any MySQL server.

What does it do?

There are views answering for all sorts of useful information: stuff related to schema analysis, data dimensions, monitoring, processes & transactions, security, internals... There are basic functions answering for common needs.

Some of the views/routines simply formalize those queries we tend to write over and over again. Others take the place of external tools, answering complex questions via SQL and metadata. Still others help out with SQL generation.

Here are a few highlights:

  • Did you know you can work out
  [Read more...]
ROUTINE_PRIVILEGES implementation
+1 Vote Up -0Vote Down

Following up on MySQL security: inconsistencies, and on MySQL bug #61596, I was thinking it may take a long time until the non-existent ROUTINE_PRIVILEGES view is implemented. Here’s my own implementation of the view.

I’ve followed the somewhat strange conventions used in the *_PRIVILEGES tables in INFORMATION_SCHEMA, where the IS_GRANTABLE is a separate column, although in 2nd 1st normal form.

I present it here as a query, using session variables, rather than a view definition:

SELECT STRAIGHT_JOIN
  CONCAT('\'', User, '\'@\'', Host, '\'') AS GRANTEE,
  NULL AS ROUTINE_CATALOG,
  Db AS ROUTINE_SCHEMA,
  Routine_name AS
  [Read more...]
MySQL security: inconsistencies
+2 Vote Up -0Vote Down

Doing some work with MySQL security, I’ve noticed a few inconsistencies. They’re mostly not-too-terrible for daily work, except they get in my way right now.

The ALL PRIVILEGES inconsistency

The preferred way of assigning account privileges in MySQL is by way of using GRANT.

With GRANT, one assigns one or more privileges to an account, such as SELECT, UPDATE, ALTER, SUPER, etc. Sometimes it makes sense for an account to have complete control over a domain. For example, the root account is typically assigned with all privileges. Or, some user may require all possible privileges on a certain schema.
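
(For illustration only; the account and schema names here are made up:)

GRANT ALL PRIVILEGES ON sakila.* TO 'some_user'@'localhost';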

Instead of listing the entire set of privileges, the ALL

  [Read more...]
Finding and killing long running InnoDB transactions with Events
Employee +5 Vote Up -0Vote Down

I’ve seen a number of solutions for finding long running transactions or sessions within InnoDB / MySQL now. Every single one of them has (in the past by necessity) been implemented as a script (other than one, more on that one later) that is either

  [Read more...]
Checking for AUTO_INCREMENT capacity with single query
+3 Vote Up -0Vote Down

Darn! This means oak-show-limits becomes redundant. Am I not supposed to speak about it at my upcoming presentation? Bad timing!

You have AUTO_INCREMENT columns. How far are you pushing the limits? Are you going to run out of AUTO_INCREMENT values soon? Perhaps you wonder whether you should ALTER from INT to BIGINT?

The answer is all there in INFORMATION_SCHEMA. The TABLES table shows the current AUTO_INCREMENT value per table, and the COLUMNS table tells us all about a column’s data type.
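
(A hedged sketch of the kind of join involved; a real check would compute the ceiling per column type, whereas the constant below is just the UNSIGNED INT maximum:)

SELECT t.TABLE_SCHEMA, t.TABLE_NAME, c.COLUMN_TYPE, t.AUTO_INCREMENT,
       t.AUTO_INCREMENT / 4294967295 AS ratio_of_unsigned_int_range
FROM INFORMATION_SCHEMA.TABLES AS t
JOIN INFORMATION_SCHEMA.COLUMNS AS c
  ON c.TABLE_SCHEMA = t.TABLE_SCHEMA AND c.TABLE_NAME = t.TABLE_NAME
WHERE c.EXTRA = 'auto_increment';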

It takes some

  [Read more...]
Drizzle metadata tables
+4 Vote Up -0Vote Down

Giuseppe has a great post about the Evolution of MySQL metadata, and I thought I’d have a look at what we have in Drizzle. It’s pretty easy to work out how many tables are in each schema: we just query the standard INFORMATION_SCHEMA.TABLES view:

drizzle> select table_schema,count(table_name)
    ->  from information_schema.tables
    -> group by table_schema;
+--------------------+-------------------+
| table_schema       | count(table_name) |
+--------------------+-------------------+
| DATA_DICTIONARY    |                53 |
| INFORMATION_SCHEMA |                20 |
+--------------------+-------------------+
2 rows in set (0 sec)

In Drizzle it’s important to note that

  [Read more...]