Displaying posts with tag: hack
Showing the hidden tables in MySQL 8 data dictionary

The freshly released MySQL 8.0 includes a data dictionary, which makes MySQL much more reliable. Thanks to this feature, there are no more '.frm' files, and querying the information_schema is 30x to 100x faster than in previous versions.

One drawback of the implementation is that the data dictionary tables are hidden by design.

While the reason is fully understandable (the developers don't want to commit to an interface that may change in the future), many curious users are disappointed, because openness is the basis of good understanding and feedback.

The problem of accessing the dictionary tables can be split into three parts:

  • Finding the list of tables; …
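The excerpt stops at the list, but for context, one approach that has circulated for peeking at these tables (an assumption on my part that it matches this post's method; it requires a debug build of the server) is to lift the access check via the debug system variable:

-- Debug builds only; the flag disables the dictionary access check
SET SESSION debug = '+d,skip_dd_table_access_check';

-- The hidden dictionary tables in the mysql schema become queryable:
SELECT name, schema_id, type FROM mysql.tables LIMIT 5;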
[Read more]
Making UUID() and RAND() replication safe

MySQL's UUID() and RAND() functions both produce (pseudo-random) nondeterministic results. UUID()'s result is, moreover, bound to the host on which it executes. For this reason, both are unsafe to replicate with the STATEMENT binlog format. As an example, consider:

master> create table test.uuid_test (id int, u varchar(64));

master> insert into test.uuid_test values (1, UUID());
Query OK, 1 row affected, 1 warning (0.03 sec)

master> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | …
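A common workaround (the standard trick for statement-based replication; whether it is this post's exact solution I leave hedged) is to first assign the nondeterministic value to a user-defined variable. User variable values are written to the binary log alongside the statements that reference them, so the slave replays the very same value:

master> set @u := UUID();
master> insert into test.uuid_test values (2, @u);

The same applies to RAND(): set @r := RAND(); then use @r in the writing statement.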
[Read more]
Cheating mysqlsandbox to install MariaDB 10.0

mysqlsandbox is version-aware. The new 5.6 version, for example, requires special care because of its InnoDB system tables and otherwise modified system tables.

At the moment, it will refuse to install MariaDB 10.0 (alpha):

bash$ make_sandbox /tmp/mariadb-10.0.1-linux-x86_64.tar.gz 
unpacking /tmp/mariadb-10.0.1-linux-x86_64.tar.gz
unsupported version 10.0

This is perfectly legitimate, and I have no quarrel with this fact. However, I did want to set up MariaDB 10.0 as a sandbox.

As it turns out, mysqlsandbox relies on MySQL package naming conventions to detect the version: the fact that a …

[Read more]
Looking for a hack: share data between MySQL sessions

I'm looking for a way to share data between two MySQL connections/sessions. Obviously tables are the trivial answer, but for reasons of security (possibly insufficient privileges) I wish to avoid that.

The type of data to be passed can vary. Ideally I would be able to pass multiple pieces of information (dates, texts, integers, etc.) around. If that proves impossible, I would make do with texts only, and failing that, with a single (but reasonably long) text.

There is a way to do so: by writing to the file system (SELECT INTO OUTFILE + LOAD_FILE()). However, I wish to avoid it, since writing to files from within MySQL requires creating a new file each time; there is no overwrite and no purging, so this litters the file system.
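For illustration, the file-based mechanism just mentioned looks roughly like this (the file name is hypothetical; INTO OUTFILE refuses to overwrite, hence a fresh file per write):

-- Session A: export a value (fails if the file already exists)
SELECT 'payload to share' INTO OUTFILE '/tmp/shared_0001.txt';

-- Session B: read it back (requires the FILE privilege)
SELECT LOAD_FILE('/tmp/shared_0001.txt') AS payload;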

So: any other tricks? Is there some way to pass data via GET_LOCK()/RELEASE_LOCK() (none that I can see other than Morse code)?

[Read more]
SQL: selecting top N records per group, another solution

A while back I presented SQL: selecting top N records per group, a "give me the top 5 countries in each continent" type of query, which used an external numbers table and a lot of tedious casting.

Here's another solution I came up with (*). Still using GROUP_CONCAT (how else?), but no external table and no casting. The query outputs the largest 5 countries (by surface area) per continent.

SELECT
  Continent,
  Name,
  SurfaceArea,
  Population
FROM
  world.Country,
  (
    SELECT 
      GROUP_CONCAT(top_codes_per_group) AS top_codes
    FROM
      (
        SELECT 
          SUBSTRING_INDEX(GROUP_CONCAT(Code ORDER BY SurfaceArea DESC), ',', 5) AS top_codes_per_group …
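Since the excerpt cuts off here, a sketch of how such a query plausibly completes (the FIND_IN_SET filter and the alias names are my assumptions about the elided part, inferred from the technique described above):

SELECT
  Continent,
  Name,
  SurfaceArea,
  Population
FROM
  world.Country,
  (
    SELECT
      GROUP_CONCAT(top_codes_per_group) AS top_codes
    FROM
      (
        SELECT
          SUBSTRING_INDEX(GROUP_CONCAT(Code ORDER BY SurfaceArea DESC), ',', 5) AS top_codes_per_group
        FROM world.Country
        GROUP BY Continent
      ) s_top_per_group
  ) s_top_codes
WHERE
  FIND_IN_SET(Code, top_codes)
ORDER BY
  Continent, SurfaceArea DESC;

-- FIND_IN_SET keeps only rows whose country code appears among the
-- concatenated top-5 codes of some continent; codes are unique, so a
-- country can only match its own continent's list.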
[Read more]
One-way Password Crypting Flaws

I was talking with a client and the topic of password crypting came up. From my background as a C coder, I have a few criteria for regarding a mechanism as safe. In this case we’ll just discuss things from the perspective of secure storage and validation in an application.

  1. Use a digital fingerprint algorithm, not a hash or CRC. A hash is by nature lossy (it generates evenly distributed duplicates), and a CRC is intended to identify bit errors in transmitted data, not to compare potentially different data.
  2. Store/use all of the fingerprint, not just part of it (otherwise it’s lossy again; a small sketch follows this list).
  3. SHA1 and its siblings are not ideal for this purpose, but OK. MD5 and that family of “message digests” was proven flawed long ago; they can be “freaked” to create a desired outcome. Thus, it is possible to …
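For a concrete flavor of the above (the salting scheme is my illustration, not something the post prescribes): store and compare the full output of a stronger digest such as SHA-256, never a truncated or MD5-based value:

-- Illustrative only: a full-length, salted SHA-256 fingerprint
SELECT SHA2(CONCAT('per-user-salt:', 'the-password'), 256) AS fingerprint;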
[Read more]
Getting rid of huge ibdata file, no dump required, part II

This post continues Getting rid of huge ibdata file, no dump required, part I, where I describe a way of converting your single-tablespace InnoDB database into a file-per-table one, without the pain of exporting and importing everything at once.

In the previous part we put aside the issue of foreign keys. We address it now.

What if my InnoDB tables have foreign keys?

MyISAM does not support them, so you can't just ALTER an InnoDB table to MyISAM and back into InnoDB and expect everything to work.

Alas, this calls for additional steps (i.e. additional ALTER commands). However, these still fall well under the concept of "do it one table at a time, then take time to recover your breath and replication lag".
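To make the extra steps concrete, here is a sketch for a single table with one foreign key (table, column and constraint names are illustrative, not from the post):

-- 1. Save the key's definition first (SHOW CREATE TABLE child)
-- 2. Drop the foreign key, since MyISAM cannot carry it:
ALTER TABLE child DROP FOREIGN KEY fk_child_parent;
-- 3. The round-trip that moves the table into its own tablespace:
ALTER TABLE child ENGINE=MyISAM;
ALTER TABLE child ENGINE=InnoDB;
-- 4. Restore the key from the saved definition:
ALTER TABLE child ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent (id);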

Save, drop and restore your …

[Read more]
Getting rid of huge ibdata file, no dump required

You have been told (guilty as charged) that the only way to get rid of the huge InnoDB tablespace file (commonly named ibdata1) when moving to innodb_file_per_table is to do a logical dump of your data, completely erase everything, then import the dump.

To quickly reiterate: you can only delete the ibdata1 file when no InnoDB tables exist. Delete this file while any InnoDB table exists, even a table in its own tablespace, and nothing ever works anymore.
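As a sanity check (my own illustration of the rule above), one can list any remaining InnoDB tables before even considering removing the file:

-- Must return an empty set before ibdata1 may be deleted
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE ENGINE = 'InnoDB';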

The problem with the dump-based solution

The impact of doing a logical dump is often overwhelming. Well, the dump itself may be tolerable, but the restore takes much longer. The real pain is that you can't do …

[Read more]
Auto caching INFORMATION_SCHEMA tables: seeking input

The short version

I have it all working. It's kind of magic. But there are issues, and I'm not sure it should even exist, so I'm looking for input.

The long version

In Auto caching tables I presented a hack which allows getting cached or fresh results via simple SELECT queries.

The drive for the above hack was INFORMATION_SCHEMA tables. There are two major problems with INFORMATION_SCHEMA:

  1. Queries on schema-oriented tables such as TABLES, COLUMNS, STATISTICS, etc. are heavyweight. How heavyweight? Enough to lock down your database. Enough, in some cases, to crash it (a small sketch follows this list).
  2. The data is always generated on-the-fly, as you request it. Query the COLUMNS table twice, and risk two …
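To make “heavyweight” concrete (my example, not the post's): a seemingly innocent aggregate over COLUMNS forces the server to open the definition of every table in every schema:

-- Touches the metadata of every table on the server;
-- on a large installation this can be very expensive.
SELECT TABLE_SCHEMA, COUNT(*) AS column_count
FROM information_schema.COLUMNS
GROUP BY TABLE_SCHEMA;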
[Read more]
Auto caching tables

Is there a way to create a caching table, some sort of a materialized view, such that upon selecting from that table, its data is validated/invalidated?

Hint: yes.

But to elaborate the point: say I have some table data_table. Can I rewrite all my queries which access data_table to read from some autocache_data_table, with nothing else changed in the query itself? No caveats, no additional WHEREs, and still have that autocache_data_table provide the correct data, dynamically updated by some rule of our choice?

And: no crontab, no event scheduler, and no funny triggers on data_table? In such a way that invalidation/revalidation occurs upon SELECT?

Well, yes.
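To illustrate just the calling convention described above (the column name is hypothetical; the mechanism itself is what the full post explains):

-- Before: queries read the real table
SELECT * FROM data_table WHERE id = 7;

-- After: the same query shape against the caching table;
-- validation/invalidation happens during this very SELECT
SELECT * FROM autocache_data_table WHERE id = 7;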

This post is long, but I suggest you read it through to understand the …

[Read more]