Planet MySQL
Showing entries 1 to 10 of 20

Displaying posts with tag: hack

Cheating mysqlsandbox to install MariaDB 10.0

mysqlsandbox is version-aware. The new 5.6 version, for example, requires special care because of its new InnoDB-backed system tables and otherwise modified system tables.

At this moment, it will refuse to install MariaDB 10.0 (alpha):

bash$ make_sandbox /tmp/mariadb-10.0.1-linux-x86_64.tar.gz 
unpacking /tmp/mariadb-10.0.1-linux-x86_64.tar.gz
unsupported version 10.0

This is perfectly legitimate, and I have no quarrel with this fact. However, I did want to set up MariaDB 10.0 as a sandbox.

As it turns out, mysqlsandbox relies on MySQL package

  [Read more...]
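
As the excerpt suggests, the version check seemingly keys off the package name. A hedged sketch of one way to cheat it, assuming make_sandbox parses the version out of the tarball's file name (the 5.5-style name below is purely illustrative, not necessarily what the original post does):

bash$ # hypothetical rename: present the MariaDB tarball as a version mysqlsandbox accepts
bash$ cp /tmp/mariadb-10.0.1-linux-x86_64.tar.gz /tmp/mysql-5.5.30-linux-x86_64.tar.gz
bash$ make_sandbox /tmp/mysql-5.5.30-linux-x86_64.tar.gz
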
Looking for a hack: share data between MySQL sessions

I'm looking for a way to share data between two MySQL connections/sessions. Obviously tables are the trivial answer, but for reasons of security (possibly insufficient privileges) I wish to avoid that.

The type of data to be passed can vary. Ideally I would be able to pass multiple pieces of information (dates, texts, integers, etc.) around. If that's impossible, I could make do with texts only; and failing that, with a single (but reasonably long) text.

There is a way to do so: by writing to the file system (SELECT INTO OUTFILE + LOAD_FILE()). However, I wish to avoid it: writing to files from within MySQL requires creating a new file each time (no overwrite, no purging), which litters the file system.
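
For illustration, the file-based trick just described (the path is illustrative; note it requires the FILE privilege, the very kind of privilege this exercise tries to avoid):

-- session A: write the payload; MySQL refuses to overwrite, so a fresh file name is needed every time
SELECT 'the data to share' INTO OUTFILE '/tmp/payload_0001.txt';

-- session B: read it back
SELECT LOAD_FILE('/tmp/payload_0001.txt') AS shared_data;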

So: any other tricks? Is there some way to pass data via

  [Read more...]
SQL: selecting top N records per group, another solution

A while back I presented SQL: selecting top N records per group, a "give me the top 5 countries in each continent" type of query, which used an external numbers table and a lot of tedious casting.

Here's another solution I came up with (*). Still using GROUP_CONCAT (how else?), but no external table and no casting. The query outputs the largest 5 countries (by surface area) per continent.

SELECT
  Continent,
  Name,
  SurfaceArea,
  Population
FROM
  world.Country,
  (
    SELECT 
      GROUP_CONCAT(top_codes_per_group) AS top_codes
    FROM
      (
        SELECT 
         
  [Read more...]
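
The excerpt cuts off mid-query. For completeness, here is a hedged reconstruction of where such a query can lead, using SUBSTRING_INDEX over an ordered GROUP_CONCAT; this is my own completion, not necessarily the post's exact SQL:

SELECT
  Continent,
  Name,
  SurfaceArea,
  Population
FROM
  world.Country,
  (
    SELECT
      GROUP_CONCAT(top_codes_per_group) AS top_codes
    FROM
      (
        SELECT
          -- keep only the first 5 comma-separated codes per continent
          SUBSTRING_INDEX(
            GROUP_CONCAT(Code ORDER BY SurfaceArea DESC),
            ',', 5) AS top_codes_per_group
        FROM world.Country
        GROUP BY Continent
      ) select_top_codes_per_group
  ) select_top_codes
WHERE
  FIND_IN_SET(Code, top_codes) > 0
ORDER BY
  Continent, SurfaceArea DESC;
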
One-way Password Crypting Flaws

I was talking with a client and the topic of password crypting came up. With my background as a C coder, I have a few criteria for regarding a mechanism as safe. In this case we'll just discuss things from the perspective of secure storage and validation in an application.

  • Use a digital fingerprint algorithm, not a hash or CRC. A hash is by nature lossy (it generates evenly distributed duplicates), and a CRC is intended to identify bit errors in transmitted data, not to compare potentially different data.
  • Store/use all of the fingerprint, not just part (otherwise it’s lossy again).
  • SHA1 and its siblings are not ideal for this purpose, but OK. MD5 and that family of “message digests” was proven flawed long ago; they can be “freaked” to create a

  [Read more...]
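
A minimal sketch of the store/validate pattern the list describes, in SQL; the users table, the per-user salt, and the choice of SHA2-256 are my own assumptions, not the post's prescription:

CREATE TABLE users (
  username VARCHAR(64) PRIMARY KEY,
  salt CHAR(16) NOT NULL,        -- random per-user salt
  fingerprint CHAR(64) NOT NULL  -- full SHA2-256 hex digest, stored in full
);

-- store: fingerprint the salt concatenated with the password
SET @salt := LEFT(SHA2(RAND(), 256), 16);
INSERT INTO users VALUES ('alice', @salt, SHA2(CONCAT(@salt, 's3cret'), 256));

-- validate: recompute and compare against the complete stored fingerprint
SELECT COUNT(*) = 1 AS valid
FROM users
WHERE username = 'alice'
  AND fingerprint = SHA2(CONCAT(salt, 's3cret'), 256);
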
Getting rid of huge ibdata file, no dump required, part II

This post continues Getting rid of huge ibdata file, no dump required, part I, where I describe a way of converting your single-tablespace InnoDB database into a file-per-table one, without the pain of exporting and importing everything at once.

In the previous part we put aside the issue of foreign keys. We address that issue now.

What if my InnoDB tables have foreign keys?

MyISAM does not support them, so you can't just ALTER an InnoDB table to MyISAM and back into InnoDB, and expect everything to work.

Alas, this calls for additional steps (i.e. additional ALTER commands). However, these still fall well under the concept of "do it one table at a time, then take time to recover your breath

  [Read more...]
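
A hedged sketch of those additional steps, using a hypothetical child/parent table pair (the full post may order things differently):

-- drop the constraint before leaving InnoDB
ALTER TABLE child DROP FOREIGN KEY fk_child_parent;

-- the engine round-trip rebuilds the table, landing it in its own
-- tablespace once innodb_file_per_table is in effect
ALTER TABLE child ENGINE=MyISAM;
ALTER TABLE child ENGINE=InnoDB;

-- restore the constraint afterwards
ALTER TABLE child ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent (id);
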
Getting rid of huge ibdata file, no dump required

You have been told (guilty as charged) that the only way to get rid of the huge InnoDB tablespace file (commonly named ibdata1) when moving to innodb_file_per_table is to take a logical dump of your data, completely erase everything, then import the dump.

To quickly reiterate: you can only delete the ibdata1 file when no InnoDB tables exist. Delete this file while any InnoDB table exists, even a table in its own tablespace, and nothing ever works anymore.

The problem with the dump-based solution

The impact of doing a logical dump is often overwhelming. Well, the dump may be tolerable, but the restore is

  [Read more...]
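
The excerpt stops before the alternative itself; roughly, and consistent with the constraint quoted above, the direction is a per-table engine round-trip rather than a dump. A sketch under my own assumptions (one illustrative table shown; the full post has the details and caveats):

-- one table at a time: move the data out of InnoDB entirely
ALTER TABLE world.City ENGINE=MyISAM;
-- ...repeat for every InnoDB table. Once no InnoDB tables remain,
-- shut down, remove ibdata1 and ib_logfile*, enable
-- innodb_file_per_table, restart, and then convert back:
ALTER TABLE world.City ENGINE=InnoDB;  -- recreated in its own City.ibd file
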
Auto caching INFORMATION_SCHEMA tables: seeking input

The short version

I have it all working. It's kind of magic. But there are issues, and I'm not sure it should even exist, and am looking for input.

The long version

In Auto caching tables I presented a hack which allows getting cached or fresh results via simple SELECT queries.

The drive behind the above hack was INFORMATION_SCHEMA tables. There are two major problems with INFORMATION_SCHEMA:

  • Queries on schema-oriented tables such as TABLES, COLUMNS, STATISTICS, etc. are heavyweight. How heavyweight? Enough to lock down your database. Enough to crash your database in some cases (an illustrative query appears below the list).
  • The data is always generated on the fly, as you request it. Query

  [Read more...]
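
An illustrative example of the kind of heavyweight query meant above; computing these figures may force the server to open every table in every schema:

SELECT TABLE_SCHEMA, ENGINE, COUNT(*) AS num_tables,
  SUM(DATA_LENGTH + INDEX_LENGTH) AS total_bytes
FROM INFORMATION_SCHEMA.TABLES
GROUP BY TABLE_SCHEMA, ENGINE;
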
Auto caching tables

Is there a way to create a caching table, some sort of materialized view, such that upon selecting from that table its data is validated/invalidated?

Hint: yes.

But to elaborate the point: say I have some table data_table. Can I rewrite all my queries which access data_table to read from some autocache_data_table, with nothing changed in the query itself? No caveats, no additional WHEREs, and still have that autocache_data_table provide the correct data, dynamically updated by some rule of our choice?

And: no crontab, no event scheduler, and no funny triggers on data_table? In such a way that invalidation/revalidation occurs upon SELECT?

Well, yes.

This post is long, but I suggest

  [Read more...]
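
The contract described above, spelled out (table names from the post; the WHERE clause is illustrative). The rewritten table name is the only change to the query:

SELECT * FROM data_table WHERE id = 7;            -- original query
SELECT * FROM autocache_data_table WHERE id = 7;  -- rewritten query: same shape, caching handled behind the scenes
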
More MySQL foreach()

In my previous post I showed several generic use cases for foreach(), a new scripting functionality introduced in common_schema.

In this part I present handy DBA syntax for schema and table operations and maintenance.

Confession: while I love INFORMATION_SCHEMA's power, I just hate writing queries against it. It's just so much typing! Just getting the list of tables in a schema makes for this heavy-duty query:

SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='sakila' AND TABLE_TYPE='BASE TABLE';

When a join is involved, this really becomes a nightmare. I think it's

  [Read more...]
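
For comparison, a hedged sketch of the same listing written with the foreach() routine described in the next post, using the placeholder syntax shown there:

call foreach('table in sakila', 'SELECT ''${schema}.${table}'' AS table_name');
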
MySQL foreach()

A new routine is now available in common_schema, which makes for an easier execution syntax for some operations:

foreach(collection_to_iterate_over, queries_to_execute_per_iteration_step);

To illustrate what it can do, consider:

call foreach('table in sakila', 'ALTER TABLE ${schema}.${table} ENGINE=InnoDB ROW_FORMAT=COMPACT');

call $('schema like shard_%', 'CREATE TABLE ${schema}.messages (id INT)');

call $('2000:2009', 'INSERT IGNORE INTO report (report_year) VALUES (${1})');

$() stands as a synonym for foreach(). I suspect it should look familiar to web programmers.

The idea for foreach() was introduced by Giuseppe Maxia during a

  [Read more...]
