MySQL & NoSQL – Best of Both Worlds. Upcoming webinar

On Thursday, 22nd May, I’ll be hosting a webinar explaining how you can get the best of the NoSQL world while still keeping all of the benefits of a proven RDBMS. As always, the webinar is free, but please register here.

There’s often a lot of excitement around NoSQL Data Stores with the promise of simple access patterns, flexible schemas, scalability and High Availability. The downside can come in the form of losing ACID transactions, consistency, flexible queries and data integrity checks. What if you could have the best of both worlds?

This webinar shows how MySQL Cluster provides simultaneous SQL and native NoSQL access to your data, with a simple key-value API (Memcached), REST, JavaScript, …
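To give a flavour of what "both worlds" looks like in practice, here is a minimal sketch of the key-value side: standard memcached text-protocol commands sent to a memcached server fronting MySQL Cluster. The port, key and value below are illustrative assumptions, not the webinar's configuration:

$ telnet 127.0.0.1 11211
set town:maidenhead 0 0 5
SL6 1
STORED
get town:maidenhead
VALUE town:maidenhead 0 5
SL6 1
END

The same data is then visible to the SQL interface as ordinary rows in an NDB table, so it can be indexed, joined and queried like any other MySQL data; exactly which table the keys map onto depends on how the memcached-to-NDB mapping is configured.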

[Read more]
Introduction to the Percona Server Audit Log feature

Percona has developed an Audit Log feature that is included in Percona Server as of the recent 5.5 and 5.6 releases. This implementation is an alternative to the MySQL Enterprise Audit Log Plugin: Percona re-implemented the Audit Plugin code under the GPL, since Oracle’s code is closed source. This post is a quick introduction to this plugin.

Installation
There are two ways to install the Percona MySQL Audit Plugin:

INSTALL PLUGIN audit_log SONAME 'audit_log.so';

or in my.cnf

[mysqld]
plugin-load="audit_log=audit_log.so"
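
Once the plugin is loaded, its output can be tuned with server variables in the same [mysqld] section. Below is a minimal sketch; the values are illustrative assumptions, and the available options and defaults may differ between plugin releases:

[mysqld]
plugin-load="audit_log=audit_log.so"
audit_log_file=/var/log/mysql/audit.log    # where audit events are written (path is an assumption)
audit_log_format=OLD                       # output format; newer releases add more formats
audit_log_policy=ALL                       # what to record, e.g. ALL, LOGINS, QUERIES, NONE
audit_log_rotate_on_size=1073741824        # rotate the log file after roughly 1GB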

Verify installation

mysql> SHOW PLUGINS\G
...
*************************** 38. row ***************************
  Name: …
[Read more]
Archival and Analytics - Importing MySQL data into Hadoop Cluster using Sqoop

May 16, 2014 By Severalnines

We won’t bore you with buzzwords like volume, velocity and variety. This post is for MySQL users who want to get their hands dirty with Hadoop, so roll up your sleeves and prepare for work. Why would you ever want to move MySQL data into Hadoop? One good reason is archival and analytics. You might not want to delete old data, but rather move it into Hadoop and make it available for further analysis at a later stage. 

 

In this post, we are going to deploy a Hadoop cluster and export data in bulk from a Galera Cluster using Apache Sqoop. Sqoop is a well-proven approach for bulk-loading data from a relational database into the Hadoop Distributed File System (HDFS). There is also Hadoop Applier available from …
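
For orientation, a typical Sqoop bulk import from MySQL into HDFS looks roughly like the command below; the host, credentials, database, table and target directory are placeholders, and the options you need will depend on your cluster layout:

sqoop import \
  --connect jdbc:mysql://galera-node1:3306/mydb \
  --username sqoop_user \
  --password-file /user/hadoop/.sqoop_pass \
  --table orders \
  --target-dir /data/archive/orders \
  --num-mappers 4

Each mapper opens its own connection to MySQL and writes its slice of the table into HDFS, so the mapper count is worth tuning against the load a single Galera node can comfortably serve.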

[Read more]
Cross your Fingers for Tech14, see you at OSCON

So I’ve submitted my talks for the Tech14 UK Oracle User Group conference, which is in Liverpool this year. I’m not going to give away the topics, but you can imagine they are going to be about data translation and movement, and how to get your various databases talking to each other.

I can also say, after having seen other submissions for talks this year (as I’m helping to judge), that the conference is shaping up to be very interesting. There’s a good spread of different topics this year, but I know from having talked to the organisers that they are looking for more submissions in the areas of Operating Systems, Engineered Systems and Development (mobile and cloud).

If you’ve got a paper, presentation, or idea for one that you think would be useful, …

[Read more]
New Tungsten Replicator 2.2.1 now available

New Continuent Tungsten Replicator 2.2.1 is now available for download at www.continuent.com/software and https://code.google.com/p/tungsten-replicator/wiki/Downloads?tm=2. Tungsten Replicator is a high-performance, open source data replication engine for MySQL and Oracle, released under a GPL v2 license. Tungsten Replicator has all the features you expect from enterprise-class data replication.

MySQL May Newsletter is Available!

Here comes the MySQL May Newsletter! As always, it’s packed with the latest product news, technical articles, and not-to-be-missed webinars and events where you’ll get first-hand information directly from the MySQL experts at Oracle. The highlights in this edition include:

  • Join Us at MySQL Central @ OpenWorld 2014
  • Dr. Dobb's: NoSQL with MySQL
  • Live Webinar: Upgrading to MySQL 5.6 - Best Practices
  • Featured Video: MySQL for Excel Introduction
  • Blog: Why VividCortex Uses MySQL
  • Blog: Importing Raster-Based Spatial Data into MySQL 5.7

You can read it online or …

[Read more]
Re-factoring some internals of prepared statements in 5.7

When the MySQL server receives a SELECT query, whether sent as plain text or through the prepared-statement interface (a minimal example of which follows the list below), the query goes through several consecutive phases:

  • parsing: SQL keywords are recognized and the query is split into different parts following the SQL grammar rules: a list of selected expressions, a list of tables to read, a WHERE condition, …
  • resolution: the output of the parsing stage contains names of columns and names of tables. Resolution is about making sense of this. For example, in “WHERE foo=3“, “foo” is a column name without a table name; by applying SQL name resolution rules, we discover the table that contains “foo” (it can be complicated if subqueries or outer joins are involved).
  • optimization: finding the best way to read tables: the best order of tables, and for each table, the best way to access it (index lookup, index scan, …). The output …
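
For reference, the server-side prepared-statement interface this refactoring concerns looks like the snippet below (table, column and parameter names are made up for illustration). The statement text is parsed once at PREPARE time and can then be executed repeatedly with different parameter values, which is why the question of how much of the later phases has to be repeated on every execution matters for the implementation:

PREPARE stmt FROM 'SELECT first_name FROM staff WHERE staff_id = ?';
SET @id = 3;
EXECUTE stmt USING @id;
DEALLOCATE PREPARE stmt;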
[Read more]
Proposal to deprecate COM_REFRESH packet

In the MySQL team we are proposing to deprecate the COM_REFRESH packet in favor of specific queries to execute FLUSH commands. To provide a bit of context:

  • The MySQL server protocol allows clients to issue API commands through both a query interface and a binary protocol interface. The set of API commands can be seen in the MySQL Client/Server Protocol internals documentation, or, very simply, as they appear in a single switch statement:
    # ./sql/sql_parse.cc:1009 (simplified view)
    
      switch (command) {
    
      case COM_REGISTER_SLAVE:
      {
        /* do stuff */
        break;
      }
      case COM_QUERY:
      {
        /* parse query, do stuff */
        break;
      }
      case COM_REFRESH:
      {
        /* equivalent to running a FLUSH command */
        break;
      }
      case COM_SHUTDOWN:
      {
        kill_mysql();
        break;
      }
    }
    
  • The historical advantage of having a binary …
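
To make the proposal concrete, here is a minimal, self-contained sketch of what it means for a client: the mysql_refresh() C API call sends COM_REFRESH today, and the proposal favours sending the equivalent FLUSH statements as ordinary queries instead. Connection parameters are placeholders, and this is an illustration rather than an official migration guide:

#include <mysql.h>
#include <stdio.h>

int main(void)
{
  MYSQL *conn = mysql_init(NULL);
  if (!mysql_real_connect(conn, "localhost", "user", "password",
                          NULL, 0, NULL, 0))
  {
    fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
    return 1;
  }

  /* Today: the client library sends the binary COM_REFRESH command. */
  mysql_refresh(conn, REFRESH_LOG | REFRESH_TABLES);

  /* Under the proposal: the same effect via plain queries (COM_QUERY). */
  mysql_query(conn, "FLUSH LOGS");
  mysql_query(conn, "FLUSH TABLES");

  mysql_close(conn);
  return 0;
}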
[Read more]
High Availability with MySQL Fabric: Part I

In our previous post, we introduced the MySQL Fabric utility and said we would dig deeper into it. This post is the first part of our test of MySQL Fabric’s High Availability (HA) functionality.

Today, we’ll review MySQL Fabric’s HA concepts, and then walk you through the setup of a 3-node cluster with one Primary and two Secondaries, doing a few basic tests with it. In a second post, we will spend more time generating failure scenarios and documenting how Fabric handles them. (MySQL Fabric is an extensible framework to manage large farms of MySQL servers, with support for high-availability and sharding.)
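
As a preview of the kind of setup the walkthrough covers, creating and activating an HA group with the mysqlfabric command-line tool looks roughly like this; the group name and host names are placeholders, and the exact commands and options may vary between Fabric releases:

mysqlfabric group create mycluster
mysqlfabric group add mycluster node1.example.com:3306
mysqlfabric group add mycluster node2.example.com:3306
mysqlfabric group add mycluster node3.example.com:3306
mysqlfabric group promote mycluster          # elect one server as Primary
mysqlfabric group activate mycluster         # turn on automatic failure detection
mysqlfabric group lookup_servers mycluster   # show each server's status and mode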

Before we begin, we recommend you read this post by Oracle’s …

[Read more]
MySQL Tech Day @Paris, 22/May-2014


The next MySQL TechDay is taking place in Paris on 22/May (next week!!) - if you're a MySQL lover and will be in the Paris area that day, hurry up and register on the event page and attend - trust me, you won't regret it ;-))

We're continuing to follow our TechDay tradition:

  • the event is completely free (but places are limited, so you have to be registered to attend)
  • the content is purely technical and comes directly from Oracle engineering, no marketing ;-)
  • this is a true full-day event, and we're reserving enough time to go in depth on each topic presented..
  • the event is taking place in the Oracle office, in a pretty wide and comfortable amphitheater covered by WiFi, so you can tweet live about #mysqltechday and remain "connected" if that is part of your constraints …
[Read more]