Percona Cloud Tools: Making MySQL performance easy

One of our primary focuses at Percona is performance. Let me make some statements on what "performance" is.

In doing so I will refer to two pieces of content:

I highly recommend that you familiarize yourself with both of them.

Performance

Performance is about tasks and time.
We say that the system is performing well if it executes a task in an acceptable period of time, or that the …

[Read more]
XtraBackup Complains of Missing perl-DBD-MySQL

I was busy testing a PXC cluster today when I was suddenly baffled by a confusing error:

140108 23:33:39 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' (using password: NO).
innobackupex: Error: Failed to connect to MySQL server as DBD::mysql module is not installed at /usr/bin/innobackupex line 2913.

OK, so my first thought was that DBD::mysql was missing, but as I checked:

[root@pxc03 keepalived]# yum list installed| grep perl-DBD-MySQL
perl-DBD-MySQL.x86_64 4.013-3.el6 @base

After some digging, it turned out that the module itself was not missing: what was missing was one of its dependencies, the MySQL client library.

[root@pxc03 keepalived]# yum deplist perl-DBD-MySQL.x86_64|grep mysql
dependency: libmysqlclient.so.16()(64bit)
provider: mysql-libs.x86_64 5.1.71-1.el6
dependency: libmysqlclient.so.16(libmysqlclient_16)(64bit)
provider: mysql-libs.x86_64 …
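
In other words, what is actually missing is a package that provides libmysqlclient.so.16. Here is a minimal sketch of one way to resolve it; this is my assumption rather than necessarily how the original issue was fixed, and on a PXC node you may prefer Percona-Server-shared-compat over the stock mysql-libs to avoid conflicts with the Percona packages, so check what your repos offer first:

[root@pxc03 keepalived]# yum install mysql-libs
[root@pxc03 keepalived]# ldconfig -p | grep libmysqlclient.so.16
[root@pxc03 keepalived]# perl -MDBD::mysql -e 'print "DBD::mysql loads\n"'

If the perl one-liner runs cleanly, DBD::mysql can now load the client library and innobackupex should connect again.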
[Read more]
Stream Processors and DBMS Persistence

High-Velocity Data—AKA Fast Data or Streaming Data—seems to be all the rage these days. With the increased adoption of Big Data tools, people have recognized the value contained in this data and they are looking to get that value in real-time instead of a time-shifted batch process that can often introduce a 6-hour (or more) delay in time-to-value.

High-velocity data has all of the earmarks of a big technological wave. The technology leaders are building stream processors. Venture firms are investing money in stream processing companies. And existing tech companies are jumping on the bandwagon and associating their products with this hot trend; making them buzzword compliant.

Some have asked whether high-velocity data will complement or replace Big Data. Big Data addresses pooled data, or data at rest. History tells us that there are different use cases and …

[Read more]
Finding a good IST donor in Percona XtraDB Cluster 5.6

Gcache and IST

The Gcache is a memory-based cache of recent Galera transactions that is local to each node in a cluster.  If a node leaves and rejoins the cluster, it can use the gcache from another node that stayed in the cluster (i.e., its donor node) to fetch the transactions it missed (IST) as opposed to doing a full state snapshot transfer (SST).  However, there are a few nuances that are not obvious to the beginner:

  • The Gcache is lost when a node restarts
  • The Gcache has a fixed size and is implemented as an LRU. Once it is full, older transactions roll off.
  • Donor selection is made regardless of the gcache state
  • If the given donor for a restarting node doesn’t have all transactions needed, a full SST (read: full backup) is done instead
  • Until recent developments, there was no way to tell what, precisely, was in the Gcache (see the status-variable sketch after this list).
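
By way of illustration of those recent developments: Galera 3, as shipped with Percona XtraDB Cluster 5.6, exposes a status variable reporting the lowest sequence number still held in a node’s Gcache, which you can compare against the joiner’s position to judge whether that node could serve IST instead of forcing an SST. A minimal sketch, run on the candidate donor:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_cached_downto';
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_last_committed';

If wsrep_local_cached_downto is lower than or equal to the seqno the joiner needs to resume from, IST should be possible from that donor.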

So, with (somewhat) …

[Read more]
Some myths on Open Source, the way I see it

The Open Source movement is full of myths. There are different myths from inside the movement (i.e. from those who live and breathe Open Source, or at least think it's a good thing) and from outside it (i.e. from those who do not think Open Source is really a good idea, all the way to those who think Open Source is the work of the devil, and in a bad way, as I suspect some people think that anything that is the work of the devil has to be a good thing. And they have a point, but let's not get into that right now).

I want to cover some of those myths and how I look at them here. Also, I am aware that I might be stomping on one or another sensitive Open Source toe here.

Open Source software creates better code

No, I don't think this is so. But I do think that using an Open Source model properly can potentially help us create better code. But this is not the same as saying that OSS always means better code. I am old enough to have been a developer when C was a "hot" …

[Read more]
A Close Encounter with MaxScale


MaxScale is the new proxy server from the SkySQL/MariaDB team. It provides Connection Load Balancing (CLB) and Statement Load Balancing (SLB) out of the box. This post is a [relatively] quick "how to" on installing, configuring and testing SLB with the read/write splitting module.

Step 1 - Server preparation

If you do not have many HW resources, you may run everything on a single Linux instance, but the best way to test MaxScale is to use at least 4 servers: one for MaxScale and for the client apps, one as Master and two as slaves - so, 4 in total. In this post I am going a bit further and will use 5 servers:
Max 0 - For client apps (192.168.56.20)
Max 1 - The master server (192.168.56.21)
Max 2 - The first slave (192.168.56.22)
Max 3 - The second slave (192.168.56.23)
Max 4 - The third slave (192.168.56.24)
Max 6 - The MaxScale server (192.168.56.26)
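
To give a feel for what the read/write splitting configuration looks like for this topology, here is a minimal, illustrative MaxScale.cnf sketch. The section names, monitor user, password and listener port are made-up examples, only the master and the first slave are shown (the remaining slaves follow the same pattern), and you should check the MaxScale documentation for the exact options your version supports:

[maxscale]
threads=4

[master1]
type=server
address=192.168.56.21
port=3306
protocol=MySQLBackend

[slave1]
type=server
address=192.168.56.22
port=3306
protocol=MySQLBackend

[RW Split Service]
type=service
router=readwritesplit
servers=master1,slave1
user=maxscale
passwd=maxscale_pwd

[RW Split Listener]
type=listener
service=RW Split Service
protocol=MySQLClient
port=4006

[MySQL Monitor]
type=monitor
module=mysqlmon
servers=master1,slave1
user=maxscale
passwd=maxscale_pwd

Client applications on Max 0 would then connect to port 4006 on the MaxScale host instead of talking to MySQL directly.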

In order to do proper tests …

[Read more]
Fedora Install of MySQL

I built a new image on VMWare Fusion for my class, which required installing MySQL 5.6 on Fedora, Version 20. If you don’t know how to add your user to the sudoers list, you should check this older and recently updated blog post.

  1. Download the MySQL Yum Repository and launch the downloaded RPM.
  2. Install MySQL on Fedora, Version 20, which you can find with the following command:
shell> rpm -qa | grep mysql
mysql-community-release-fc20-5.noarch

The fc20-5 changes with point releases, but assuming that you’re installing the fc20-5 release:

      shell> sudo yum localinstall …
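
Put together, the remaining steps look roughly like the following sketch. The repository RPM file name is derived from the package name shown above and should be treated as a placeholder for whatever release you actually downloaded, and the service commands assume the mysql-community-server package on systemd-based Fedora 20:

shell> sudo yum localinstall mysql-community-release-fc20-5.noarch.rpm
shell> sudo yum install mysql-community-server
shell> sudo systemctl start mysqld.service
shell> sudo systemctl enable mysqld.service

Running mysql_secure_installation afterwards to set a root password and remove the test defaults is the usual follow-up.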
[Read more]
Difference between DISTINCT and GROUP BY

Today we had an interesting situation where the same query was executed significantly slower when it was written with GROUP BY instead of DISTINCT, and I saw that many people still assume these two types of queries are equivalent, which is simply not true. DISTINCT queries can be implemented using GROUP BY, but not every GROUP BY query can be translated into DISTINCT. Depending on the brand and the optimizer, the database server may actually use GROUP BY internally to execute a DISTINCT, but that doesn’t make them equivalent. Let’s see why…

GROUP BY, as the name suggests, groups the result by some set of parameters and evaluates the whole result set. In most databases GROUP BY is implemented based on sorting, and the same rules apply to it as well.

DISTINCT will make sure that the same row won’t be returned in the result set twice. DISTINCT doesn’t necessarily …
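
To make the asymmetry concrete, here is an illustrative comparison against a hypothetical orders table (the table and column names are made up): the first two statements return the same rows, while the third has no DISTINCT counterpart because it computes aggregates per group.

-- Equivalent: each customer_id is returned once
SELECT DISTINCT customer_id FROM orders;
SELECT customer_id FROM orders GROUP BY customer_id;

-- No DISTINCT equivalent: aggregates are computed within each group
SELECT customer_id, COUNT(*) AS order_count, SUM(amount) AS total
FROM orders
GROUP BY customer_id;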

[Read more]
Tungsten Replicator 2.2 Is Now Available

The new Continuent Tungsten Replicator 2.2 is now available for download at www.continuent.com/software and http://code.google.com/p/tungsten-replicator/downloads/list. Tungsten Replicator is a high-performance, open source data replication engine for MySQL and Oracle, released under a GPL V2 license. Tungsten Replicator has all the features you expect from enterprise-class data replication products.
