I believe the future of SQL is to take primitives that are Computer Science fundamentals and add them as data types, expanding on the column types allowed today. The idea is something like a merger of NoSQL and SQL, aimed at making problems easier for a newcomer to solve.
For instance, what would be awesome is a LIST type, where the
list contains a distinct set of string items, each mapped to a bit,
much like SET, except that you don't need to predefine all the
items in the set.
Here is a good example of how I would use a list type:
Imagine you need permissions on a per-row basis. Some rows are
public, some are private, and some are viewable by a small set of
people (fewer than 64).
Let's take the example of finding all rows that are public or are
viewable only by me.
When creating a row
INSERT INTO resource_permissions (resource_id, …
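Until such a LIST type exists, here is a sketch of how this could be approximated today with a plain BIGINT bitmask (the exact schema and the bit assignments are my own invention, not from the original example):

```sql
-- Hypothetical schema: each resource row carries a 64-bit permission mask.
-- Bit 0 = public; each remaining bit maps to one user (fewer than 64 users).
CREATE TABLE resource_permissions (
    resource_id INT UNSIGNED NOT NULL PRIMARY KEY,
    perm_mask   BIGINT UNSIGNED NOT NULL DEFAULT 0
);

-- Grant: mark resource 42 as public (bit 0) and viewable by user #5 (bit 5).
INSERT INTO resource_permissions (resource_id, perm_mask)
VALUES (42, (1 << 0) | (1 << 5));

-- Query: find all rows that are public OR viewable by me (user #5).
SELECT resource_id
FROM resource_permissions
WHERE perm_mask & ((1 << 0) | (1 << 5)) <> 0;
```

The downside, and the reason a LIST type would be nicer, is that the mapping between people and bit positions has to be maintained by the application.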
In addition to creating new indexes to improve performance, you can improve database performance with additional schema optimizations. These optimizations include using specific data types and/or column types. The benefit is a smaller disk footprint, which produces less disk I/O and allows more index data to be packed into available system memory.
Data Types
Several data types can be replaced or modified with little or no impact to an existing schema.
BIGINT vs. INT
When a primary key is defined as a BIGINT AUTO_INCREMENT data type, there is generally no reason that data type is actually required. An INT UNSIGNED AUTO_INCREMENT data type is capable of supporting a maximum value of approximately 4.3 billion. If a table holds more than 4.3 billion rows, other architectural considerations generally come into play long before this limit is reached.
The impact of modifying a BIGINT data type to an INT data type is a 50 percent reduction …
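As a sketch of the conversion (table and column names are hypothetical), it is a single ALTER, after first confirming that the current maximum value fits in the smaller type:

```sql
-- Shrink an 8-byte BIGINT primary key to a 4-byte INT UNSIGNED.
-- Safe only if the current maximum id is well below 4,294,967,295.
SELECT MAX(id) FROM invoice;

ALTER TABLE invoice
    MODIFY id INT UNSIGNED NOT NULL AUTO_INCREMENT;
```

Remember that any foreign key columns referencing this primary key must be converted to the same type.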
The management of indexes—how they are created and maintained—can impact the performance of SQL statements.
Combining Your DDL
An important management requirement when adding indexes to MySQL is the blocking nature of a DDL statement. Historically, the impact of an ALTER statement required that a new copy of the table be created. This could be a significant operation for time and disk volume when altering large tables. With the InnoDB plugin, first available in MySQL 5.1, and with other third party storage engines, various ALTER statements are now very fast, as they do not perform a full table copy. You should refer to the system documentation for the specific storage engine and MySQL version to confirm the full impact of your ALTER statement.
Combining multiple ALTER statements into one SQL statement is an easy optimization improvement. For example, if you needed to add a new index, modify an index, and add a new …
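A sketch of what such a combined statement might look like (table and index names are hypothetical):

```sql
-- Three index changes combined into a single ALTER, so the table is
-- rebuilt (at most) once instead of three times.
ALTER TABLE orders
    ADD INDEX idx_customer (customer_id),
    DROP INDEX idx_created,
    ADD INDEX idx_created_status (created_at, status);
```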
This tutorial describes a replicated MySQL setup (Master/Master replication) with two nodes where data can be read from and written to both nodes at the same time. MySQL takes care of replicating the data to the other node and ensures that auto-increment primary keys don't collide.
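The auto-increment collision avoidance mentioned here is normally achieved with two server variables; a minimal my.cnf sketch for a two-node setup:

```ini
# Node 1 -- generates ids 1, 3, 5, ...
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1

# Node 2 uses the same settings, except:
#   server-id             = 2
#   auto_increment_offset = 2
# so it generates ids 2, 4, 6, ... and the two sequences never overlap.
```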
For companies that live and die by their databases, a one-shot backup isn’t really the perfect solution. Typically for such companies (think Yahoo! or Google), database access is a near-constant process, and database content changes continually, often on a second-by-second basis. Data replication, which involves continual data transfer between two (or more) servers to maintain a replica of the original database, is a better backup solution for these situations.
MySQL supports two (or three, depending on how you look at it) different methods of replicating databases from master to slave. All of these methods use the binary log; however, they differ in the type of data that is written to the master’s binary log.
- Statement-based replication Under this method, the binary log stores the SQL statements used to …
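The logging method is selected on the master with the binlog_format variable (available from MySQL 5.1); a minimal my.cnf sketch:

```ini
# my.cnf on the master -- choose how changes are written to the binary log.
[mysqld]
log-bin       = mysql-bin
binlog_format = STATEMENT   # or ROW, or MIXED
```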
Replication in MySQL is the dynamic process of synchronizing data between a primary (master) database server and one or more secondary (slave) database servers in near-real time. Using this process, it’s possible to create copies of one or more databases so that even if the primary server fails, data can still be recovered from one of the secondary servers.
Replication is essential for many applications, and the lack of replication support was a major drawback of MySQL compared to other relational database management systems (RDBMSs). MySQL 3.23 was the first version to introduce replication support, and support has improved continually in subsequent versions. However, MySQL is still best suited for one-way replication, where you have one master and one or more slaves.
TIP
As much as possible, try to use the same version of MySQL for both the master and slave server(s). A version …
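A minimal sketch of pointing a slave at its master (the host, credentials, and binary log coordinates are placeholders):

```sql
-- On the slave: point it at the master's binary log,
-- then start the replication threads.
CHANGE MASTER TO
    MASTER_HOST     = 'master.example.com',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;

START SLAVE;

-- Verify that both replication threads are running.
SHOW SLAVE STATUS\G
```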
Equally important as backing up the data in a MySQL database is the ability to restore that data.
Most books on the subject emphasize the importance of backing up your data regularly (and rightly so), but restoring the data is an often-overlooked aspect of this process. Backed-up files are useless if they can’t be accessed. Accordingly, you should regularly restore your files from backup to make certain they can be used in an emergency. In fact, it might not be too much to say that a backup job isn’t complete until you’ve confirmed that the backup files can be restored. Besides the peace of mind you’ll achieve, it pays to be thoroughly familiar with the process, because you certainly don’t want to waste time learning the restore procedure after the system goes down.
In the preceding section, you learned that the output of …
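As a quick sketch (file and database names are placeholders), restoring a mysqldump backup is the reverse of taking one:

```shell
# Restore a mysqldump backup into an existing database.
mysql -u root -p mydb < mydb_backup.sql

# Spot-check that the restored tables are actually there.
mysql -u root -p -e "SHOW TABLES" mydb
```

Running a check like this regularly is exactly the kind of restore drill the paragraph above recommends.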
You might need to restore corrupted tables (or even an entire database) from your backups and use the update logs if a table gets damaged or deleted by accident. In case of relatively minor damage, however, MySQL provides several options for table repair. This next section deals with what you can do if this is the case.
Checking Tables for Errors
The first thing to do if you suspect something is wrong is to check the table for errors. The myisamchk utility is one way to check a table. To invoke this utility, execute the command myisamchk table-file.
Because myisamchk requires exclusive access to the tables, it's a good idea to take the server offline before running it. That way, you needn't worry about coordinating access between clients. In addition, there are several options you can use when checking a table for errors, as shown in Table 1.
The following example runs myisamchk with the …
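A sketch of such an invocation (the paths are illustrative; point myisamchk at the table's .MYI index file in the data directory):

```shell
# Default check of a MyISAM table for errors.
myisamchk /var/lib/mysql/mydb/actor.MYI

# A more thorough (and slower) check.
myisamchk --extend-check /var/lib/mysql/mydb/actor.MYI
```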
Recently we announced the MySQL Data Recovery Portal, where we have automated some data recovery routines and published a web interface to them.
First came a tool to estimate the chances of recovering data from corrupted InnoDB tablespaces.
Now there's an even more exciting and useful feature – recovering table structure from FRM files online.
There are several methods to recover MySQL table structure from FRM files.
On MySQL Data Recovery Portal we utilize a combination of them.
Fake table method
The idea is to create a dummy InnoDB table, replace its .frm file
with the .frm file we want to recover, and run SHOW CREATE
TABLE.
Let's say we have actor.frm and we want to get the
structure of the table actor.
The algorithm is as follows:
Create a dummy table with the same name, actor. The …
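A sketch of the whole procedure (the paths, database name, and service commands are assumptions; exact behavior varies by MySQL version, and SHOW CREATE TABLE may complain about a column-count mismatch with the dummy definition):

```shell
# 1. Create a dummy InnoDB table with the same name as the lost one.
mysql -e "CREATE TABLE actor (id INT) ENGINE=InnoDB" test

# 2. Stop MySQL and swap in the .frm file we want to recover.
/etc/init.d/mysql stop
cp /path/to/recovered/actor.frm /var/lib/mysql/test/actor.frm
/etc/init.d/mysql start

# 3. Ask MySQL to print the structure it reads from the replaced .frm.
mysql -e "SHOW CREATE TABLE actor\G" test
```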
cPanel is a very popular Linux hosting control panel, used not only by hosting providers but also by enterprise and government bodies. For large-scale hosting environments hosting mission-critical databases, how do you integrate cPanel with a high-availability database?
In this blog post, we will show you how to integrate Percona XtraDB Cluster into cPanel to provide a high availability database hosting solution. By default, cPanel uses localhost to host the MySQL databases created by cPanel users. It is not uncommon to find MySQL to be the culprit when it comes to overloading the server. By having a remote MySQL server or cluster, we can offload some of the burden and increase the availability of other important services on the server.
Integrating a Galera cluster with cPanel requires meeting the following requirements:
- skip-name-resolve must be turned OFF, as some cPanel services authenticate …
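A my.cnf sketch covering this first requirement (only this setting is shown; the rest of the wsrep/Galera configuration is omitted):

```ini
# my.cnf excerpt for each Galera node.
[mysqld]
# Leave hostname resolution enabled: some cPanel services authenticate
# by hostname, so skip-name-resolve must stay OFF.
skip_name_resolve = OFF
```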