Getting started with MySQL Group Replication on Ubuntu with MySQL Sandbox

Welcome to the next topic dedicated to the Group Replication plugin. Today's topic is getting started with Group Replication. As you remember, we set up our environment using MySQL Sandbox on Ubuntu 14.04 and also compiled the plugin with MySQL 5.7.8-rc on Ubuntu 14.04. Please refer to these topics respectively:

  1. MySQL Sandbox creating test environment
  2. Compiling MySQL Group replication plugin

So we already have our 3 nodes of MySQL 5.7.8-rc with the group replication plugin.
Before starting group replication, we need to play with Corosync. Here is an article dedicated to this ->
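
As a quick reference before diving in (this part is not from the original post), loading the compiled plugin on a sandbox node looks roughly like the following; the shared library name matches later releases and may differ for the labs build:

INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SHOW PLUGINS;

If the plugin shows up as ACTIVE in SHOW PLUGINS, the node is ready for the Corosync configuration covered in the article linked above.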

[Read more]
AWS CloudFormation Now Supports Aurora, Amazon’s MySQL Compatible Database

AWS CloudFormation now supports Amazon Aurora!

Announcement: https://forums.aws.amazon.com/ann.jspa?annID=3286

Documentation:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbcluster.html

Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. https://aws.amazon.com/rds/aurora/

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and …
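
As a rough illustration (the stack and template file names below are made up, not from the announcement), a template that defines an AWS::RDS::DBCluster resource can be launched from the command line with the AWS CLI:

aws cloudformation create-stack \
    --stack-name my-aurora-cluster \
    --template-body file://aurora-cluster.template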

[Read more]
Compiling MySQL Group Replication plugin with MySQL 5.7.8-rc2 on Ubuntu

The MySQL Group Replication plugin is on labs.mysql.com and is available only for EL6 x86_64 Linux. But most of us have Ubuntu desktops, where it should be easier to test this new thing, especially with MySQL Sandbox. After getting the source code, we have to compile this plugin together with MySQL from source. So let's begin. Extract both the MySQL Group Replication archive and the MySQL source archive:

sh@shrzayev:~/Sandboxes$ ls -l
total 650732
drwxr-xr-x 34 sh sh      4096 İyl 20 17:25 mysql-5.7.8-rc
-rw-rw-r--  1 sh sh  49762480 Avq 20 16:19 mysql-5.7.8-rc.tar.gz
drwxrwxr-x  3 sh sh      4096 Sen 28 12:08 mysql-group-replication-0.5.0-dmr
-rw-rw-r--  1 sh sh    251687 Sen 28 11:57 mysql-group-replication-0.5.0-labs.tar.gz

You will have the two directories shown above. Then go to the mysql-group-replication folder:

sh@shrzayev:~/Sandboxes$ cd mysql-group-replication-0.5.0-dmr/
sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr$ …
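
The full steps are in the rest of the post; as a rough sketch of the usual approach for building a MySQL plugin in-tree (the inner directory name and Boost location below are assumptions, adjust them for your layout):

# copy the plugin source into the server source tree so cmake picks it up
cp -r mysql-group-replication-0.5.0-dmr/group_replication mysql-5.7.8-rc/plugin/
cd mysql-5.7.8-rc
# MySQL 5.7 needs Boost; let cmake download it into the given directory
cmake . -DDOWNLOAD_BOOST=1 -DWITH_BOOST=~/my_boost
make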
[Read more]
Simplifying SQL Statements to Improve MySQL Performance

When it is not possible to eliminate an SQL statement to improve performance, it might be possible to simplify the statement. Consider the following questions:

  • Are all columns required?
  • Can a table join be removed?
  • Is a join or WHERE restriction necessary for additional SQL statements in a given function?

Column Improvement

An important requirement of simplification is to capture all SQL statements, in order, for a given executed function. Using a sampling process will not identify all possible improvements. Here is an example of a query simplification:

mysql> SELECT fid, val, val
    -> FROM table1
    -> WHERE fid = X;

This query returned 350,000 rows of data that was cached by the application server during system startup. For this query, …
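
The rest of the example is cut off here, but one plausible simplification (illustrative only; the truncated post may differ) is to drop the repeated column and the column already fixed by the WHERE clause:

mysql> SELECT val
    -> FROM table1
    -> WHERE fid = X;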

[Read more]
Caching SQL Results to Improve MySQL Performance

When it’s not possible to remove SQL statements that are unnecessary and the rate of change of common data is relatively low, caching SQL results can provide a significant performance boost to your application and enable additional scalability of your database server.

MySQL Caching

The MySQL query cache can provide a boost in performance for a high read environment and can be implemented without any additional application overhead. The following is an example using the profiling functionality to show the execution time and the individual complexity of a regular SQL statement and a subsequent cached query:

SET GLOBAL query_cache_size=1024*1024*16;
SET GLOBAL query_cache_type=1;
SET PROFILING=1;
SELECT name FROM firms WHERE id=727;
SELECT name FROM firms WHERE id=727;
SHOW PROFILES;
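
Not shown in the excerpt, but a quick way to confirm that the second execution was served from the cache is to compare the query cache counters before and after running the statements:

SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Qcache_inserts';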

[Read more]
Removing Duplicate, Repeating or Unnecessary SQL Statements in MySQL Improves Performance

Removing duplicate, repeating, or otherwise unnecessary SQL statements eliminates overhead that adds needless load to the database server and can improve MySQL performance.

Removing Duplicate SQL Statements

Capturing all SQL statements for a given function or process will highlight any duplicate statements that are executed to complete a specific request. The best practice is to enable the general query log in development environments. Analyzing all SQL statements should be the responsibility of the developer, to ensure that only necessary statements are executed. Adding instrumentation to your application to report the number of SQL statements executed, and to allow dynamic viewing of all statements for debugging, makes it much easier to identify duplicates. The use of application frameworks can be a primary cause of unnecessary duplicate SQL statements.
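
For example (an illustrative sequence, not from the original text), the general query log can be switched on dynamically and written to a table for easy inspection:

SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
-- exercise the application function under test, then review what it ran:
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time;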

Removing …

[Read more]
Tips to Improve MySQL Performance

Adding indexes can provide significant performance benefits. However, the most effective SQL optimization for a relational database is to eliminate the need to execute the SQL statement completely. For a highly tuned application, the greatest share of a statement's total execution time is the network overhead of sending the statement to the database server and returning the result.

Removing SQL statements can reduce the application processing time. Additional steps necessary for each SQL statement include parsing, permission security checks, and generation of the query execution plan.

These are all overheads that add needless load to the database server when the statements themselves are unnecessary. You can use the profiling functionality to get detailed timing of the steps within the execution of a query.

Here is an example:

mysql> show profile source for query 7;
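
For the SOURCE output to be available, profiling must be enabled in the session and the query number taken from SHOW PROFILES first; a generic sequence (not the output shown in the full post) is:

mysql> SET PROFILING = 1;
mysql> -- run the statement being analyzed
mysql> SHOW PROFILES;
mysql> SHOW PROFILE SOURCE FOR QUERY 7;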

[Read more]
Improving Performance with MySQL Index Columns

In addition to creating new indexes to improve performance, you can improve database performance with additional schema optimizations. These optimizations include using specific data types and/or column types. The benefit is a smaller disk footprint, which produces less disk I/O and results in more index data being packed into available system memory.

Data Types

Several data types can be replaced or modified with little or no impact to an existing schema.

BIGINT vs. INT

When a primary key is defined as a BIGINT AUTO_INCREMENT data type, there is generally no reason this data type is actually required. An INT UNSIGNED AUTO_INCREMENT data type is capable of supporting a maximum value of approximately 4.3 billion. If the table holds more than 4.3 billion rows, other architectural considerations are generally necessary long before this limit is reached.

The impact of modifying a BIGINT data type to an INT data type is a 50 percent reduction …
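
The sentence above is truncated, but the arithmetic behind it is simply that BIGINT occupies 8 bytes per value while INT occupies 4. An illustrative conversion (the table and column names are made up) looks like:

ALTER TABLE orders MODIFY id INT UNSIGNED NOT NULL AUTO_INCREMENT;
-- remember to change any columns in other tables that reference this key as well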

[Read more]
Optimizing MySQL Indexes

The management of indexes—how they are created and maintained—can impact the performance of SQL statements.

Combining Your DDL

An important management consideration when adding indexes in MySQL is the blocking nature of a DDL statement. Historically, an ALTER statement required that a new copy of the table be created. This could be a significant operation in terms of time and disk volume when altering large tables. With the InnoDB plugin, first available in MySQL 5.1, and with other third-party storage engines, various ALTER statements are now very fast, as they do not perform a full table copy. You should refer to the documentation for your specific storage engine and MySQL version to confirm the full impact of your ALTER statement.

Combining multiple ALTER statements into one SQL statement is an easy optimization improvement. For example, if you needed to add a new index, modify an index, and add a new …
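
The example is cut off above, but combining specifications into one ALTER is standard MySQL syntax; an illustrative statement (table, index, and column names are made up) would be:

ALTER TABLE customers
    ADD INDEX idx_created (created_at),
    DROP INDEX idx_name,
    ADD INDEX idx_name (last_name, first_name),
    ADD COLUMN status TINYINT UNSIGNED NOT NULL DEFAULT 0;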

[Read more]
What You Should Know About MySQL Replication Types

For companies that live and die by their databases, a one-shot backup isn’t really the perfect solution. Typically for such companies (think Yahoo! or Google), database access is a near-constant process, and database content changes continually, often on a second-by-second basis. Data replication, which involves continual data transfer between two (or more) servers to maintain a replica of the original database, is a better backup solution for these situations.

MySQL supports two (or three, depending on how you look at it) different methods of replicating databases from master to slave. All of these methods use the binary log; however, they differ in the type of data that is written to the master’s binary log.

  • Statement-based replication: Under this method, the binary log stores the SQL statements used to …
[Read more]