Adding good content to Twitter can be a pain. I can’t do it during working hours, and I don’t have much time at night. But the more content you have, the more followers you can gain, and the more your original tweets can be seen (hopefully). I have written several posts about using the latest Perl Twitter API, Net::Twitter::Lite::WithAPIv1_1, so you might want to check those out as well.[Read more]
People at Intel started the pmem library project some time ago. It is open to the broader community on GitHub, and other developers, including Linux kernel devs, are actively involved.
While the library does allow interaction with an SSD through a good old filesystem, we know that addressing an SSD through SATA or SAS is very inefficient. That said, the type of storage architecture that SSDs use does require significant management for wear levelling and verification, so that the device as a whole actually lasts and your data is kept safe: in theory you could write to an NVRAM chip and not know when it didn’t actually store your data properly.[Read more]
Welcome to the next topic dedicated to the Group Replication plugin. Today we will get started with Group Replication. As you remember, we set up our environment using MySQL Sandbox on Ubuntu 14.04, and we also compiled the plugin with MySQL 5.7.8-rc on Ubuntu 14.04. Please refer to these topics respectively:
So we already have our 3 nodes of MySQL 5.7.8-rc + the Group Replication plugin.
Before starting Group Replication, we need to play with Corosync. Here is a dedicated article on this -> …
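Once Corosync is in place, the plugin itself is loaded and started from the MySQL client. A minimal sketch follows; the plugin filename, variable name, and Performance Schema table are as in current Group Replication builds, so check the labs release’s README for the exact names in your version, and the group UUID here is just a placeholder:

```sql
-- Load the Group Replication plugin (shipped as group_replication.so)
INSTALL PLUGIN group_replication SONAME 'group_replication.so';

-- Name the group; all three nodes must use the same UUID
SET GLOBAL group_replication_group_name = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';

-- Join the group and start replicating
START GROUP_REPLICATION;

-- Verify that the node shows up as a member
SELECT * FROM performance_schema.replication_group_members;
```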
AWS CloudFormation now supports Amazon Aurora!
Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. https://aws.amazon.com/rds/aurora/
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and …[Read more]
The MySQL Group Replication plugin is on labs.mysql.com and is available as an EL6 x86_64 Linux build. But most of us have Ubuntu desktops, where it should be easier to test this new thing, especially with MySQL Sandbox. After getting the source code, we need to compile this plugin with MySQL from source. So let’s begin. Extract both the MySQL Group Replication archive and the MySQL source archive:
sh@shrzayev:~/Sandboxes$ ls -l
total 650732
drwxr-xr-x 34 sh sh     4096 İyl 20 17:25 mysql-5.7.8-rc
-rw-rw-r--  1 sh sh 49762480 Avq 20 16:19 mysql-5.7.8-rc.tar.gz
drwxrwxr-x  3 sh sh     4096 Sen 28 12:08 mysql-group-replication-0.5.0-dmr
-rw-rw-r--  1 sh sh   251687 Sen 28 11:57 mysql-group-replication-0.5.0-labs.tar.gz
You will have 2 directories, as above. Then go to the mysql-group-replication folder:
sh@shrzayev:~/Sandboxes$ cd mysql-group-replication-0.5.0-dmr/
sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr$ …
When it is not possible to eliminate an SQL statement to improve performance, it might be possible to simplify the statement. Consider the following questions:
- Are all columns required?
- Can a table join be removed?
- Is a join or WHERE restriction necessary for additional SQL statements in a given function?
An important requirement of simplification is to capture all SQL statements, in order, for a given executed function. A sampling process will not identify all possible improvements. Here is an example of a query simplification:
mysql> SELECT fid, val, val
    -> FROM table1
    -> WHERE fid = X;
This query returned 350,000 rows of data that was cached by the application server during system startup. For this query, …[Read more]
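The post’s actual simplified form is truncated above, but one plausible simplification along these lines would drop the duplicated column and, since the application caches the full result set at startup anyway, drop the per-row restriction as well (table1, fid, and val are the illustrative names from the example):

```sql
-- Before: a redundant column, plus one round trip per fid value
-- SELECT fid, val, val FROM table1 WHERE fid = X;

-- After: each column fetched once, and the whole data set in one pass
SELECT fid, val
FROM table1;
```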
When it’s not possible to remove SQL statements that are unnecessary and the rate of change of common data is relatively low, caching SQL results can provide a significant performance boost to your application and enable additional scalability of your database server.
The MySQL query cache can provide a boost in performance for a high read environment and can be implemented without any additional application overhead. The following is an example using the profiling functionality to show the execution time and the individual complexity of a regular SQL statement and a subsequent cached query:
SET GLOBAL query_cache_size=1024*1024*16;
SET GLOBAL query_cache_type=1;
SET PROFILING=1;
SELECT name FROM firms WHERE id=727;
SELECT name FROM firms WHERE id=727;
SHOW PROFILES;
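To confirm that the second execution really was served from the query cache, you can also compare the standard cache status counters before and after the repeated statement:

```sql
-- Qcache_hits increments when a result is served from the cache;
-- Com_select increments only when the statement is actually executed
SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Com_select';
```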
Removing duplicate, repeating, or otherwise unnecessary SQL statements eliminates overhead that adds needless load to database servers and can improve MySQL performance.
Removing Duplicate SQL Statements
Capturing all SQL statements for a given function or process will highlight any duplicate SQL statements that are executed to complete a specific request. The best practice is to enable the general query log in development environments. Analysis of all SQL statements should be the responsibility of the developer, to ensure that only necessary SQL statements are executed. Adding instrumentation to your application to report the number of SQL statements, and to provide debugging for dynamic viewing of all SQL statements, makes it easy to gather the information needed to identify duplicate statements. The use of application frameworks can be a primary cause of unnecessary duplicate SQL statements.
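On MySQL 5.6 and later, the Performance Schema can serve as this kind of instrumentation without touching the application: the statement digest table aggregates structurally identical statements, so unexpectedly high counts point at duplicates. A sketch:

```sql
-- Statements seen most often, normalized by digest
SELECT COUNT_STAR, DIGEST_TEXT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY COUNT_STAR DESC
LIMIT 10;
```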
Removing …[Read more]
Adding indexes can provide significant performance benefits. However, the most effective SQL optimization for a relational database is to eliminate the need to execute the SQL statement completely. For a highly tuned application, the greatest portion of total statement execution time is network overhead.
Removing SQL statements can reduce the application processing time. Additional steps necessary for each SQL statement include parsing, permission security checks, and generation of the query execution plan.
These are all overheads that add unnecessary load to the database server when statements are unnecessary. You can use the profiling functionality to get detailed timing of steps within the execution of a query.
Here is an example:
mysql> show profile source for query 7;[Read more]
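For context, SHOW PROFILE SOURCE is the last step of a short profiling session; a typical sequence looks like this (the SELECT itself is illustrative):

```sql
SET PROFILING = 1;
SELECT name FROM firms WHERE id = 727;  -- the statement to analyze
SHOW PROFILES;                          -- lists query ids and durations
SHOW PROFILE SOURCE FOR QUERY 1;        -- per-step timing with source file and line
```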
In addition to creating new indexes to improve performance, you can improve database performance with additional schema optimizations. These optimizations include using specific data types and/or column types. The benefit is a smaller disk footprint, which produces less disk I/O and results in more index data being packed into available system memory.
Several data types can be replaced or modified with little or no impact to an existing schema.
BIGINT vs. INT
When a primary key is defined as a BIGINT AUTO_INCREMENT data type, there is generally no actual requirement for this data type. An INT UNSIGNED AUTO_INCREMENT data type is capable of supporting a maximum value of approximately 4.3 billion. If the table will hold more than 4.3 billion rows, other architectural considerations are generally necessary before that limit is reached.
The impact of modifying a BIGINT data type to an INT data type is a 50 percent reduction …[Read more]
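The conversion itself is a single ALTER TABLE; the table and column names here are illustrative, and on a large table this statement rebuilds the table, so it should be run in a maintenance window:

```sql
-- BIGINT is 8 bytes per value; INT UNSIGNED is 4 bytes with a max of ~4.3 billion
ALTER TABLE orders
  MODIFY order_id INT UNSIGNED NOT NULL AUTO_INCREMENT;
```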