Showing entries 4976 to 4985 of 44082
« 10 Newer Entries | 10 Older Entries »
Webinar Thursday Aug 9: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM

Please join Autodesk’s Senior Database Engineer, Vineet Khanna, and Percona’s Sr. MySQL DBA, Tate McDaniel, as they present Migrating to Aurora and Monitoring with PMM on Thursday, August 9th, 2018, at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).

Register Now

Amazon Web Services (AWS) Aurora is one of the most popular cloud-based RDBMS solutions. A key reason for Aurora’s success is that it is based on the InnoDB storage engine.

In this session, we will talk about how you can efficiently plan for migration to Aurora using Terraform and Percona products and solutions. We will share our Terraform code for launching AWS Aurora clusters, look at tricks for checking data consistency, verify migration paths and effectively monitor the environment using PMM.

The …

[Read more]
Optimising scan filter for checkpoints in NDB

While loading massive amounts of data into NDB to test the new
adaptive checkpoint speed, I noticed that checkpoints slowed down as the
database size grew.

I noticed in the debug logs that the amount of checkpoint writes
dropped significantly at times. After some investigation I discovered
the root cause.

The checkpoint algorithm in NDB requires all changed rows to be written
to the checkpoint, even when they are not in a part that is fully
checkpointed. This means that each row has to be scanned to discover
whether it has been written.

When loading 600 GByte of DBT2 data we have more than two billion rows
in the database. Scanning two billion rows takes around 15-20 seconds
when simultaneously handling lots of inserts.

This slowed down checkpoints and, in addition, used a lot of CPU.
Thus we wanted a more efficient …

[Read more]
How to Change Settings for PMM Deployed via Docker

When deployed through Docker, Percona Monitoring and Management (PMM) uses environment variables for its configuration.

For example, if you want to adjust the metrics resolution, you can pass -e METRICS_RESOLUTION=Ns as an option to the docker run command:

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=2s \
  percona/pmm-server:latest

You might think that, to change a setting on an existing installation, you can just stop the container with docker stop and …
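In practice, environment variables only take effect when a container is created, so changing a setting on an existing installation means removing and recreating the pmm-server container. A minimal sketch, reusing the options from the command above (the data survives because it lives in the pmm-data volume container):

```shell
# Environment variables are applied at container creation,
# so recreate the pmm-server container to change a setting.
docker stop pmm-server
docker rm pmm-server

# The pmm-data container holds the metrics history, so it is
# preserved across the recreate.
docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=5s \
  percona/pmm-server:latest
```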

[Read more]
Scheduling challenges of checkpoints in NDB

The NDB data nodes are implemented using asynchronous programming. The model is
quite simple. One can send asynchronous messages on two priority levels: the
A-level carries high-priority messages, mainly used for various management
actions, while the B-level represents the normal priority level on which all
normal messages handling transactions are executed.

It is also possible to send delayed signals that will wait for a certain
number of milliseconds before being delivered to the receiver.

When developing MySQL Cluster 7.4 we noted a problem with local checkpoints. If
the transaction load was extremely high, the checkpoints almost stopped. If such
a situation persists for too long, we run out of REDO log.

To handle this we introduced a special version of delayed signals. This new
signal will be scheduled such that at most around 75 messages are …

[Read more]
Loading data into TByte sized NDB data nodes

One of the main design goals with MySQL Cluster 7.6 was to
support much larger data sets in each data node. The
checkpoint algorithm makes it hard to manage data nodes in
7.5 and earlier versions with many hundreds of GBytes of
data. Using MySQL Cluster 7.6.7 the algorithms scale to
multiple TBytes per data node.

However, the largest machines currently at my disposal have
1 TByte of RAM, so I set about loading data into a cluster
of two such machines. There is no comparison to older
versions here: it is possible to load data into earlier
versions, but with such large data nodes it is not very
practical.

7.6.7 comes with both partial checkpoints and adaptive
control of disk write speed. This means that we can load
terabytes of data into the cluster even with moderately
sized REDO logs.

To experiment with …

[Read more]
Analysis of restart improvements in MySQL Cluster 7.6.7

To test restart times I am using the DBT2 test suite that
I have developed, based on DBT2 0.37, since 2006.

The following test setup is used:
DataMemory: 100 GByte (90 GByte in 7.5 and earlier)
IndexMemory: 10 GByte (in 7.5 and earlier versions)
NoOfFragmentLogParts=8
NoOfFragmentLogFiles=50
FragmentLogFileSize=256M
MinDiskWriteSpeed=20M (two times the default)
MaxDiskWriteSpeed=40M (two times the default)
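For reference, these parameters live in the cluster configuration file; the data-node portion of the setup above would look roughly like this in config.ini (a sketch of the [ndbd default] section only, not the full configuration used for the test):

```ini
[ndbd default]
DataMemory=100G
NoOfFragmentLogParts=8
NoOfFragmentLogFiles=50
FragmentLogFileSize=256M
MinDiskWriteSpeed=20M
MaxDiskWriteSpeed=40M
```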

I load the database using LOAD DATA INFILE with
precreated CSV files, each containing the data for one
warehouse and one table. This means that there are 8 CSV
files per warehouse. I load 600 warehouses into NDB,
giving a database size of around 60 GByte with
around 280 million rows.
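To illustrate, loading one warehouse’s data for one table might look like this (database, table, and file names are hypothetical; the real load is driven by the DBT2 scripts):

```sql
-- Hypothetical names: one CSV file per warehouse and table.
LOAD DATA INFILE '/data/dbt2/warehouse_1/stock.csv'
INTO TABLE dbt2.stock
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```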

Next I run DBT2 from 2 MySQL servers using 16 threads
each for two minutes. This creates a load of almost
100k TPM.

[Read more]
More automated control in MySQL Cluster 7.6.7

Apart from bug fixes, the 7.6.7 version of MySQL Cluster also brings
a major improvement in restart times through adaptive control of
checkpoint speed.

Many DBMSs work hard on automating management of the database nodes.
In NDB automated management was a design point from the very first
version. This means that nodes crash and restart without operator
assistance.

For the last few years we have also worked on developing algorithms that
require less configuration. This greatly simplifies the configuration
of NDB Cluster.

In 7.6.7 we have made it much easier to configure handling of checkpoints
(LCPs) and REDO logging.

In earlier versions of NDB the checkpoint speed has been controlled by
two things. The first is based on the following configuration variables:

MinDiskWriteSpeed: This is the minimum disk …

[Read more]
Monitoring NDBCluster Copying Alter Progress


MySQL NDB Cluster has great support for online (in-place) schema changes, but it is still sometimes necessary to perform an offline (copying) ALTER TABLE. These are relatively expensive, as the entire table is copied into a new table which eventually replaces the old one.

One example where a copying ALTER TABLE is required is when upgrading from MySQL NDB Cluster 7.2 or earlier to MySQL NDB Cluster 7.3 or later. The format used for temporal columns changed between these versions (corresponding to MySQL Server 5.5 to 5.6). In order to take advantage of the new temporal format, a table rebuild is required.
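As an illustration, such a rebuild can be forced with a copying ALTER TABLE (the table name here is hypothetical, and avoid_temporal_upgrade must be OFF, its default, for the temporal columns to be converted):

```sql
-- Rebuild the table with a full copy; old-format temporal
-- columns are converted as part of the copy.
SET SESSION avoid_temporal_upgrade = OFF;  -- OFF is the default
ALTER TABLE mydb.t1 FORCE, ALGORITHM=COPY;
```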

Note: Support for the old temporal format has been removed in MySQL 8.0. So, you must …

[Read more]
Resource Usage Improvements in Percona Monitoring and Management 1.13

In Percona Monitoring and Management (PMM) 1.13 we have adopted Prometheus 2, and with this comes a dramatic improvement in resource usage, along with performance improvements!

What does it mean for you? This means you can have a significantly larger number of servers and database instances monitored by the same PMM installation. Or you can reduce the instance size you use to monitor your environment and save some money.

Let’s look at some stats!

CPU Usage

We can see an approximate 5x and 8x reduction of CPU usage on these two PMM Servers. Depending on the workload, we see CPU usage reductions ranging between 3x and 10x.

Disk Writes

There is also less disk write bandwidth required:

On this instance, the bandwidth reduction is “just” 1.5x. …

[Read more]
Replicating from MySQL 8.0 to MySQL 5.7

In this blog post, we’ll discuss how to set up replication from MySQL 8.0 to MySQL 5.7. There are some situations where this configuration might help. For example, in the case of a MySQL upgrade, it can be useful to have a master running a newer version of MySQL replicating to an older-version slave as a rollback plan. Another example is the case of upgrading a master x master replication topology.

Officially, replication is only supported between consecutive major MySQL versions, and only from a lower version master to a higher version slave. Here is an example of a supported scenario:

5.7 master –> 8.0 slave

while the opposite is not supported:

8.0 master –> 5.7 slave

In this blog post, I’ll walk through how to overcome the …
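To give a flavor of the setup, the slave side is configured like any other replica; the real work lies in compatibility details such as authentication plugins and character sets. A sketch with placeholder hostnames and credentials, assuming GTID-based replication is enabled on both servers:

```sql
-- On the 8.0 master: create a replication user the 5.7 slave
-- can authenticate against (5.7 cannot use caching_sha2_password).
CREATE USER 'repl'@'%' IDENTIFIED WITH mysql_native_password BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the 5.7 slave (placeholder host and credentials):
CHANGE MASTER TO
  MASTER_HOST='master8.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_AUTO_POSITION=1;
START SLAVE;
```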

[Read more]