This Week in Data with Colin Charles 48: Coinbase Powered by MongoDB and Prometheus Graduates in the CNCF

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

The call for papers for Percona Live Europe 2018 closes today. While there may be a short extension, have you submitted your talk yet? If not, I suggest doing so ASAP!

I’m sure many of you have heard of cryptocurrencies, the blockchain, and so on. But how many of you realize that Coinbase, a platform that handles cryptocurrency trades, order book matching, and more, is powered by MongoDB? With the hype and growth in interest in late 2017, Coinbase has had to scale. They gave an excellent talk at MongoDB World, titled MongoDB & Crypto Mania (the video is worth a watch), and they’ve also written a blog post, …

[Read more]
InnoDB Progress Information


MySQL has supported progress information for some queries since version 5.7. As promised in my previous post, I will discuss here how you can use that to get progress information for ALTER TABLE on InnoDB tables.

Background and Setup

Progress information is implemented through the Performance Schema using the stage events. In version 8.0.12 there are seven stages that can provide this information for ALTER TABLE statements on InnoDB tables. In MySQL 8, it is easy to list the stages capable of reporting progress information by using the …

[Read more]
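For a quick look from the shell, here is a minimal sketch, assuming MySQL 8.0 (where performance_schema.setup_instruments has a PROPERTIES column flagging progress-capable stages) and that the stage instruments and the events_stages_% consumers are enabled:

# List the InnoDB ALTER TABLE stages capable of reporting progress
mysql -e "SELECT NAME FROM performance_schema.setup_instruments
          WHERE NAME LIKE 'stage/innodb/alter%'
            AND PROPERTIES LIKE '%progress%'"

# While an ALTER TABLE runs, poll work done versus the current estimate
mysql -e "SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED
          FROM performance_schema.events_stages_current"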
Amazon Aurora Serverless - Features, Limitations, Glitches


Finally, after a year, AWS has announced Aurora Serverless with MySQL compatibility. I was expecting them to release this at re:Invent 2018, but it is live now, so I’m now expecting more new Aurora Serverless features at re:Invent 2018. I have played with this baby and come to understand what it does and what we can do with it.

Here is the blog post from AWS about Aurora Serverless:

Aurora Serverless MySQL Generally Available | Amazon Web Services

What is Aurora Serverless?

Aurora Serverless seamlessly scales its compute and memory up and down, and you pay only for the resources you actually use. There is no need to create and manage read replicas or HA (Multi-AZ).
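For a rough idea of how such a cluster is created, here is a sketch using the AWS CLI; the identifier, credentials and capacity values are illustrative assumptions, not recommendations:

# Create an Aurora Serverless (MySQL-compatible) cluster with auto-pause
aws rds create-db-cluster \
  --db-cluster-identifier demo-serverless \
  --engine aurora \
  --engine-mode serverless \
  --master-username admin \
  --master-user-password 'ChangeMe123!' \
  --scaling-configuration MinCapacity=2,MaxCapacity=16,AutoPause=true,SecondsUntilAutoPause=300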

[Figure: How Aurora Serverless works (source: AWS)]

[Read more]
Watch the webinar replay: How Bluefin ensures 24/7 operation and application availability with Continuent Clustering

Watch the replay of this webinar and learn how Bluefin Payment Systems provides 24/7/365 operation and application availability for their PayConex payment gateway and Decryptx decryption-as-a-service, essential to point-of-sale (POS) solutions in retail, mobile, call centers and kiosks.

We discuss why Bluefin uses Continuent Clustering and how Bluefin runs two co-located data centers with multi-master replication between the clusters in the two data centers, with full failover both within each cluster and between clusters, handling 350 million records each month.

Watch this webinar replay at https://youtu.be/crlgsflH7Gw

Lock Down: Enforcing AppArmor with Percona XtraDB Cluster

Recently, I wrote a blog post showing how to enforce SELinux with Percona XtraDB Cluster (PXC). The Linux distributions derived from RedHat use SELinux. There is another major mandatory access control (MAC) system, AppArmor. Ubuntu, for example, installs AppArmor by default. If you are concerned about computer security and use PXC on Ubuntu, you should enforce AppArmor. This post will guide you through the steps of creating a profile for PXC and enabling it. If you don’t want to waste time, you can just grab my profile; it seems to work fine. Adapt it to your environment if …

[Read more]
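As a minimal sketch of the enforcement steps, assuming the profile has been saved as /etc/apparmor.d/usr.sbin.mysqld (a hypothetical path) and that apparmor-utils is installed:

# Load (or reload) the profile into the kernel
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld

# Switch the profile to enforce mode
sudo aa-enforce /etc/apparmor.d/usr.sbin.mysqld

# Confirm mysqld now appears among the profiles in enforce mode
sudo aa-status | grep mysqld

# Restart PXC so mysqld starts confined (service name may differ)
sudo systemctl restart mysql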
Webinar Thursday Aug 9: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM

Please join Autodesk’s Senior Database Engineer, Vineet Khanna, and Percona’s Sr. MySQL DBA, Tate McDaniel as they present Migrating to Aurora and Monitoring with PMM on Thursday, August 9th, 2018, at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).

Register Now

Amazon Web Services (AWS) Aurora is one of the most popular cloud-based RDBMS solutions. The main reason for Aurora’s success is that it’s based on the InnoDB storage engine.

In this session, we will talk about how you can efficiently plan for migration to Aurora using Terraform and Percona products and solutions. We will share our Terraform code for launching AWS Aurora clusters, look at tricks for checking data consistency, verify migration paths and effectively monitor the environment using PMM.

The …

[Read more]
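As one hedged illustration of a consistency check (not necessarily the exact method presented in the webinar), Percona Toolkit’s pt-table-checksum can checksum tables on a source and compare them on its replicas, assuming classic replication to the migration target; the host and credentials below are placeholders:

pt-table-checksum \
  --databases=mydb \
  --replicate=percona.checksums \
  --no-check-binlog-format \
  h=source-db.example.com,u=checksum_user,p=secret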
Optimising scan filter for checkpoints in NDB

While loading massive amounts of data into NDB to test the new adaptive checkpoint speed, I noticed that checkpoints slowed down as the database size grew.

I could see in the debug logs that the amount of checkpoint writes dropped significantly at times. After some investigation I discovered the root cause.

The checkpoint algorithm in NDB requires all changed rows to be written to the checkpoint, even in the parts that are not fully checkpointed. This means that each row has to be scanned to discover whether it has changed since the last checkpoint.

When loading 600 GBytes of DBT2 data we have more than two billion rows in the database. Scanning two billion rows takes around 15-20 seconds while simultaneously handling lots of inserts.

This slowed down checkpoints and, in addition, used a lot of CPU. Thus we wanted a more efficient …

[Read more]
How to Change Settings for PMM Deployed via Docker

When deployed through Docker, Percona Monitoring and Management (PMM) uses environment variables for its configuration.

For example, if you want to adjust the metrics resolution, you can pass -e METRICS_RESOLUTION=Ns as an option to the docker run command:

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=2s \
  percona/pmm-server:latest

You would think that to change the setting for an existing installation you could just stop the container with docker stop and …

[Read more]
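The catch is that environment variables are fixed when a container is created, so changing a setting means recreating the pmm-server container; the metrics history survives because it lives in the pmm-data volume. A minimal sketch of that commonly used approach:

docker stop pmm-server
docker rm pmm-server    # removes only the container; data stays in pmm-data
docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=5s \
  percona/pmm-server:latest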
Scheduling challenges of checkpoints in NDB

The NDB data nodes are implemented using asynchronous programming. The model is quite simple. One can send asynchronous messages on two priority levels: A-level is for high-priority messages, mainly used for various management actions, while B-level is the normal priority level on which all the normal messages handling transactions are executed.

It is also possible to send delayed signals that wait for a certain number of milliseconds before being delivered to the receiver.

When developing MySQL Cluster 7.4 we noted a problem with local checkpoints: if the transaction load was extremely high, the checkpoints almost stopped. If such a situation persists for too long, we run out of REDO log.

To handle this we introduced a special version of delayed signals. This new signal is scheduled such that at most around 75 messages are …

[Read more]
Loading data into TByte sized NDB data nodes

One of the main design goals of MySQL Cluster 7.6 was to support much larger data sets in each data node. The checkpoint algorithm in 7.5 and earlier versions makes it hard to manage data nodes with many hundreds of GBytes of data. With MySQL Cluster 7.6.7 the algorithms scale to multiple TBytes per data node.

However, the largest machines currently at my disposal have 1 TByte of RAM, so I set about loading data into a cluster on two such machines. There is no comparison to older versions: it is possible to load data into earlier versions, but with such large data nodes it is not very practical.

7.6.7 comes with both partial checkpoints and adaptive control of the disk write speed. This means that we can load terabytes of data into the cluster even with moderately sized REDO logs.

To experiment with …

[Read more]
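For orientation, a configuration fragment along these lines could combine a large DataMemory with a moderately sized REDO log; all values below are illustrative assumptions, not the author’s actual test setup:

# Sketch: append an [ndbd default] fragment to the cluster's config.ini
cat >> config.ini <<'EOF'
[ndbd default]
NoOfReplicas=2
# Most of a 1 TByte machine given to in-memory rows (illustrative)
DataMemory=800G
# Partial checkpoints (MySQL Cluster 7.6) keep checkpoint writes bounded
EnablePartialLcp=1
# Moderately sized REDO log: 16 files of 1G per log part
NoOfFragmentLogFiles=16
FragmentLogFileSize=1G
EOF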