Displaying posts with tag: aurora
Failover comparison in Aurora MySQL 2.10.0 using ProxySQL vs Aurora’s cluster endpoint

An Aurora cluster promises high availability and a seamless failover procedure. But how long is the actual downtime when a failover happens? And how can ProxySQL help minimize it? A sneak peek at the results: ProxySQL achieves up to 25x less downtime and up to ~9800x fewer errors during unplanned failovers. ProxySQL achieves this through:

  1. Less downtime
  2. “Queueing” feature when an instance in a hostgroup becomes unavailable.

So what is ProxySQL? ProxySQL is a middle layer between the application and the database. It protects databases from high traffic spikes, prevents them from accumulating a high number of connections thanks to its multiplexing feature, and minimizes the impact of planned or unexpected failovers and database crashes.
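As a rough illustration of that middle layer, here is a hedged sketch of registering an Aurora cluster's instance endpoints in ProxySQL's admin interface, using ProxySQL 2.x replication hostgroups with the innodb_read_only check commonly used for Aurora. All hostnames, credentials, and hostgroup IDs are placeholders, and the monitor-user setup is omitted:

```python
# Sketch: configure ProxySQL (admin interface, port 6032) for an Aurora
# cluster via pymysql. Hostnames, credentials, and hostgroup IDs are
# placeholders, not taken from the post.
import pymysql

admin = pymysql.connect(host="127.0.0.1", port=6032,
                        user="admin", password="admin", autocommit=True)

WRITER_HG, READER_HG = 10, 20
instances = [
    "db-node-1.abc123.us-east-1.rds.amazonaws.com",
    "db-node-2.abc123.us-east-1.rds.amazonaws.com",
]

with admin.cursor() as cur:
    # Register each instance endpoint; ProxySQL's monitor moves whichever
    # host reports innodb_read_only = 0 into the writer hostgroup.
    for host in instances:
        cur.execute("INSERT INTO mysql_servers (hostgroup_id, hostname, port) "
                    "VALUES (%s, %s, 3306)", (READER_HG, host))
    cur.execute("INSERT INTO mysql_replication_hostgroups "
                "(writer_hostgroup, reader_hostgroup, check_type) "
                "VALUES (%s, %s, 'innodb_read_only')", (WRITER_HG, READER_HG))
    cur.execute("LOAD MYSQL SERVERS TO RUNTIME")
    cur.execute("SAVE MYSQL SERVERS TO DISK")
```

Because the application connects to ProxySQL rather than to the cluster endpoint, a failover only has to be noticed by ProxySQL's monitor, and queries issued in the gap can be queued instead of erroring out.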

This blog will continue with measuring the impact of an unexpected …

[Read more]
Upgrading to AWS Aurora MySQL 8

With Aurora MySQL 8 now generally available to all, you may want to consider planning an upgrade path if you would like to take advantage of the new features for your application, for example, Common Table Expressions (CTEs). This new major release has a much improved and streamlined upgrade process from Aurora MySQL 5.7.
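As a quick illustration of what becomes available after the upgrade, here is a minimal recursive CTE, run through pymysql against a placeholder cluster endpoint (endpoint, credentials, and schema are made up):

```python
# A recursive CTE only works once the cluster runs Aurora MySQL 3 (MySQL 8.0).
# Minimal sketch; connection details are placeholders.
import pymysql

conn = pymysql.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="app", password="secret", database="test")
with conn.cursor() as cur:
    cur.execute("""
        WITH RECURSIVE seq (n) AS (
            SELECT 1
            UNION ALL
            SELECT n + 1 FROM seq WHERE n < 5
        )
        SELECT n FROM seq
    """)
    print(cur.fetchall())   # ((1,), (2,), (3,), (4,), (5,))
```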

This tutorial will provide all the steps to allow you to try out setting up an Aurora cluster and performing an upgrade without impacting your existing AWS environment. The two prerequisites to getting started are listed below (a sketch of the upgrade call itself follows the list):

  • An AWS account. The free one-year AWS account tier provides many of the services used in these tutorials at little or no cost.
  • The awscli. See …
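Once a test cluster exists, the major-version upgrade itself is a single API call. A hedged boto3 sketch; the cluster identifier and target engine version are placeholders to be checked against the currently available Aurora MySQL 3 versions:

```python
# Sketch of a 5.7 -> 8.0 Aurora major-version upgrade via boto3.
# Identifiers and the engine version string are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-57-cluster",   # placeholder
    EngineVersion="8.0.mysql_aurora.3.02.0",      # placeholder target version
    AllowMajorVersionUpgrade=True,                # required for a major upgrade
    ApplyImmediately=True,                        # or wait for the window
)
```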
[Read more]
Querying Archived RDS Data Directly From an S3 Bucket

A recommendation we often give to our customers is along the lines of “archive old data” to reduce your database size. There is a tradeoff between keeping all our data online and archiving part of it to cold storage.

There could also be legal requirements to keep certain data online, or you might want to query old data occasionally without having to go through the hassle of restoring an old backup.

In this post, we will explore a very useful feature of AWS RDS/Aurora that allows us to export data to an S3 bucket and run SQL queries directly against it.

Archiving Data to S3

Let’s start by describing the steps we need to take to put our data into an S3 bucket in the required format, which is called Apache Parquet.

Amazon states the Parquet format is up to 2x faster to export and consumes up to 6x less storage in S3 compared to text formats.

1. Create a snapshot of the database (or …
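Once a snapshot exists, the export itself is driven through the RDS API. A hedged boto3 sketch, where every identifier, bucket name, and ARN is a placeholder:

```python
# Sketch: export an RDS/Aurora snapshot to S3 as Parquet via boto3.
# All identifiers, ARNs, and names below are placeholders; the export
# requires an IAM role with S3 access and a KMS key for encryption.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.start_export_task(
    ExportTaskIdentifier="archive-export-2021-01",
    SourceArn="arn:aws:rds:us-east-1:123456789012:snapshot:my-snapshot",
    S3BucketName="my-archive-bucket",
    IamRoleArn="arn:aws:iam::123456789012:role/rds-s3-export-role",
    KmsKeyId="alias/my-export-key",
)
```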

[Read more]
#WDILTW – What can I run from my AWS Aurora database

When you work with AWS Aurora you have limited admin privileges. There are some Aurora-specific grants for MySQL, including SELECT INTO S3 and LOAD FROM S3, that replace the lost functionality of SELECT INTO OUTFILE and mysqldump/mysqlimport using a delimited format. While I know and use Lambda capabilities, I had never executed anything with INVOKE LAMBDA directly from the database.
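For context, the INVOKE LAMBDA privilege gates Aurora MySQL's native lambda_sync()/lambda_async() functions. A hedged sketch of a call; the endpoint, credentials, and function ARN are placeholders, and the cluster needs an IAM role allowing lambda:InvokeFunction:

```python
# Sketch: call Aurora MySQL's native lambda_sync() function, which the
# INVOKE LAMBDA grant controls. All connection details and the Lambda ARN
# are placeholders.
import pymysql

conn = pymysql.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="app", password="secret")
with conn.cursor() as cur:
    cur.execute(
        "SELECT lambda_sync("
        "'arn:aws:lambda:us-east-1:123456789012:function:my-function', "
        "'{\"action\": \"ping\"}')")
    print(cur.fetchone())   # the function's JSON response
```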

This week I found out about INVOKE COMPREHEND (had to look that product up), and …

[Read more]
Creating an External Replica of AWS Aurora MySQL with Mydumper

Oftentimes, we need to replicate between Amazon Aurora and an external MySQL server. The idea is to start by taking a point-in-time copy of the dataset. Next, we can configure MySQL replication to roll it forward and keep the data up-to-date.

This process is documented by Amazon; however, it relies on the mysqldump method to create the initial copy of the data. If the dataset is in the high GB/TB range, this single-threaded method can take a very long time. There are also ways to improve the import phase (which can easily take 2x the time of the export).

Let’s explore some tricks to significantly improve the speed of this process.

Preparation Steps

The first step is to enable binary logs in Aurora. Go to the cluster-level parameter group and make sure binlog_format …
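Once binary logs are enabled and the mydumper backup has been restored on the external server, replication can be pointed at the Aurora endpoint using the coordinates mydumper records in its metadata file. A hedged sketch; paths, endpoints, and credentials are placeholders:

```python
# Sketch: read the binlog coordinates from mydumper's `metadata` file and
# start replication on the external replica. All names are placeholders.
import re
import pymysql

meta = open("/backups/aurora-dump/metadata").read()
# mydumper's metadata typically embeds SHOW MASTER STATUS output
binlog_file = re.search(r"Log: (\S+)", meta).group(1)
binlog_pos = int(re.search(r"Pos: (\d+)", meta).group(1))

replica = pymysql.connect(host="external-replica.example.com",
                          user="root", password="secret")
with replica.cursor() as cur:
    cur.execute(
        "CHANGE MASTER TO MASTER_HOST=%s, MASTER_USER=%s, MASTER_PASSWORD=%s, "
        "MASTER_LOG_FILE=%s, MASTER_LOG_POS=%s",
        ("my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
         "repl", "repl-password", binlog_file, binlog_pos))
    cur.execute("START SLAVE")
```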

[Read more]
Debezium MySQL Snapshot For AWS RDS Aurora From Backup Snapshot

I have published enough Debezium MySQL connector tutorials covering snapshots taken from a read replica. To continue my research, I wanted to do something for AWS RDS Aurora as well. But Aurora does not use binlog-based replication internally, so the tutorials I have already published do not apply. In Aurora, we can get the binlog file name and its position from a snapshot of the source cluster. So I used a snapshot to load the historical data, and once it is loaded we can resume the CDC from the main cluster.

Requirements:

  1. A running Aurora cluster.
  2. The Aurora cluster must have binlogs enabled.
  3. Set the binlog retention period to a minimum of 3 days (a best practice); see the sketch after this list.
  4. The Debezium connector should be able to access both clusters.
  5. Make sure you have different security …
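For requirement 3, binlog retention on RDS/Aurora is set through a stored procedure rather than a parameter group. A minimal sketch with placeholder connection details:

```python
# Sketch: set binlog retention to 72 hours (3 days) on Aurora using the
# mysql.rds_set_configuration procedure. Endpoint and credentials are
# placeholders.
import pymysql

conn = pymysql.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="admin", password="secret")
with conn.cursor() as cur:
    cur.execute("CALL mysql.rds_set_configuration('binlog retention hours', 72)")
    cur.execute("CALL mysql.rds_show_configuration")  # verify the setting
    print(cur.fetchall())
```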
[Read more]
Handling Bi-Directional Replication between Tungsten Clusters and AWS Aurora

Overview The Skinny

In this blog post, we explore the correct way to implement bi-directional Tungsten Replication between AWS Aurora and Tungsten Clustering for MySQL databases.

Background The Story

When we are approached by a prospect interested in using our solutions, we are proud of our pre-sales process, by which we engage at a very deep technical level to ensure we provide the best possible solution to meet the prospect’s requirements. This involves an in-depth, hands-on POC, in addition to the significant time and effort we spend building and testing the solution architectures in our lab environment as part of the proposal process.

From time to time, we are presented with requirements that are not always quite so straightforward. Just recently we faced such a situation. A …

[Read more]
Adaptive Hash Index on AWS Aurora

Recently I had a case where queries against the Aurora Reader were 2-3 times slower than on the Writer node. In this blog post, we are going to discuss why.

I am not going to go into the details of how Aurora works, as there are other blog posts discussing that. Here I am only going to focus on one part.

The Problem

My customer reported a huge performance difference between the Reader and the Writer node just from running selects. I was a bit surprised, as the select queries should run locally on the reader node, the dataset could easily fit in memory, there were no reads at the disk level, and everything looked fine.

I was trying to rule out every option when one of my colleagues mentioned I should have a look at the InnoDB_Adaptive_Hash_Indexes. He was right – it …
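A quick way to spot this difference is to compare the variable on both endpoints. A minimal sketch, assuming placeholder endpoints and credentials; on Aurora readers, innodb_adaptive_hash_index is typically OFF:

```python
# Sketch: compare the Adaptive Hash Index setting on the writer and reader
# endpoints of an Aurora cluster. All connection details are placeholders.
import pymysql

endpoints = (
    "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",     # writer
    "my-cluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com",  # reader
)
for endpoint in endpoints:
    conn = pymysql.connect(host=endpoint, user="app", password="secret")
    with conn.cursor() as cur:
        cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index'")
        print(endpoint, cur.fetchone())
    conn.close()
```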

[Read more]
Supercharge your Reporting: Aurora Autoscaling and Custom Endpoints

Whenever we talk about database scalability, it ends up in a lot of discussion and implementation effort. Some of you may even argue that it is a bad idea to autoscale transactional databases. But the pace of innovation in databases, particularly in a world with public cloud, is breathtaking. AWS Aurora is a game-changing database engine in DBaaS for open source databases. It provides performance, reliability, availability, and scalability. With Aurora features like custom endpoints and load balancing across replicas, one can explore some interesting use cases. In this post, we will discuss how we solved a customer’s problem by using the scalability features of Aurora. We will focus on provisioned Aurora on AWS, not Aurora Serverless.
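For example, a custom reader endpoint that pins reporting traffic to designated replicas can be created through the RDS API. A hedged boto3 sketch with placeholder identifiers:

```python
# Sketch: create a custom READER endpoint serving only specific replicas,
# so reporting queries never land on the other readers. Cluster, endpoint,
# and instance identifiers are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="my-aurora-cluster",
    DBClusterEndpointIdentifier="reporting-endpoint",
    EndpointType="READER",
    StaticMembers=["my-aurora-replica-3"],  # only this replica serves the endpoint
)
```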

We have a customer who was doing all the reporting and massive read …

[Read more]
Create Aurora Read Replica With AWS CLI/Lambda Python

Today I was working on a scalable solution for Aurora. I am going to publish that blog post soon on the Searce blog. As part of this solution, I wanted to create Aurora read replicas programmatically, so we created the Aurora read replica with the AWS CLI and with Lambda in Python. If you refer to the …
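A minimal boto3 sketch of the idea, not the post's exact code; for Aurora, a read replica is simply an additional reader instance created inside the cluster, and the identifiers and instance class below are placeholders:

```python
# Sketch: add a read replica (reader instance) to an existing Aurora cluster.
# All identifiers are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-replica-2",
    DBClusterIdentifier="my-aurora-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.large",
    PromotionTier=2,  # failover priority of the new reader
)
```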

The post Create Aurora Read Replica With AWS CLI/Lambda Python appeared first on SQLgossip.
