Monitoring RDS MySQL Performance Metrics

Amazon Web Services (AWS) is a cloud platform that offers a wide variety of services, including computing power, database storage, content delivery, and other functionality, to businesses of all sizes. One of its database solutions is the Amazon Relational Database Service. Amazon RDS supports a number of popular RDBMSes, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server, and provides tools to manage your databases and monitor their performance.

Despite the wide range of metrics available within the Amazon RDS console, there are some very good reasons for using your own monitoring tool(s) instead of, or in addition to, those offered by Amazon RDS. For example, familiarity with your own tool(s), or access to features that Amazon RDS does not provide, would constitute two persuasive reasons for employing a local tool.

With traditional software monitoring platforms such as Monyog still enjoying …

[Read more]
JSON_TABLE

JSON data is a wonderful way to store data without needing a schema, but what about when you have to yank that data out of the database and apply some sort of formatting to it? Well, then you need JSON_TABLE.

JSON_TABLE takes free-form JSON data and applies some formatting to it. For this example we will use the world_x sample database's countryinfo table. What is desired is the name of the country and the year of independence, but only for the years after 1992. Sounds like a SQL query against JSON data, right? Well, that is exactly what we are doing.

We tell the MySQL server that we are going to take the $.Name and $.IndepYear keys' values from the JSON-formatted doc column in the table, format them into a string and an integer respectively, and alias each key's value to a table column name that we can use for qualifiers in an SQL statement.
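
A minimal sketch of such a query (assuming MySQL 8.0, where JSON_TABLE was introduced; the SQL column names and the VARCHAR size below are illustrative choices, not from the original post):

-- Expose $.Name and $.IndepYear from the JSON doc column as SQL columns,
-- then filter on the year just like any other column.
SELECT jt.country_name, jt.indep_year
FROM countryinfo,
     JSON_TABLE(doc, '$' COLUMNS (
         country_name VARCHAR(64) PATH '$.Name',
         indep_year   INT         PATH '$.IndepYear'
     )) AS jt
WHERE jt.indep_year > 1992;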

[Read more]
Moving data in real-time into Amazon Redshift – The power of heterogeneous Tungsten Replication

Amazon Redshift has been providing scalable, quick-to-access analytics platforms for many years, but the question remains: how do you get the data from your existing datastore into Redshift for processing? Tungsten Replicator provides real-time movement of data from Oracle and MySQL into Amazon Redshift, including flexible data handling, translation and long-term change data capture.

In our webinar on Wednesday, December 13th, we will review:

  • How Amazon Redshift replication works
  • Deployment from MySQL or Oracle
  • Deployment from Amazon RDS
  • Provisioning/seeding the original information
  • Filtering and accounting for data differences
  • Data concentration/aggregation with single-schema …
[Read more]
MariaDB Connector/C 2.3.4 now available


The MariaDB project is pleased to announce the immediate availability of MariaDB Connector/C 2.3.4. See the release notes and changelogs for details and visit mariadb.com/downloads/connector to download.

Download MariaDB Connector/C 2.3.4

  • Release Notes
  • Changelog
  • About MariaDB Connector/C

[Read more]
Internal Temporary Tables in MySQL 5.7

In this blog post, I investigate a case of a spiking “InnoDB rows inserted” metric in the absence of write queries, and find internal temporary tables to be the culprit.

Recently I was investigating an interesting case for a customer. We could see regular spikes on a graph depicting the “InnoDB rows inserted” metric (jumping from 1K/sec to 6K/sec); however, we were not able to correlate those spikes with other activity. The innodb_row_inserted graph (picture from the PMM demo) looked similar, but on a much larger scale.

Other graphs (Com_*, Handler_*) did not show any spikes like that. I examined the logs (we were not able to enable the general log or change the slow log threshold), performance_schema, triggers, stored procedures and prepared statements, and even reviewed the binary logs. However, I was not able to find any single …
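
As a side note for anyone chasing a similar pattern on MySQL 5.7: internal InnoDB temporary tables can be inspected directly (a quick sketch, not taken from the original post):

-- MySQL 5.7 only: list internal InnoDB temporary tables currently in use
SELECT * FROM INFORMATION_SCHEMA.INNODB_TEMP_TABLE_INFO;

-- Counters that grow as internal temporary tables are created
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';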

[Read more]
Percona Monitoring and Management 1.5: QAN in Grafana Interface

In this post, we’ll examine how we’ve improved the GUI layout for Percona Monitoring and Management 1.5 by moving the Query Analytics (QAN) functions into the Grafana interface.

Percona Monitoring and Management users might notice that QAN appears a little different in our 1.5 release. We’ve taken steps to unify the PMM interface so that it feels more natural to move from reviewing historical trends in Metrics Monitor to examining slow queries in QAN. Most significantly:

  1. QAN moves from a stand-alone application into Metrics Monitor as a dashboard application
  2. We updated the color scheme of QAN to match Metrics Monitor (but you can toggle a button if you prefer to still see QAN in white!)
  3. Date picker and host selector now use the same methods as Metrics Monitor

[Read more]
This Week in Data with Colin Charles 17: AWS Re:Invent, a New Book on MySQL Cluster and Another Call Out for Percona Live 2018

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

The CFP for Percona Live Santa Clara 2018 closes December 22, 2017: please consider submitting as soon as possible. We want to make an early announcement of talks, so we’ll definitely do a first pass even before the CFP date closes. Keep in mind the expanded view of what we are after: it’s more than just MySQL and MongoDB. And don’t forget that with one day less, there will be intense competition to fit all the content in.

A new book on MySQL Cluster is out: Pro MySQL NDB Cluster by Jesper Wisborg Krogh and Mikiya Okuno. At 690 pages, it is a weighty tome, and something I fully plan on reading, considering I haven’t played with …

[Read more]
Deleting huge number of records in MySQL

This is a short post about deleting data from huge tables in MySQL. Most of us have experienced that deleting huge numbers of records from a MySQL table can take a long time, sometimes hours, when millions of records are involved. On production servers it can also block other operations on the table. Recently, I deleted around 70 million records from a production database in less than an hour. There are multiple workarounds for this; however, I am writing about the two methods I use most frequently for this operation.

  • Using intermediate table.
  • Delete data in small chunks.

Before we proceed with either of these methods, make sure the table has the required indexes on the WHERE clause columns and that you have a backup copy of the table.
Using an intermediate table: In this method, create a new table with the same structure and copy only the required data into it. Rename the original table as an archive or backup table and rename the …
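
A rough sketch of both methods (the table and column names here are illustrative placeholders, not from the original post; assume an indexed created_at column is the delete criterion):

-- Method 1: intermediate table. Copy only the rows you want to keep,
-- then swap the tables with an atomic rename.
CREATE TABLE t_new LIKE t;
INSERT INTO t_new SELECT * FROM t WHERE created_at >= '2017-01-01';
RENAME TABLE t TO t_archive, t_new TO t;

-- Method 2: delete in small chunks so each transaction stays short
-- and locks are held only briefly. Repeat until zero rows are affected.
DELETE FROM t WHERE created_at < '2017-01-01' LIMIT 10000;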

[Read more]
MySQL 8.0 Collations: Migrating from older collations, Part 2

In my blog post MySQL 8.0 Collations: Migrating from older collations, I showed a query that could identify the values that might break a unique constraint when migrating your data. That query was not very efficient due to the self-join of the converted values.…
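
Purely as an illustration of the underlying idea (this is not the author's query; the table t, unique column c, and target collation below are placeholders), values that would collide under a new collation can also be spotted by grouping on the converted value:

-- Illustrative only: values that become duplicates under the new collation
SELECT CONVERT(c USING utf8mb4) COLLATE utf8mb4_0900_ai_ci AS converted,
       COUNT(*) AS cnt
FROM t
GROUP BY converted
HAVING cnt > 1;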

How to Transfer a MySQL Database Between Two Servers?

Migrating a MySQL database usually requires only a few simple steps, but it can take quite some time, depending on the amount of data you would like to migrate.

The following steps will guide you through how to export the MySQL database from the old server, secure it, copy it to the new server, import it successfully, and make sure the data is there.

Exporting MySQL database to a dump file

Oracle provides a utility named mysqldump which allows you to easily export the database structure and data to an SQL dump file. Use the following command:

mysqldump -u root -p --opt [database name] > [database name].sql
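
For the later import step on the new server (a minimal sketch, assuming the dump file has already been copied across; "mydb" is a placeholder name), the dump can be loaded from within the mysql client:

-- Run inside the mysql client on the new server (placeholder names)
CREATE DATABASE IF NOT EXISTS mydb;
USE mydb;
SOURCE /path/to/mydb.sql;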

A few notes on mysqldump:

[Read more]