Displaying posts with tag: monitoring
MySQL 8: Performance Schema Digests Improvements

Since MySQL 5.6, the digest feature of the MySQL Performance Schema has provided a convenient and effective way to obtain statistics for queries based on their normalized form. The feature works so well that, in my experience, it has almost completely replaced the connector extensions and proxy for collecting query statistics for the Query Analyzer (Quan) in MySQL Enterprise Monitor (MEM).

MySQL 8 adds further improvements to the digest feature in the Performance Schema, including a sample query with statistics for each digest, percentile information, and a histogram summary. This blog will explore these new features.

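As a quick illustration (a minimal sketch, not code from the post: it assumes MySQL 8, the mysql-connector-python package, and hypothetical connection credentials), the new columns and the histogram table can be queried directly:

```python
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1", user="root", password="secret"  # hypothetical
)
cur = conn.cursor(dictionary=True)

# Top five digests by total latency, including the sample query and
# percentile columns that are new in MySQL 8.
cur.execute("""
    SELECT SCHEMA_NAME, DIGEST, QUERY_SAMPLE_TEXT,
           QUANTILE_95, QUANTILE_99, SUM_TIMER_WAIT, COUNT_STAR
      FROM performance_schema.events_statements_summary_by_digest
     ORDER BY SUM_TIMER_WAIT DESC
     LIMIT 5
""")
digests = cur.fetchall()
for row in digests:
    # Timer columns are in picoseconds; 1e9 ps = 1 ms.
    print(f"{(row['QUANTILE_95'] or 0) / 1e9:10.2f} ms (95th pct)  "
          f"{row['QUERY_SAMPLE_TEXT']}")

# The histogram summary for a digest lives in its own table.
if digests:
    cur.execute("""
        SELECT BUCKET_NUMBER, BUCKET_TIMER_LOW, BUCKET_TIMER_HIGH,
               COUNT_BUCKET, BUCKET_QUANTILE
          FROM performance_schema.events_statements_histogram_by_digest
         WHERE DIGEST = %s
         ORDER BY BUCKET_NUMBER
    """, (digests[0]["DIGEST"],))
    histogram = cur.fetchall()

conn.close()
```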
[Read more]
How To Design a Better Ansible Role for a MySQL Environment?

In our early stage with Ansible, we just wrote simple playbooks and ad-hoc commands against a very long Ansible hosts file. When we planned to use Ansible extensively in our daily production use cases, we realized that simple playbooks do not scale to our expectations.

Even though we had options for separate variables, handlers, and template files according to our requirements, this unorganized approach did not help. The code looked messy and was unpleasant to read. That is when we decided to use Ansible Roles.

My Understanding of Ansible Roles

A role is the primary mechanism for breaking a playbook into multiple files; you can compare it to a Python package. Roles group multiple tasks, Jinja2 template files, variable files, and handlers into a clean directory structure (see the sketch below). This helps to reduce syntax errors while developing and also …

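To make that directory structure concrete, here is a rough sketch (not the post's code) that scaffolds the standard role skeleton, the same layout `ansible-galaxy init` generates; the role name `mysql` is just an example:

```python
from pathlib import Path

# Standard role sub-directories: templates/ and files/ hold raw
# content, the rest are each entered through a main.yml.
ROLE_DIRS = ["tasks", "handlers", "templates", "files",
             "vars", "defaults", "meta"]

def scaffold_role(name: str, base: str = "roles") -> None:
    """Create roles/<name>/ with one directory per concern."""
    for sub in ROLE_DIRS:
        path = Path(base) / name / sub
        path.mkdir(parents=True, exist_ok=True)
        if sub not in ("templates", "files"):
            (path / "main.yml").write_text("---\n")

scaffold_role("mysql")  # hypothetical role name
```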
[Read more]
Monitoring NDBCluster Copying Alter Progress

MySQL NDB Cluster has great support for online (in-place) schema changes, but it is still sometimes necessary to perform an offline (copying) ALTER TABLE. These are relatively expensive, as the entire table is copied into a new table which eventually replaces the old one.

One example where a copying ALTER TABLE is required is when upgrading from MySQL NDB Cluster 7.2 or earlier to MySQL NDB Cluster 7.3 or later. The format used for temporal columns changed between these versions (corresponding to the change from MySQL Server 5.5 to 5.6). In order to take advantage of the new temporal format, a table rebuild is required.

Note: Support for the old temporal format has been removed in MySQL 8.0. So, you must …

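For reference, a copying ALTER TABLE passes through the `stage/sql/copy to tmp table` stage, which reports progress in the Performance Schema. Below is a minimal polling sketch, assuming MySQL 5.7 or later with the stage instrument and stage consumers enabled, plus hypothetical credentials:

```python
import time
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root",
                               password="secret")  # hypothetical
cur = conn.cursor()

while True:
    cur.execute("""
        SELECT THREAD_ID, WORK_COMPLETED, WORK_ESTIMATED
          FROM performance_schema.events_stages_current
         WHERE EVENT_NAME = 'stage/sql/copy to tmp table'
    """)
    rows = cur.fetchall()
    if not rows:
        break  # no copying ALTER TABLE is running
    for thread_id, done, total in rows:
        pct = 100.0 * done / total if total else 0.0
        print(f"thread {thread_id}: {done}/{total} rows ({pct:.1f}%)")
    time.sleep(5)

conn.close()
```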
[Read more]
What Do I/O Latencies and Bytes Mean in the Performance and sys Schemas?

The Performance Schema and sys schema are great for investigating what is going on in MySQL, including performance issues. In my work in MySQL Support, I have several times been asked whether a peak in the InnoDB Data File I/O – Latency graph in MySQL Enterprise Monitor (MEM), or certain values from the corresponding tables and views in the Performance Schema and sys schema, are cause for concern. This blog will discuss what these observations mean and how to use them.

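To look at the raw numbers behind such a graph, the sys schema ships the io_global_by_file_by_latency view. A minimal sketch, assuming the sys schema is installed (the default in MySQL 5.7 and later) and hypothetical credentials:

```python
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root",
                               password="secret")  # hypothetical
cur = conn.cursor()

# The view aggregates performance_schema file I/O per file and is
# pre-sorted by total latency; latencies come back as formatted
# strings (use sys.x$io_global_by_file_by_latency for picoseconds).
cur.execute("""
    SELECT file, total, total_latency, read_latency, write_latency
      FROM sys.io_global_by_file_by_latency
     LIMIT 10
""")
for file, total, total_lat, read_lat, write_lat in cur.fetchall():
    print(f"{total_lat:>12}  {file}")

conn.close()
```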
The Tables and Views Involved

[Read more]
How NOT to Monitor Your Database

Do you have experience putting out backend database fires? What were some things you wished you had done differently? Proactive database monitoring is more cost-efficient, manageable, and sanity-saving than reactive monitoring. We reviewed some of the most common mistakes (too many log messages, metric “melting pots,” retroactive changes, incomplete visibility, undefined KPIs) and put together an action plan for preventing them. From our experience, we've listed the top five biggest (and preventable!) database monitoring pitfalls.

Log Levels

There never seem to be enough logging levels to capture the desired granularity and relevance of a log message accurately. Is it INFO, TRACE, or DEBUG? What if it’s DEBUG but it’s for a condition we should WARN about? Is there really a linear hierarchy here? If you’re like most people, you’ve seen …

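As a concrete illustration of that linear hierarchy (using Python's standard logging module, not code from the post): each message carries a single numeric severity, so detail and urgency have to share one scale.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("db")

# Levels are just ordered integers.
print(logging.DEBUG, logging.INFO, logging.WARNING)  # 10 20 30

log.debug("connection pool stats: size=20 idle=3")  # filtered out: 10 < 20
log.warning("connection pool exhausted; queueing")  # emitted: 30 >= 20
```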
[Read more]
Monitor Critical Databases Confidently with the Sensitive Data Vault

Building extremely deep monitoring as a SaaS product has a drawback: we capture too much data for some customers’ compliance requirements. As a result, some companies have been unable to deploy us, or have had to redact data before sending it to our cloud platform. To address this, we built the Sensitive Data Vault, a highly secure, completely on-premises storage module for the most critically private data that must never leave the customer’s firewall.


What is it?

The VividCortex Sensitive Data Vault is a new component of the overall VividCortex solution that you deploy inside your firewall. It ensures that the data never leaves your servers and never enters the VividCortex cloud environment. It consists of:

  • a Go service that the VividCortex collector agent communicates with
  • a customer-maintained MySQL or PostgreSQL database that the Go application uses
[Read more]
How SendGrid Ships Better Code Faster with VividCortex

VividCortex CEO Baron Schwartz and SendGrid DBA Silvia Botros teamed up to discuss how performance monitoring leads to better, faster code deployment. This 30-minute webinar covers: 

  • How developers can deploy code faster and more safely.
  • A close-up view of a health- and monitoring-focused work environment.
  • How database monitoring fits into a culture of DevOps and lean, agile development. 

"Now, with VividCortex, whenever we have an issue that's impacting the mail processing throughput, we can very quickly go and answer, "What was running at that time? What was the most expensive query? What was taking up all the load?" Within an hour, we can typically figure out exactly which JIRA WAR to go to."  —Silvia Botros, Lead MySQL DBA

Take a look:

[Embedded webinar recording]