Showing entries 1 to 10 of 193
Displaying posts with tag: monitoring
New monitoring replication features and more!

The new release of MySQL is packed with features that help detect and analyze replication lag. In this post, you will learn about the new replication timestamps, the additional information now reported by Performance Schema tables, and how delayed replication was improved.…

Bonanza Cuts Load in Half with VividCortex

Working with our users at Bonanza earlier this week, we saw their team demonstrate a great example of how monitoring insights can lead to a relatively simple but impactful MySQL system tweak. In this case, the adjustment Bonanza made resulted in huge improvements to their total query time.

By looking at the mysql.innodb.queued_queries metric in VividCortex, Bonanza's team saw that an issue within InnoDB was preventing otherwise runnable threads from executing. When queries begin to queue, it is often indicative of a problem; it's a good idea to regularly look for states like queuing, pending, or waiting as signs of potential issues. In this case, the innodb_thread_concurrency parameter had been configured to 8. Once …
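
The same signals can be checked directly on the server. Below is a minimal sketch in Python (using mysql-connector-python) that reads the current innodb_thread_concurrency value and parses the queued-query count out of SHOW ENGINE INNODB STATUS. The connection parameters are placeholders, and this is an illustration of the idea rather than Bonanza's actual tooling.

    # Inspect InnoDB query queueing and the thread-concurrency setting.
    # Connection parameters are placeholders; adapt them to your environment.
    import re
    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(host="127.0.0.1", user="monitor", password="secret")
    cur = conn.cursor()

    # Current innodb_thread_concurrency (0 means unlimited, i.e. no InnoDB queueing).
    cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_thread_concurrency'")
    _, concurrency = cur.fetchone()
    print("innodb_thread_concurrency =", concurrency)

    # SHOW ENGINE INNODB STATUS reports a line such as
    # "12 queries inside InnoDB, 3 queries in queue" when threads are throttled.
    cur.execute("SHOW ENGINE INNODB STATUS")
    status_text = cur.fetchone()[2]
    match = re.search(r"(\d+) queries inside InnoDB, (\d+) queries in queue", status_text)
    if match:
        inside, queued = map(int, match.groups())
        print("queries inside InnoDB:", inside, "queued:", queued)
        if queued > 0:
            print("Queued queries detected; revisit innodb_thread_concurrency.")

    cur.close()
    conn.close()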

[Read more]
Prophet: Forecasting our Metrics (or Predicting the Future)

In this blog post, we’ll look at how Prophet can forecast metrics.

Facebook recently released a forecasting tool called Prophet. Prophet can forecast a particular metric we are interested in. It works by fitting time-series data to predict how that metric will look in the future (a short sketch follows the list below).

For example, it could be used to:

  • Predict how much HTTP traffic we will get, and scale accordingly when needed
  • See whether a particular feature of our application will be successful or its usage will decline
  • Get an approximate date when our database server’s resources will be exhausted
  • Forecast new customer sign-ups and size staffing accordingly
  • See what next year’s Black Friday or Cyber Monday will look like, and if we have the resources to handle them
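
As a concrete illustration of the first use case above, here is a minimal Prophet sketch that fits a daily traffic metric and forecasts 90 days ahead. The CSV file and its column names are hypothetical stand-ins for your own data, and the import path varies by release (fbprophet in older versions, prophet in newer ones).

    # Fit Prophet to a daily metric and forecast 90 days ahead.
    # "daily_http_requests.csv" and its columns are hypothetical stand-ins.
    import pandas as pd
    from fbprophet import Prophet  # newer releases ship the package as "prophet"

    # Prophet expects exactly two columns: ds (timestamps) and y (metric values).
    df = pd.read_csv("daily_http_requests.csv")  # columns: date, requests
    df = df.rename(columns={"date": "ds", "requests": "y"})

    model = Prophet()
    model.fit(df)

    # Extend the timeline 90 days into the future and predict.
    future = model.make_future_dataframe(periods=90)
    forecast = model.predict(future)

    # yhat is the point forecast; yhat_lower/yhat_upper bound the uncertainty.
    print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())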
[Read more]
Monitoring Databases: A Product Comparison

In this blog post, I will discuss the database monitoring (and alerting) solutions I have worked with and recommended to my clients in the past. This survey will mostly focus on MySQL solutions.

One of the most common issues I come across when working with clients is monitoring and alerting. Many times, companies will fall into one of these categories:

  • No monitoring or alerting. This means they have no idea what’s going on in their environment whatsoever.
  • Inadequate monitoring. Maybe people in this camp are using a platform that just tells them the database is up or connections are happening, but there is no insight into what the database is doing.
  • Too much monitoring and alerting. Companies in this camp have tons of dashboards filled with graphs, and their inbox is full of alerts that get promptly ignored. This type of monitoring is just as useful as the first option. Alert …
[Read more]
Services Monitoring with Probabilistic Fault Detection

In this blog post, we’ll discuss services monitoring using probabilistic fault detection.

Let’s admit it: monitoring services is one of the most difficult tasks. It is time-consuming, error-prone and difficult to automate. The usual monitoring approach has been pretty straightforward in the last few years: set up a service like Nagios, or pay for a cloud-based monitoring tool. Then choose the metrics you are interested in and set the thresholds. This is a manual process that works when you have a small number of services and servers, and you know exactly how they behave and what you should monitor. These days, we have hundreds of servers with thousands of services sending us millions of metrics. That is the first problem: the manual approach to configuration doesn’t work.

That is not the only problem. We know that no two servers perform the same because no two servers have exactly the …
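
To make the probabilistic idea concrete, here is a minimal sketch of a detector that learns each metric's own baseline instead of relying on a hand-set threshold. It assumes the metric is roughly Gaussian over the baseline window, which is a simplification; the window size and z-score threshold are arbitrary illustrative values, not anything the post prescribes.

    # Probabilistic fault detection sketch: learn a per-metric baseline
    # (mean and standard deviation over recent history) and flag samples
    # that are statistically unlikely, instead of using a fixed threshold.
    import math
    from collections import deque


    class ZScoreDetector:
        def __init__(self, window=300, z_threshold=4.0):
            self.window = deque(maxlen=window)   # recent samples (the baseline)
            self.z_threshold = z_threshold       # how unlikely before we flag

        def observe(self, value):
            """Return True if value looks anomalous relative to the baseline."""
            anomalous = False
            if len(self.window) >= 30:           # need some history first
                mean = sum(self.window) / len(self.window)
                var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
                std = math.sqrt(var) or 1e-9     # guard against zero variance
                anomalous = abs(value - mean) / std > self.z_threshold
            self.window.append(value)
            return anomalous


    # One detector per (server, metric) pair; no per-service threshold tuning.
    detector = ZScoreDetector()
    for sample in [12.0] * 100 + [13.0] * 100 + [85.0]:
        if detector.observe(sample):
            print("possible fault:", sample)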

[Read more]
With 500+ VividCortex Users, Shopify Eliminates High Latency Queries From Redis and MySQL

As intuitive and streamlined as ecommerce technology might seem from the user's perspective, it involves so much data that engineering ingenuity and smart database management must constantly deliver in order to keep up. At organizations like Shopify, responsible for easy and reliable transactions at top brands around the world, that level of performance involves deep monitoring of their MySQL core and their Redis caching infrastructure, plus insightful query profiling, packet captures, and giving developers access to platforms that measure database performance.

Shopify’s motto is “Make commerce better for everyone.” That mantra applies whether the shopping's done online, on mobile, or in-store. For Shopify's engineering team, better means a fast, reliable application that delivers a positive …

[Read more]
A Metric for Tuning Parallel Replication in MySQL 5.7

MySQL 5.7 introduced the LOGICAL_CLOCK type of multi-threaded slave (MTS).  When using this type of parallel replication (and when slave_parallel_workers is greater than zero), slaves use information from the binary logs (written by the master) to run transactions in parallel.  However, enabling parallel replication on slaves might not be enough to get a higher replication throughput (VividCortex
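
For context, the sketch below shows one way to enable LOGICAL_CLOCK parallel replication on a 5.7 slave and to list the applier workers that performance_schema exposes. The worker count of 4, host name, and credentials are illustrative assumptions, not a tuning recommendation from the post.

    # Enable LOGICAL_CLOCK multi-threaded replication on a MySQL 5.7 slave
    # and inspect per-worker state. Connection details and the worker count
    # are illustrative placeholders.
    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(host="replica.example", user="admin", password="secret")
    cur = conn.cursor()

    # The SQL thread must be stopped before changing the parallelization type.
    for stmt in (
        "STOP SLAVE SQL_THREAD",
        "SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK'",
        "SET GLOBAL slave_parallel_workers = 4",
        "START SLAVE SQL_THREAD",
    ):
        cur.execute(stmt)

    # performance_schema exposes one row per applier worker thread.
    cur.execute(
        "SELECT WORKER_ID, SERVICE_STATE "
        "FROM performance_schema.replication_applier_status_by_worker"
    )
    for worker_id, state in cur.fetchall():
        print("worker", worker_id, state)

    cur.close()
    conn.close()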

Etsy's Use of Performance Testing in Development

In a recent case study, we profiled Etsy and learned how a high-performance data platform helps keep Etsy's global community engaged. In that study, Etsy's engineering team provided some key examples of how they monitor their database to ensure good system performance in development. In this post, we want to highlight a few of those specific uses.

Performance Testing in Development

Performance monitoring is key to good DevOps practice: to understand how to improve system performance, one must first understand how the system is performing. Engineering teams need performance analytics in order to make fact-based evaluations, improve code quality, and ensure system stability. By putting performance …

[Read more]
Monitoring ProxySQL using Datadog

ProxySQL is a high performance proxy for MySQL and its forks. One of the key features is its ability to handle hundreds of thousands of connections with very low overhead. Datadog is a monitoring service for cloud-scale applications, bringing together data from servers, databases, tools, and services to present a unified view of an entire stack.

Datadog does not yet provide an integration for ProxySQL. So I decided to write an integration by forking the Datadog agent. Read my detailed blog post on TwinDB Blog to learn how to use the ProxySQL-Datadog integration.
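
For readers who prefer not to fork the agent, a lighter-weight alternative is a custom Datadog agent check that pulls counters from ProxySQL's admin interface over the MySQL protocol. The sketch below is not the integration described in the post; the metric names, credentials, and the stats_mysql_global query are assumptions you would adapt, and the AgentCheck import path depends on the agent version (checks for Agent v5, datadog_checks.base for v6+).

    # Hedged sketch of a custom Datadog agent check that reads ProxySQL
    # counters from its admin interface (MySQL protocol, port 6032 by default).
    # Credentials and metric names below are placeholder assumptions.
    import pymysql
    from checks import AgentCheck  # Agent v5; v6+ uses datadog_checks.base


    class ProxySQLCheck(AgentCheck):
        def check(self, instance):
            conn = pymysql.connect(
                host=instance.get("host", "127.0.0.1"),
                port=instance.get("port", 6032),
                user=instance.get("user", "admin"),
                password=instance.get("password", "admin"),
            )
            try:
                with conn.cursor() as cur:
                    # Global counters such as Active_Transactions, Questions, etc.
                    cur.execute(
                        "SELECT Variable_Name, Variable_Value "
                        "FROM stats.stats_mysql_global"
                    )
                    for name, value in cur.fetchall():
                        self.gauge("proxysql." + name.lower(), float(value))
            finally:
                conn.close()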

The post Monitoring ProxySQL using Datadog appeared first on ovais.tariq.

ProxySQL Monitoring with Datadog

Introduction

ProxySQL is a high performance proxy for MySQL and its forks. One of the key features is its ability to handle hundreds of thousands of connections with very low overhead. Some of the other key features are query caching, traffic mirroring, query routing and pluggable architecture. It is also the only open source proxy that correctly handles transactions and sessions.

What is Datadog?

Quoting Wikipedia:
“Datadog is a monitoring service for cloud-scale applications, bringing together data from servers, databases, tools, and services to present a unified view of an entire stack. These capabilities are provided on a SaaS-based data analytics platform.”

We use Datadog to collect metrics of key systems of our customers. These metrics are used to analyze, alert …

[Read more]