On Wednesday, February 25 at 18:00 CET (9 am Pacific Time), I
will present a webinar on how to analyze and tune MySQL queries
for better performance.
The webinar covers how the MySQL optimizer chooses a specific
plan for executing SQL queries. I will show you how to use tools
such as EXPLAIN (including the new JSON-based output) and
Optimizer Trace to analyze query plans, and we will review how
the Visual Explain functionality available in MySQL Workbench
helps us visualize these plans. The webinar also contains
several examples of how to use query analysis to improve the
performance of MySQL queries.
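To make the tools concrete before the session, here is a minimal sketch of the EXPLAIN (JSON output) and Optimizer Trace workflow; the orders table and the query are hypothetical placeholders, not examples taken from the webinar:

-- Structured query plan output (MySQL 5.6 and later)
EXPLAIN FORMAT=JSON
SELECT * FROM orders WHERE customer_id = 42;

-- Trace the optimizer's decisions for the same statement
SET optimizer_trace = 'enabled=on';
SELECT * FROM orders WHERE customer_id = 42;
SELECT TRACE FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
SET optimizer_trace = 'enabled=off';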
The presentation will be approximately 60 minutes long,
followed by Q&A.
For details on how to register for the webinar,
visit …
Thanks Mark, this makes it so much easier!
Hi Keith,
Thanks for blogging about MEM and the bulk add feature. I thought you might like to know that, for an environment like yours, there’s an even *easier* way to configure this. MySQL Enterprise Monitor also has an “automatically monitor after process discovery” feature, which works as long as you have credentials that are the same on all of your instances (which, in your case, they are). Go to Configuration -> Advisors, then expand the “Monitoring and Support Services” category. Click the drop-down menu for the “MySQL Process Discovery” entry and choose “Edit Advisor Configuration”. Answer “Yes” for the “Attempt Connection” choice; the rest is similar to the add/edit connection dialog. Once this is configured, any time a MEM agent discovers a MySQL process that MEM isn’t monitoring, it will first attempt to use this stored set of credentials to set up monitoring of that instance.
Carrying on with my MySQL 5.7 Labs Multi Source Replication scenario, I wanted to evaluate the performance impact via MySQL Enterprise Monitor.
While opening my environment, I remembered that I had generated lots of skeleton scripts that let me deploy the 50 servers quickly, and I didn’t want to add each of my targets to MEM one by one. So I used one of the many features available: “Add Bulk MySQL Instances”.
So, I’ve got 50 masters (3001-3050) but only one slave (3100).
By default, MEM monitors its own repository, i.e. the 1/1 server being monitored in the All group.
I want to add my slave first, because that’s how I’m organizing things, and I’ll take the opportunity to create the monitoring group I want to …
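As a reminder of what each of those replication channels looks like on the slave, here is a minimal sketch of the multi-source setup; the host, user, password and channel name are illustrative placeholders, and the FOR CHANNEL clause follows the 5.7 syntax, which may differ slightly in the Labs preview:

-- One channel per master; repeat with 3002-3050 for the remaining channels
CHANGE MASTER TO
  MASTER_HOST = '127.0.0.1',
  MASTER_PORT = 3001,
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_pass',
  MASTER_AUTO_POSITION = 1
  FOR CHANNEL 'master_3001';
START SLAVE FOR CHANNEL 'master_3001';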
[Read more]
We’re delighted to announce a ClusterControl Template for Zabbix, so Zabbix users can now get information about the status of their database clusters, backups and alarms. We have previously published integrations with other monitoring systems, including Nagios and PagerDuty.
The template retrieves monitoring data through the ClusterControl REST API, so you need to have a ClusterControl API token and URL configured in the template’s configuration file. This keeps the initial configuration simple and allows users to extend the monitoring data that is collected.
…
[Read more]
WebAssign develops online instructional tools for higher-education faculty and students, and more than one million students at over 2,300 educational institutions use WebAssign each year. Last year, WebAssign experienced a large outage that impacted its ability to provide timely online grading and assessments; the company took this opportunity to look for new tools that would improve insight into its database activity and help ensure service level requirements were met.
“MEM for MySQL wasn’t able to provide per-query metrics or fault diagnosis, which is a level of detail that we value,” said Valerie Parham, DBA, WebAssign. “Instead, releases would appear fine, but then a problem would pop up and we’d have to do a hot-fix so it didn’t turn into an outage, which placed undue stress on everyone.”
WebAssign looked at a number of solutions, including Percona Cloud Tools, but none offered the same level of full query insight and …
[Read more]
As a MySQL DBA, I already know the data changes that happen on my system. I have logs for that.
However, it’s a common problem that, several years into the life of an application, the current developers won’t know where in the codebase queries come from. It’s often hard for them to find the location in the code when queries are built dynamically: the pattern I show them to optimize doesn’t match anything in the code.
I stumbled on a trick a couple of years ago that has been invaluable in tracking down these problematic queries: query comments.
Here’s an example:
When a query shows up in the slow query log, it generally looks something like this:
# Time: 150217 10:26:01
# User@Host: comments[comments] @ localhost []  Id: 13
# Query_time: 0.000231  Lock_time: 0.000108  Rows_sent: 3  Rows_examined: 3
SET timestamp=1424186761;
select * from cars;
That logging shows me who executed the query …
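To show the trick itself (a minimal sketch; the file path in the comment is a made-up placeholder, not the example from the truncated part of the post), the application embeds a comment naming the call site, and that comment is carried through into the slow query log:

select /* lib/reports/car_inventory.php:88 */ * from cars;

Note that some clients strip comments before sending the statement; the mysql command-line client, for example, keeps them only when run with the --comments option.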
[Read more]
#DBHangOps 02/19/15 -- Long Query Time, Operational TokuDB, and more!
Hello everybody!
Join in #DBHangOps this Thursday, February 19, 2015 at 11:00am Pacific (19:00 GMT), to participate in the discussion about:
- Learnings from operating TokuDB
- What's a good long_query_time?
- Testing your backups
- MySQL 5.7 defaults suggestions
You can check out the event page at https://plus.google.com/events/cohut2qncrbkrrmbs868kjorvbo on Thursday to participate.
As always, you can still watch the #DBHangOps twitter search, the @DBHangOps twitter feed, or this blog post to get a link for the google hangout on …
[Read more]
So many valuable articles have already been written by others
over the past years explaining all the details of InnoDB
transaction isolation modes and how to deal with them, so I'll
avoid repeating what was already said ;-) -- what attracted my
attention was the performance study made by PeterZ, published in
the following article:
http://www.percona.com/blog/2015/01/14/mysql-performance-implications-of-innodb-isolation-modes/
-- the article is very good and provides a solid analysis of the
observed problem, which is solved by using the READ-COMMITTED
transaction isolation level instead of REPEATABLE-READ (the
default in InnoDB).. The natural question then is: why isn't
READ-COMMITTED the default?.. Is there any danger?..
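For reference, changing the mode itself is trivial; a minimal sketch using the standard syntax (session, global, or my.cnf):

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- current connection only
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- new connections from now on
-- or permanently, in my.cnf:
-- [mysqld]
-- transaction-isolation = READ-COMMITTED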
Let's then investigate together..
First of all, you should …
MySQL meets NoSQL with JSON UDF
I recently got back from FOSDEM in Brussels, Belgium. While I was there, I got to see a great talk by Sveta Smirnova about her MySQL 5.7 Labs release of JSON UDF functions. It is important to note that, while the UDFs come in a 5.7 release, it is absolutely possible to compile and use them with earlier versions of MySQL, because the UDF interface has not changed for a long time. However, the UDFs should still be considered alpha/preview quality and should not be used in production yet! For this example I am using Percona Server 5.6 with the UDFs.
That being said, the proof of concept I’m about to present uses only one JSON function (JSON_EXTRACT), and it has worked well enough in my testing to demonstrate the idea. The JSON functions will probably be GA sometime soon anyway, and this is a useful test of the JSON_EXTRACT function. …
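Purely as a rough illustration of the function being discussed (a minimal sketch, not the author's proof of concept; the table, the document, and the key-based argument form assumed for the Labs UDF are all placeholders here):

CREATE TABLE products (id INT PRIMARY KEY, doc TEXT);
INSERT INTO products VALUES (1, '{"name": "widget", "price": "9.99"}');
SELECT json_extract(doc, 'name') AS name FROM products;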
[Read more]