Rotating MySQL Slow Logs

While working with different clients, I happen to run across very large slow log files from time to time. Several opinions exist on how they should be rotated; many of them use logrotate and the FLUSH LOGS command. I prefer not to flush my binary logs, though, which is why I agree with Ronald Bradford's blog post from years ago on how to do this.

I have taken it a little further and scripted the steps. The bash script is built with MySQL 5.6 and the mysql_config_editor in mind, but it can be used on older versions of MySQL as well.

The script …
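A minimal sketch of the idea (this is not the author's script; the file path and login-path name are assumptions for illustration):

-- Rotate the slow log without touching the binary logs. Credentials can
-- come from mysql_config_editor, e.g. a login path created beforehand with:
--   mysql_config_editor set --login-path=rotate --user=root --password
SET GLOBAL slow_query_log = OFF;
-- shell step, outside MySQL:
--   mv /var/log/mysql/slow.log /var/log/mysql/slow.log.$(date +%F)
SET GLOBAL slow_query_log = ON;
-- On MySQL 5.5 and later, an alternative is to rename the file first and
-- then run FLUSH SLOW LOGS, which reopens only the slow log.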

[Read more]
MariaDB 10.1: Better query optimization for ORDER BY … LIMIT

For some reason, we’ve been getting a lot of issues with ORDER BY optimization recently. The fixes have passed Elena Stepanova’s scrutiny and I’ve pushed them to MariaDB 10.1. Now, MariaDB’s ORDER BY ... LIMIT optimizer:

  • Doesn’t make stupid choices when several multi-part keys and potential range accesses are present (MDEV-6402)
  • Always uses “range” (and not a full “index” scan) when it switches to an index to satisfy ORDER BY … LIMIT (MDEV-6657; see the sketch after this list)
  • Tries hard to be smart and use cost/number of records estimates from other parts of the optimizer (MDEV-6384, MDEV-465, MySQL …
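A hedged illustration of the MDEV-6657 item (table and column names invented):

-- With an index on key1, the LIMIT can be satisfied by a range scan that
-- reads only qualifying rows in index order:
CREATE TABLE t1 (id INT PRIMARY KEY, key1 INT, col1 VARCHAR(100), KEY(key1));
EXPLAIN SELECT * FROM t1 WHERE key1 < 1000 ORDER BY key1 LIMIT 10;
-- Before the fix the optimizer could report type=index (a full index
-- scan); with the fix it reports type=range on key1.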
[Read more]
MariaDB 5.5.40 Overview and Highlights

MariaDB 5.5.40 was recently released (it is the latest MariaDB 5.5), and is available for download here:

https://downloads.mariadb.org/mariadb/5.5.40/

This is a maintenance release, and so there are not too many big changes of note, just a number of normal bug fixes. However, there are a few items worth mentioning:

[Read more]
A discovery - Index Condition Pushdown can cause a slowdown after all

MariaDB 5.5 and then MySQL 5.6 got the Index Condition Pushdown (ICP) optimization (initially coded by yours truly). The idea of ICP is simple: after reading the index record, check the part of the WHERE condition that can be computed using index columns, and only then read the table record. That way, we avoid reading table rows that don't satisfy the index condition.
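A hedged sketch of the classic case (schema invented for illustration):

-- With INDEX(first_name, last_name), the LIKE predicate cannot narrow
-- the index range, but ICP can still evaluate it on index entries:
ALTER TABLE people ADD INDEX name_idx (first_name, last_name);
EXPLAIN SELECT * FROM people
WHERE first_name = 'John' AND last_name LIKE '%son%';
-- With ICP, EXPLAIN shows "Using index condition": the full table row is
-- fetched only for index entries where the last_name predicate also holds.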

It seems apparent that ICP can never make things slower. The WHERE clause has to be checked anyway, and not reading certain records can only make things faster.

That was what I thought, too, until recently Joffrey Michaie observed the contrary “in the wild”: we’ve got a real-world case where using Index Condition Pushdown was slower than not using it: …

[Read more]
Creating PivotTables when importing MySQL data using MySQL for Excel

In a previous blog post (Importing related MySQL tables into an Excel Data Model using MySQL for Excel) we covered in detail how an Excel Data Model can be created containing tables and their relationships, so the data can be analyzed in Excel via a PivotTable. In this blog post we are going to talk about one of the features included since MySQL for Excel 1.3.0 that allows you to create PivotTables for data imported from MySQL tables, views or stored procedures, or, more importantly, for the whole Excel Data Model if one is created.

Remember that you can install the latest GA or maintenance version using the MySQL Installer, or optionally download any GA or non-GA version directly from the MySQL Developer …

[Read more]
MySQL compression: Compressed and Uncompressed data size

MySQL's information_schema.tables contains information such as “data_length” and “avg_row_length.” The documentation on this table, however, is quite poor, assuming that those fields are self-explanatory. They are not when it comes to tables that employ compression, and this is where inconsistency is born. Let's take a look at the same table, containing some highly compressible data, using different storage engines that support MySQL compression:

TokuDB:

mysql> select * from information_schema.tables where table_schema='test'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: test
TABLE_NAME: comp
TABLE_TYPE: BASE TABLE
ENGINE: TokuDB
VERSION: 10
ROW_FORMAT: tokudb_zlib
TABLE_ROWS: 40960
AVG_ROW_LENGTH: 10003
DATA_LENGTH: 409722880
MAX_DATA_LENGTH: …
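Whether DATA_LENGTH reflects the compressed on-disk footprint or the uncompressed logical size differs by engine, which is exactly the inconsistency at issue. A hedged way to check for any engine is to compare the reported figures against the actual file sizes:

SELECT ENGINE, ROW_FORMAT,
       DATA_LENGTH / 1024 / 1024 AS data_mb,
       INDEX_LENGTH / 1024 / 1024 AS index_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 'comp';
-- Compare data_mb against the table's files on disk (e.g. ls -lh in the
-- datadir): a large gap for a compressed table means the engine reports
-- the logical (uncompressed) size here.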
[Read more]
Database Automation - Private DBaaS for MySQL, MariaDB and MongoDB with ClusterControl

October 9, 2014 By Severalnines

Installing, configuring and deploying databases, as well as performing repetitive administrative tasks, are all part of a DBA's or sysadmin's job. This can get pretty overwhelming if you are part of a centralized IT team running multiple databases for your organization's different departments, or a managed hosting provider responsible for setting up and operating databases for external clients. One way to get out of this 'manual, repetitive task' business is through a Database as a Service (DBaaS).

DBaaS is a way of delivering database functionality as a service to one or more consumers. A DBaaS platform would provide automated procedures for database deployment, monitoring, backups, recovery/repair, scaling, security/multi-tenancy, etc. This type of automation is especially useful where agility is needed, e.g. for systems that require elasticity by scaling out or scaling back at short …

[Read more]
libAttachSQL Second Beta, After the Sledgehammer

Last week I blogged about getting sysbench working with libAttachSQL. This was not only an exercise in performance but also the first real test for libAttachSQL.

Before this testing, the most the early Alpha and Beta releases of libAttachSQL had gone through was a few basic queries. So, the first thing I did when I got the sysbench driver working was slap it with 1,000,000 queries. It pretty much exploded instantly on that. Over the course of this release I have probably hit it with over 100,000,000 queries, and things now run a lot smoother.

This has led to today's release of libAttachSQL 0.5.0. As far as changes go, this release has the biggest changelog so …

[Read more]
Indeed, MySQL 5.7 rocks : OLTP_RO/RW 1-table Benchmarks

This is the next part of the stories about MySQL 5.7 performance.

The previous story was about reaching 645K QPS with SQL queries, while in reality that's only half of the full story ;-) -- because when we reached 500K QPS last year thanks to a huge improvement in the TRX-list code, the same improvement had a negative impact on all the single-table test workloads.

What happened finally :

  • the new code changes dramatically lowered contention on TRX-list (trx_sys mutex)
  • which made MDL-related locking much hotter,
  • and if one table becomes hot in a workload, MDL lock contention then hits its highest level.


So it was clear that MDL needed a fix, especially seeing that on the 8-table workload we're …
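For readers who want to look for this kind of contention on their own systems, a hedged sketch using performance_schema (instrument name patterns as in MySQL 5.6/5.7, not from the original post; the relevant wait instruments must be enabled):

SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'wait/synch/%trx_sys%'
   OR EVENT_NAME LIKE 'wait/synch/%MDL%'
ORDER BY SUM_TIMER_WAIT DESC;
-- If trx_sys or MDL waits dominate this summary, you are seeing the hot
-- spots described above.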

[Read more]
Testing the Fastest Way to Import a Table into MySQL (and some interesting 5.7 performance results)

As I mentioned in my last post, where I compared the default configuration options in 5.6 and 5.7, I have been doing some testing for a particular load in several versions of MySQL. What I have been checking is different ways to load a CSV file (the same file I used for testing the compression tools) into MySQL. For those seasoned MySQL DBAs and programmers, you probably know the answer, so you can jump over to my 5.6 versus 5.7 results. However, the first part of this post is dedicated to developers and MySQL beginners who want to know the answer to the title question, in a step-by-step fashion. I must say I also learned something, as I under- and over-estimated some of the effects of certain …
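For those beginners, the usual answer for this kind of bulk load is LOAD DATA; a minimal hedged example (file path and table name are placeholders, not from the post):

LOAD DATA LOCAL INFILE '/tmp/data.csv'
INTO TABLE test.imported
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
-- A single bulk-load statement is typically much faster than equivalent
-- row-by-row INSERTs, since parsing, locking and logging overhead is
-- amortized over the whole file.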

[Read more]