Showing entries 17163 to 17172 of 44061
On queries with many values in the IN clause

A few customers with rather extreme needs have contacted us about a performance issue with the range optimizer. Our solution to the problem is to introduce a new variable in MySQL 5.6, eq_range_index_dive_limit, which can be used to control whether the range optimizer will a) do index dives, or b) use index statistics when estimating the number of rows in the ranges of a query. The former method gives a far more accurate estimate, while the latter costs a lot less to compute.

This is what the help text has to tell about the variable:

The optimizer will use existing index statistics instead of doing index dives for equality ranges if the number of equality ranges for the index is larger than or equal to [the value of the variable]. If set to 0, index dives are always used. "Equality range" means predicates using the operators IN() or =, and it's important to note that the number of such ranges is counted on a per-index …
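As a sketch of how the two modes interact with a query (the table and index names here are hypothetical, and the default limit of 10 is as shipped in early 5.6 builds — check your own server):

```sql
-- Hypothetical table with a secondary index, for illustration only
CREATE TABLE t1 (id INT PRIMARY KEY, col INT, KEY idx_col (col));

-- With the limit at 10, an IN() list of 10 or more values makes the
-- optimizer fall back to index statistics for the row estimate:
SET SESSION eq_range_index_dive_limit = 10;
EXPLAIN SELECT * FROM t1 WHERE col IN (1,2,3,4,5,6,7,8,9,10);

-- Setting it to 0 forces index dives regardless of list length,
-- trading planning cost for a more accurate "rows" estimate:
SET SESSION eq_range_index_dive_limit = 0;
EXPLAIN SELECT * FROM t1 WHERE col IN (1,2,3,4,5,6,7,8,9,10);
```

Comparing the "rows" column of the two EXPLAIN outputs shows the difference between the statistics-based and dive-based estimates.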

[Read more]
Global Transaction Identifiers are in MySQL 5.6.5 DMR

Global Transaction Identifiers are in!
I am very happy and especially proud to announce that the replication team has delivered global transaction identifiers in the MySQL 5.6.5 Development Milestone Release (DMR). It is a very useful, big, impressive and game-changing feature that will make life easier for many of our users. With this feature in place, it is much simpler to track replication progress through the replication topology, which removes part of the burden of deploying and administering complex multi-tier replication topologies. Actually, as stated before, this feature is an enabler: it gives the user much more flexibility when it comes to deploying replication and tracking the data that is replicated. In particular, it is a foundation for reliable and automated failover/switchover with slave promotion, reducing the need for third-party HA infrastructure (which adds cost and complexity). In fact, …
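For readers who want to try this, here is a minimal sketch of pointing a slave at a GTID-enabled master (the hostname is hypothetical, and gtid_mode must already be ON on both servers — option spellings changed slightly between the DMR and the eventual GA release):

```sql
-- On the slave: with GTIDs, auto-positioning replaces explicit
-- binlog file/position coordinates in CHANGE MASTER:
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',   -- hypothetical host
  MASTER_USER = 'repl',
  MASTER_AUTO_POSITION = 1;
START SLAVE;

-- Replication progress can then be tracked as GTID sets:
SHOW SLAVE STATUS\G   -- see Retrieved_Gtid_Set / Executed_Gtid_Set
```

This is exactly the "tracking progress through the topology" point above: the executed GTID set is comparable across any two servers in the topology, regardless of their binlog coordinates.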

[Read more]
Twitter, Facebook MySQL trees online – pushing MySQL forward

Just yesterday, I'm sure many saw Twitter open-sourcing their MySQL implementation. It is based on MySQL 5.5 and the code is on GitHub.

For reference, the database team at Facebook has always been actively blogging, and keeps its code available on Launchpad. It's worth noting that the implementation there is based on MySQL 5.0.84 and 5.1.

At Twitter, almost everything persistent is stored in MySQL – interest graphs, timelines, user data and those precious tweets themselves! At Facebook, it's pretty similar – all user interactions like likes, shares, status updates, requests, etc. are stored in MySQL ( …

[Read more]
MySQL Performance: Game Of Contentions..

While recently running another in a series of benchmarks on MySQL 5.6, I met the following case:

  • the workload is Read+Write, with 256 concurrent user sessions running non-stop..
  • Performance Schema (PFS) is enabled and synch events are monitored (both counted and timed)..
  • I had accidentally not fully disabled the query cache (QC) in the MySQL config (query_cache_type=0 was missing)
  • and PFS immediately reported it as the top wait event during this workload:



Test #1 -- @5.6 with PFS=on, default query_cache_type:
NOTE: in this configuration we're reaching ~100K QPS..
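The check itself is quick — a minimal sketch of verifying the QC state on a running server (the my.cnf lines are the usual way to disable it fully, applied at startup):

```sql
-- Is the query cache really off? Both type and size matter:
SHOW VARIABLES LIKE 'query_cache%';

-- QC hit/insert counters reveal whether it is actually in play:
SHOW GLOBAL STATUS LIKE 'Qcache%';

-- To disable it completely, set both in my.cnf before startup:
--   query_cache_type = 0
--   query_cache_size = 0
```

Even with a small cache, every read passing through the QC mutex can become the dominant contention point at high concurrency, which is what the PFS wait graph above is showing.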

So, of course, I expected to see much higher QPS once I set query_cache_type=0, wouldn't you?.. ;-)
The second graph shows the result obtained with this change:


Test #2 -- @5.6 with PFS=on, query_cache_type=0:


[Read more]
Agile software delivery and schema changes - how do you do it?

I work for realestate.com.au and we're an agile delivery shop. We're trying to catch up with best-in-class web shops like Etsy through continuous delivery. Many deploys a day!

The big kicker with continuous delivery is schema changes. I'm curious to know how everyone is performing MySQL (InnoDB) schema changes in an agile continuous delivery shop, so please comment and share your thoughts.

How do we do ours? First off, we don't have a good story yet about schema changes, though I think we're ahead of the pack:

  • Limit schema changes to column additions / index changes / new tables
  • Perform our schema changes online using Master-Master Active/Standby replication (with a hardware load balancer – a Citrix NetScaler – in front of our databases)
  • Statement based replication 
  • Application fault tolerance (to a degree) - so …
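One common way to run a rolling change across a master-master pair (a sketch only — the table name is hypothetical, and this assumes column additions with defaults, per the constraints above):

```sql
-- On the standby master: keep the ALTER out of the binlog so it
-- does not replicate to the active master prematurely:
SET sql_log_bin = 0;
ALTER TABLE listings ADD COLUMN featured TINYINT(1) NOT NULL DEFAULT 0;
SET sql_log_bin = 1;

-- Then fail traffic over to the altered node via the load balancer
-- and repeat the same ALTER on the (now idle) former active master.
```

Adding new columns only at the end of the table, with defaults, keeps statement-based replication working while the two sides briefly disagree on the schema.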
[Read more]
Stock CentOS 6 Ruby and a MySQL database far away

I wanted to share a recent issue with stock Ruby on CentOS 6.

We run a Rails application that connects to a MySQL datastore. We're in the middle of a datacenter transformation: we deployed the application to our new datacenter, though the MySQL datastore is still at the other datacenter. As you can see, there is about a 35 ms round trip between the application and the MySQL instance.



[root@host]# ping 172.x.y.19
PING 172.x.y.19 (172.x.y.19) 56(84) bytes of data.
64 bytes from 172.x.y.19: icmp_seq=1 ttl=253 time=32.6 ms
64 bytes from 172.x.y.19: icmp_seq=2 ttl=253 time=38.1 ms
64 bytes from 172.x.y.19: icmp_seq=3 ttl=253 time=36.0 ms
^C
--- 172.x.y.19 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2751ms
rtt min/avg/max/mdev = 32.656/35.600/38.101/2.250 ms


We started the Rails application up, though it just was not …

[Read more]
What is the proper size of InnoDB logs?

In one of my previous posts, “How to resize InnoDB logs?”, I gave advice on how to safely change the size of the transaction logs. This time, I will explain why doing so may become necessary.

A brief introduction to InnoDB transaction logs

The transaction logs handle REDO logging, which means they keep a record of all recent modifications performed by queries on any InnoDB table. But they are a lot more than just an archive of transactions. The logs play an important part in the process of handling writes. When a transaction commits, InnoDB synchronously records the changes in the log, while updating the actual table files happens asynchronously and may take place much later. Each log entry is assigned a Log Sequence Number (LSN) – an incremental value that always uniquely identifies a change.
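Because the LSN grows by the number of bytes written to the redo log, it also gives a quick way to measure your write rate — a minimal sketch (the one-hour window and the sizing rule of thumb are common practice, not an official formula):

```sql
-- Sample the LSN twice, an interval apart; the difference is the
-- number of bytes written to the redo log in that interval:
SHOW ENGINE INNODB STATUS\G   -- note "Log sequence number" under LOG
SELECT SLEEP(3600);
SHOW ENGINE INNODB STATUS\G   -- note it again

-- Rule of thumb: size the combined logs to hold roughly an hour
-- of writes, i.e.
--   innodb_log_file_size ~= (LSN2 - LSN1) / innodb_log_files_in_group
```

Logs that are too small force frequent checkpoints and stalls; logs far larger than the hourly write volume mainly lengthen crash recovery.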

[Read more]
Benchmarking MySQL Replication with Multi-Threaded Slaves

The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6.

As the results demonstrate, the Multi-Threaded Slave delivers 5x higher replication performance in a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:

  • Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)

  • Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)

The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application …
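Enabling the feature is a one-variable change on the slave — a sketch matching the 10-schema benchmark configuration (slave_parallel_workers is new in 5.6; 0 means the traditional single-threaded applier):

```sql
-- On the slave: one applier worker per schema for this workload
STOP SLAVE;
SET GLOBAL slave_parallel_workers = 10;
START SLAVE;

-- Confirm the slave threads are running with the new setting:
SHOW SLAVE STATUS\G
```

Since parallelism is per schema here, workloads whose writes all land in one database will see little benefit from raising the worker count.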

[Read more]
Twitter open-sources its MySQL tweaks

Microblogging site reveals some of the enhancements it's made to the database technology it uses to store user-generated data.

Twitter MySQL published

It’s been a long time coming, but we’ve just published our Twitter-internal MySQL development branch on GitHub. The initial publication and announcement isn’t meant to be groundbreaking — we’re laying the groundwork to be able to collaborate publicly. It comes with an initial set of changes we’ve made to support our production workloads and make DBAs’ lives easier, which you can read about in the README.

