The InnoDB Mutex, part 3

I repeated the tests from part 1 and part 2 on a system with an older/smaller CPU but more recent versions of glibc and Linux. This host has two sockets with 6 cores per socket, for 24 vCPUs with HT enabled, and runs Fedora with glibc 2.17 and Linux kernel 3.11.9-200. Tests were run for up to 512 threads; there was an odd segfault at 1024 threads with TTASFutexMutex that I chose not to debug. Tests also used 1, 4 and 12 mutexes rather than 1, 4 and 16 because of the smaller vCPU count, and the longest lock hold duration was ~6000 nsecs rather than ~4000 because of the different CPU.
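
To make the setup concrete, here is a minimal sketch of the kind of microbenchmark described above. All names, constants and structure are my own assumptions, not the actual test program: each thread repeatedly locks one of a small pool of mutexes, busy-waits for roughly the lock hold duration, unlocks, and the harness reports aggregate locks per second. The pthread default and pthread adaptive variants differ only in the mutex attribute set at init time.

  // Sketch of a lock-hold microbenchmark (hypothetical code, not the real harness).
  // Build with: g++ -O2 -std=c++11 -pthread bench.cc
  #include <pthread.h>
  #include <atomic>
  #include <chrono>
  #include <cstdio>
  #include <thread>
  #include <vector>

  static const int kMutexes = 12;        // 1, 4 or 12 mutexes in these tests
  static const int kThreads = 48;        // swept from 1 up to 512 in these tests
  static const long kHoldNsec = 6000;    // approximate lock hold duration
  static const long kLoopsPerThread = 100000;

  static pthread_mutex_t mutexes[kMutexes];
  static std::atomic<long> total_ops(0);

  // Busy-wait so the hold time is spent on-CPU rather than sleeping.
  static void spin_for_nsec(long nsec) {
    auto start = std::chrono::steady_clock::now();
    while (std::chrono::duration_cast<std::chrono::nanoseconds>(
               std::chrono::steady_clock::now() - start).count() < nsec) {
    }
  }

  static void worker(int id) {
    pthread_mutex_t* m = &mutexes[id % kMutexes];
    for (long i = 0; i < kLoopsPerThread; i++) {
      pthread_mutex_lock(m);
      spin_for_nsec(kHoldNsec);  // simulate work done while holding the lock
      pthread_mutex_unlock(m);
    }
    total_ops += kLoopsPerThread;
  }

  int main() {
    // "pthread default" vs "pthread adaptive" differ only in this attribute
    // (PTHREAD_MUTEX_ADAPTIVE_NP is a glibc extension).
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
    for (int i = 0; i < kMutexes; i++) pthread_mutex_init(&mutexes[i], &attr);

    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> threads;
    for (int i = 0; i < kThreads; i++) threads.emplace_back(worker, i);
    for (std::thread& t : threads) t.join();

    double secs = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
    std::printf("%.0f locks/second\n", total_ops.load() / secs);
    return 0;
  }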

My conclusions from this platform were less strong than from the previous tests.

  • The InnoDB syncarray mutex still has lousy behavior at high concurrency but it is less obvious here because I did not run a test for 1024 threads.
  • The pthread default mutex is usually better than the pthread adaptive mutex, but in at least one case pthread adaptive was much better.
  • The new custom InnoDB mutex, TTASFutexMutex, was occasionally much worse than the alternatives. From reading the 5.7 code, the choice to use it appears to be a compile-time decision. If only to make it easier to debug performance problems, this choice should be a my.cnf option, and it isn't clear to me that TTASFutexMutex is ready for prime time. A rough sketch of the spin-then-futex idea follows this list.
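
For context on the last bullet, here is a rough sketch of the general idea behind a spin-then-futex (TTAS + futex) mutex. This is my own simplification for illustration, not the TTASFutexMutex code from 5.7: spin with test-and-test-and-set for a bounded number of rounds, then park the thread with FUTEX_WAIT, and only issue FUTEX_WAKE on unlock when a waiter may be sleeping.

  // Hypothetical spin-then-futex mutex sketch (Linux-only, not the InnoDB code).
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <atomic>

  class SpinFutexMutexSketch {
   public:
    void lock() {
      // State: 0 = unlocked, 1 = locked, 2 = locked with possible waiters.
      int expected = 0;
      if (word_.compare_exchange_strong(expected, 1)) return;  // uncontended fast path

      // Bounded test-and-test-and-set spin phase: read first, CAS only when free.
      for (int i = 0; i < kSpinRounds; i++) {
        if (word_.load(std::memory_order_relaxed) == 0) {
          expected = 0;
          if (word_.compare_exchange_strong(expected, 1)) return;
        }
      }

      // Slow path: mark the lock as contended and sleep in the kernel.
      // FUTEX_WAIT returns immediately if the word no longer equals 2.
      while (word_.exchange(2) != 0) {
        futex(FUTEX_WAIT_PRIVATE, 2);
      }
    }

    void unlock() {
      // Only pay for a FUTEX_WAKE syscall when a waiter may be sleeping.
      if (word_.exchange(0) == 2) {
        futex(FUTEX_WAKE_PRIVATE, 1);
      }
    }

   private:
    static const int kSpinRounds = 100;  // tuning knob; real code adds pauses/backoff

    long futex(int op, int val) {
      return syscall(SYS_futex, reinterpret_cast<int*>(&word_), op, val,
                     nullptr, nullptr, 0);
    }

    std::atomic<int> word_{0};
  };

The interesting trade-offs are the spin count, any pause or backoff during the spin, and how often threads fall through to the futex system calls; getting those wrong is one plausible way such a mutex could lose to the glibc implementations under high contention.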

Graphs for 0 nsec lock hold

Graphs for 1000 nsec lock hold

Graphs for 6000 nsec lock hold