While the passionate debate about the best distribution, MySQL vs MariaDB on a 120K Euro server (here), is still running, I wanted to share the story of a drop in InnoDB insert performance for one of our clients, whose tables hold between 10 and 100 billion rows. In most cases a table of this size is simply a no-go in a relational database.
So why did it work so far? It is a very simple log table with optimized column types (a DDL sketch follows the list):

c1 int, c2 tinyint, c3 mediumint, c4 binary(1), c5 float

- c2 avg cardinality 1000
- c3 avg cardinality 10000
- Partitioned by range on c1, where c1 increases with the loading date
- InnoDB compression with an 8K block size
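To make this concrete, here is a minimal DDL sketch under assumptions: the table name log_table, the secondary indexes on c2 and c3 (suggested by the cardinality figures above), and the partition boundaries are illustrative, not the client's actual schema.

CREATE TABLE log_table (
  c1 INT NOT NULL,        -- increases with the loading date
  c2 TINYINT NOT NULL,    -- avg cardinality 1000 (per the post)
  c3 MEDIUMINT NOT NULL,  -- avg cardinality 10000 (per the post)
  c4 BINARY(1),
  c5 FLOAT,
  KEY k_c2 (c2),          -- assumed secondary index
  KEY k_c3 (c3)           -- assumed secondary index
) ENGINE=InnoDB
  -- 8K compressed block size; requires innodb_file_per_table=ON
  -- (and the Barracuda file format on older MySQL versions)
  ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8
PARTITION BY RANGE (c1) (
  PARTITION p2013_01 VALUES LESS THAN (20130201),  -- illustrative boundaries
  PARTITION p2013_02 VALUES LESS THAN (20130301),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);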
A first telling metric when inserting a single row:
- Into a 1G partition we touch about 400 …
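One way to reproduce that kind of measurement, as a hedged sketch: snapshot the InnoDB page counters, run a single-row insert, and diff the counters. The counter names are standard MySQL status variables; the insert values and the log_table name come from the illustrative DDL above, and on a busy server the deltas will include other sessions' activity.

-- Snapshot page counters before the insert.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_pages_read';

-- Single-row insert into the (illustrative) log table.
INSERT INTO log_table (c1, c2, c3, c4, c5)
VALUES (20130115, 42, 4242, 0x01, 3.14);

-- Snapshot again and subtract: the delta in read requests approximates
-- logical pages touched; the delta in Innodb_pages_read approximates
-- physical page reads from disk.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_pages_read';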