Troubleshooting XA transactions in MySQL

This is an exciting piece of troubleshooting related to XA transactions, so let's dive in. First, let me provide a little background.

For one of our clients, a DROP TABLE was stuck waiting for a metadata lock. On debugging, we found that one of the XA transactions was holding a shared write lock on the table, causing metadata lock waits for all the other local transactions.

Let us view the metadata locks from the Performance Schema.

mysql> select OBJECT_TYPE,OBJECT_SCHEMA,OBJECT_NAME, LOCK_TYPE,LOCK_STATUS,SOURCE from performance_schema.metadata_locks\G
*************************** 1. row ***************************
OBJECT_TYPE: TABLE
OBJECT_SCHEMA: sbtest
OBJECT_NAME: sbtest_table1
LOCK_TYPE: SHARED_WRITE
LOCK_STATUS: GRANTED
SOURCE: xa.cc:284
*************************** 2. row ***************************
OBJECT_TYPE: TABLE …
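
The excerpt cuts off here, but a dangling XA transaction left in PREPARED state after a client disconnect will keep holding its metadata locks until it is explicitly ended. A minimal sketch of locating and rolling it back (the xid below is a placeholder; use the value reported by XA RECOVER):

mysql> XA RECOVER CONVERT XID;   -- list prepared XA transactions, xid shown in hex
mysql> XA ROLLBACK X'ABCD';      -- placeholder xid taken from the output above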
[Read more]
MySQL “No space left on device from storage engine”

We had planned to archive data in order to improve DB performance and reclaim space. To find the best compression method, we evaluated compression in InnoDB and TokuDB, and started benchmarking the compression ratio between the two engines.

Everything went well for some time, but after a few hours we got an error saying data could not be inserted into the TokuDB table because storage was full. This was strange, since the host had plenty of free space.


Table structure:

mysql> show create table mydbops.tokudb\G
*************************** 1. row ***************************
       Table: tokudb
Create Table: CREATE TABLE `tokudb` (
  `ID` int DEFAULT NULL,
  `Name` longtext,
  `Image` blob
) ENGINE=TokuDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (2.18 sec)

mysql> show create table mydbops.innodb\G
*************************** 1. row …
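
The excerpt ends before the root cause, but one plausible first check (our assumption, not necessarily the post's conclusion) is TokuDB's own free-space guard: the engine stops accepting writes once free space on its data filesystem falls below tokudb_fs_reserve_percent (5% by default), and its temporary files may live on a different, smaller volume:

mysql> SHOW GLOBAL VARIABLES LIKE 'tokudb_fs_reserve_percent';  -- default 5 (%)
mysql> SHOW GLOBAL VARIABLES LIKE 'tokudb_tmp_dir';             -- may point to a smaller volume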
[Read more]
Handling case-sensitive columns in pt-archiver

To copy particular columns of a table to another table or server, one option is to export the data as CSV and import it into the target table. But when the table is large and only a subset of the data is needed, this approach puts a heavy load on the server, since it requires a huge table scan.

To overcome this, pt-archiver can copy the data from the source table to the destination, either in full or only for the required columns. It can also do this in a controlled, throttled manner, so there is no performance impact even during production hours.

Source table structure:

mysql> show create table source\G
*************************** 1. row ***************************
       Table: source
Create Table: CREATE TABLE `source` (
  `id` int unsigned NOT NULL …
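
As a sketch of the kind of invocation the post describes (the hosts, schema, and column list here are hypothetical, and the actual handling of the case-sensitive column is behind the cut), pt-archiver can copy only the required columns in small, committed batches:

pt-archiver \
  --source h=source-host,D=mydb,t=source \
  --dest   h=target-host,D=mydb,t=destination \
  --columns "id,Name" \
  --where "1=1" \
  --limit 1000 --commit-each \
  --no-delete --progress 1000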
[Read more]
Troubleshooting a unique key addition during pt-online-schema-change

We have all tried various alternative methods for modifying table structure, but pt-online-schema-change (pt-osc) is the most convenient and preferred tool for performing an ALTER online, and it offers granular control too. However, it can lead to data loss if proper precautions are not taken.

In this blog, we are going to add a unique key on a column using pt-osc; the table structure is shared below.

mysql> show create table test\G
*************************** 1. row ***************************
       Table: test
Create Table: CREATE TABLE `test` (
  `Personid` int(11) NOT NULL AUTO_INCREMENT,
  `LastName` varchar(255) NOT NULL,
  `FirstName` varchar(255) DEFAULT NULL,
  `Age` int(11) DEFAULT NULL,
  PRIMARY KEY (`Personid`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

I have inserted 1000 rows of data using …
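
Because pt-osc copies rows into the new table with INSERT LOW_PRIORITY IGNORE, any rows that collide on the new unique key are silently discarded, which is exactly the data-loss risk mentioned above. A sketch of the precaution and a dry run (the schema name and index name are hypothetical; newer pt-osc versions also guard this with --check-unique-key-change):

mysql> SELECT LastName, COUNT(*) FROM test GROUP BY LastName HAVING COUNT(*) > 1;

pt-online-schema-change \
  --alter "ADD UNIQUE INDEX uk_lastname (LastName)" \
  D=testdb,t=test \
  --dry-run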

[Read more]
MySQL ERROR Log Table Explained

For decades we have been reading the MySQL error log from a file on the server; whenever there is an issue in MySQL or an unexplained restart, the error log is generally the first place we look.

By default the MySQL error log can be found at /var/log/mysqld.log, or its location can be explicitly configured using the log_error variable.

A few drawbacks of using the MySQL error log as a FILE:

  • It is easy to miss genuine errors while reading through lengthy output.
  • Filtering errors for a particular date or timeframe is difficult.
  • Developers cannot be given access to the DB server's filesystem, for fear of mishandling the servers.

To overcome the above issues, from MySQL …
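
The sentence is cut off, but the table in the title is presumably performance_schema.error_log, available from MySQL 8.0.22 onward. A small sketch of the kind of filtering that is painful with a flat file:

mysql> SELECT LOGGED, PRIO, ERROR_CODE, SUBSYSTEM, DATA
         FROM performance_schema.error_log
        WHERE PRIO IN ('Error','Warning')
          AND LOGGED >= NOW() - INTERVAL 1 DAY
        ORDER BY LOGGED DESC LIMIT 10;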

[Read more]
Say Hello to Libcoredumper – A New Way to Generate Core Dumps, and Other Improvements

In a perfect world, we expect all software to run flawlessly and never have problems such as bugs and crashes. We also know that this perfect world doesn’t exist and we better be as prepared as possible to troubleshoot those types of situations. Historically, generating core dumps has been a task delegated to the kernel. If you are curious about how to enable it via Linux kernel, you can check out Getting MySQL Core file on Linux. There are a few drawbacks that pose either a limitation or a huge strain to get it working, such as:

  • System-wide configuration is required, and this is not something a DBA always has access to.
  • It is difficult or impossible to enable it for a specific binary only; standard methods enable it for every piece of software running on the box.
  • Nowadays, with cloud and containers, this task has become even …
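
For context, and to the best of our knowledge (the excerpt ends before the how-to), Percona Server for MySQL 8.0.21+ enables libcoredumper through a single my.cnf option with no kernel changes; the path below is a hypothetical example:

[mysqld]
# Write core dumps via libcoredumper instead of relying on the kernel
coredumper = /var/lib/mysql-core/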
[Read more]
Rows scanned equals 1, is the query optimally tuned?

A few days ago one of our interns at @mydbops reached out to me with a SQL query. According to the execution plan, the query scans only one row, yet it was not performing optimally.

Below are the SQL query and its explain plan (MySQL 5.7).

select username, role from user_roles where username= '9977223389' ORDER BY role_id DESC LIMIT 1;

Execution plan and table structure

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: user_roles
   partitions: NULL
         type: index
possible_keys: NULL
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 1
     filtered: 10.00
        Extra: Using where
1 row in set, 1 warning (0.00 sec)

mysql> show create table user_roles\G
*************************** 1. row ***************************
       Table: …
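
The excerpt truncates the table structure, but the plan above already hints at the catch: possible_keys is NULL, so there is no index on username. Presumably the PRIMARY KEY is (role_id) or begins with it, so the optimizer chose a reverse scan of the PRIMARY key to satisfy ORDER BY role_id DESC LIMIT 1; rows: 1 is only the LIMIT-driven estimate, and in the worst case the scan walks the whole table before a row matches the WHERE clause. Assuming the column types allow it, a composite index is the usual fix (index name hypothetical):

mysql> ALTER TABLE user_roles ADD INDEX idx_username_role_id (username, role_id);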
[Read more]
MySQL Group Replication and its Memory consumption (troubleshooting).

This blog is about an issue encountered by our Remote DBA Team on one of our production servers. We had a MySQL 5.7 Group Replication setup in single-primary (writer) mode with a cluster size of 3. Due to OOM, the MySQL process on the primary node kept getting killed, and this repeated over time.

We all know about OOM (out of memory): the OOM killer is a mechanism the Linux kernel employs when the system is critically low on memory.

On a dedicated DB server, when the OOM killer triggers, the direct impact falls on the mysqld process, since it is usually the most memory-consuming one.

Below is the detailed analysis we made to tackle the OOM issue.

DB Environment:

  • Service – Group Replication Cluster
  • Cluster Nodes – 3
  • GR mode – Single Primary …
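
With the excerpt cut short, a reasonable first step in any mysqld OOM investigation (assuming Performance Schema memory instrumentation is enabled) is to rank the internal memory consumers; in Group Replication setups the XCom message cache is a frequent suspect near the top:

mysql> SELECT EVENT_NAME,
              ROUND(CURRENT_NUMBER_OF_BYTES_USED/1024/1024, 2) AS current_mb
         FROM performance_schema.memory_summary_global_by_event_name
        ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC LIMIT 10;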
[Read more]
Get the most IOPS out of your physical volumes using LVM.

I hope everyone is aware of LVM (Logical Volume Manager), an extremely useful tool for managing storage at various levels. LVM basically works by layering abstractions on top of physical storage devices.

Below is a simple diagram of these LVM layers:

         sda1  sdb1   (PV:s on partitions or whole disks)
           \    /
            \  /
          Vgmysql      (VG)
           / | \
         /   |   \
      data  log  tmp  (LV:s)
       |     |    |
      xfs  ext4  xfs  (filesystems)

IOPS is an extremely important resource; when it comes to storage, it defines the performance of the disk. Let's not forget PIOPS (Provisioned IOPS), one of the major selling points of AWS and other cloud vendors for production machines …
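
Presumably the technique the post builds toward is striping a logical volume across multiple physical volumes, so that I/O is spread over all underlying disks and their IOPS add up. A sketch with hypothetical device names, reusing the VG name from the diagram:

# Prepare two disks and pool them into one volume group
pvcreate /dev/sda1 /dev/sdb1
vgcreate Vgmysql /dev/sda1 /dev/sdb1
# -i 2 stripes the LV across both PVs; -I 64 sets a 64 KiB stripe size
lvcreate -n data -L 100G -i 2 -I 64 Vgmysql
mkfs.xfs /dev/Vgmysql/data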

[Read more]
TaskMax limit affects MySQL connections

Recently we were bitten by a systemd limitation on the number of “Tasks” created per unit, i.e., per process. This count includes both kernel threads and user-space threads, with each thread counted individually.

I am writing this blog as a reference for anyone who might come across this limitation.

We have been actively migrating DB instances from one DC to a newly built DC. The instances in the new DC were provisioned with the latest hardware and the latest Debian OS. Below is the detailed spec of the system.

RAM       : 244G
Cores     : 44
Hard disk : SSD
IOPS      : 120K
OS        …
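
The spec listing is truncated, but the fix for the limitation itself is worth sketching: check the effective TasksMax for the MySQL unit and raise it with a systemd drop-in (the unit name may be mysql or mysqld depending on the packaging):

# Inspect the effective per-unit task limit
systemctl show -p TasksMax mysql

# Create a drop-in override rather than editing the unit file directly
sudo systemctl edit mysql
# ...and add:
# [Service]
# TasksMax=infinity

sudo systemctl daemon-reload
sudo systemctl restart mysql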

[Read more]