Wednesday, April 6, 2011

Tracking long-running transactions in MySQL

Sometimes I want to trace transactions that take a long time to execute. If a transaction holds row locks for 30 seconds, all other clients requesting locks on the same rows will be blocked for up to 30 seconds (or will get "Lock wait timeout" errors if you set innodb_lock_wait_timeout lower). This is serious.

The first step in solving such issues is identifying which queries the problematic transactions execute. But tracking long-running transactions is not as easy as tracking slow queries. Suppose you execute the following transaction.
[client1]$ mysql --host=mysql_server db1
mysql> BEGIN;
mysql> SELECT * FROM t1 WHERE id=2 FOR UPDATE;
... (do nothing for 10 seconds)
mysql> UPDATE t1 SET value=20 WHERE id=2;
mysql> COMMIT;
This transaction takes more than 10 seconds to complete, but each individual query finishes very quickly, so nothing is written to the slow query log. Analyzing the slow query log is not helpful in this case.

If you enable the general query log, all queries, including the above, are written. But this hurts performance (writes to the general query log are serialized inside MySQL), and logging every query makes the log file huge. Analyzing such large log files is not much fun. In addition, neither query execution time nor error code is written to the general query log, so the general query log alone cannot tell you whether the above SELECT ... FOR UPDATE took 10 seconds or finished quickly.

How about analyzing statement-based binary logs? Unfortunately this is not always helpful. First, uncommitted transactions are not written to the binary log, and there are many cases where a transaction runs for a long time and finally rolls back. Second, SELECT statements (including SELECT ... FOR UPDATE / LOCK IN SHARE MODE) are not written to the binary log. When you use locking reads, many transactions are likely to be blocked, and you probably want to identify which SELECT statement caused the problem.

What about SHOW ENGINE INNODB STATUS? It prints active transactions like below.
---TRANSACTION 1B43B50, ACTIVE 24 sec, process no 7388, OS thread id 1235609920 starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 23, query id 140396660 client2.example.com 192.168.0.2 root Updating
update t1 set value=100 where id=2
------- TRX HAS BEEN WAITING 24 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 213041 n bits 80 index `PRIMARY` of table `test`.`t1`
trx id 1B43B50 lock_mode X locks rec but not gap waiting
Record lock, heap no 3 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
0: len 4; hex 80000002; asc ;;
1: len 6; hex 000001b43b4e; asc ;N;;
2: len 7; hex 810000012d011c; asc - ;;
3: len 4; hex 80000002; asc ;;
------------------

---TRANSACTION 1B43B4F, ACTIVE 34 sec, process no 7388, OS thread id 1096673600
2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 22, query id 140396658 client1.example.com 192.168.0.1 root
You can identify that a client from 192.168.0.2 was executing "update t1 set value=100 where id=2" and had been waiting 24 seconds to acquire a lock. But there is no information about which queries held the lock. A client from 192.168.0.1 (transaction 1B43B4F) had been running a transaction for 34 seconds, so it was probably the one holding the lock. But there is no information about why it ran for 34 seconds. In this case I want to see all queries that 1B43B4F executed so far.

If you can modify application programs, tracing long-running transactions is not so difficult. Adding logic like the following will help.

1. Recording the transaction start time (t1)
2. Pushing each query (with its timestamp) executed by the transaction into an in-memory queue
3. Recording the transaction end time (t2)
4. If (t2 - t1) exceeds N seconds, printing all queries that the transaction executed
5. Clearing the in-memory queue
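The steps above can be sketched as a thin wrapper around a database connection. Everything below is hypothetical illustration (class and method names are mine; `conn` stands for any DB-API-style connection object):

```python
import time

class TracingConnection:
    """Hypothetical wrapper that records every query in a transaction and
    dumps them all if the whole transaction exceeded a time threshold."""

    def __init__(self, conn, threshold_sec=4.0):
        self._conn = conn
        self._threshold = threshold_sec
        self._queries = []     # step 2: in-memory queue of (timestamp, sql)
        self._started = None

    def execute(self, sql, *args):
        if self._started is None:
            self._started = time.time()          # step 1: t1
        self._queries.append((time.time(), sql))  # step 2: push query
        return self._conn.execute(sql, *args)

    def commit(self):
        self._conn.commit()
        self._finish()

    def _finish(self):
        if self._started is None:
            return
        elapsed = time.time() - self._started     # step 3: t2 - t1
        if elapsed > self._threshold:             # step 4: print all queries
            for ts, sql in self._queries:
                print("%.6f %s" % (ts, sql))
        self._queries = []                        # step 5: clear the queue
        self._started = None
```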

The problem is that this approach is not feasible in many cases. Because long-running transactions may be executed from any client, you have to implement the tracing logic on all application servers. Modifying core database access code and re-deploying it on all application servers is not fun. If you use O/R mappers, things become even more difficult. Modifying database driver programs on all application servers? I don't want to do that.


For these reasons, I thought the most practical approach is tracing slow transactions on the MySQL server side, without modifying any existing programs (neither client programs nor the MySQL server itself). Fortunately there is a way to do that: capturing MySQL packets and tracking transactions and queries. The above 1-5 tracing algorithm works there as well.
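Recovering query text from captured packets is straightforward in the classic MySQL client/server protocol: each packet starts with a 3-byte little-endian payload length and a 1-byte sequence id, and a plain query is a COM_QUERY (0x03) payload. A minimal sketch of that decoding step (Python for illustration only; assumes no compression or SSL and a single-frame packet):

```python
COM_QUERY = 0x03  # classic-protocol command byte for a plain query

def parse_com_query(packet):
    """Decode one MySQL client->server packet and return the SQL text if it
    is a COM_QUERY, else None.  Assumes the classic protocol, no
    compression/SSL, and a packet that fits in a single frame."""
    if len(packet) < 5:
        return None
    # 3-byte little-endian payload length, then 1-byte sequence id
    length = packet[0] | (packet[1] << 8) | (packet[2] << 16)
    payload = packet[4:4 + length]
    if len(payload) == length and payload[0] == COM_QUERY:
        return payload[1:].decode("utf-8", "replace")
    return None
```

The real work in a capture tool is reassembling TCP streams and pairing each request with its response, but the per-packet decoding is essentially this.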
I implemented a tool, MySlowTranCapture, and published it on GitHub. Here is an example output.
[mysql-server]# myslowtrancapture -i eth0
Monitoring eth0 interface..
Listening port 3306..
Logging transactions that take more than 4000 milliseconds..

From 192.168.0.1:24441
2011/02/23 09:12:17.258307 ->
begin
2011/02/23 09:12:17.258354 <-
GOT_OK
2011/02/23 09:12:17.264797 ->
select * from diary where diary_id=100 for update
2011/02/23 09:12:17.265087 <-
GOT_RES
2011/02/23 09:12:17.277622 ->
select 1
2011/02/23 09:12:17.277713 <-
GOT_RES
2011/02/23 09:13:01.232620 ->
update diary set diary_date=now() where diary_id=100
2011/02/23 09:13:01.232960 <-
GOT_OK
2011/02/23 09:13:17.360993 ->
commit

From 192.168.0.2:24442
2011/02/23 09:12:20.969288 ->
begin
2011/02/23 09:12:20.969483 <-
GOT_OK
2011/02/23 09:12:20.977699 ->
update diary set diary_date=now() where diary_id=100
2011/02/23 09:13:11.300935 <-
GOT_ERR:Lock wait timeout exceeded; try restarting transaction
2011/02/23 09:13:13.136967 ->
rollback
--------------------
It is easy to identify that the first transaction caused the problem. It acquired an exclusive lock on diary_id=100 at 09:12:17.264797 and did not release it until 09:13:17.360993. All queries in the transaction completed within a millisecond, so the transaction was probably sleeping, or spent a long time on other logic (e.g. accessing remote servers) before closing the transaction.

MySlowTranCapture uses libpcap to capture network packets, like tcpdump. Since libpcap sometimes drops packets, capturing 100% of transactions is not guaranteed.
MySlowTranCapture uses roughly 10-30% of the CPU resources that the mysqld process uses. This is not bad for short-term analysis (when I tested tcpdump, it used 140% of mysqld's CPU resources, and most of the CPU time seemed to be spent writing captured packets to local files).
I sometimes use this tool for debugging, too. Developers sometimes execute unnecessarily many SQL statements within a single transaction. MySlowTranCapture is useful for tracing such transactions on development servers by setting -t lower (e.g. 50ms).
