Tuesday, April 1, 2014

Semi-Synchronous Replication at Facebook

 After intensive testing and hacking, we started using Semi-Synchronous MySQL Replication in Facebook production environments. Semi-Synchronous Replication itself has been available since MySQL 5.5 (GA was released 3.5 years ago!), but I'm pretty sure not many people have used it in production so far. Here is a summary of our objectives, enhancements, and usage patterns. If you want to hear more in depth, please feel free to ask me at Percona Live this week.

Objective / Why Semisync?

  The objective of Semi-Synchronous Replication is simple -- master failover without data loss, and without requiring full durability.

 First, let me describe why the objective is difficult without semisync.

 There are a couple of fast slave promotion (master failover) solutions. My own MHA covers both fully automated and semi-automated MySQL failover. Fully automated means both failure detection and slave promotion are done automatically; semi-automated means failure detection is not done automatically, but slave promotion is done with one command. Time to detect failure is approximately 10 seconds, and the actual failover takes around 5 to 20 seconds, depending on what you do during failover (i.e. forcing power off of the crashed master will take at least a few seconds). Total downtime can be less than 30 seconds if failover works correctly. I use the term "fast failover" in this post, which includes both automated and semi-automated master failover.
 In MySQL 5.6, GTID based failover is also possible. Oracle's official tool mysqlfailover automates MySQL master failover using GTID. The latest version of MHA also supports GTID.

 Both mysqlfailover and MHA rely on MySQL replication, and MySQL replication is asynchronous. So there is a very serious disadvantage -- a potential data loss risk on master failover. If you use normal MySQL replication and do automated master failover with MHA/mysqlfailover, you can fail over quickly (a few seconds with MHA), but you always have a risk of losing recently committed data.

 If you don't want to take any risk of losing data, you can't do fast master failover with normal MySQL replication. You have to take the following steps in case of master failure.

- Always set fully durable settings on the master. By fully durable I mean setting innodb_flush_log_at_trx_commit=1 and sync_binlog=1.
- On master crash, wait for a while (10~30 minutes) until the crashed master recovers. Recovery takes a long time because it involves an OS reboot, storage and filesystem recovery, and InnoDB crash recovery.
- If the crashed master recovers, you can continue service without losing any data. Since all data exists on the master, slaves can continue replication. BTW, official 5.6 had a bug causing all slaves to break in this scenario, but it was fixed in 5.6.17.
- If the crashed master doesn't recover (H/W failure etc), you need to promote one of the slaves to a new master. There is a risk of losing some data, but you don't have any other choice.

 This "safer" approach has two issues.
- Longer downtime. This is because you have to wait for master's recovery.
- You can't eliminate risks of losing data. If master is dead and never recovers, your risk of losing data is the same as doing fast failover.

 So, in bad cases, you suffer from both longer downtime and data loss.

 Semi-Synchronous Replication helps prevent losing data.

 If you do not care about data loss risk, there is no reason to use Semi-Synchronous Replication. You can use normal MySQL replication and do fast failover with mysqlfailover or MHA. Facebook is one of the companies that care about data loss risk with MySQL, which is why we were very interested in Semi-Synchronous Replication.

 Semisync replication originated at Google in 2007. Official MySQL has supported it since 5.5, though the actual implementation algorithm is substantially different from Google's.

 MySQL Cluster and Galera offer synchronous replication protocols in different ways. I do not cover them in this blog post.

 Semi-Synchronous Replication currently has two different algorithms -- normal semisync and loss-less semisync. Let me explain the differences.

Differences between Normal Semisync and Loss-Less Semisync

 Loss-Less Semisync is a new semisync feature supported in official MySQL 5.7. The original implementation was done by Zhou Zhenxing as the "Enhanced Semisync" project, and was also filed as a bug report. Oracle implemented it based on his idea and named it Loss-Less Semisync. So Enhanced Semisync and Loss-Less Semisync mean the same thing; I say Loss-Less Semisync in this post.

 Normal semisync and loss-less semisync work as below.

1. binlog prepare (doing nothing)
2. innodb prepare (fsync)
3. binlog commit (writing to fscache)
4. binlog commit (fsync)
5. loss-less semisync wait (AFTER_SYNC)
6. innodb commit (releasing row locks, changes are visible to other users)
7. normal semisync wait (AFTER_COMMIT)

 On normal semisync (AFTER_COMMIT), committing to InnoDB is done before waiting for the ack from a semisync slave, so the committed rows are visible to applications even though the semisync slaves may not have received the data. If the master crashes and none of the slaves received the data, the data is lost even though applications may have seen it. This is called a phantom read, and in many cases it's problematic.

 Loss-less semisync (AFTER_SYNC) avoids the problem. Loss-less semisync commits to InnoDB after getting an ack from one of the semisync slaves. So when committed data is visible to applications, one of the semisync slaves has received it. The remaining risk is much smaller: data is lost only if both the master and the latest semisync slave go down at the same time, which is much less likely to happen than with normal semisync.

 To avoid data loss and phantom reads, normal semisync can't meet your expectations; Loss-Less Semisync is needed.
 With Loss-Less Semi-Synchronous replication, committed data should be on one of the slaves, so you can recover from the latest slave. You can always do fast failover here.
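 For reference, official MySQL 5.7 exposes this choice through the rpl_semi_sync_master_wait_point variable (AFTER_SYNC or AFTER_COMMIT). A minimal illustration, assuming the semisync master plugin is already loaded:
  mysql> SET GLOBAL rpl_semi_sync_master_wait_point = 'AFTER_SYNC';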

Reduced Durability

 When you do fast failover, you can set reduced durability settings on the master as well as on slaves. Reduced durability means innodb_flush_log_at_trx_commit != 1 and sync_binlog != 1. With Semi-Synchronous replication, you can immediately start failover when the master is down. When promoting a slave to the new master, identify the latest slave (highly likely one of the Semi-Synchronous slaves, but not guaranteed) and apply differential logs to the new master. The master's durability does not matter here, because there is no way to access the master's data during failover. So you can safely reduce durability. Reducing durability has a lot of benefits (a settings sketch follows the list below).
- Reducing latency on (group) commit because it doesn't wait for fsync().
- Reducing IOPS because the number of fsync() calls is significantly reduced: from every commit to every second. Overall disk workloads can be reduced. This is especially helpful if you can't rely on battery/flash backed write cache.
- Reducing write amplification. Write volume can be reduced a lot, even less than half in some cases. This is important especially when using flash devices, because less write volume increases flash life expectancy.
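 As a minimal sketch of what reduced durability looks like (the exact values are illustrative, not our production configuration):
  mysql> SET GLOBAL innodb_flush_log_at_trx_commit = 2; /* flush the InnoDB log to disk about once per second */
  mysql> SET GLOBAL sync_binlog = 0;                     /* let the OS decide when to flush the binlog */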

Requirements for Semisync Deployment

 To make semisync work, you need at least one semisync reader (a slave with semisync enabled) within the same (or a very close) datacenter as the master. This is for latency. When semisync is enabled, the round-trip time (RTT) between the master and one of the semisync slaves is added to transaction commit latency. If none of the semisync slaves is located in a close datacenter, the RTT may take tens or hundreds of milliseconds, which means you can commit only 10~100 times per second from a single client. For most environments, this will not work. You need a reader within a close datacenter.

 To make fast failover work without data loss, you need to make sure Semi-Synchronous Replication is always enabled. MySQL semisync has a couple of cases where semisync gets disabled:
- Exceeding the timeout (waiting more than rpl_semi_sync_master_timeout milliseconds for an ACK from the semisync slaves)
- No semisync slave (can be controlled via rpl_semi_sync_master_wait_no_slave)
- Executing SET GLOBAL rpl_semi_sync_master_enabled=0

 If you want semisync to be always enabled, you need to make sure these scenarios won't happen: set an infinite or very long timeout, and have at least two semisync readers.
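 As a minimal sketch (values are illustrative, not our production settings), enabling semisync on the master and on a local reader looks like this:
  /* on the master */
  mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
  mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
  mysql> SET GLOBAL rpl_semi_sync_master_timeout = 36000000;  /* very long timeout, in milliseconds */
  mysql> SET GLOBAL rpl_semi_sync_master_wait_no_slave = 1;   /* keep waiting even if semisync readers drop */
  /* on each local semisync reader */
  mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
  mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
  mysql> STOP SLAVE IO_THREAD;
  mysql> START SLAVE IO_THREAD;  /* reconnect so the slave registers as a semisync reader */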
 

Facebook Enhancements to Semi-Synchronous Replication


 We spent a lot of time testing Semi-Synchronous Replication in 2013. We found some S1 bugs, serious performance problems, and some administration issues. Our MySQL Engineering team and Performance team worked on fixing the issues, and finally our Operations team deployed semisync in production.

 Here are our major enhancements.

Backporting Loss-Less Semisync from 5.7

 As described above, Loss-Less Semisync is needed to prevent data loss and phantom reads, so we backported the Loss-Less Semisync patch from official MySQL 5.7 to our Facebook MySQL 5.6 branch. It will be merged into the WebScaleSQL branch soon.

 Interestingly, when we tested semisync performance, loss-less semisync gave better throughput than normal semisync, especially when the number of clients was large. Normal semisync caused more mutex contention, which was alleviated with loss-less semisync. Since loss-less semisync has a better data protection mechanism, we concluded there is no reason to use normal semisync here.

Semisync mysqlbinlog

 Starting from MySQL 5.6, mysqlbinlog supports remote binlog backups via --raw and --read-from-remote-server. For remote binlog backups, mysqlbinlog works like a MySQL slave: it connects to a master, executes the BINLOG DUMP command, then receives binlog events via the MySQL replication protocol. This is useful when you want to take backups of the master's binary logs. A slave's relay logs and binary logs are not identical to the master's binary logs, so they can't directly be used as backups of the master's binary logs.
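 For example, a remote binlog backup can be taken like this (host, user, and binlog file name are placeholders):
  mysqlbinlog --raw --read-from-remote-server --stop-never \
    --host=master_host --user=repl_user -p binlog.000001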

 We extended mysqlbinlog to speak the semisync protocol. The reason for the enhancement is that we wanted to use "semisync mysqlbinlog" as a replacement for local semisync slaves. We usually run slaves in remote datacenters, and we don't always need local slaves to serve read requests or provide redundancy. On the other hand, as described in the "Requirements for Semisync Deployment" section above, in practice at least two local semisync readers are needed to make semisync work. We didn't want to run two additional dedicated slaves per master just for semisync. So we invented semisync mysqlbinlog and use it instead of semisync slaves, as shown in the figure below.




 Compared to semisync slave, semisync mysqlbinlog has a lot of efficiency wins.

- A semisync slave has lots of CPU overhead such as query parsing and making optimizer plans. semisync mysqlbinlog does not have such overhead.
- A semisync slave writes twice (relay log and binary log). semisync mysqlbinlog writes a binary log only.
- For a semisync slave, the way relay logs are written is not efficient: the I/O thread writes to the kernel buffer for each binlog event. A regular auto-committed transaction consists of three binlog events (query BEGIN, query body, and commit XID). When using InnoDB only, writing to the kernel buffer for every XID event is enough (though it does not cover DDL), which cuts the frequency of kernel buffer writes to less than 1/3. semisync mysqlbinlog can easily do such optimizations. We have not done it yet, but it is even possible to make mysqlbinlog send back the ACK before writing to a file, and the extension is easy.
- A slave causes contention between the SQL thread and the I/O thread, so the I/O thread itself slows down, which slows down semisync master throughput too. semisync mysqlbinlog does not have such overhead because there is no SQL thread.

 With a mysqlbinlog reader, the master failover step becomes a bit tricky. This is because mysqlbinlog is not a mysqld process, so it doesn't accept any MySQL commands, such as CHANGE MASTER. When doing master failover, it is highly likely that one of the local mysqlbinlog instances has the latest binary log events, and those events should be applied to the new master. The new MHA version (0.56) supports this feature.

 In this configuration, mysqlbinlog processes need to be highly available. If all semisync mysqlbinlog processes are down, semisync is either stopped or suffers from long wait times.


Reducing plugin_lock mutex contention

  Prior to MySQL 5.6.17, there was a performance bug where transaction commit throughput dropped significantly when there were many non-semisync slaves or binlog readers, even if there were only a few semisync readers. In typical deployments there are two or three semisync readers and multiple non-semisync readers, so the performance drop with many non-semisync readers was annoying.
 The performance drop was caused by "plugin_lock" MySQL internal mutex on master. For those who don't know, semisync is a plugin in MySQL, and it's not installed by default. The plugin_lock mutex was needed by semisync binlog dump threads only, but actually the mutex was held by all binlog dump threads. We looked into the problem further.
 First we tried replacing the plugin_lock mutex with a read/write lock. It actually did not help much: Linux profiling tools showed that plugin_lock still caused contention. During profiling, we learned that most/all glibc rw-locks have an internal lock (a mutex-like thing) on which threads can stall. The pattern is: get the lock, get exclusive access to the cache line to modify data, release the lock. This is relatively expensive for plugin_lock, since the critical section doesn't do any expensive I/O.

 So switching plugin_lock to a read/write lock was actually a bad idea. Instead, we needed to avoid taking the plugin related locks below as much as possible. There are four major plugin related lock calls in MySQL:
- plugin_lock
- plugin_lock_list
- plugin_unlock
- plugin_unlock_list

 We also noticed that the Delegate classes had read/write locks that caused very hot contention (especially Binlog_transmit_delegate::lock). The read/write lock protects a list, so switching to a lock-free list would probably be possible. BTW, we noticed that performance schema did not collect mutex statistics on the locks in the Delegate classes (bug#70577).

 The real problem was that all of the above locks were held not only by semisync binlog readers, but also by non-semisync binlog readers.

 Based on the above factors, we concluded that removing all plugin mutexes was not easy, so we decided to optimize so that these locks are held by semisync binlog readers only, and not by non-semisync binlog readers. Below is a benchmark result.



 The x-axis is the number of non-semisync binlog readers, and the y-axis is concurrent INSERT throughput from 100 clients. The number of semisync binlog readers was always 1 to 3. Detailed benchmark conditions are described in a bug report.
 Happily, our patches were finally merged into 5.6.17 and 5.7, so everybody can get the benefits easily.


 With all of the enhancements, we could get pretty good benchmark results with semisync.


 This is a mysqlslap insert benchmark on the master, with one semisync slave/mysqlbinlog running. The x-axis is the number of clients, and the y-axis is the number of inserts on the master. Enhanced means loss-less semisync.
 Normal slave is a traditional (non-semisync) slave. Enhanced mysqlbinlog is our semisync usage pattern. As you can see, loss-less semisync beats normal semisync thanks to the internal mutex contention reductions, and semisync mysqlbinlog beats the semisync slave because of its much lower overhead. This shows that loss-less semisync scales pretty well.
 

Conclusion and Future Plans

 After several performance improvements, Semi-Synchronous replication became good enough for us. From a performance point of view, I expect that single-threaded application performance will be the next low-hanging fruit. On our benchmarks, we got around ~2500 transaction commits per second with semisync (0.4ms per commit). Without semisync, it was easy to get ~10000 transaction commits per second (0.1ms per commit). Of course semisync adds RTT overhead, but on a local datacenter network the RTT is much lower than 0.3ms, so I think there is additional semisync overhead here. I will revisit this issue and work with the Oracle Replication team and outside experts.


Monday, March 31, 2014

MHA 0.56 is now available

I released MHA version 0.56 today. Downloads are available here. MHA 0.56 includes the features below.

  • Supporting MySQL 5.6 GTID. If GTID and auto-position are enabled, MHA automatically does failover with GTID SQL syntax instead of traditional relay log based failover. You don't need any explicit configuration within MHA to use GTID based failover.
  • Supporting MySQL 5.6 Multi-Threaded slave
  • Supporting MySQL 5.6 binlog checksum
  • Supporting a new section [binlogN]. In a binlog section, you can define mysqlbinlog streaming servers (a minimal config sketch follows this list). When MHA does GTID based failover, MHA checks the binlog servers, and if they are ahead of other slaves, MHA applies differential binlog events from the binlog server to the new master before recovery. When MHA does non-GTID based (traditional) failover, MHA ignores binlog servers. More details can be found in the documentation.
  • Supporting custom mysql and mysqlbinlog location
  • Adding ping_type=INSERT for checking connectivity for the master. This is useful if master does not accept any writes (i.e. disk error)
  • Added --orig_master_is_new_slave, --orig_master_ssh_user and --new_master_ssh_user for master_ip_online_change_script 
  • Added --skip_change_master,  --skip_disable_read_only, --wait_until_gtid_in_sync on masterha_manager and masterha_master_switch (failover mode).
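A rough sketch of a binlog server entry in the MHA application config (the host name is a placeholder, and other per-host parameters are omitted):
  [binlog1]
  hostname=binlog-server1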



Thursday, March 27, 2014

Speaking about MySQL5.6 and WebScaleSQL at Percona Live

At Percona Live, Steaphan Greene and I will talk about MySQL 5.6 and WebScaleSQL at Facebook.

2 April 1:20PM - 2:10PM @ Ballroom E
http://www.percona.com/live/mysql-conference-2014/sessions/mysql-56-facebook-2014-edition

In addition to that, I have two more talks this year.


Performance Monitoring at Scale
3 April 2:00PM - 2:50PM @ Ballroom G
http://www.percona.com/live/mysql-conference-2014/sessions/performance-monitoring-scale


Global Transaction ID at Facebook
4 April 12:50PM - 1:40PM @ Ballroom E
http://www.percona.com/live/mysql-conference-2014/sessions/global-transaction-id-facebook

Many people from Facebook are speaking at Percona Live this year. Please take a look at the interview from Percona to see what we are going to talk about.

I assume many of my blog subscribers have already heard about WebScaleSQL, which was announced this morning. The MySQL Conference in April is the biggest MySQL conference in the world, so it's perfect timing to release something and collaborate with experts. I hope to meet many people there.


Thursday, March 13, 2014

How sync_file_range() really works

 There is a relatively new and platform dependent flushing function called sync_file_range(). Some databases (not MySQL) use sync_file_range() internally.
  Recently I investigated stall issues caused by buffered writes and sync_file_range(). I learned a lot during the investigation, but I don't think these behaviors are well known to the public. Here I summarize my understanding.

Understanding differences between sync_file_range() and fsync()/fdatasync()

 sync_file_range() has some important behavior differences from fsync().
  • sync_file_range() has a flag to flush to disk asynchronously, while fsync() always flushes to disk synchronously. sync_file_range(SYNC_FILE_RANGE_WRITE) does async writes (async sync_file_range()), and sync_file_range(SYNC_FILE_RANGE_WRITE|SYNC_FILE_RANGE_WAIT_AFTER) does sync writes (sync sync_file_range()); see the sketch after these bullet points. With async sync_file_range(), you can *usually* call sync_file_range() very quickly and let Linux flush the pages to disk later. As I describe later, async sync_file_range() is actually not always asynchronous, and is sometimes blocked waiting for writeback. It is also important to note that I/O errors can't be reported back when using async sync_file_range().
  • sync_file_range() allows you to specify a file range (starting offset and size) to flush to disk, while fsync() always flushes all dirty pages of the file. Ranges are rounded to the page size. For example, sync_file_range(fd, 100, 300, SYNC_FILE_RANGE_WRITE) will flush the whole first 4KB page (offsets 0 to 4095), not just the 300 bytes starting at offset 100, because the minimum I/O unit is a page.
  • sync_file_range(SYNC_FILE_RANGE_WRITE|SYNC_FILE_RANGE_WAIT_AFTER) does not wait for metadata flushing. fsync() waits until flushing both data and metadata is done. fdatasync() skips flushing metadata if the file size does not change (fsync() also skips flushing metadata in that case, depending on the filesystem). sync_file_range() does not wait for metadata flushing even if the file size changes. If a file is not overwritten in place (i.e. it is appended), sync_file_range() does not guarantee the data can be recovered after a crash, while fsync()/fdatasync() do.
 sync_file_range() behavior depends highly on the kernel version and filesystem.
  • xfs does neighbor page flushing, in addition to the specified range. For example, sync_file_range(fd, 8192, 8192, SYNC_FILE_RANGE_WRITE) does not only trigger flushing pages #3 to #4, but also many more dirty pages (i.e. up to page #16). This works very well for HDD because the I/O unit size becomes bigger. In general, synchronously writing 1MB 1,000 times is much faster than writing 4KB 256,000 times. ext3 and ext4 don't do neighbor page flushing.
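 To make the two forms above concrete, here is a minimal sketch of the calls (Linux specific; needs _GNU_SOURCE and <fcntl.h>):
  /* async: request writeback of the first 1MB and return without waiting */
  sync_file_range(fd, 0, 1048576, SYNC_FILE_RANGE_WRITE);
  /* sync form used in this post: request writeback and wait for it to complete */
  sync_file_range(fd, 0, 1048576, SYNC_FILE_RANGE_WRITE | SYNC_FILE_RANGE_WAIT_AFTER);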

  sync_file_range() is generally faster than fsync() because it can control the dirty page range and skips waiting for metadata flushing. But sync_file_range() can't be used to guarantee durability, especially when the file size changes.

  A practical use of sync_file_range() is where you don't need full durability but you want to control (reduce) dirty pages. For example, Facebook's HBase uses sync_file_range() for compactions and HLog writes. HBase does not need full durability (fsync()) per write because HBase relies on HDFS, and HDFS can recover from its replicas. Compactions write a huge volume of data, so periodically calling sync_file_range() makes sense to avoid burst writes. Calling sync_file_range() for 1MB at a time, 1,000 times, gives a more stable workload than flushing 1GB at once. RocksDB also uses sync_file_range().
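 Here is a minimal C sketch of that pattern, flushing about 1MB at a time during a large buffered write. The file name and sizes are just for illustration, and as the next section shows, the async call can still block on a busy disk:

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <unistd.h>

  /* Write a large file with buffered write(), asking the kernel to start
   * writeback every ~1MB so dirty pages don't pile up and burst later. */
  void write_with_periodic_flush(const char *path, const char *buf, size_t chunk, long chunks)
  {
      int fd = open(path, O_WRONLY | O_CREAT, 0644);
      off_t written = 0, flushed = 0;
      for (long i = 0; i < chunks; i++) {
          write(fd, buf, chunk);  /* buffered write; error checks omitted */
          written += chunk;
          if (written - flushed >= 1048576) {
              /* async request: start writeback of the range written since the last flush */
              sync_file_range(fd, flushed, written - flushed, SYNC_FILE_RANGE_WRITE);
              flushed = written;
          }
      }
      fdatasync(fd);  /* only if durability is needed at the very end */
      close(fd);
  }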

Async sync_file_range is not always asynchronous

 Sometimes you might want to flush pages/files earlier than the kernel writeback threads (bdflush) would, in order to avoid burst writes. fsync() and sync sync_file_range() (sync_file_range(SYNC_FILE_RANGE_WRITE|SYNC_FILE_RANGE_WAIT_AFTER)) can be used for that purpose, but both take a long time (~10ms) on HDD if the RAID write cache is disabled. You probably don't want to execute them from a user-facing thread.
 How about using async sync_file_range() (sync_file_range(SYNC_FILE_RANGE_WRITE)) from a user-facing thread? It's supposed not to wait for I/O, so latency should be minimal. But I don't recommend using sync_file_range() from a user-facing thread like that: it is actually not always asynchronous, and there are many cases where it waits for disk I/O.
  Below are a couple of examples where async sync_file_range() takes a long time. In the following examples, I assume stable page writes are already disabled.

Stall Example 1: Small range sync_file_range()

single thread
  fd=open("aaa.dat", O_WRONLY);
  for(i=0; i< 100000; i++) {
    write(fd, buf, 1000); // not aligned page write
    sync_file_range(fd, i*1000, 1000, SYNC_FILE_RANGE_WRITE); // async
  }
 In example 1, with the stable page write fix, write() won't wait for dirty pages to be written to disk (writeback). But sync_file_range() actually waits for writeback.
 When stable page write is disabled, it is possible that a page is under writeback and marked dirty again at the same time. Below is an example scenario.
1. write() -> marking page 1 dirty
2. sync_file_range(SYNC_FILE_RANGE_WRITE) -> sending writeback request on page 1
3. write() -> making page 1 dirty (not waiting with stable page write fix)
4. sync_file_range(SYNC_FILE_RANGE_WRITE) -> waiting until page 1 is written back

 In this case, the second sync_file_range(SYNC_FILE_RANGE_WRITE) is blocked until flushing to disk triggered by the first sync_file_range() is done, which may take tens of milliseconds.
 Here is an example stack trace when sync_file_range() is blocked.
                sleep_on_page
                __wait_on_bit
                wait_on_page_bit
                write_cache_pages
                generic_writepages
                xfs_vm_writepages
                do_writepages
                __filemap_fdatawrite_range
                filemap_fdatawrite_range
                SyS_sync_file_range
                tracesys
                sync_file_range
                __libc_start_main

Stall example 2: Bulk sync_file_range()

 What happens if you call write() multiple times, then call sync_file_range(SYNC_FILE_RANGE_WRITE) for multiple pages at once? In the example below, write() is called 21 times, then a flush is triggered by sync_file_range().
  fd=open("aaa.dat", O_WRONLY);
  for(i=0; i< 21; i++) {
    write(fd, buf, 1000); // not aligned page write
  }
  sync_file_range(fd, 0, 16384, SYNC_FILE_RANGE_WRITE); // async flush request for pages 1..4
  for(i=21; i< 42; i++) {
    write(fd, buf, 1000);
  }
  sync_file_range(fd, 16384, 16384, SYNC_FILE_RANGE_WRITE); // async flush request for pages 5..8
 Unfortunately, sync_file_range() may take time in this case too. It works as below on xfs. Since xfs does neighbor page flushing via sync_file_range(), it is possible that a page is both under writeback and marked dirty again.
1. write -> page 1~6 become dirty
2. sync_file_range (page 1~4) -> triggering page 1~4 and 5, 6 for flushing (in xfs)
3. write -> page 6~11 become dirty
4. sync_file_range (page 5..8) -> waiting for page 6 to be flushed to disk

 Note that if the write volume (and overall disk busy rate) is low enough relative to disk speed, page 6 should be flushed to disk before the second sync_file_range() starts. In that case it shouldn't wait for anything.

Stall example 3: Aligned page writes

 The main reason why async sync_file_range() was blocked above is that write() was not aligned to the page size. What if we do fully aligned page writes (writing in 4KB multiples)?
 With aligned page writes, async sync_file_range() does not wait as shown in examples 1 and 2, and gives much better throughput. But even with aligned page writes, async sync_file_range() sometimes waits for disk I/O.
 sync_file_range() submits page write I/O requests to disks. If there are many outstanding read/write requests in a disk queue, new I/O requests are blocked until a free slot is available in the queue. This blocks sync_file_range() too.
 The queue size is managed under /sys/block/sdX/queue/nr_requests. You may increase it to a larger value:
echo 1024 > /sys/block/sda/queue/nr_requests
 This mitigates stalls in sync_file_range() on busy disks, but it won't solve the problem entirely. If you submit many more write I/O requests, read requests take more time to serve (writes starving reads), which negatively affects user-facing query latency.

Solution for the stalls

 Make sure to use a Linux kernel that supports disabling stable page writes; otherwise write() may be blocked. My previous post covers this topic. sync_file_range(SYNC_FILE_RANGE_WRITE) is supposed to be asynchronous, but it is actually blocked waiting for writeback in many patterns, so calling sync_file_range() from a user-facing thread is not recommended if you really care about latency. Calling sync_file_range() from a background (not user-facing) thread is a better solution here.
 Buffered writes and sync_file_range() are important for some databases such as HBase and RocksDB. For HBase/Hadoop, using JBOD is one of the well known best practices. HLog writes are buffered, and not flushed to disk per write (put operation). Some HBase/Hadoop distributions support sync_file_range() to reduce outstanding dirty pages. From an operating system point of view, HLog files are appended, and the file size is not small (64MB by default). This means all HLog writes go to a single disk in a JBOD configuration, so that single disk tends to be overloaded. An overloaded disk takes a longer time to flush dirty pages (via sync_file_range() or bdflush), which may block further sync_file_range() calls. To get better latency, it is important to use a Linux kernel that supports disabling stable page writes, and to call sync_file_range() from background threads (not from user-facing threads).


Monday, March 10, 2014

Why buffered writes are sometimes stalled

 Many people think buffered writes (write()/pwrite()) are fast because they do not access the disk. But this is not always true. A buffered write sometimes accesses the disk by itself, or waits for disk accesses issued by other threads. Here are three common cases where write() takes a long time (i.e. causes stalls).

1. Read Modify Write

Suppose the following logic: opening aaa.dat without O_DIRECT/O_SYNC, writing 1000 bytes sequentially 100,000 times, then flushing with fsync().
  fd=open("aaa.dat", O_WRONLY);
  for(i=0; i< 100000; i++) {
    write(fd, buf, 1000);
  }
  fsync(fd);
 You might think each write() will finish fast enough (in less than 0.1ms) because it shouldn't do any disk access. But that is not always true.
 The operating system manages I/O by page, which is 4KB in most Linux environments. If you modify 1000 bytes of a 4KB page starting at offset 0, Linux first needs to read the 4KB page, modify the first 1000 bytes, then write the page back. The page will sooner or later be written to disk. Yes, reading the page is needed. This is called RMW (Read Modify Write). If the page is not cached in the filesystem cache (page cache), it has to be read from disk, which may take tens of milliseconds on HDD.
 This problem often happens when overwriting large files. You can easily reproduce it with the following steps.
  • 1. Creating a large file (cached in fs cache)
 dd if=/dev/zero of=aaa bs=4096 count=1000000
  • 2. Uncache the file (i.e. echo 3 > /proc/sys/vm/drop_caches)
  • 3. Writing to the file (using write()/pwrite()) => the target pages do not exist in the fs cache, so they are read from disk. You can verify that with iostat.
 There are a couple of solutions to avoid slow Read Modify Write.

Appending a file, not updating in place

 Appending to a file means newly allocated pages are always cached, so the slow Read Modify Write issue doesn't happen.
 In MySQL, binary logs are appended, not overwritten. InnoDB log files are always overwritten, so this workaround can't be used for them.
 Note that if you need full durability (calling fsync()/fdatasync() for each write()), appending is much more expensive than overwriting on most filesystems. It is well known that sync_binlog=1 is very slow in MySQL up to 5.5, and the main reasons were that group commit was broken and that appending + fsync() was not fast. In 5.6, binlog group commit was implemented, so multi-threaded write throughput improved a lot.

Always cache target files within filesystem cache

 If the target pages are cached in the filesystem cache (page cache), write() doesn't cause disk reads.
The obvious disadvantage is that this approach wastes memory, and RAM is expensive. If you have 128GB of InnoDB log files, you won't want to give 128GB of RAM to them; that RAM should be allocated to the InnoDB buffer pool.

Aligning write() I/O unit size by 4KB multiple

 If you always write aligned to the Linux page size (in 4KB multiples), the Read Modify Write issue can be avoided.
 One approach to doing aligned writes is zero-filling. Below is an example.
  memset(buf, 0, 4096); // zerofill the 4KB buffer
  memcpy(buf, data1, 1000); // copy application data (1000 bytes)
  pwrite(fd, buf, 4096, 0); // write 4KB

  memset(buf2, 0, 4096);
  memcpy(buf2, data2, 1000);
  pwrite(fd, buf2, 4096, 4096);
  ....
 In this example, you write 1000 bytes of application data twice, but you actually write 4KB twice, 8KB in total; 8192-2000=6192 bytes are zero-filled. Disk reads don't happen with this approach, even if the pages are not cached in the filesystem cache.
 This approach needs more space; in the above case you wasted 6192 bytes.
 Another approach is to remember the application data offset within the page, but still write in 4KB aligned units.
  memset(buf, 0, 4096); // zerofill the 4KB buffer
  memcpy(buf, data1, 1000); // copy application data (1000 bytes)
  pwrite(fd, buf, 4096, 0);

  memcpy(buf+1000, data2, 1000); // copy the next application data at its real offset (buf still holds data1)
  pwrite(fd, buf, 4096, 0); // rewrite the same aligned page
  ....
This approach doesn't waste space.
The InnoDB log file write unit is not 4KB aligned, so the Read Modify Write issue exists there. Some MySQL fork distributions fix the problem by writing 4KB aligned data. Facebook MySQL takes the latter approach, and we're using it in production. Percona Server also supports aligned writes, but it's stated as beta quality.

2. write() may be blocked for "stable page writes"

Dirty pages

 Most writes don't flush to disk immediately. The write()/pwrite() functions write to the Linux page cache and mark the pages dirty, unless the target file is opened with O_DIRECT/O_SYNC. write() and pwrite() are basically the same, except that pwrite() takes an offset; I use write() here to mean a dirty page write.
 Dirty pages are sooner or later flushed to disk. This is done by many processes/functions, such as bdflush, fsync(), and sync_file_range(). Flushing behavior depends highly on the filesystem; I consider only xfs here.
 While a dirty page is being written to disk, a write() to the same page is blocked until the flush to disk is done. This is called a "stable page write". It may cause write() stalls, especially on slower disks. Without a write cache, flushing to disk usually takes ~10ms, and ~100ms in bad cases.

 Suppose the following example: there are two threads, one writing 10 bytes repeatedly via write(), the other calling fsync() on the same file.
Thread 1:
  fd=open("aaa.dat", O_WRONLY);
  while(1) {
    t1 = get_timestamp
    write(fd, buf, 10);
    t2 = get_timestamp
  }

Thread 2:
  fd=open("aaa.dat", O_WRONLY);
  while(1) {
    fsync(fd);
    sleep(1);
  }
 If you run this on slower disks (HDD with the write cache disabled), you may notice that write() (t2-t1) sometimes takes more than 10ms. fsync() taking time is expected, because fsync() flushes dirty pages and metadata to disk, but the buffered write also sometimes takes time, due to stable page writes.

 Another annoying issue is that while write() is blocked, it holds an exclusive inode mutex, which blocks all writes and disk reads to/from the same file (all pages within the same file).

Disabling Stable Page Writes

 Fortunately, there is a patch to disable stable page writes, and some Linux distributions ship it.
With this patch, on most filesystems write() no longer waits for dirty page writeback, which helps avoid write() latency spikes.

3. Waiting for journal block allocation in ext3/4

 If you are using ext3 or ext4, you may still suffer from occasional write() stalls even after disabling stable page writes. This happens when write() waits for journal block allocation. An example stack trace is shown below. The easier workaround is using xfs, which has no such stalls.

  Returning from: 0xffffffff81167660 : __wait_on_buffer+0x0/0x30 [kernel]
  Returning to : 0xffffffff811ff920 : jbd2_log_do_checkpoint+0x490/0x4b0 [kernel]
  0xffffffff811ff9bf : __jbd2_log_wait_for_space+0x7f/0x190 [kernel]
  0xffffffff811fab00 : start_this_handle+0x3b0/0x4e0 [kernel]
  0xffffffff811faceb : jbd2__journal_start+0xbb/0x100 [kernel]
  0xffffffff811fad3e : jbd2_journal_start+0xe/0x10 [kernel]
  0xffffffff811db37e : ext4_journal_start_sb+0x7e/0x1d0 [kernel]
  0xffffffff811bd757 : ext4_da_write_begin+0x77/0x210 [kernel]
  0xffffffff810deaea : generic_file_buffered_write+0x10a/0x290 [kernel]
  0xffffffff810e0171 : __generic_file_aio_write+0x231/0x440 [kernel]
  0xffffffff810e03ed : generic_file_aio_write+0x6d/0xe0 [kernel]
  0xffffffff811b732f : ext4_file_write+0xbf/0x260 [kernel]
  0xffffffff8113907a : do_sync_write+0xda/0x120 [kernel]
  0xffffffff8113993e : vfs_write+0xae/0x180 [kernel]
  0xffffffff81139df2 : sys_pwrite64+0xa2/0xb0 [kernel]
  0xffffffff8156a57b : system_call_fastpath+0x16/0x1b [kernel]

Summary

 Question: Why does buffered write() sometimes stall? It just writes to kernel buffer and doesn't hit disk.
 Answer:
  1. write() does a disk read (Read Modify Write) when needed. To avoid this issue, append to a file instead of overwriting it, or use OS page aligned writes.
  2. write() may be blocked by "stable page writes". To avoid this issue, use a newer Linux kernel that supports disabling stable page writes.
  3. If you really care about latency, I don't recommend using ext3/ext4. Use xfs instead.

 Many of these issues can be mitigated by using a battery/flash backed write cache on the RAID controller, but that is not always possible, and batteries often expire.
 In the next post, I'll describe why sync_file_range(SYNC_FILE_RANGE_WRITE) sometimes stalls.

Monday, December 16, 2013

Single thread performance regression in 5.6 - Replication

 At Facebook, we have upgraded most of our MySQL database tiers to 5.6, except a very few tiers that have a special requirement -- very fast single-threaded replication speed.

 As Oli mentioned, single threaded performance is worse in 5.6. The regression is actually not visible in most cases. For remote clients, the performance regression is almost negligible because network latency is longer than the 5.1->5.6 overhead. If clients are running locally but the MySQL server is disk I/O bound, the overhead is negligible too, because disk I/O latency is much longer than the 5.1->5.6 overhead.

 But the regression is obvious when clients run locally and queries are CPU bound. The most well known local client program for MySQL is the SQL thread (replication thread). Yes, 5.6 has a slower replication performance problem if the SQL thread is CPU bound.

 If all of the following conditions are met, SQL thread in 5.6 is significantly slower than 5.1/5.5.
  • Almost all working sets fit in memory = Almost all INSERT/UPDATE/DELETE statements are done without reading disks
  • Very heavily inserted/updated and committed (7000~15000 commits/s)
  • 5.6 Multi-threaded slave can not be used (due to application side restrictions etc)
  • You want to use crash safe slave feature and optionally GTID

 Here is a micro-benchmark result. Please refer to the bug report for details (how to repeat, etc.).

Fig 1. Single thread replication speed (commits per second)

 On this graph, 5.1, 5.5, and 5.6 FILE are not crash safe; fb5.1 and 5.6 TABLE are crash safe, and 5.6p has some experimental patches. Crash safe means that even if the slave mysqld/OS crashes, you can recover the slave and continue replication without restoring the MySQL databases. To make a crash safe slave work, you have to use the InnoDB storage engine only, and in 5.6 you need to set relay_log_info_repository=TABLE and relay_log_recovery=1. Durability (sync_binlog=1 and innodb_flush_log_at_trx_commit=1) is NOT required.
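 As a reminder of the settings involved (a my.cnf sketch, not a complete configuration), a crash safe slave in 5.6 needs:
  [mysqld]
  relay_log_info_repository = TABLE
  relay_log_recovery        = 1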

 There are a few observations from the benchmark.
  • A 17% drop from 5.1.69 to 5.5.33. I did not profile 5.5 much, but I suspect the regression was mostly caused by the Metadata Locking feature introduced in 5.5.
  • From 5.5.33 to 5.6.15 (FILE), a 20% drop if log-slave-updates is enabled, and an 18% drop if log-slave-updates is disabled. In 5.6, FILE based replication has an inefficiency bug where it writes to the OS filesystem cache too often (bug#70342). 5.6.15p (FILE) applied the fix for the bug, but was still 15% lower than 5.5.33 with log-slave-updates enabled, and 2.5% lower with it disabled. This shows that writing to the binlog (from the single-threaded SQL thread) takes more time in 5.6 than in 5.1/5.5.
  • The performance drop for 5.6.15 TABLE was serious. Compared to fb5.1 (crash safe), the drop was 51.2% (14520/s -> 6103/s). The slave_relay_log_info table is opened/fetched/updated/closed for every transaction commit. By keeping the table open and skipping the row fetch (updating only), performance improved to 7528/s (with log-slave-updates) and 9993/s (without log-slave-updates), but these were still much lower than fb5.1 (12505/s and 17056/s).

 How can we fix and/or avoid the problem?

 From a MySQL development point of view, there is some room for performance optimization.
  • Writing to the slave_relay_log_info table should be optimized further. Currently the table is opened/fetched/updated/closed via the Storage Engine API for each commit. It would be more efficient to update the table via the embedded InnoDB API, which is currently used by the InnoDB memcached plugin. slave_relay_log_info is a system table and is updated by the SQL thread (and MTS worker threads) only, so some tricky optimizations can be done here.
  • Writing to the binlog should be made faster in 5.6. 5.6 added many features such as group commit, so it is not surprising to see lower single-threaded performance without further optimization.
  • The 5.1 to 5.5 performance drop was likely caused by the Metadata Locking feature. I have not profiled it in depth yet, but I think some functions (e.g. MDL_context::acquire_lock()) could be refactored for better performance.

 From a DBA point of view, there are a couple of workarounds. First of all, the regression won't happen in most cases. Most application data is much larger than RAM, and the SQL thread does a lot of I/O, so the CPU overhead is negligible. Even if the working set fits in memory, many applications don't need over 7000 commits per second on a single instance. If you don't need high commit speed, you won't hit the problem either.
 But some applications may really hit the problem -- a kind of queuing application is a typical example. At Facebook we are continuing to use 5.1 for such applications.
 If you want to use 5.6 for such applications, you may disable the crash safe slave feature, because the database size is usually very small for them. If the database size is small enough (i.e. < 200GB), you can easily restore from other running masters/slaves without it taking hours. If you don't need the crash safe slave feature, you can use FILE based (default) replication, which is still slower than 5.1/5.5 (see "5.6.15 FILE" on the graph above) but much better than TABLE based.
 Also, consider not using log-slave-updates. log-slave-updates is needed when you use GTID. If you want a master failover solution without GTID, MHA is good enough, and it does not require log-slave-updates.

Thursday, October 10, 2013

Making full table scan 10x faster in InnoDB

At MySQL Connect 2013, I talked about how we used MySQL 5.6 at Facebook, and explained some of the new features we added to our Facebook MySQL 5.6 source tree. In this post, I'm going to talk about how we made full table scans faster in InnoDB.

Faster full table scan in InnoDB

 In general, almost all queries from applications use indexes and read very few rows (0-1 for primary key lookups, and 0 to hundreds for range scans). But sometimes we run full table scans. Typical examples are logical backups (mysqldump) and online schema changes (SELECT ... INTO OUTFILE).

 We take logical backups with mysqldump at Facebook. As you know, MySQL offers both physical and logical backup commands/utilities. Logical backup has some advantages over physical backup. For example:
  • Logical backup size is much smaller. A 3x-10x size difference is not uncommon.
  • Backups are relatively easy to parse. With a physical backup, if we can't restore a database for some serious reason such as a checksum failure, it is very difficult to dig into InnoDB internal data structures and fix the corruption. We trust logical backups more than physical backups.
 The major drawbacks of logical backup are that full backup and full restore are much slower than physical backup/restore.

 The slowness of full logical backup often causes problems. It may take a very long time if the database grows a lot and tables are fragmented. At Facebook, we suffered from a mysqldump performance problem: we couldn't finish full logical backups within a reasonable amount of time on some HDD and Flashcache based servers. We knew that InnoDB wasn't efficient at full table scans because it mostly did random reads rather than sequential reads. This had been a known issue for years. As database storage capacity kept growing, the slow full table scan issue became serious for us, so we decided to enhance InnoDB to do faster sequential reads. Finally our Database Engineering team implemented a "Logical Readahead" feature in InnoDB. With logical readahead, our full table scan speed improved 9~10 times under usual production workloads. Under heavy production workloads, full table scan speed became 15~20 times faster.

Issues of full table scan on large, fragmented tables

 When doing a full table scan, InnoDB scans pages and rows in primary key order. This applies to all InnoDB tables, including fragmented ones. If primary key pages (pages where primary keys and rows are stored) are not fragmented, the full table scan is pretty fast because the read order is close to the physical storage order. This is similar to reading a file with an OS command (dd/cat/etc) like below.
# dd if=/data/mysql/dbname/large_table.ibd of=/dev/null bs=16k iflag=direct
 You may find that even on commodity HDD servers, you can read more than 100MB/s multiplied by "number of drives" from disks. Over 1GB/s is not uncommon.

 Unfortunately, in many cases primary key pages are fragmented. For example, if you need to manage user_id and object_id mappings, the primary key will be (user_id, object_id). Insert ordering does not match user_id ordering, so new inserts/updates very often cause page splits. Newly split pages are allocated far from the current pages, which means the pages get fragmented.

 If primary key pages are fragmented, a full table scan becomes very slow. Fig 1 illustrates the problem. After InnoDB reads leaf page #3, it has to read page #5230, and after that page #4. Page #5230 is far from pages #3 and #4, so the disk read ordering becomes almost random, not sequential. It is very well known that random reads on HDD are much slower than sequential reads. One very effective approach to improving random reads is using SSD, but the per-GB cost of SSD is still higher than HDD, so using SSD is not always possible.


Fig 1. Full table scan is not actually doing sequential reads


Does Linear Read Ahead really help?

 InnoDB supports a prefetching feature called "Linear Read Ahead". With linear read ahead, InnoDB reads an extent (64 consecutive pages: 1MB if not compressed) at one time if N pages are accessed sequentially (N can be configured via the innodb_read_ahead_threshold parameter, default 56). But actually this does not help much: one extent (64 pages) is a very small range, and for most large fragmented tables the next page does not exist in the same extent. See fig 1 above for example: after reading page #3, InnoDB needs to read page #5230, which is not in the same extent as page #3, so linear read ahead won't help here. This is a pretty common case for large fragmented tables. That's why linear read ahead does not help much to improve full table scan performance.

Optimization approach 1: Physical Read Ahead

 As described above, the problem with slow full table scans was that InnoDB did mostly random reads. To make it faster, we needed to make InnoDB do sequential reads. The first approach I came up with was creating a UDF (User Defined Function) to read an ibd file (InnoDB data file) sequentially. After executing the UDF, the pages of the ibd file should be in the InnoDB buffer pool, so no random reads happen when doing the full table scan. Here is an example usage.
mysql> SELECT buf_warmup ("db1", "large_table"); /* loading into buf pool */
mysql> SELECT * FROM large_table; /* in-memory select */
 buf_warmup() is a UDF that reads the entire ibd file of table "large_table" in database "db1". It takes time to read the entire ibd file from disk, but the reads are sequential, so it is much faster than doing random reads. When I tested it, I got ~5x faster overall time than with normal linear read ahead.
 This proved that sequentially reading ibd files helped to improve throughput, but there were a couple of disadvantages:
  • If table size is bigger than InnoDB buffer pool size, this approach does not work
  • Reading entire ibd file means that not only primary key pages, but also secondary index pages and unused pages have to be read and cached into InnoDB buffer pool, even though only primary key pages are needed for full table scan. This is very inefficient if you have many secondary indexes.
  • Applications have to be changed to call UDF.
 This looked like a "good enough" solution, but our database engineering team came up with a better solution called "Logical Read Ahead", so we didn't choose the UDF approach.

Logical Read Ahead

 Logical Read Ahead (LRA) works as below.
  1. Reading many branch pages of the primary key
  2. Collecting leaf page numbers
  3. Reading many (configurable amount of) leaf pages by page number order (mostly sequential disk reads)
  4. Reading rows by primary key order (same as usual full table scan, but buffer pool hit rate should be very high)
 This is illustrated at fig 2 below.


Fig 2: Logical Read Ahead

 Logical Read Ahead (LRA) solved the issues of Physical Read Ahead. LRA enables InnoDB to read only primary key pages (not secondary index pages), and to prefetch a configurable number of pages (not the entire table) at one time. And LRA does not require any SQL syntax changes.
 We added two new session variables to make LRA work. One is innodb_lra_size, which controls how many MB of leaf pages to prefetch. The other is innodb_lra_sleep, which controls how many milliseconds to sleep per prefetch. We tested prefetch sizes of around 512MB to 4096MB with a 50 millisecond sleep, and so far we haven't encountered any serious problems (such as crashes, stalls, or inconsistencies). These session variables should be set only when doing a full table scan. In our case, mysqldump and some utility scripts (i.e. our online schema change tool) turn logical read ahead on, as sketched below.
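 As an illustration (the values and table name are placeholders, and these session variables exist only in our branch):
  mysql> SET SESSION innodb_lra_size = 1024;  /* prefetch 1024MB of leaf pages at a time */
  mysql> SET SESSION innodb_lra_sleep = 50;   /* sleep 50 milliseconds per prefetch */
  mysql> SELECT * FROM large_table;           /* full table scan, now mostly sequential reads */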

Submitting multiple async I/O requests at once

 Another performance issue we noticed was that the I/O unit size in InnoDB was only one page, even when doing readahead. A 16KB I/O unit size is too small for sequential reads, and much less efficient than a larger I/O unit.
 In 5.6, InnoDB uses Linux native I/O (aio) by default. If you submit multiple consecutive 16KB read requests at once, Linux can internally merge the requests and perform the reads efficiently. But unfortunately InnoDB submitted only one page I/O request at a time. I filed bug report #68659. As written in the bug report, on a modern HDD RAID 1+0 environment I could get more than 1000MB/s of disk reads by submitting requests for 64 consecutive pages at once, while I got only 160MB/s by submitting one page request at a time.
 To make Logical Read Ahead work better in our environment, we fixed this issue: in our MySQL, InnoDB batches many page I/O requests into a single io_submit() call.
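 As a rough illustration of the idea (not the actual InnoDB code), batching consecutive 16KB page reads into one io_submit() call with Linux native AIO looks roughly like this:

  #include <libaio.h>
  #include <stdlib.h>

  /* Read 64 consecutive 16KB pages with a single io_submit() call so the
   * kernel and I/O scheduler can merge them into large sequential reads. */
  int read_64_pages(int fd, off_t start_offset)
  {
      io_context_t ctx = 0;
      struct iocb cbs[64], *cbps[64];
      struct io_event events[64];
      void *bufs[64];

      if (io_setup(64, &ctx) != 0)
          return -1;
      for (int i = 0; i < 64; i++) {
          posix_memalign(&bufs[i], 4096, 16384); /* aligned buffers, required for O_DIRECT */
          io_prep_pread(&cbs[i], fd, bufs[i], 16384, start_offset + (long long)i * 16384);
          cbps[i] = &cbs[i];
      }
      io_submit(ctx, 64, cbps);                  /* one syscall for 64 page reads */
      io_getevents(ctx, 64, 64, events, NULL);   /* wait for all completions */
      for (int i = 0; i < 64; i++)
          free(bufs[i]);
      io_destroy(ctx);
      return 0;
  }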

Benchmark

In both cases, our production tables (fragmented tables) were used.

1. Full table scan on pure HDD (basic benchmark, no other workload)
Table size | Without LRA       | With LRA        | Improvement
10GB       | 10 min 55.00 sec  | 1 min 15.20 sec | 8.71x
50GB       | 52 min 44.42 sec  | 6 min 36.98 sec | 7.97x

2. Online schema change under heavy workload
Table size | Without LRA  | With LRA | Improvement
1GB        | 7 min 33 sec | 24 sec   | 18.8x
* dump time only, not counting data loading time

Source code

 All of our enhancements are available at GitHub.
 - Logical read ahead implementation : diff
 - Submitting multiple i/o requests at once : diff
 - Enabling logical read ahead on mysqldump : diff

 

Conclusion

 InnoDB was not efficient at full table scans, so we fixed it. We made two enhancements: one was implementing logical read ahead, and the other was submitting multiple async read I/O requests at once. We have seen 8 to 18 times performance improvements on our production tables, which has been very helpful for reducing our backup time, schema change time, etc. I hope this feature will be officially supported in InnoDB by Oracle, or at least by major MySQL fork products.