In the MySQL ecosystem, it has been very common practice to shard databases by application keys and to manage many small MySQL instances. The main reason for sharding was that a single server could not handle the data volume. Ten years ago, typical commodity servers had only 16GB of RAM, and the typical storage configuration was RAID10 with 6~12 SATA HDDs; affordable flash storage was not available at that time. Because such machines could handle only a few hundred random IOPS, and the buffer pool was so limited, we could not put much data on a single server, typically at most a few hundred GBs. Even small to mid-sized applications easily exceeded the capacity of a single server and had to split their databases across multiple servers.
Disadvantages of Sharding
Application-level sharding has been a common practice to scale MySQL beyond a single machine. But it has several disadvantages, including the following:
- You have to write application logic to route queries to the right shard, and you end up managing many more MySQL instances.
- Atomic transactions are not supported across multiple shards.
- You cannot take globally consistent backups across multiple shards.
- Cross-instance joins are not supported (or are very slow).
- It is hard to use secondary keys efficiently. Unless a secondary key is part of the sharding key, you need to query all shards, which is very expensive, especially if results must be sorted (see the sketch after this list).
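Below is a minimal sketch of that last point, assuming a hypothetical `users` table sharded by user_id across four instances (shard = user_id % 4):

```sql
-- Lookup by the sharding key: the application routes this to exactly one shard.
SELECT * FROM users WHERE user_id = 12345;  -- 12345 % 4 = 1, so shard 1 only

-- Lookup by a secondary key: the application must send this to all four
-- shards and merge (and, if there is an ORDER BY, re-sort) the partial
-- results itself.
SELECT * FROM users WHERE email = 'someone@example.com';
```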
Shared Nothing SQL databases are not general purpose
There are a couple of database products that offer transparent sharding inside the database itself. MySQL Cluster (NDB) is a MySQL engine that has existed for many years, offering a shared nothing distributed database. You can run atomic transactions and cross-shard joins across multiple instances (data nodes). MySQL Cluster also supports a NoSQL interface (called NDBAPI) that queries data nodes directly, which boosts performance significantly. I'm not going to cover the NoSQL interface here, since I'd like to discuss SQL databases. Note that MySQL Cluster is a product name, and it is not InnoDB. If you want to continue using InnoDB, Vitess can be a good solution; it helps you build a transparent, shared nothing database. Transparent shared nothing databases like MySQL Cluster and Vitess solve some of the issues mentioned above, but the issues below remain, and users may need to redesign tables and/or rewrite queries:
- Limited secondary key support, the same issue mentioned above. In MySQL Cluster, rows are distributed by primary key by default. If a query does not explicitly specify the primary key in the WHERE clause, it has to scan all data nodes, which may significantly limit concurrency and increase latency, depending on how many data nodes you have and on other query conditions (e.g. sorting). See the sketch after this list.
- Queries using JOINs may need lots of network round-trips. MySQL Cluster has seen many join-handling improvements, but it is still slower than InnoDB, especially for complex queries.
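A small illustration of the first point (the table here is hypothetical). NDB distributes rows by a hash of the primary key by default, as if PARTITION BY KEY were specified:

```sql
CREATE TABLE t (
  id BIGINT PRIMARY KEY,  -- rows are hash-distributed across data nodes by id
  k  INT,
  KEY (k)                 -- secondary index, maintained on each data node
) ENGINE = NDBCLUSTER;

SELECT * FROM t WHERE id = 1;   -- routed to a single data node
SELECT * FROM t WHERE k = 10;   -- index scan on every data node
```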
Bigger Small Data and its challenges
Several people at Facebook have called our MySQL services "Small Data": the combined size of all MySQL instances is pretty large, but each individual instance is small (normally up to 1TB). Ten years ago, people had to run small instances because of limited hardware. Nowadays, commodity servers have more than 256GB of RAM and more than 10TB of flash storage, and many small to mid-sized databases fit in 10TB. Truly successful databases grow beyond 10TB, so they will have to shard anyway. But what about experimental projects, and the many other applications that are expected to stay within that limit? Instead of spending engineering effort on managing shards and rewriting tables/queries, why not put everything on a single server and keep all the advantages, such as atomic and consistent transactions, secondary indexes and joins? Call it running "Bigger Small Data". There are a couple of public MySQL services supporting bigger storage sizes: Amazon Aurora (AWS) and Alibaba PolarDB (Alibaba Cloud) are both based on MySQL and claim to support instances larger than 60TB. It was not surprising to me that they chose bigger small data over shared nothing distributed databases, because they have lots of customers who want to do whatever they want, and they cannot stop customers from using joins. But "supported" does not necessarily mean "works as expected". To make MySQL really work with bigger small data, it needs many improvements beyond 8.0, especially around concurrency and long running operations, including the following:
- Parallel logical dump (mysqldump)
- Parallel query
- Parallel binary copy
- Parallel DDL
- Resumable DDL
- Better join algorithm
- Much faster replication
- Handling many more connections
- Good resource control, so that a few bad users cannot eat all resources
These are needed, at a minimum, to answer questions like "how can MySQL handle general operations if our instance grows 10x bigger?"
I'm not worried about short queries, such as row look-ups by primary or secondary keys; these are where MySQL has always been great. I'm worried about long running queries. The most common long running query would be a full table scan of a single table. In general, a logical dump of a 10TB table takes (much) more than 10 times as long as a scan of a 1TB table. InnoDB needs to keep a history list for consistent reads, and maintaining the history list and reading consistent snapshots from rollback segments get more expensive if rows are heavily updated while long running transactions are open. If you run daily backups with mysqldump, you might not be able to tolerate a 10x~20x longer backup time, which might not even finish within 24 hours. Making logical dumps shorter requires parallel query support, which MySQL does not currently have. Physical backup (binary copy) also needs parallelism, though that is relatively easy to implement, since physical backup tools are written outside of MySQL and are easily extended.
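One way to watch the history list cost described above while a long dump is running is InnoDB's own counter (a monitoring sketch, not a fix; the metric is enabled by default in MySQL 5.6 and later):

```sql
-- Length of the list of undo records not yet purged. A value that keeps
-- growing during a long dump means consistent reads are getting slower.
SELECT name, count
FROM information_schema.INNODB_METRICS
WHERE name = 'trx_rseg_history_len';
```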
Running ALTER TABLE on a 10TB table is also challenging. Amazon Aurora supports instant DDL: adding a nullable column at the end of a table can be done without rebuilding the table. But many DDL operations still require copying the table. First, you will want to know when the operation will finish, and MySQL currently does not tell you. If it is expected to take 20 days, you might worry about what happens if mysqld restarts before the DDL finishes. It would be great if the database periodically persisted the DDL state and could resume the operation when mysqld restarts.
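To make the gap concrete (the table and column names here are hypothetical), compare a metadata-only change with one that copies the table. The closest thing stock MySQL has to progress reporting is performance_schema stage instrumentation, which gives only a rough estimate, and only when the alter-table stage instruments are enabled:

```sql
-- Metadata-only (MySQL 8.0.12+ and Aurora): no table rebuild.
ALTER TABLE big_table ADD COLUMN note VARCHAR(64), ALGORITHM = INSTANT;

-- Changing a column type still copies the whole (10TB) table.
ALTER TABLE big_table MODIFY COLUMN amount BIGINT, ALGORITHM = COPY;

-- Rough progress of a running copy, if stage instruments are enabled:
SELECT event_name, work_completed, work_estimated
FROM performance_schema.events_stages_current;
```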
Replication is another technical challenge. MySQL replication is asynchronous, and slaves often lag when master instances are heavily updated. With bigger small data, the update volume on a master can be 10x higher or more. How can slaves handle 10x more replication traffic? Amazon Aurora does not have a MySQL replication lag issue if you run it in a single region: Aurora keeps six copies of the storage in the same region, across three availability zones, so you can scale reads within that region. But scaling reads across multiple regions requires MySQL replication, and that is challenging unless MySQL replication becomes a lot faster. Alibaba PolarDB offers InnoDB physical replication across different datacenters, which is significantly more concurrent and faster than binlog based replication. But you are then constrained to one engine (InnoDB, though it is by far the most common engine), and debugging replication issues may be harder, since replication is no longer binlog based.
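For reference, the main lever stock MySQL offers today is the multi-threaded applier. It helps, but it is usually not enough for 10x write volume (variable names as of 5.7/8.0; later 8.0 releases rename slave_* to replica_*):

```sql
-- On the slave: apply transactions in parallel, grouped by master group commit.
STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
SET GLOBAL slave_parallel_workers = 16;
START SLAVE SQL_THREAD;

-- On the master: delay group commit slightly so more transactions end up in
-- one group and can be applied in parallel (trades a little commit latency).
SET GLOBAL binlog_group_commit_sync_delay = 1000;  -- microseconds
```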
Reliability improvements should not be ignored either. 10x larger data means the instance serves many more connections and queries. If a handful of bad queries can make the whole instance unavailable, the impact on bigger small data is much higher than on small instances. Good resource management is needed. High priority queries can be useful too, letting important low latency queries finish quickly without being affected by expensive queries.
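MySQL 8.0's resource groups are one existing building block for this kind of control, sketched below (the group and table names are hypothetical; THREAD_PRIORITY only takes effect when mysqld has the CAP_SYS_NICE capability on Linux):

```sql
-- Confine expensive batch work to two vCPUs at the lowest user priority.
CREATE RESOURCE GROUP batch
  TYPE = USER
  VCPU = 0-1
  THREAD_PRIORITY = 19;

-- Run a known-expensive query under that group via an optimizer hint.
SELECT /*+ RESOURCE_GROUP(batch) */ COUNT(*) FROM big_table;
```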
There are lots of technical challenges in making Bigger Small Data really work. I expect Amazon Aurora will be ahead in implementing these important features to make Aurora truly support bigger data, but I'm almost sure that AWS won't release them as open source software. For everybody to get such features, somebody else will have to implement them. I hope Oracle will, but I understand that they need to compete with Amazon Aurora and will not be willing to give their features to AWS for free. A more realistic scenario might be multiple companies, including us, implementing these features, releasing them as open source, and contributing them to Oracle and/or MariaDB. I think these will be interesting technical projects for MySQL over the next couple of years.