A Write-friendly Store for SILT: Part I
Learn how to design a key-value store for fast writing.
Preamble
In the last lesson, we decided to use multiple stores in our design, each optimized for a particular purpose. These stores include a write-friendly store, intermediary stores, and a memory-efficient store. Collectively, these stores will help our design achieve good performance and low memory overhead per key. Let's dive into our first store: the write-friendly store.
Our write-friendly store will handle all PUT and DELETE requests. PUT requests either insert new entries or update existing ones, while DELETE requests ask for the removal of a key-value entry. For GET requests, we'll later use a higher-level wrapper to look up a key across all the stores. Doing so is necessary because, over time, keys trickle from the write-friendly store to the memory-efficient store.
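To make this division of responsibilities concrete, here is a minimal sketch of such a higher-level wrapper. The class and method names (MultiStoreKV, put, delete, get) are assumptions made for illustration, not the names used in SILT: writes and deletes always go to the write-friendly store, while a read probes the stores from newest to oldest.

```python
class MultiStoreKV:
    """Hypothetical wrapper that routes requests across multiple stores."""

    def __init__(self, write_store, older_stores):
        self.write_store = write_store      # the write-friendly store
        self.older_stores = older_stores    # intermediary and memory-efficient
                                            # stores, ordered newest to oldest

    def put(self, key, value):
        # Every PUT (insert or update) goes to the write-friendly store.
        self.write_store.put(key, value)

    def delete(self, key):
        # Every DELETE is recorded in the write-friendly store as well.
        self.write_store.delete(key)

    def get(self, key):
        # Probe the stores from newest to oldest: recently written keys sit
        # in the write-friendly store, while older keys have trickled down.
        for store in (self.write_store, *self.older_stores):
            found, value = store.get(key)
            if found:
                return value
        return None
```

A complete design must also ensure that a DELETE recorded in a newer store hides any older copy of the key in the stores below it; this sketch glosses over that detail.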
Appending to a storage log for fast writes
Memory provides fast random access. However, keeping all the key-value data in memory would require a lot of RAM, which is expensive and may not be large enough to hold the entire dataset. Storage is slower, and it generally performs poorly under random writes; sequential writes, on the other hand, are much faster. This is why our write-friendly store appends every incoming entry to a log on storage, turning each write into a sequential operation.
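To see what appending to a storage log can look like in practice, here is a minimal sketch of a write-friendly store that appends every PUT and DELETE as a record at the end of an on-disk log and keeps a small in-memory index from each key to its latest file offset. The file name, record layout, and tombstone convention are assumptions made for this example, not the exact format SILT uses.

```python
import os
import struct

TOMBSTONE = b""  # an empty value marks a deleted key (assumed convention)

class AppendOnlyLogStore:
    """Sketch of a write-friendly store: all writes are sequential appends."""

    def __init__(self, path="write_store.log"):
        self.log = open(path, "ab+")   # append-only data file on storage
        self.index = {}                # in-memory map: key -> log offset

    def _append(self, key: bytes, value: bytes) -> int:
        offset = self.log.seek(0, os.SEEK_END)
        # Record layout (assumed): 4-byte key length, 4-byte value length,
        # followed by the key and value bytes.
        self.log.write(struct.pack("<II", len(key), len(value)) + key + value)
        self.log.flush()
        return offset

    def put(self, key: bytes, value: bytes):
        # Inserts and updates are both handled by appending a new record;
        # the index always points at the most recent record for the key.
        self.index[key] = self._append(key, value)

    def delete(self, key: bytes):
        # Deletion is also an append: a tombstone record makes the removal
        # durable on storage, and the key is dropped from the in-memory index.
        self._append(key, TOMBSTONE)
        self.index.pop(key, None)

    def get(self, key: bytes):
        # Reads cost one random read at the offset recorded in the index.
        offset = self.index.get(key)
        if offset is None:
            return False, None
        self.log.seek(offset)
        key_len, value_len = struct.unpack("<II", self.log.read(8))
        self.log.seek(key_len, os.SEEK_CUR)   # skip the stored key bytes
        return True, self.log.read(value_len)
```

Because every mutation is an append, the storage device only ever sees sequential writes; the trade-off is that the log and its index must eventually be compacted, which is where the other stores in our design come in.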
Before moving on, let's review the difference between random and sequential write performance on the most commonly used storage hardware.
Random vs. sequential writing
We won't dive into the details of random vs. sequential access. Instead, we'll compare the random and sequential performance of some modern hardware to show why these considerations matter in our design.
Note: We prefer sequential reads and writes on both traditional rotating disks and newer flash-based disks, but for different reasons. On rotating disks, sequential writing is faster because it avoids unnecessary head-seek latencies; on flash-based disks, it uses the storage cells more evenly, which increases the drive's overall lifespan. Additionally, sequential operations let us benefit from on-disk caches that prefetch the data next in line.
The following table lists the hardware we selected for this analysis. Except for the first disk, all of them are different kinds of SSDs. All conversions from IOPS to MB/s in the table assume 4 KB blocks.
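As a quick sanity check on that conversion: a device rated at, say, 25,000 IOPS (an illustrative figure, not one taken from the table) would move roughly 25,000 × 4 KB ≈ 100 MB/s.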