Background of Distributed Cache
Define the core concepts for designing a high-performance distributed cache. Analyze various writing policies, such as write-through and write-back, alongside common eviction algorithms like LRU. Learn how to apply consistent hashing and sharding to ensure scalability and reliability in distributed system design.
This chapter explains the design of a distributed cache. First, it is important to understand core concepts such as data write policies, eviction strategies, and cache invalidation. The following prerequisites are covered:
| Section | Motivation |
| --- | --- |
| Writing policies | Data is written to both the cache and the database. The order in which these writes happen has performance implications. We'll discuss various writing policies to help decide which one is suitable for the distributed cache we want to design. |
| Eviction policies | Since the cache is built on limited storage (RAM), we ideally want to keep the most frequently accessed data in the cache. Therefore, we'll discuss different eviction policies for replacing less frequently accessed data with more frequently accessed data. |
| Cache invalidation | Certain cached data may become outdated. We'll discuss different invalidation methods to remove stale or outdated entries from the cache. |
| Storage mechanism | A distributed cache consists of many servers. We'll discuss important design considerations, such as which cache entry should be stored on which server and what data structure to use for storage. |
| Cache client | A cache server stores cache entries, while a cache client calls the cache server to request data. We'll discuss the details of a cache client library in this section. |
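To make the eviction-policy idea above concrete, here is a minimal sketch of an LRU (least recently used) cache, the eviction algorithm mentioned in this chapter. The class name and capacity are illustrative, not part of any particular cache implementation; production caches typically implement LRU (or an approximation of it) natively for performance.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry
```

With a capacity of two, writing a third key evicts whichever of the first two was accessed least recently.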
Writing policies
A cache stores a temporary copy of data that is persistently stored in a database or another datastore. A key design decision is when to write data to the cache versus the database. This choice, known as the write policy, directly affects performance and data consistency.
There are three common writing policies:
Write-through: Data is written to the cache and the database in the same operation. This approach ensures strong consistency between the cache and the database, but increases write latency because the operation is only complete after both writes succeed. ...
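The write-through behavior described above can be sketched in a few lines. This is a toy model, not a real cache implementation: plain dictionaries stand in for the cache and the database, and the class and method names are illustrative.

```python
class WriteThroughStore:
    """Toy write-through store: every write updates the cache and the
    backing database in the same operation, keeping them consistent."""

    def __init__(self):
        self.cache = {}     # fast, volatile store (stands in for the RAM cache)
        self.database = {}  # durable store (stands in for the database)

    def write(self, key, value):
        # The operation completes only after BOTH writes succeed,
        # which is why write-through increases write latency.
        self.database[key] = value  # persist to the database
        self.cache[key] = value     # and update the cache

    def read(self, key):
        # Reads are served from the cache when possible.
        if key in self.cache:
            return self.cache[key]
        value = self.database.get(key)
        if value is not None:
            self.cache[key] = value  # populate the cache on a miss
        return value
```

Because the cache and the database are updated together, a read immediately after a write sees the same value from either store.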