Versioning Data and Achieving Configurability
Understand how vector clocks manage data versioning and resolve conflicts caused by network partitions in a key-value store. Learn to implement configurable consistency using a quorum system. See how the r and w parameters control read/write trade-offs for performance and availability.
Data versioning
Network partitions and node failures can fragment an object’s version history, leaving different replicas holding divergent copies of the same data.
To resolve the inconsistency, the system needs to track causal relationships between events, for example by using logical clocks or version vectors. Physical timestamps are unreliable in distributed systems because clocks can drift or become unsynchronized, so they cannot safely determine which request happened last.
Instead, we use vector clocks to maintain causality. A vector clock is a list of (node, counter) pairs associated with every version of an object. By comparing vector clocks, we can determine if two versions are causally related or if a conflict exists that requires reconciliation.
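To make the comparison rule concrete, here is a minimal sketch in Python. The `VectorClock` class and its method names are illustrative assumptions, not part of the store’s actual API.

```python
# A minimal vector clock sketch (illustrative only; the class and method
# names are assumptions, not the store's real implementation).
from dataclasses import dataclass, field


@dataclass
class VectorClock:
    # Maps node ID -> counter, e.g. {"A": 2, "B": 1}
    counters: dict = field(default_factory=dict)

    def increment(self, node_id: str) -> "VectorClock":
        # Return a new clock with this node's counter bumped by one.
        updated = dict(self.counters)
        updated[node_id] = updated.get(node_id, 0) + 1
        return VectorClock(updated)

    def descends_from(self, other: "VectorClock") -> bool:
        # True if this clock is causally equal to or newer than `other`:
        # every counter in `other` is matched or exceeded here.
        return all(self.counters.get(n, 0) >= c for n, c in other.counters.items())

    def concurrent_with(self, other: "VectorClock") -> bool:
        # Neither clock descends from the other => the versions conflict.
        return not self.descends_from(other) and not other.descends_from(self)
```

Two versions conflict exactly when neither clock descends from the other; a version whose clock descends from another’s makes the older version safe to discard.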
Consider how metadata such as version information and checksums (which detect data corruption) helps maintain data integrity and consistency in a key-value store.
Modify the API design
To enforce causality with vector clocks, the API must be updated so that each write request includes the vector clock from the client’s previous operation along with the originating node ID.
The get API call is updated as follows:
get(key)
| Parameter | Description |
|-----------|-------------|
| key | The key against which the object is stored and retrieved |
This returns an object (or a collection of conflicting objects) along with a context. The context contains encoded metadata, such as the object’s version.
The put API call is updated as follows:
put(key, context, value)
| Parameter | Description |
|-----------|-------------|
| key | The key against which the object is stored |
| context | The metadata for the object, such as its version information, obtained from an earlier get |
| value | The object that needs to be stored against the key |
This function locates the correct node based on the key and stores the value there. The client must provide the context received from a previous get operation to update an object. This context allows the system to determine the version history via vector clocks. If a read request reveals divergent branches (conflicts), the system returns all the objects at the leaves of the version tree along with their version information. The client then reconciles these divergent versions and writes the merged result back using the corresponding context.
Note: This is similar to how Git handles merge conflicts between branches. If the system cannot automatically merge the versions, the client must resolve the conflict at the application level and submit the resolved value.
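For illustration, a client-side read–modify–write flow against this API might look like the following sketch. The `store` object, the `reconcile` and `merge` helpers, and the assumption that values are set-like are all hypothetical.

```python
# Hypothetical client-side flow for the updated API (sketch only; `store`,
# `reconcile`, `merge`, and the return shapes are assumptions).

def read_modify_write(store, key, new_value):
    # get() may return several causally unrelated versions plus an opaque
    # context that encodes their vector clocks.
    versions, context = store.get(key)

    if len(versions) > 1:
        # The system could not merge the branches, so the application must
        # reconcile them (e.g., a union of shopping-cart items) before writing.
        base = reconcile(versions)
    else:
        base = versions[0]

    # Sending the context back lets the store derive the new version's
    # vector clock and collapse the divergent branches.
    store.put(key, context, merge(base, new_value))


def reconcile(versions):
    # Application-level merge; here, a naive union of set-valued objects.
    merged = set()
    for v in versions:
        merged |= set(v)
    return merged


def merge(base, new_value):
    # Combine the reconciled base with the client's new data (assumed set-valued).
    return set(base) | set(new_value)
```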
Vector clock usage example
Let’s consider an example. Say a client writes an object for the first time, and node A coordinates the write, creating version E1 with vector clock [(A, 1)]. The same client updates the object, node A again coordinates the write, and the result is E2 with vector clock [(A, 2)]. E2 descends from E1, so E1 can be discarded. Now suppose a network partition occurs and subsequent updates are coordinated by two different nodes: node B produces E3 with vector clock [(A, 2), (B, 1)], while node C produces E4 with vector clock [(A, 2), (C, 1)].
Suppose the network partition is repaired, and the client requests a write again, but now we have conflicts: neither E3’s clock descends from E4’s nor vice versa, so the two versions are concurrent. A get request returns both E3 and E4, and the context encodes both vector clocks. Once the client reconciles the values and writes them back with this context, the coordinating node (say, A) creates E5 with vector clock [(A, 3), (B, 1), (C, 1)], which descends from both branches and collapses them into a single version.
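The same scenario can be checked mechanically. The dictionaries below encode the vector clocks from the example; the node names and version labels are illustrative.

```python
# Walking through the example above with plain dictionaries (node -> counter).

e2 = {"A": 2}                      # last common ancestor before the partition
e3 = {"A": 2, "B": 1}              # update coordinated by B during the partition
e4 = {"A": 2, "C": 1}              # update coordinated by C during the partition

def descends(a, b):
    # a descends from b if every counter in b is matched or exceeded in a.
    return all(a.get(node, 0) >= count for node, count in b.items())

# Both branches descend from E2, but neither descends from the other:
assert descends(e3, e2) and descends(e4, e2)
assert not descends(e3, e4) and not descends(e4, e3)   # conflict!

# After the client reconciles the values, the coordinator (say, A) writes E5,
# whose clock dominates both branches, collapsing them into one version.
e5 = {"A": 3, "B": 1, "C": 1}
assert descends(e5, e3) and descends(e5, e4)
```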
Vector clock limitations and a compromise
The size of a vector clock can grow if multiple servers coordinate writes to the same object. In practice, this is unlikely because writes are typically handled by one of the top n nodes in the key’s preference list.
However, if there are network partitions or multiple server failures, write requests may be processed by nodes that are not in the top n of the preference list, adding new (node, counter) pairs and causing the vector clock to grow.
To prevent unbounded growth as more nodes participate, we can cap the size of the vector clock. With clock truncation, we attach a physical timestamp to each (node, counter) entry, recording the node’s last update time for the item. Once the number of (node, counter) pairs exceeds a configured threshold (for example, 10), the entries with the oldest timestamps are removed. Since truncation discards causal history, the system may no longer accurately determine version ancestry, which can reduce reconciliation accuracy.
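A rough sketch of such a truncation policy is shown below; the entry format, the use of wall-clock timestamps, and the choice to evict the least recently updated entries are assumptions for illustration.

```python
# Sketch of vector clock truncation (the cut-off policy is configurable).
import time

MAX_ENTRIES = 10  # configured threshold from the text

def touch(clock, node_id):
    # Each entry is node_id -> (counter, last_update_timestamp).
    counter, _ = clock.get(node_id, (0, 0.0))
    clock[node_id] = (counter + 1, time.time())
    truncate(clock)
    return clock

def truncate(clock):
    # Drop the least recently updated entries once the clock grows too large.
    # This bounds metadata size but discards causal history, so two versions
    # that were actually related may later look concurrent.
    while len(clock) > MAX_ENTRIES:
        oldest = min(clock, key=lambda n: clock[n][1])
        del clock[oldest]
```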
The get and put operations
One of our functional requirements is that the system should be configurable. We want to control the trade-offs between availability, consistency, cost-effectiveness, and performance. So, let’s achieve configurability by implementing the basic get and put functions of the key-value store.
Every node can handle the get (read) and put (write) operations in our system. A node handling a read or write operation is known as a coordinator. Typically, the coordinator is the first among the top n nodes in the preference list for the key.
There can be two ways for a client to select a node:
1. We route the request to a generic load balancer.
2. We use a partition-aware client library that routes requests directly to the appropriate coordinator nodes.
Both approaches have their benefits. In the first approach, the client doesn’t need to link any system-specific code, whereas the second approach achieves lower latency because the partition-aware client reaches the right coordinator directly, reducing the number of network hops.
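As a sketch of the second approach, a partition-aware client could hash keys onto a ring and contact the owning node directly. The class, the hashing scheme, and the absence of virtual nodes are simplifying assumptions.

```python
# Hypothetical partition-aware client: hash the key onto a ring and send the
# request straight to the coordinator, saving the load-balancer hop.
import bisect
import hashlib

class PartitionAwareClient:
    def __init__(self, nodes):
        # Place each node at a position on the hash ring.
        self.ring = sorted((self._hash(n), n) for n in nodes)

    def _hash(self, value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def coordinator_for(self, key: str) -> str:
        # The coordinator is the first node clockwise from the key's position.
        positions = [pos for pos, _ in self.ring]
        idx = bisect.bisect_right(positions, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

# client = PartitionAwareClient(["node-1", "node-2", "node-3"])
# client.coordinator_for("user:42")  # -> the node to contact directly
```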
Let’s make our service configurable by providing the ability to control the trade-offs between availability, consistency, cost-effectiveness, and performance. To do so, we can use a consistency protocol similar to those used in quorum-based systems.
Let’s take an example. Say n = 3, meaning each key is replicated to the top three nodes in its preference list, and those nodes serve its reads and writes.
Usage of r and w
Now, consider two variables, r and w. Here, r is the minimum number of nodes that must participate in a successful read operation, and w is the minimum number of nodes that must participate in a successful write operation.
The following table gives an overview of how different values of n, r, and w affect the speed of reads and writes.
Value Effects on Reads and Writes

| n | r | w | Description |
|---|---|---|-------------|
| 3 | 2 | 1 | It won’t be allowed, as it violates our constraint r + w > n. |
| 3 | 2 | 2 | It will be allowed, as it fulfills the constraint r + w > n. |
| 3 | 3 | 1 | It will provide speedy writes but slower reads, since readers need to go to all n replicas for a value. |
| 3 | 1 | 3 | It will provide speedy reads from any node but slow writes, since we now need to write to all n nodes synchronously. |
Let’s say we pick r and w such that r + w > n. Because the set of nodes that acknowledge a write then overlaps with the set of nodes contacted for a read in at least one replica, every read sees at least one copy of the latest successful write, giving us a quorum-like system. In this model, the latency of a get (or put) operation is determined by the slowest of the r (or w) replicas that must respond. For this reason, r and w are usually configured to be smaller than n to provide lower latency.
Upon receiving a put() request for a key, the coordinator generates the vector clock for the new version and writes the new version locally. It then sends the new version, along with its vector clock, to the n highest-ranked reachable nodes in the key’s preference list. The write is considered successful once at least w − 1 of those nodes respond.
Requests for a get() operation are made by the coordinator to the n highest-ranked reachable nodes in the key’s preference list. The coordinator waits for r responses before returning the result to the client; if it receives multiple causally unrelated versions, it returns all of them along with their context so the client can reconcile them.
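Putting the read and write paths together, a simplified coordinator might look like the sketch below. The in-memory Replica class, the synchronous loop over replicas, and the omission of timeouts, hinted handoff, and read repair are all simplifying assumptions.

```python
# Simplified coordinator sketch for quorum reads and writes (illustrative only).

class Replica:
    def __init__(self):
        self.data = {}          # key -> (value, vector_clock)

    def write(self, key, value, clock):
        self.data[key] = (value, clock)
        return True             # acknowledgment

    def read(self, key):
        return self.data.get(key)


class Coordinator:
    def __init__(self, node_id, replicas, n, r, w):
        assert r + w > n, "reads and writes must overlap on at least one node"
        self.node_id, self.replicas = node_id, replicas
        self.n, self.r, self.w = n, r, w

    def put(self, key, context, value):
        # The new version's clock descends from the context supplied by the client.
        clock = dict(context or {})
        clock[self.node_id] = clock.get(self.node_id, 0) + 1

        acks = 0
        for replica in self.replicas[: self.n]:     # top-n preference list
            if replica.write(key, value, clock):
                acks += 1
            if acks >= self.w:                      # enough acks: success
                return clock
        raise RuntimeError("write failed: fewer than w replicas acknowledged")

    def get(self, key):
        responses = []
        for replica in self.replicas[: self.n]:
            version = replica.read(key)
            if version is not None:
                responses.append(version)
            if len(responses) >= self.r:            # wait for r responses
                break
        # A full implementation would keep only causally-latest versions and
        # return all divergent branches plus their clocks as the context.
        return responses
```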
At this point, the design satisfies the scalability, availability, conflict resolution, and configurability requirements. The remaining requirement is fault tolerance. The next lesson covers how to design the system for fault tolerance.