Quorum Variations

So far we have examined what is called a strict quorum, in contrast to a sloppy quorum, which we'll discuss next. Consider a system composed of several dozen nodes. It may not make sense to replicate each data value to every node, so we may instead choose a subset of nodes in the cluster to replicate each value. Going back to our inequality R + W > N, the N in this scenario is less than the number of nodes in the cluster.
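To make this concrete, here is a minimal Python sketch (not tied to any particular database) of choosing the N designated replica nodes for a key out of a larger cluster and checking the strict quorum condition R + W > N. The node names, the hash-based placement, and the quorum sizes are illustrative assumptions.

```python
import hashlib

CLUSTER = ["A", "B", "C", "D", "E", "F", "G", "H"]  # dozens of nodes in practice
N, W, R = 3, 2, 2  # replicas per key, write quorum, read quorum

def preference_list(key, nodes=CLUSTER, n=N):
    """Pick the n designated nodes for a key by hashing it onto the node list."""
    start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(n)]

# Strict quorum: every read set and write set must overlap on at least one node.
assert R + W > N
print(preference_list("x"))  # only N of the cluster's nodes hold this key
```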

When a network partition takes place, part of the chosen subset of nodes designated to replicate a given value may become unreachable, while other nodes in the cluster that are not designated to hold the value for that key remain reachable. At this juncture, if a write request is received, the system has two choices: it can decline the request because not enough designated nodes are available to record it, or it can accept the request and temporarily write the value to reachable nodes that aren't among the designated nodes for the value. In the latter case, the value is recorded on the reachable subset of the designated nodes and on other cluster nodes standing in for the unreachable ones. This is known as a sloppy quorum, and it improves the write availability of a system. Cassandra, Dynamo, and Voldemort all offer a sloppy quorum feature.
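The following sketch, continuing the Python sketch above, shows how a coordinator might perform a sloppy-quorum write. The reachable() and store() callbacks and the hint mechanism are assumptions for illustration, not any particular database's API: if too few designated replicas are reachable, the write is topped up on other reachable cluster nodes, with each stand-in copy tagged with a hint naming the replica it stands in for.

```python
def sloppy_write(key, value, designated, cluster, w, reachable, store):
    """Return True if the write was acknowledged by at least w nodes."""
    acks = []
    # First, write to every designated replica that is currently reachable.
    for node in designated:
        if reachable(node):
            store(node, key, value, hint=None)  # normal replica write
            acks.append(node)
    # If fewer than w designated replicas answered, use other cluster nodes
    # as temporary stand-ins, one per unreachable designated replica.
    missing = [n for n in designated if n not in acks]
    fallbacks = iter(n for n in cluster if n not in designated and reachable(n))
    for intended in missing:
        if len(acks) >= w:
            break
        fallback = next(fallbacks, None)
        if fallback is None:
            break  # not enough reachable nodes anywhere in the cluster
        store(fallback, key, value, hint=intended)  # temporary, hinted copy
        acks.append(fallback)
    return len(acks) >= w
```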

Later, when the designated nodes that should replicate the value become reachable again, the value is copied over to them from the nodes that temporarily held it. This is known as hinted handoff.
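A matching sketch of hinted handoff, under the same assumptions: each node keeps the hinted copies it accepted on behalf of unreachable replicas and forwards them once the intended replica is reachable again.

```python
def hand_off(hinted_copies, reachable, store):
    """hinted_copies: list of (intended_node, key, value) held temporarily."""
    remaining = []
    for intended, key, value in hinted_copies:
        if reachable(intended):
            store(intended, key, value, hint=None)  # deliver to the real replica
        else:
            remaining.append((intended, key, value))  # keep the hint for later
    return remaining  # hints still waiting to be delivered
```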

Example

Consider a concrete example of five nodes A, B, C, D, and E. Say we have a key 'x' that is designated for replication on nodes A, C, and E. To satisfy the inequality R + W > N, we choose W = 3, R = 1, and N = 3. A user updates the key-value pair (x, 5) to (x, 6). The change is recorded at nodes A and C, but node E is found to be offline. Instead of refusing the write request, the system records it on node D along with a hint that it is intended for node E. Once node E comes back online, the value is handed off to node E.

Figure: Three out of the five nodes are designated to store the key x
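Wiring the sketches above to this five-node example: A, C, and E are the designated replicas for key 'x' with N = 3, W = 3, R = 1, and E is offline, so the hinted copy lands on a non-designated node. The in-memory dictionaries, the fallback ordering that lists D first (to match the example), and the offline set are all illustrative assumptions; this reuses sloppy_write and hand_off from the earlier sketches.

```python
storage = {n: {} for n in ["A", "B", "C", "D", "E"]}  # stand-in for real nodes
hints = []          # (intended_node, key, value) copies held temporarily
offline = {"E"}     # E is unreachable during the write

def reachable(node):
    return node not in offline

def store(node, key, value, hint=None):
    storage[node][key] = value
    if hint is not None:
        hints.append((hint, key, value))

ok = sloppy_write("x", 6, designated=["A", "C", "E"],
                  cluster=["A", "C", "E", "D", "B"],  # D listed first among fallbacks
                  w=3, reachable=reachable, store=store)
print(ok)            # True: acknowledged by A, C and D (D holds a hint for E)

offline.clear()      # E comes back online
hints[:] = hand_off(hints, reachable, store)
print(storage["E"])  # {'x': 6} -- the hinted value reached its designated node
```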
