ReadWriteLock

This lesson examines the ReadWriteLock interface and its implementing class, ReentrantReadWriteLock. The lock allows multiple readers to read concurrently but only a single writer to write at a time.

The ReadWriteLock interface is part of Java’s java.util.concurrent.locks package. The only implementing class for the interface is ReentrantReadWriteLock. The ReentrantReadWriteLock can be locked by multiple readers at the same time while writer threads have to wait. Conversely, the ReentrantReadWriteLock can be locked by a single writer thread at a time and other writer or reader threads have to wait for the lock to be free.
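
As a quick illustration, here is a minimal sketch of the basic locking pattern (the class SharedCounter and its field are made up for this example): the read lock and the write lock are obtained from the same ReentrantReadWriteLock instance and released in a finally block.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical class used only to illustrate the locking pattern.
public class SharedCounter {

    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value = 0;

    // Many threads can execute get() concurrently under the read lock.
    public int get() {
        rwLock.readLock().lock();
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // Only one thread at a time can execute increment() under the write lock,
    // and no thread can hold the read lock while it runs.
    public void increment() {
        rwLock.writeLock().lock();
        try {
            value++;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```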

ReentrantReadWriteLock

The ReentrantReadWriteLock, as the name implies, allows threads to acquire the lock recursively. Internally, it maintains two locks, one guarding read access and one guarding write access. A ReentrantReadWriteLock can improve concurrency over a mutual exclusion lock because it allows multiple reader threads to read concurrently. However, whether an application truly realizes a concurrency improvement depends on other factors, such as:

  • Running on multiprocessor machines.

  • Frequency of reads and writes. Generally, a ReadWriteLock improves concurrency in scenarios where read operations are frequent and write operations are infrequent. If write operations happen often, the lock spends most of its time acting as a mutual exclusion lock.

  • Contention for data, i.e. the number of threads that try to read or write at the same time.

  • Duration of the read and write operations. If read operations are very short, the extra overhead of a ReadWriteLock compared to a mutual exclusion lock can outweigh its benefits.

In practice, you’ll need to evaluate the access patterns to the shared data in your application to determine the suitability of using the ReadWriteLock.
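
To make the reentrancy mentioned above concrete, here is a minimal sketch (the class and method names are made up for illustration): a thread that already holds the write lock can re-acquire it, for example when one locking method calls another, as long as every lock() is balanced by an unlock().

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReentrancyDemo {

    private static final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public static void main(String[] args) {
        outer();
    }

    // Acquires the write lock and, while still holding it, calls inner().
    static void outer() {
        rwLock.writeLock().lock();
        try {
            inner(); // does not deadlock: the write lock is reentrant
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    // Re-acquires the write lock already held by the current thread.
    static void inner() {
        rwLock.writeLock().lock();
        try {
            System.out.println("Write lock hold count: "
                    + rwLock.getWriteHoldCount()); // prints 2
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```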

Fair Mode

The ReentrantReadWriteLock can also be operated in fair mode, which grants threads entry in approximately the order they arrive. When the lock becomes free, preference is given to either the longest-waiting writer thread or the group of longest-waiting reader threads. We speak of a group in the case of readers because multiple reader threads can hold the read lock concurrently.
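
Fair mode is selected through the constructor; the no-argument constructor creates a non-fair lock. A minimal sketch (the class name FairModeDemo is just for illustration):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairModeDemo {
    public static void main(String[] args) {
        // Passing true requests the (approximately) fair ordering policy;
        // the no-argument constructor creates a non-fair lock.
        ReentrantReadWriteLock fairLock = new ReentrantReadWriteLock(true);
        ReentrantReadWriteLock nonFairLock = new ReentrantReadWriteLock();

        System.out.println(fairLock.isFair());    // prints true
        System.out.println(nonFairLock.isFair()); // prints false
    }
}
```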

Cache Example

One common scenario with multiple readers and writers is a cache. A cache is usually used to speed up read requests to a slower data source, e.g. data from a hard disk is cached in memory so that requests don't have to wait for the data to be fetched from the disk, thus saving I/O. Usually, multiple readers try to read from the cache at the same time, and it is imperative that readers don't step over writers or vice versa.

In the simple case, we can relax the requirements and allow readers to read slightly stale data from the cache. Imagine a single writer thread that periodically writes to the cache, and readers that don't mind if the data becomes stale before the writer's next update. In this scenario, the only caution to exercise is to make sure no readers are reading the cache while a writer is in the process of writing to it.

In the example program below, we use a HashMap as a cache and store a single key/value pair that is periodically updated by the writer thread. From the output of the program, note that the reader threads see the updated value of the key once the writer thread has acquired the lock and performed the update.
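
The lesson's original program is not reproduced here, but a sketch consistent with the description above might look like the following (the key name, number of updates, and sleep intervals are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CacheDemo {

    private static final Map<String, Integer> cache = new HashMap<>();
    private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public static void main(String[] args) throws InterruptedException {

        cache.put("key", 0); // seed the single key/value pair

        // Writer: periodically updates the cached value under the write lock.
        Thread writer = new Thread(() -> {
            for (int i = 1; i <= 5; i++) {
                lock.writeLock().lock();
                try {
                    cache.put("key", i);
                    System.out.println("writer updated key to " + i);
                } finally {
                    lock.writeLock().unlock();
                }
                sleep(1000);
            }
        });

        // Readers: repeatedly read the cached value under the read lock.
        Runnable readerTask = () -> {
            for (int i = 0; i < 10; i++) {
                lock.readLock().lock();
                try {
                    System.out.println(Thread.currentThread().getName()
                            + " read key = " + cache.get("key"));
                } finally {
                    lock.readLock().unlock();
                }
                sleep(500);
            }
        };

        Thread reader1 = new Thread(readerTask, "reader-1");
        Thread reader2 = new Thread(readerTask, "reader-2");

        writer.start();
        reader1.start();
        reader2.start();

        writer.join();
        reader1.join();
        reader2.join();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```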
