Data Sharing and Synchronization

Explore how to safely share and synchronize data between threads in Python. Learn to prevent race conditions with threading locks, use thread-safe queues for communication, and leverage events for signaling. Understand risks like deadlocks and best practices for building stable concurrent programs.

We have seen how threads allow multiple tasks to progress concurrently. However, concurrency introduces a fundamental risk: shared state. When multiple threads read from and write to the same memory without coordination, the outcome becomes unpredictable. The situation is comparable to two individuals attempting to write in the same notebook at the same time; their changes interfere, and the final result is corrupted.

Concurrent programming requires deliberate coordination. Launching threads is only the first step. Programs must also control how threads interact with shared data. This includes ensuring that only one thread enters a critical section at a time and providing mechanisms for threads to signal when work becomes available or when tasks are complete.

In this lesson, we will examine the synchronization primitives that make such coordination possible, enabling us to build concurrent systems that are both correct and stable.

The hazard of shared state: Race conditions

A race condition occurs when program correctness depends on the unpredictable timing of thread execution. The classic trigger is a read–modify–write operation on shared data. Even a statement as simple as counter += 1 is not a single step. The CPU must:

  1. Read the current value of counter from memory.

  2. Add one to that value.

  3. Write the new result back to memory.
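The three steps above can be made visible with a small experiment (a sketch, not part of the lesson's official code): a deliberate sleep between the read and the write widens the race window, so the lost update happens reliably instead of sporadically.

```python
import threading
import time

counter = 0

def racy_increment():
    """Perform the three steps of counter += 1 separately, with an
    artificial pause between the read and the write."""
    global counter
    current = counter      # 1. read the current value from memory
    time.sleep(0.05)       # simulate a context switch mid-operation
    counter = current + 1  # 3. write back a now-stale result

threads = [threading.Thread(target=racy_increment) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected 5, but lost updates leave it lower (typically 1)
```

Every thread reads the same initial value before any of them writes, so their increments overwrite one another. In real code the window is only a few CPU instructions wide, which is exactly why such bugs appear rarely and are hard to reproduce.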

If the operating system switches context from Thread A to Thread B right after Thread A reads the value (but before it writes), Thread B will read the old value. Both threads will add 1 to the same starting number and write back the same result. We effectively ...