You’ll learn the basic concepts of message passing concurrency in this lesson.

Concurrency is a concept similar to parallelism. Both involve executing programs on multiple threads, and because parallelism is based on concurrency, the two are sometimes confused with each other.

The following are the differences between parallelism and concurrency:

  • The main purpose of parallelism is to take advantage of microprocessor cores to improve the performance of programs. Concurrency, on the other hand, is a concept that may be needed even in a single-core environment. Concurrency is about making a program run on more than one thread at a time. An example of a concurrent program would be a server that responds to requests from more than one client at the same time.

  • In parallelism, tasks are independent of each other. In fact, it would be a bug if they depended on the results of other tasks that are running at the same time. In concurrency, it is normal for threads to depend on the results of other threads.

  • Although both programming models use operating system threads, in parallelism threads are encapsulated by the concept of task. Concurrency makes use of threads explicitly.

  • Parallelism is easy to use, and as long as tasks are independent it is easy to produce programs that work correctly. Concurrency is easy only when it is based on message passing. It is very difficult to write correct concurrent programs if they are based on the traditional model of concurrency that involves lock-based data sharing.

D supports both models of concurrency: message passing and data sharing. We will cover message passing in this chapter and data sharing in the next chapter.
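As a first taste of the message passing model, the sketch below spawns a worker thread and exchanges messages with it through std.concurrency. The worker function and the message values are illustrative; later lessons cover these functions in detail.

```d
// A minimal message passing sketch: no data is shared between the
// threads; they communicate only by sending and receiving messages.
import std.concurrency;

void worker() {
    // Receive an int message from the owner thread,
    // then reply with its square.
    int value = receiveOnly!int();
    ownerTid.send(value * value);
}

void main() {
    Tid child = spawn(&worker); // start the worker on its own thread
    child.send(7);              // pass a message to the worker
    int result = receiveOnly!int();
    assert(result == 49);
}
```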

Basic concepts


Operating systems execute programs as work units called threads. D programs start executing with main() on a thread that the operating system has assigned to the program. All of the operations of the program are normally executed on that thread. However, the program is free to start other threads so that it can work on multiple tasks at the same time. In fact, the tasks that were covered in the previous chapter are based on threads that std.parallelism starts automatically.
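The following sketch shows main() starting on its OS-assigned thread and then launching additional threads. Using spawn() here is just one way of starting a thread; the worker function and its argument are illustrative.

```d
// main() runs on the thread the OS assigned to the program;
// spawn() starts additional threads.
import core.thread : thread_joinAll;
import std.concurrency : spawn;
import std.stdio : writeln;

void worker(int id) {
    writeln("worker thread ", id); // runs on its own thread
}

void main() {
    writeln("main thread");  // the OS-assigned thread
    spawn(&worker, 1);       // start another thread
    spawn(&worker, 2);       // and one more
    thread_joinAll();        // wait for all spawned threads to finish
}
```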

The operating system can pause threads at unpredictable times for unpredictable durations. As a result, even an operation as simple as incrementing a variable may be paused mid-operation:
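A representative snippet (the variable name is illustrative):

```d
int counter = 0;
// A single increment is not one indivisible step: it compiles to a read,
// an add, and a write back to the variable.
++counter;
```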


The operation above involves three steps: reading the value of the variable, incrementing the value, and assigning the new value back to the variable. The thread may be paused at any point between these steps to be continued after an unpredictable time.
