Learn about Python's concurrent.futures module for efficient asynchronous concurrency.


Let’s start looking at a more asynchronous way of implementing concurrency. The concept of a “future” or a “promise” is a handy abstraction for describing concurrent work. A future is an object that wraps a function call. That function call is run in the background, in a thread or a separate process. The future object has methods to check whether the computation has completed and to get the results. We can think of it as a computation where the results will arrive in the future, and we can do something else while waiting for them.
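To make the idea concrete, here is a minimal sketch of the future abstraction using `concurrent.futures` (the `slow_square` function is a made-up stand-in for any long-running computation):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_square(n):
    """Simulate a long-running computation."""
    time.sleep(0.1)
    return n * n

with ThreadPoolExecutor() as executor:
    # submit() schedules the call on a worker thread and returns a Future.
    future = executor.submit(slow_square, 7)
    # We are free to do other work here while the computation runs...
    # done() checks completion without blocking; result() blocks until
    # the value is ready (or re-raises any exception from the call).
    result = future.result()

print(result)  # 49
```

The key point is that `submit()` returns immediately; the calling thread only blocks when it actually asks the future for its result.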

In Python, the concurrent.futures module wraps either multiprocessing or threading, depending on what kind of concurrency we need. A future doesn’t completely solve the problem of accidentally altering shared state, but using futures lets us structure our code so that, when we do alter shared state, it is easier to track down the cause of the problem.

Futures can help manage the boundaries between different threads or processes. Similar to a multiprocessing pool, they are useful for call-and-answer style interactions, in which processing happens in another thread (or process) and then, at some point in the future (they are aptly named, after all), we ask it for the result. The module is a wrapper around multiprocessing pools and thread pools, but it provides a cleaner API and encourages nicer code.
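The call-and-answer pattern can be sketched like this, submitting several tasks and then collecting each answer as it finishes (the `fetch_length` helper and the word list are illustrative, not from the original):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_length(word):
    """A trivial stand-in for a task worth running concurrently."""
    return word, len(word)

words = ["future", "promise", "executor"]
with ThreadPoolExecutor(max_workers=3) as pool:
    # "Call": hand each task to the pool and keep the Future objects.
    futures = [pool.submit(fetch_length, w) for w in words]
    # "Answer": as_completed() yields each future as soon as it finishes,
    # regardless of submission order.
    results = dict(f.result() for f in as_completed(futures))

print(sorted(results.items()))
```

Because each result travels back through its own future, no task writes to shared state directly, which keeps the thread boundary easy to reason about.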


Let’s see another, more sophisticated file search-and-analysis example. In the last section, we implemented a version of the Linux grep command. This time, we’ll create a simple version of the find command that bundles in a clever analysis of Python source code. We’ll start with the analytical part, since it’s central to the work we need done concurrently:
