Parallel data processing—the domain of a Jedi programmer!

Parallel computing is known to be difficult, demanding, and full of potential minefields such as Heisenbugs (a software bug that disappears or changes its behavior when we attempt to investigate or resolve it). Yet with the rise of many-core servers and distributed computing, parallel computing has become too useful to ignore: regular programmers need it to speed up their applications, and it can no longer be left to Jedi programmers.

Google introduced MapReduce, a programming model that enables programmers of any expertise level to express their data processing needs as if they were writing sequential code. The MapReduce runtime automatically takes care of the messy details of distributing data and running jobs in parallel on multiple servers, even under many fault conditions. The widespread adoption of the MapReduce model demonstrates its applicability to a broad range of data processing problems.
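
To make this concrete, here is a minimal sketch of the user-facing side of the model, using word count (the canonical MapReduce example). The Python signatures below are illustrative assumptions, not Google's actual C++ API; the point is that the user writes plain sequential logic, and parallelism is entirely the runtime's job.

```python
# A sketch of user code in the MapReduce style: word count.
# The function names and signatures are illustrative, not a real API.

def map_fn(key, value):
    """key: a document name; value: the document's contents."""
    for word in value.split():
        yield (word, 1)  # emit an intermediate (word, count) pair

def reduce_fn(key, values):
    """key: a word; values: every count emitted for that word."""
    yield (key, sum(values))  # aggregate into one output pair
```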

In this lesson, we will study the MapReduce system’s design and programming model.

Design of MapReduce in a nutshell

MapReduce is a restricted programming model (a programming model in which the programmer must adhere to a specific way of programming but, in return, gets some benefits) for processing large datasets (big data), structured or unstructured alike, effectively and efficiently on a cluster of machines. One of the model's restrictions is that both the input and the output of the processing code must be key-value pairs: the model takes its input as key-value pairs and aggregates the user-defined processing results into a different set of key-value pairs. Because the input is a large and complex dataset, the computation is distributed across numerous machines to finish the processing task in a reasonable amount of time.
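
The key-value restriction is easiest to see in a toy, single-process simulation of the map, shuffle (group by key), and reduce phases. This is only a sketch of the data flow under simplifying assumptions, not a real runtime: an actual MapReduce implementation distributes each phase across many machines and handles failures. The word-count functions are repeated here so the block runs on its own.

```python
from collections import defaultdict

def map_fn(key, value):
    # Word-count map: emit (word, 1) for every word in the document.
    for word in value.split():
        yield (word, 1)

def reduce_fn(key, values):
    # Word-count reduce: sum all the counts emitted for one word.
    yield (key, sum(values))

def run_mapreduce(inputs, map_fn, reduce_fn):
    """Toy, single-process simulation of the MapReduce data flow.

    inputs: an iterable of (key, value) pairs, e.g. (doc_name, doc_text).
    Returns the final list of output key-value pairs.
    """
    # Map phase: every input pair may emit any number of intermediate pairs.
    # The "shuffle" is modeled by grouping intermediate values by key.
    intermediate = defaultdict(list)
    for key, value in inputs:
        for out_key, out_value in map_fn(key, value):
            intermediate[out_key].append(out_value)

    # Reduce phase: each intermediate key is reduced independently.
    results = []
    for key, values in sorted(intermediate.items()):
        results.extend(reduce_fn(key, values))
    return results

docs = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog")]
print(run_mapreduce(docs, map_fn, reduce_fn))
# [('brown', 1), ('dog', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 2)]
```

Because each intermediate key is reduced independently, a real runtime is free to assign different keys to different machines, which is precisely what makes the restricted model parallelizable.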

The following illustration depicts how the process works. We will explain the design in the rest of the lesson.
