The remaining piece of merge sort is the merge function, which merges two adjacent sorted subarrays, $array[p \cdots q]$ and $array[q+1 \cdots r]$, into a single sorted subarray in $array[p \cdots r]$. We’ll see how to construct this function so that it’s as efficient as possible. Let’s say that the two subarrays have a total of $n$ elements. We have to examine each of the elements in order to merge them together, and so the best we can hope for would be a merging time of $\Theta(n)$. Indeed, we’ll see how to merge a total of $n$ elements in $\Theta(n)$ time.
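As a preview, here is a minimal sketch of one common way to achieve a $\Theta(n)$ merge. The function name `merge` and its parameters follow the description above, but the body is only an assumed outline of what this section builds up to, not a definitive implementation:

```javascript
// A sketch of a Theta(n) merge of the sorted subarrays
// array[p..q] and array[q+1..r], in place in array[p..r].
function merge(array, p, q, r) {
    var lowHalf = array.slice(p, q + 1);      // copy of array[p..q]
    var highHalf = array.slice(q + 1, r + 1); // copy of array[q+1..r]
    var i = 0, j = 0, k = p;
    // Repeatedly take the smaller of the two front elements.
    while (i < lowHalf.length && j < highHalf.length) {
        array[k++] = (lowHalf[i] <= highHalf[j]) ? lowHalf[i++] : highHalf[j++];
    }
    // Copy any leftovers. Every element is examined once in total,
    // so the whole merge takes Theta(n) time.
    while (i < lowHalf.length) { array[k++] = lowHalf[i++]; }
    while (j < highHalf.length) { array[k++] = highHalf[j++]; }
}
```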

In order to merge the sorted subarrays $array[p \cdots q]$ and $array[q+1 \cdots r]$ and have the result in $array[p \cdots r]$, we first need to make temporary arrays and copy $array[p \cdots q]$ and $array[q+1 \cdots r]$ into these temporary arrays. We can’t write over the positions in $array[p \cdots r]$ until we have the elements originally in $array[p \cdots q]$ and $array[q+1 \cdots r]$ safely copied.

The first order of business in the merge function, therefore, is to allocate two temporary arrays, `lowHalf` and `highHalf`, to copy all the elements in $array[p \cdots q]$ into `lowHalf`, and to copy all the elements in $array[q+1 \cdots r]$ into `highHalf`. How big should `lowHalf` be? The subarray $array[p \cdots q]$ contains $q-p+1$ elements. How about `highHalf`? The subarray $array[q+1 \cdots r]$ contains $r-q$ elements. (In JavaScript, we don’t have to give the size of an array when we create it, but since we do have to do that in many other programming languages, we often consider it when describing an algorithm.)
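This allocation-and-copy step can be sketched as follows. The helper name `copyHalves` is hypothetical, introduced here only so the step is runnable on its own; in the actual merge function these lines would simply appear at the top of the body:

```javascript
// Hypothetical helper: allocate lowHalf and highHalf and copy
// array[p..q] and array[q+1..r] into them.
function copyHalves(array, p, q, r) {
    var lowHalf = new Array(q - p + 1);   // array[p..q] has q-p+1 elements
    var highHalf = new Array(r - q);      // array[q+1..r] has r-q elements
    var k;
    for (k = p; k <= q; k++) {
        lowHalf[k - p] = array[k];
    }
    for (k = q + 1; k <= r; k++) {
        highHalf[k - q - 1] = array[k];
    }
    return {lowHalf: lowHalf, highHalf: highHalf};
}
```

Note that the loop indices are shifted by `p` and `q + 1` so that the copies start at position 0 of each temporary array.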

In our example array $[14, 7, 3, 12, 9, 11, 6, 2]$, here’s what things look like after we’ve recursively sorted $array[0 \cdots 3]$ and $array[4 \cdots 7]$ (so that `p = 0`, `q = 3`, and `r = 7`) and copied these subarrays into `lowHalf` and `highHalf`:
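That state can be reproduced with a short sketch. The sorted values below come from recursively sorting the two halves of the example array, as described above:

```javascript
// After the recursive calls, array[0..3] and array[4..7] are each sorted.
var array = [3, 7, 12, 14, 2, 6, 9, 11];
var p = 0, q = 3, r = 7;

var lowHalf = [];
var highHalf = [];
var k;
for (k = p; k <= q; k++) {
    lowHalf[k - p] = array[k];       // lowHalf becomes [3, 7, 12, 14]
}
for (k = q + 1; k <= r; k++) {
    highHalf[k - q - 1] = array[k];  // highHalf becomes [2, 6, 9, 11]
}
```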