Comparing Algorithms
In this lesson, we are going to learn how two or more algorithms may be compared.
Introduction
There are typically several different algorithms to solve a given computational problem. It is natural, then, to compare these alternatives. But how do we know if algorithm A is better than algorithm B?
Important criteria: Time and space
One important factor that determines the “quality” of an algorithm is the amount of time it takes to solve a given problem. If algorithm A takes less time to solve the same problem than algorithm B, then algorithm A is considered better.
Another important factor in comparing two algorithms is the amount of memory required to solve a given problem. The algorithm that requires less memory is considered better.
Comparing execution time
For the remainder of this lesson, we will focus on the first factor, i.e., execution time. How do we compare the execution time of two algorithms?
Experimental Evaluation
First, we can implement the two algorithms and run them on a computer while measuring the execution time. The algorithm with the shorter execution time wins. To be effective, this comparison must be made in a fair manner. Let’s try to punch holes into this idea:

An algorithm might take longer to run on an input of greater size, so the algorithms being compared must be tested on the same input size. But that’s not all. Because of conditional statements, even for a fixed input size, an algorithm’s running time may vary with the actual input given to it. This means the algorithms being compared must be tested on the same inputs. And since one algorithm may be at a disadvantage relative to another on a specific input, we would have to test the algorithms exhaustively on all possible input values, which is just not possible.

The algorithms being compared must first be implemented. What if the programmer comes up with a better implementation of one algorithm than of the other? What if the compiler optimizes one algorithm more aggressively than the other? A lot can compromise fairness at this stage.

The programs implementing the two algorithms must be tested in exactly the same hardware and software environment. Far-fetched as it may be, we could assign a single machine on which all scientists test their algorithms. Even then, task scheduling in modern-day operating systems involves a lot of randomness. What if the program corresponding to “the best” algorithm encounters an excessive number of hardware interrupts? It is impossible to guarantee the same hardware/software environment and thereby ensure a fair comparison.
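For concreteness, here is a minimal sketch of what such an experimental comparison might look like in Java. The two methods sortA and sortB are hypothetical stand-ins for the algorithms being compared (here, an insertion sort versus the library sort); this illustrates the measurement technique, not a fair benchmark, for exactly the reasons listed above.

```java
import java.util.Arrays;
import java.util.Random;

public class TimingDemo {
    // Stand-in "algorithm A": insertion sort.
    static void sortA(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
            a[j + 1] = key;
        }
    }

    // Stand-in "algorithm B": the library sort.
    static void sortB(int[] a) { Arrays.sort(a); }

    // Measure the wall-clock time of one run of a task.
    static long timeNanos(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int[] input = new Random(42).ints(10_000).toArray();
        // Clone the input so both algorithms see the same data.
        long tA = timeNanos(() -> sortA(input.clone()));
        long tB = timeNanos(() -> sortB(input.clone()));
        System.out.println("A: " + tA + " ns, B: " + tB + " ns");
    }
}
```

Even this careful setup (same input, same machine, same run) cannot rule out interference from the operating system, the JIT compiler, or caching, which is precisely the objection raised above.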
Analytical evaluation
The above list highlights some of the factors that make fair experimental evaluation of algorithms impossible. Instead, we are forced to make analytical/theoretical comparisons. The two key points from the previous discussion are that we must compare algorithms for the same input size and we must consider all possible inputs of the given size. Here is how it is done.
First, we assume a hypothetical computer on which some primitive operations are executed in a constant amount of time. We consider a specific input size, say n. Then, we count the number of primitive operations executed by an algorithm for a given input. The algorithm that results in fewer primitive operations is considered better.
What constitutes a primitive operation? You can think of these as simple operations that are typically implemented as processor instructions. These operations include an assignment to a variable or array index, reading from a variable or array index, comparing two values, performing arithmetic operations, making a function call, etc.
What is not considered a primitive operation? Consider a function call, for instance. When a function is called, all the statements in that function are executed. So, we can’t consider each function invocation as a single primitive operation. Similarly, displaying an entire array is not a primitive operation.
The number of times a conditional statement runs depends on the actual input values: sometimes a code block is executed, sometimes it isn’t. So, how do we account for conditional statements? We can adopt one of three strategies: best-case analysis, average-case analysis, and worst-case analysis.
Best-case analysis
In the best-case analysis, we consider the specific input that results in the execution of the fewest possible primitive operations. This gives us a lower bound on the execution time of the algorithm for a given input size.
Worst-case analysis
In the worst-case analysis, we consider the specific input that results in the execution of the maximum possible number of primitive operations. This gives us an upper bound on the execution time of the algorithm for a given input size.
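Continuing the linear-search sketch (again with an invented comparison counter), the worst case for an array of size n occurs when the target is absent, or sits at the last position, so all n elements are compared:

```java
public class WorstCase {
    // Number of element comparisons made by the last search.
    static int comparisons;

    static int linearSearch(int[] a, int target) {
        comparisons = 0;
        for (int i = 0; i < a.length; i++) {
            comparisons++;
            if (a[i] == target) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1, 5};
        linearSearch(data, 4);           // absent target: the worst case
        System.out.println(comparisons); // prints 5, i.e., n
    }
}
```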
Average-case analysis
In the average-case analysis, we try to determine the average number of primitive operations executed over all possible inputs of a given size. This is not as easy as it may sound. To compute the average-case running time of an algorithm, we must know the relative frequencies of all possible inputs of a given size, and then compute the weighted average of the number of primitive operations executed for each input. But how can we accurately predict the distribution of inputs? If the algorithm encounters a different distribution of inputs in the field, our analysis is of little use.
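For linear search, a common (assumed) distribution is that the target is equally likely to be at any of the n positions. The weighted average is then (1 + 2 + ... + n) / n = (n + 1) / 2 comparisons. The sketch below (class and helper names invented for illustration) checks this by averaging the comparison counts over every possible target position:

```java
public class AverageCase {
    // Comparisons a linear search makes to find (or miss) target in a.
    static int comparisonsFor(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) return i + 1; // i + 1 comparisons so far
        }
        return a.length; // absent target: all n elements compared
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30, 40, 50};
        int n = data.length;
        double total = 0;
        // Assume each position is equally likely to hold the target.
        for (int v : data) total += comparisonsFor(data, v);
        System.out.println(total / n); // prints 3.0, i.e., (n + 1) / 2
    }
}
```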
The verdict
The best-case analysis has limited value: if you deploy the algorithm, the best-case input may rarely occur in practice. The worst-case analysis is more useful because, whatever bound it gives you, you can be sure that the algorithm will never take more time than that. Unless otherwise specified, our analysis in this course will use the worst-case running time.
The running time of an algorithm computed in the aforementioned way is also known as its time complexity. Another term that you will often hear is an algorithm’s space complexity. The space complexity of an algorithm is the amount of additional or auxiliary memory space that the algorithm requires. This is memory space other than the actual input itself. You will see examples of evaluating the space complexity of an algorithm later on in this course.
Analyzing a simple Java program
Suppose that instead of an abstract algorithm, we are given Java code. Here’s how we can analyze the algorithm underlying the given program. Let’s count the number of primitive operations in the program given below:
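The program the lesson refers to is not included in this excerpt. As an illustrative stand-in (the class and method names are invented), here is a small program that sums an array, annotated with a rough count of its primitive operations:

```java
public class CountOps {
    static int sum(int[] a) {
        int total = 0;                        // 1 assignment
        for (int i = 0; i < a.length; i++) {  // 1 init; n+1 comparisons; n increments
            total = total + a[i];             // per iteration: 2 reads, 1 add, 1 assignment
        }
        return total;                         // 1 return
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3})); // prints 6
    }
}
```

Adding up the annotated counts gives a total that grows linearly with the input size n, which is exactly the kind of expression this style of analysis produces.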