# Time Complexity of Algorithms

Learn about fast vs. slow algorithms and the big-O notation to estimate the time complexity of algorithms.

## Fast vs. slow algorithms

Real computers require a certain amount of time to perform an operation such as addition, subtraction, or testing the conditions in a while loop. A supercomputer might take $10^{-10}$ seconds to perform an addition, while a calculator might take $10^{-5}$ seconds. Suppose we had a computer that took $10^{-10}$ seconds to perform an elementary operation such as addition, and we knew how many operations a particular algorithm would perform.

We could estimate the running time of the algorithm simply by taking the product of the number of operations and the time per operation. However, computers are constantly improving, leading to a decreasing time per operation, so our notion of the running time would soon be outdated.
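This estimate is a single multiplication. As a minimal sketch, assuming the $10^{-10}$-second machine above and a hypothetical algorithm that performs $10^{12}$ operations on some input:

```python
# Hypothetical figures: the machine from the text performs one elementary
# operation in 1e-10 seconds; assume an algorithm that needs 1e12 operations.
time_per_op = 1e-10   # seconds per elementary operation (assumed)
num_ops = 10**12      # operations the algorithm performs (assumed)

# Running time = (number of operations) x (time per operation)
running_time = num_ops * time_per_op
print(running_time)   # about 100 seconds
```

The moment the machine gets faster, `time_per_op` changes and the estimate goes stale, which is exactly why the operation count itself is the more durable measure.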

Rather than computing an algorithm’s running time on every computer, we rely on the total number of operations that the algorithm performs to describe its running time since this is an attribute of the algorithm and not an attribute of the computer we happen to be using.

Unfortunately, determining how many operations an algorithm will perform is not always easy. Suppose we know how to compute the number of basic operations that an algorithm performs. In that case, we have a basis for comparing it against a different algorithm that solves the same problem. Rather than tediously counting every multiplication and addition, we can perform this comparison by gaining a high-level understanding of the growth of each algorithm’s operation count as the size of the input increases.

Suppose an algorithm $A$ performs $n^2$ operations on an input of size $n$, and an algorithm $B$ solves the same problem in $3n + 2$ operations. Which algorithm, $A$ or $B$, is faster? Although $A$ may be faster than $B$ for some small $n$ (for example, for $n$ between $1$ and $3$), $B$ will become faster for large $n$ (for example, for all $n \geq 4$). See the figure below. Since $f(n) = n^2$ is, in some sense, a *faster-growing* function than $g(n) = n$ with respect to $n$, the constants $3$ and $2$ in $3n + 2$ do not affect the competition between the two algorithms for large $n$. We refer to $A$ as a *quadratic* algorithm and to $B$ as a *linear* algorithm and say that $A$ is less efficient than $B$ because it performs more operations to solve the same problem when $n$ is large. Therefore, we will often be somewhat imprecise when we count the operations of an algorithm—the behavior of algorithms on small inputs does not matter.
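The crossover between the two operation counts is easy to check directly. This sketch tabulates $n^2$ against $3n + 2$ for the first few values of $n$ (the function names `ops_A` and `ops_B` are ours, not part of any library):

```python
# Operation counts for the two hypothetical algorithms from the text:
# algorithm A performs n^2 operations, algorithm B performs 3n + 2.
def ops_A(n):
    return n * n

def ops_B(n):
    return 3 * n + 2

for n in range(1, 7):
    winner = "A" if ops_A(n) < ops_B(n) else "B"
    print(f"n={n}: A={ops_A(n):2d}  B={ops_B(n):2d}  faster: {winner}")
# A performs fewer operations for n = 1..3; B wins for every n >= 4.
```

For $n = 3$ the counts are $9$ vs. $11$ ($A$ ahead), but already at $n = 4$ they are $16$ vs. $14$, and from there the gap only widens in $B$'s favor.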
