Have a look at the pros and cons of the big-$O$ notation.

Using the big-$O$ notation to report algorithm running times has several advantages:

• In many cases, computer scientists mainly care about how the running time grows with the input size—the big-$O$ notation clarifies the growth rate.

• The big-$O$ notation simplifies the formulas for the running time:

  • $O(n^2)$ vs. $3n^2+5n+2$.
  • $O(n)$ vs. $n+\log_2n+7$.
  • $O(n\log n)$ vs. $4n\log_2n+5$.

  In particular, $\log_2n$, $\log_3n$, and $\log_an$ differ only by constant multipliers, so we don't need to specify the base of the logarithm in the big-$O$ notation.
• With the big-$O$ notation, we no longer need to worry about things like how fast the computer is, what the memory hierarchy looks like, or what compiler we used. Although these things have a big impact on the final running time, that impact is generally only a constant factor.
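The two simplifications above can be checked numerically. This is a small illustrative sketch (the variable names are our own): as $n$ grows, the lower-order terms of $3n^2+5n+2$ become negligible next to $n^2$, and the ratio of logarithms in two different bases is a constant, independent of $n$.

```python
import math

# Lower-order terms vanish relative to the leading term:
# (3n^2 + 5n + 2) / n^2 approaches the constant 3 as n grows.
for n in (10, 1_000, 100_000):
    exact = 3 * n**2 + 5 * n + 2
    print(n, exact / n**2)

# Logarithms in different bases differ by a constant multiplier:
# log2(n) / log3(n) equals log2(3) for every n, so big-O can omit the base.
n = 1_000_000
ratio = math.log2(n) / math.log(n, 3)
print(ratio, math.log(3, 2))  # the two values coincide
```

This is why $O(n^2)$ and $O(n\log n)$ are honest summaries of the longer formulas: everything they drop is a constant factor or a term that is eventually dwarfed.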

The big-$O$ notation loses some information, since it ignores constant multipliers and additive terms: if one of two algorithms is a hundred times faster than the other, both still get the same big-$O$ running-time estimate. In practice, though, a factor of 100 is a big deal when we want to make things fast.
Nevertheless, the big-$O$ notation is very useful, and we will use it throughout this course.
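To make the lost constant factor concrete, here is a hedged sketch with two made-up functions (not from the text). Both are $O(n)$, and big-$O$ cannot tell them apart, yet one does roughly a hundred times the work per element.

```python
def sum_fast(values):
    # One addition per element: about n operations in total.
    total = 0
    for v in values:
        total += v
    return total

def sum_slow(values):
    # Same O(n) classification, but ~100 additions per element.
    total = 0
    for v in values:
        for _ in range(100):
            total += v
        total -= 99 * v  # undo the redundant work so the result is identical
    return total

data = list(range(1_000))
assert sum_fast(data) == sum_slow(data) == sum(data)
```

Both functions return the same answer and share the estimate $O(n)$; only measurement, not the notation, reveals that one is about a hundred times slower.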