# Asymptotic Order of Growth

In this lesson, we go through the $\mathcal{O}$, $\Omega$, and $\Theta$ notations and asymptotic bounds.


## Introduction

In asymptotic analysis, we want to estimate the running time of an algorithm $A$ when it runs on very large inputs of length $n$. The most common approach is to analyze the worst-case running time $f(n)$: we estimate a bound on the largest possible running time of the algorithm as a function of the input size $n$ and examine how $f(n)$ grows with $n$. According to Sipser (2012), this is done “by considering only the highest order term of the expression for the running time of the algorithm, disregarding both the coefficient of that term and any lower order terms, because the highest order term dominates the other terms on large inputs” (Michael Sipser, *Introduction to the Theory of Computation*, Cengage Learning, 2012).

### Example 1

Let $A$ be an algorithm that takes at most $1.75 n^{2}+7.5 n+2$ steps. In asymptotic analysis, we say $A$ grows like $n^{2}$.
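To see this dominance concretely, the following sketch (function name and sample values are illustrative, not from the original) evaluates the ratio of the full step count to $n^{2}$ for growing $n$; the ratio approaches the leading coefficient $1.75$, so the lower-order terms become negligible.

```python
def steps(n):
    """Worst-case step count of algorithm A from Example 1."""
    return 1.75 * n**2 + 7.5 * n + 2

# The ratio steps(n) / n^2 tends to the leading coefficient 1.75
# as n grows, which is why we say A grows like n^2.
for n in [10, 100, 1000, 10_000]:
    print(n, steps(n) / n**2)
```

Running this shows the ratio shrinking toward $1.75$, which justifies discarding the $7.5 n + 2$ part for large inputs.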

## $\mathcal{O}, \Omega$, and $\Theta$ notations

### Asymptotic upper bounds

When we want to describe the worst-case running time $f(n)$ of a certain algorithm $A$ on an input size $n$, we determine its asymptotic upper bound. We use the big-$\mathcal{O}$ notation to give an upper bound on a function.

### Big-$\mathcal{O}$ notation

Let $f(n)$ be a function and $n$ its input size. We say $f(n)=\mathcal{O}(g(n))$ if there exist constants $c>0$ and $n_{0} \geq 0$ such that $0 \leq f(n) \leq c \cdot g(n)$ for all $n \geq n_{0}$, or formally:

$\exists c>0 \ \exists n_{0} \geq 0 \ \forall n \geq n_{0}: 0 \leq f(n) \leq c \cdot g(n)$

When $f(n)=\mathcal{O}(g(n))$, we say that $g(n)$ is an asymptotic upper bound for $f(n)$ (Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, *Introduction to Algorithms*, MIT Press, 2009; Jon Kleinberg and Éva Tardos, *Algorithm Design*, Pearson Education, 2013; Michael Sipser, *Introduction to the Theory of Computation*, Cengage Learning, 2012).
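As a hedged illustration of the definition, we can check it numerically for a concrete pair of functions. Here we take $f(n)=3n+10$ and $g(n)=n$ with the witnesses $c=4$ and $n_{0}=10$ (these particular functions and constants are chosen for this sketch, not taken from the text):

```python
def f(n):
    return 3 * n + 10

def g(n):
    return n

# Witnesses for f(n) = O(g(n)): with c = 4 and n0 = 10,
# the inequality 0 <= f(n) <= c * g(n) holds for all n >= n0,
# since 3n + 10 <= 4n exactly when n >= 10.
c, n0 = 4, 10
assert all(0 <= f(n) <= c * g(n) for n in range(n0, 10_001))
```

A finite check like this cannot prove the bound for *all* $n \geq n_{0}$, of course; the algebraic argument ($3n + 10 \leq 4n \Leftrightarrow n \geq 10$) is what makes the claim rigorous.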

The idea behind the Big-$\mathcal{O}$ notation is illustrated in Figure 1.

#### Example

We generalize Example 1 by considering an algorithm whose running time is $f(n)=p n^{2}+q n+r$ for positive constants $p$, $q$, and $r$. We claim that this function is $\mathcal{O}\left(n^{2}\right)$. This is true because for all $n \geq 1$, it holds that $q n \leq q n^{2}$ and $r \leq r n^{2}$. Thus, we can conclude that

$f(n)=p n^{2}+q n+r \leq p n^{2}+q n^{2}+r n^{2}=(p+q+r) n^{2}$

for all $n \geq 1$. This satisfies the definition of the Big-$\mathcal{O}$ notation, since $0 \leq f(n) \leq c \cdot g(n)$ with $c=p+q+r$, $g(n)=n^{2}$, and $n_{0}=1$.
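The chain of inequalities above can be spot-checked numerically. This sketch uses arbitrary positive constants $p$, $q$, $r$ (chosen purely for illustration) and verifies $f(n) \leq (p+q+r)\,n^{2}$ over a range of $n \geq 1$:

```python
# Arbitrary positive constants for illustration.
p, q, r = 2.0, 3.0, 5.0

def f(n):
    return p * n**2 + q * n + r

# Witness constant from the derivation: c = p + q + r, with n0 = 1.
c = p + q + r
assert all(f(n) <= c * n**2 for n in range(1, 1001))
```

Note that at $n=1$ the bound is tight ($f(1)=p+q+r=c \cdot 1^{2}$), matching the choice $n_{0}=1$ in the derivation.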

### Figure 1: The idea behind the Big-$\mathcal{O}$ notation
