# Computational Complexity


## Introduction

The analysis of algorithms studies how the resource requirements of algorithms scale, namely the amount of time and space (or memory) they use as the input size $n$ increases (Jon Kleinberg and Éva Tardos, *Algorithm Design*, Pearson New International Edition, Pearson Education Limited, 2013). Since the running time of an algorithm depends heavily on the hardware it runs on (processor speed, etc.), time complexity is not measured in physical time but by counting the steps the algorithm requires to solve a problem. Knowledge of the complexity is of paramount interest because even when a problem is theoretically computable, it may not be solvable in practice if the solution requires an exorbitant amount of time or space (Michael Sipser, *Introduction to the Theory of Computation*, Cengage Learning, 2012).

In general, the resource requirements of an algorithm grow with the size of its input, so the complexity of an algorithm is expressed as a function of the input size $n$ (measured in bits) (Michael Sipser, *Introduction to the Theory of Computation*, Cengage Learning, 2012).

## Running time

The running time or time complexity of an algorithm $A$ is the function $f: \mathbb{N} \rightarrow \mathbb{R}^{+}$, where $f(n)$ is the maximum number of steps that $A$ uses on any input of length $n$.
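As an illustrative sketch of this definition (the linear-search routine and the helper name `worst_case_steps` are hypothetical, not from the text), we can count the comparison steps an algorithm performs and take the maximum over inputs of length $n$. For linear search, the worst case is a missing target, which forces one comparison per element, so $f(n) = n$:

```python
def linear_search(items, target):
    """Return (index, steps), counting one step per comparison."""
    steps = 0
    for i, x in enumerate(items):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps  # target not found


def worst_case_steps(n):
    """f(n): the maximum number of steps linear_search uses on any
    input of length n. The worst case is an absent target, which
    forces a comparison against all n elements."""
    _, steps = linear_search(list(range(n)), -1)  # -1 is never present
    return steps


print([worst_case_steps(n) for n in [1, 10, 100]])  # → [1, 10, 100]
```

Note that $f$ is defined via the *maximum* over all inputs of length $n$: this worst-case convention gives a guarantee that holds for every input, not just typical ones.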

The order of growth of an algorithm's resource requirements, especially its running time, characterizes the algorithm's efficiency and thus allows a comparison with alternative algorithms. Since the exact running time of an algorithm is normally a very complex expression, its efficiency usually has to be estimated. Such estimation is usually done by studying the asymptotic efficiency of algorithms, i.e., we want to understand how the running time grows as the input size $n$ increases without bound.
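A small sketch can show why the order of growth, rather than the exact expression, is what matters asymptotically. The step-count functions below are hypothetical, chosen only for illustration: algorithm A has large constant factors but a lower order of growth than algorithm B, so A eventually wins as $n$ grows:

```python
import math


# Hypothetical step-count functions (illustrative only, not from any
# specific algorithm in the text):
def f_A(n):
    return 50 * n * math.log2(n) + 300  # e.g. a divide-and-conquer method


def f_B(n):
    return n ** 2                       # e.g. a simple nested-loop method


# For small n, A's large constants make it slower; beyond the crossover
# point, B's higher order of growth dominates.
for n in [10, 100, 1000, 100_000]:
    slower = "A" if f_A(n) > f_B(n) else "B"
    print(f"n={n}: slower is {slower}")
```

This is the rationale behind asymptotic notation: constant factors and lower-order terms are ignored because, for sufficiently large $n$, only the dominant term determines which algorithm is faster.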
