# Introduction to Time Complexity - Big O

An introduction to the efficiency of algorithms (Reading time: 2 minutes)

The **time complexity** of an algorithm describes how its runtime grows as the size of its input grows. The notation used to describe an algorithm's performance and complexity (the number of operations it performs) is known as **Big O**. By estimating the execution time (time complexity) and the memory (space complexity) an algorithm requires, we can decide whether that algorithm is suitable for a given scenario. The most common time complexities are:

- O(1) — constant
- O(log n) — logarithmic
- O(n) — linear
- O(n log n) — log-linear
- O(n²) — quadratic
- O(2ⁿ) — exponential
- O(n!) — factorial
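To make the idea of counting operations concrete, here is a minimal sketch (the function names are my own, not from the lesson) contrasting a linear-time and a constant-time solution to the same task, summing the first n integers:

```python
def sum_linear(n):
    # O(n): one addition per number, so the amount of
    # work grows in direct proportion to n.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_constant(n):
    # O(1): Gauss's closed-form formula n(n+1)/2, so the
    # amount of work stays the same no matter how large n is.
    return n * (n + 1) // 2
```

Both functions return the same answer, but doubling n doubles the work done by `sum_linear` while leaving `sum_constant` unchanged.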

The colors in the chart represent how good each time complexity is: green is excellent, red is horrible. A graph of the growth rates makes clear why the constant and logarithmic runtimes are the most efficient, and the quadratic and exponential runtimes the least efficient.
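The gap between these growth rates can also be seen numerically. This short sketch (my own illustration, with a hypothetical `growth` helper) prints a rough operation count for each common complexity at a few input sizes:

```python
import math

def growth(n):
    # Rough operation counts for the common complexities at
    # input size n; illustrative only, constants are ignored.
    return {
        "O(1)": 1,
        "O(log n)": round(math.log2(n)),
        "O(n)": n,
        "O(n log n)": round(n * math.log2(n)),
        "O(n^2)": n ** 2,
    }

for n in (10, 100, 1000):
    print(n, growth(n))
```

Even at n = 1000, a logarithmic algorithm does about 10 units of work while a quadratic one does a million, which is why the curves diverge so dramatically on the graph.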

When analyzing the runtime of an algorithm, there are **worst**, **average**, and **best** case scenarios, each with its own time complexity. Usually the worst case matters most, because we want a guarantee on how long the algorithm could take to execute. However, the true worst case can be hard to pin down, so in practice we often reason about a situation that is bad, but not necessarily the absolute worst. This can lead to an **optimistic result**, where the actual situation turns out far worse than we expected, or a **pessimistic result**, where our expectation is far worse than anything that would ever actually happen!
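Linear search is a simple way to see best and worst cases side by side; the sketch below is my own example, not from the lesson:

```python
def linear_search(items, target):
    # Scans items left to right; returns the index of target,
    # or -1 if target is absent.
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

# Best case:  target is the first element -> 1 comparison, O(1).
# Worst case: target is last or missing   -> n comparisons, O(n).
```

The same algorithm thus has different complexities depending on the scenario, which is exactly why we state which case we are analyzing.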

In the next few lessons, I will discuss the different kinds of complexities in detail.