
# Master Algorithms with Python for Coding Interviews

Jun 16, 2020 - 12 min read
Aaron Xie

Now more than ever, it’s essential to have a good understanding of algorithms to succeed in coding interviews. Unfortunately, many developers are taught algorithms but not how or why they work.

Today, we will go over the fundamentals of algorithms in Python and walk through some of the most useful algorithms for coding interviews.

What you will learn today:

• Why use Python for algorithms?
• Algorithmic paradigms
• Understanding time and space complexity
• Sorting algorithms

## Why use Python for algorithms?

Python is a suitable programming language for learning about data structures and algorithms. For one, it’s excellent for algorithmic design, as it’s used extensively in data science and machine learning technologies.

Furthermore, it is a high-level programming language that abstracts away much of the low-level implementation detail, so your pseudocode will look very similar to your Python code.

It also has relatively little syntactic sugar compared to other languages and requires minimal setup to run. This is very helpful for those just beginning to learn data structures and algorithms, since low-level implementation details would otherwise force you to learn topics unrelated to data structures and algorithms.

If you’re new to Python, I recommend you check out our Ace the Python Coding Interview learning path to be guided through 7 curated modules.

## Algorithmic paradigms

Algorithmic paradigms are strategies for solving a problem efficiently. Today, we will talk about the two most common algorithmic paradigms: brute force and divide & conquer. The two other more advanced paradigms are greedy algorithms and dynamic programming. If you want to learn more about these, feel free to check out our course Algorithms for Coding Interviews in Python.

### Brute force

Brute force algorithms solve a problem exhaustively: they rely on pure computing power to try every possibility until a solution is found, rather than using a more advanced strategy to improve overall efficiency.

For example, imagine you are trying to figure out a four-digit password combination. The brute force approach would test every possible combination of four-digit numbers from 0000 to 9999. Linear search, a method to find a target value in a given list, is an example of the brute force method. The search algorithm will traverse through the array and check each element until a match is found.
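The linear search described above can be sketched in a few lines of Python (the function name is our own, for illustration):

```python
def linear_search(arr, target):
    """Brute force search: check each element until a match is found."""
    for i, value in enumerate(arr):
        if value == target:
            return i  # index of the first match
    return -1  # target is not in the list
```

Every element is checked in turn, so in the worst case the entire list is traversed before an answer is found.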

• Advantages: The advantage of the brute force method is that it is guaranteed to eventually find the solution. It’s also straightforward and easy to implement compared to more complex algorithmic paradigms.

• Disadvantages: Though it’s easy to implement, it’s the most inefficient solution. It’s also difficult to improve performance and find shortcuts with this strategy.

### Divide and conquer

Divide and conquer is an algorithmic paradigm that solves a problem by recursively dividing it into smaller subproblems until each reaches an “atomic” level, small enough to be solved directly. Once the subproblems are small enough, they are each solved individually. Finally, the algorithm repeatedly combines the solved subsolutions into a solution for the original problem.

• Advantages: It’s very efficient and powerful when dealing with general case solutions where the problem can easily be divided into subproblems. It also is efficient in terms of memory usage, as dividing the problems into atomic subproblems allows the problem to be solved in the cache itself.

• Disadvantages: Because it is a recursive approach, it is oftentimes slow. There’s also a possibility that the approach duplicates subproblems leading to large recursive stacks, which will consume extra space.
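As a minimal sketch of the paradigm (an illustrative example, not a specific algorithm from this article), here is a divide-and-conquer way to find the maximum of a list:

```python
def find_max(arr):
    """Divide and conquer: split, solve each half, combine."""
    # Base case: an "atomic" subproblem of one element
    if len(arr) == 1:
        return arr[0]
    # Divide: split the list into two halves
    mid = len(arr) // 2
    # Conquer and combine: solve each half, keep the larger result
    return max(find_max(arr[:mid]), find_max(arr[mid:]))
```

Note the recursive structure: each call spawns two smaller calls, which is exactly where the large recursive stacks mentioned above can come from.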

## Understanding time and space complexity

### Big O notation

Big O notation is a form of asymptotic analysis that describes how an algorithm’s runtime grows as its input grows. It gives developers a simple language to quickly describe the performance or complexity of their algorithms.

Because it’s challenging to identify the exact runtime of an algorithm (since it’s based on the computer hardware itself), we describe an algorithm’s runtime based on how quickly it grows. Big O describes the runtime, or execution time, of an algorithm relative to the input N as the input increases. It’s also important to note that we typically use the worst-case scenarios when using Big O notation. In the next section, you will learn how to use Big O notation to describe an algorithm’s time complexity.

### Time complexity

$O(1)$

```python
def getFirst(arr):
    return arr[0]
```

An algorithm runs in $O(1)$ time if it takes a constant amount of time no matter its input. From the code above, you can see that the function always executes in the same time, whether the array holds one element or one thousand elements, because it only requires one “step.”

$O(N)$

```python
def example(arr):
    for i in arr:
        print(i)
```

An algorithm runs in $O(N)$ time if its runtime increases linearly relative to its input N. In other words, the time the algorithm takes to run is directly proportional to the input size. As seen in the code above, the function takes one “step” if the array has one element, and one thousand “steps” if the array has one thousand elements.

$O(N^2)$

```python
def example(arr):
    for x in arr:
        for y in arr:
            print(x, y)
```

An algorithm runs in $O(N^2)$ time if its runtime is directly proportional to the square of the input size. For example, this runtime occurs when an algorithm contains a nested loop such that the outer loop executes N times, and the inner loop will run N times for every time the outer loop executes once, such that the runtime is $N^2$.

Some rules to remember:

• Ignore constants: When using Big O notation, you always drop the constants. So, even if the runtime complexity is $O(2N)$, we call it $O(N)$.

• Drop less dominant terms: You only keep the most dominant term when talking about Big O. For example, $O(N^3 + 50N + 17)$ is simply $O(N^3)$. Here’s the rule of thumb: $O(1)$ < $O(\log N)$ < $O(N)$ < $O(N \log N)$ < $O(N^2)$ < $O(2^N)$ < $O(N!)$.
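To see why dropping the less dominant term is safe, consider a hypothetical function that performs both a single loop and a nested loop. Counting its steps (the helper below is our own, for illustration):

```python
def count_steps(arr):
    """Count the 'steps' of a function with an O(N) and an O(N^2) part."""
    steps = 0
    for x in arr:          # N steps -> the O(N) term
        steps += 1
    for x in arr:          # N * N steps -> the O(N^2) term
        for y in arr:
            steps += 1
    return steps           # N + N^2 total
```

For an input of 10 elements this performs 110 steps, and for 1,000 elements it performs 1,001,000 steps; the $N^2$ term dominates, so we simply call the whole function $O(N^2)$.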

Want to read more about Big O notation? Check out our article What is Big-O Notation?

## Sorting algorithms

### Bubble sort

Bubble sort is a sorting algorithm that swaps adjacent elements if they are in the incorrect order. The sorting algorithm will iterate through a list of elements until no more swaps occur, meaning that all the elements are in the correct order.

Let’s take a look at an example with the following array:

The algorithm will begin at index 0 with element 3 and traverse through the array, comparing index $i$ with index $i+1$. At index 1, the algorithm will notice that 23 is greater than 7 and swap the two.
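The swap-until-no-swaps behavior described above can be sketched in Python as follows (a minimal in-place version; the function name is our own):

```python
def bubble_sort(arr):
    """Repeatedly swap adjacent out-of-order elements until a full
    pass makes no swaps, meaning the list is sorted."""
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(arr) - 1):
            if arr[i] > arr[i + 1]:
                # Adjacent elements are in the wrong order: swap them
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
    return arr
```

Running it on a list beginning with the elements from the walkthrough, such as `[3, 23, 7, 1]`, performs exactly the first swap described above (23 and 7) on its way to the sorted result.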