Many shudder at the thought of a coding interview. It can be stressful, grueling, and challenging. Oftentimes, it can be a struggle simply to know what topics to prepare.
Today, you will be introduced to the primary data structures and algorithms that are tested in a coding interview. After reading this article, you should have a good idea of what you need to prepare for to land your dream job.
The coding interview tests for your problem-solving abilities and understanding of computer science concepts. Usually, you are given about 30 - 45 minutes to solve one complex problem.
This is where data structures and algorithms come in. These interviews will test you on topics such as linked lists, queues, sorting, searching, and much more, so it’s crucial to prepare.
Not only do companies want to test your technical knowledge, but they also want to evaluate your problem-solving skills.
If you get the job, you will often face bugs that you need to fix, and companies want to make sure that you can overcome these obstacles.
Furthermore, you will often use specific data structures and algorithms to optimize your code so that it runs as efficiently as possible.
Big O notation is an asymptotic analysis that describes how long an algorithm takes to run as its input grows. In other words, it's used to talk about how efficient or complex an algorithm is.
Big O describes the execution time, or run time, of an algorithm relative to its input as the input increases. Though we can analyze an algorithm’s efficiency using average-case and best-case, we typically use worst-case with Big O notation.
Today, you will learn about time complexity, but take note that space complexity is also an important concept to understand. Runtime complexity can be a difficult concept to grasp at first, so let’s look at some examples.
O(1) describes an algorithm that always runs in constant time, no matter its input. The function will execute in the same amount of time whether the array holds a thousand integers or one integer because it only takes one "step".
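A minimal sketch of a constant-time function (the class and method names are illustrative):

```java
public class ConstantTime {
    // Returns the first element of the array: one "step",
    // regardless of whether the array holds 1 or 1,000 integers.
    public static int getFirst(int[] arr) {
        return arr[0];
    }
}
```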
O(n) describes an algorithm whose runtime will increase linearly relative to the size of the input, n. Such a function will take n steps for n values stored in the array. For example, if the array length is 1, the function will take 1 "step" to run, whereas if the array length is 1000, the function will take 1000 "steps" to run.
In the example provided, the array length is the input size, and the number of iterations is the runtime.
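A minimal sketch of a linear-time function (names are illustrative):

```java
public class LinearTime {
    // Sums every element: the loop body runs once per element,
    // so the number of iterations grows linearly with arr.length.
    public static int sum(int[] arr) {
        int total = 0;
        for (int value : arr) {
            total += value;
        }
        return total;
    }
}
```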
O(n²) describes a function whose runtime is proportional to the square of the input size. This runtime occurs when there is a nested loop such that the outer loop runs n times and the inner loop runs n times for each iteration of the outer loop, so the function takes n × n = n² steps.
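A minimal sketch of a quadratic-time function (names are illustrative):

```java
public class QuadraticTime {
    // Counts ordered pairs of elements: the inner loop runs n times
    // for each of the outer loop's n iterations -- n * n steps total.
    public static int countPairs(int[] arr) {
        int count = 0;
        for (int i = 0; i < arr.length; i++) {
            for (int j = 0; j < arr.length; j++) {
                count++;
            }
        }
        return count;
    }
}
```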
Ignore constants: when using Big O notation, you always drop the constants. So, even if the runtime complexity is O(2n), we call it O(n).
Drop less dominant terms: you only keep the most dominant term when talking Big O. For example, O(n² + n) is simply O(n²). Here's the rule of thumb: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!).
Since each popular algorithm or structure has a consistent complexity, it's helpful to keep a Big-O cheat sheet for ease of access.
When reviewing data structures and algorithms, don't stop at time complexity. Space complexity determines whether your solution fits in memory at scale, and it often drives production feasibility (think: large hash maps vs streaming). Also learn amortized analysis: structures like dynamic arrays and hash tables sometimes do expensive operations (resize/rehash), but average out to O(1) per operation.
Finally, remember that Big-O ignores constant factors and hardware effects. Arrays are contiguous and cache-friendly; pointer-heavy lists and trees can suffer due to cache misses. Two solutions with the same asymptotic complexity can behave very differently on real data. In interviews, briefly state time and space, call out any amortized guarantees, and, if relevant, mention cache behavior to show practical awareness.
In simplest terms, data structures are ways to organize and store data in a computer so that it can be accessed and modified efficiently. You will learn the basics of the important data structures that are tested in the coding interview.
An array is a collection of items of the same variable type that are stored sequentially in memory. It’s one of the most popular and simple data structures and often used to implement other data structures.
Each item in an array is indexed starting with 0, and each item is known as an element. It's also important to note that in Java, you cannot change the size of an array. For dynamic sizes, it's recommended to use an ArrayList.
In the example above:
The length of the array is 5
The value at index 3 is 1
In Java, all values in this array must be an integer type
Initializing an array in Java:
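A minimal sketch (the exact values are illustrative, chosen to match the description above: length 5, with the value 1 at index 3):

```java
public class ArrayExample {
    // Declares a fixed-size array; the length is set at creation
    // and cannot change afterward.
    public static int[] makeArray() {
        int[] arr = {9, 7, 3, 1, 5}; // indexing starts at 0
        return arr;
    }
}
```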
Common interview questions:
Find first non-repeating integer in an array
Rearrange array in decreasing order
Right rotate the array by one index
Maximum sum subarray using divide and conquer
A linked list is a linear sequence of nodes that are linked together. Each node contains a value and a pointer to the next node in the list. Unlike arrays, linked lists do not have indexes, so you must start at the first node and traverse through each node until you get to the nth node. At the end, the last node will point to a null value.
The first node is called the head, and the last node is called the tail. Below visualizes a singly linked list.
A linked list has a number of useful applications:
Implement stacks, queues, and hash tables
Create directories
Polynomial representation and arithmetic operations
Dynamic memory allocation
Basic implementation of a linked list in Java:
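A minimal sketch of a singly linked list (the class name is illustrative):

```java
// Each node stores a value and a pointer to the next node;
// the last node's next pointer is null.
public class SinglyLinkedList {
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    Node head; // first node in the list

    // Append a new node at the tail: O(n) without a tail pointer.
    public void add(int value) {
        Node node = new Node(value);
        if (head == null) { head = node; return; }
        Node current = head;
        while (current.next != null) current = current.next;
        current.next = node;
    }

    // Traverse from the head to count the nodes.
    public int size() {
        int count = 0;
        for (Node n = head; n != null; n = n.next) count++;
        return count;
    }
}
```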
Common interview questions:
Reverse a linked list
Find the middle value in a linked list
Remove loop in a linked list
Stacks are linear data structures in a LIFO (last-in, first-out) order. Now, what does that mean? Imagine a stack of plates. The last plate that you put on top of the stack is the first one you take out. Stacks work that way: the last value you add is the first one you remove.
Think of a stack like a collection of items that we can add to and remove from. The most common functions in a stack are push, pop, isEmpty, and peek.
A stack has a number of useful applications:
Backtracking to a previous state
Expression evaluation and conversion
Basic implementation of a stack:
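A minimal sketch backed by an int array (the fixed capacity is illustrative), with the push, pop, peek, and isEmpty operations mentioned above:

```java
public class IntStack {
    private int[] items = new int[100]; // illustrative fixed capacity
    private int top = -1;               // index of the last pushed element

    public void push(int value) { items[++top] = value; }
    public int pop() { return items[top--]; } // remove the most recent value
    public int peek() { return items[top]; }  // look without removing
    public boolean isEmpty() { return top == -1; }
}
```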
Common interview questions:
Use a stack to check for balanced parentheses
Implement two stacks in an array
Next greater element using a stack
Queues are very similar to stacks in that they both are linear data structures with dynamic size. However, queues are FIFO (first-in, first-out) data structures. To visualize this data structure, imagine you are lining up for a roller coaster. The first people that line up get to leave the line for the ride.
In this data structure, elements enter from the “back” and leave from the “front.” The standard functions in a queue are enqueue, dequeue, rear, front, and isFull.
A queue has a number of useful applications:
When a resource is shared by multiple consumers
Create directories
When data is transferring asynchronously between two resources
Basic implementation of a queue:
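A minimal sketch using a circular array (the class name is illustrative), covering the enqueue, dequeue, and isFull operations mentioned above:

```java
public class IntQueue {
    private int[] items;
    private int front = 0, size = 0;

    public IntQueue(int capacity) { items = new int[capacity]; }

    public boolean isFull() { return size == items.length; }
    public boolean isEmpty() { return size == 0; }

    // Elements enter at the "back"...
    public void enqueue(int value) {
        items[(front + size) % items.length] = value;
        size++;
    }

    // ...and leave from the "front" (FIFO).
    public int dequeue() {
        int value = items[front];
        front = (front + 1) % items.length;
        size--;
        return value;
    }
}
```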
Common interview questions:
Reverse the first k elements of a queue
Generate binary numbers from 1 to n using a queue
A graph is a collection of vertices (nodes) that are connected by edges, creating a network.
In the example above, the set of vertices are (12, 2, 4, 18, 23) and the edges are (12-2, 12-4, 2-4, 4-18, 4-23, 18-23, 2-18).
Graphs are very versatile data structures that can solve a plethora of real-world problems. Graphs are often used in social networks like LinkedIn or Facebook. With GraphQL on the rise, data is being organized as graphs, or networks.
Basic implementation of a graph:
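A minimal sketch of an undirected graph using adjacency lists (the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// adj.get(v) holds every vertex connected to v by an edge.
public class Graph {
    private List<List<Integer>> adj = new ArrayList<>();

    public Graph(int vertices) {
        for (int i = 0; i < vertices; i++) adj.add(new ArrayList<>());
    }

    public void addEdge(int u, int v) {
        adj.get(u).add(v);
        adj.get(v).add(u); // undirected: store both directions
    }

    public List<Integer> neighbors(int v) { return adj.get(v); }
}
```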
Common interview questions:
Find shortest path between two vertices
Check if a path exists between two vertices
Find “mother vertex” in a graph
What’s hashing? Before we dive into hash tables, it’s essential to understand what hashing is.
Hashing is the process of converting a key into a unique index that identifies where an object is stored. Each object is identified using a key-value pair, and a collection of such objects is known as a dictionary.
A hash table is implemented by storing elements in an array and identifying them through a key. A hash function takes in a key and returns an index for which the value is stored.
So, whenever you input the key into the hash function, it will always return the same index, which will identify the associated element. Furthermore, if the hash function ever receives a unique key that returns an already used index, you can create a chain of elements with a linked list.
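A minimal sketch of a hash table with separate chaining, as described above (names and the capacity are illustrative):

```java
import java.util.LinkedList;

// The hash function maps a key to a bucket index; keys that
// collide on the same index share a chain (a linked list).
public class SimpleHashTable {
    static class Entry {
        String key;
        int value;
        Entry(String key, int value) { this.key = key; this.value = value; }
    }

    private LinkedList<Entry>[] buckets;

    @SuppressWarnings("unchecked")
    public SimpleHashTable(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
    }

    // The same key always hashes to the same bucket index.
    private int indexFor(String key) {
        return Math.abs(key.hashCode()) % buckets.length;
    }

    public void put(String key, int value) {
        for (Entry e : buckets[indexFor(key)]) {
            if (e.key.equals(key)) { e.value = value; return; }
        }
        buckets[indexFor(key)].add(new Entry(key, value));
    }

    public Integer get(String key) {
        for (Entry e : buckets[indexFor(key)]) {
            if (e.key.equals(key)) return e.value;
        }
        return null; // key not present
    }
}
```

In practice, Java's built-in `HashMap` provides this behavior; the sketch just shows the mechanism.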
A hash table has a number of useful applications:
When a resource is shared by multiple consumers
Password verification
Linking file name and path
Common interview questions:
Find symmetric pairs in an array
Union and intersection of lists using hashing
A binary search tree is a binary tree data structure made up of nodes. The nodes are arranged with the following properties:
The left subnode always contains values less than the parent node.
The right subnode always contains values greater than the parent node.
Both subnodes will also be binary search trees.
Binary search trees are used in many search applications and also to determine which objects need to be rendered in a 3D game. This data structure is widely used in engineering projects because hierarchical data is so common.
Common operations:
Search - searches for an element
Insert - inserts an element in the tree
Pre-order traversal - traverses the tree in a pre-order way
In-order traversal - traverses the tree in an in-order way
Post-order traversal - traverses the tree in a post-order way
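A minimal sketch of insert and in-order traversal (names are illustrative); because smaller values go left and larger values go right, an in-order traversal visits the values in sorted order:

```java
import java.util.ArrayList;
import java.util.List;

public class BinarySearchTree {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    Node root;

    public void insert(int value) { root = insert(root, value); }

    private Node insert(Node node, int value) {
        if (node == null) return new Node(value);
        if (value < node.value) node.left = insert(node.left, value);
        else node.right = insert(node.right, value);
        return node;
    }

    // In-order: left subtree, then the node, then the right subtree.
    public List<Integer> inOrder() {
        List<Integer> out = new ArrayList<>();
        inOrder(root, out);
        return out;
    }

    private void inOrder(Node node, List<Integer> out) {
        if (node == null) return;
        inOrder(node.left, out);
        out.add(node.value);
        inOrder(node.right, out);
    }
}
```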
Common interview questions:
Find kth maximum value in a binary search tree
Find the minimum value in a binary search tree
Traverse a given directory using breadth-first search
A heap maintains a partial order so that the min (or max) element is obtainable in O(1), and insert/pop take O(log N).
Use cases:
Dijkstra’s algorithm, merging K sorted lists, streaming medians (two-heap pattern), schedulers, and rate-limiters.
Common question patterns:
“Find K largest/smallest,” “top-K frequent,” “merge K sorted arrays.”
Typical pitfalls:
Forgetting to bound heap size to K or using O(N log N) sort when an O(N log K) heap solution exists.
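The O(N log K) pattern can be sketched with Java's built-in `PriorityQueue` (a min-heap), keeping the heap bounded to K elements (the class name is illustrative):

```java
import java.util.PriorityQueue;

public class TopK {
    // Returns the Kth largest element by keeping a min-heap of
    // size k: O(N log K) instead of sorting in O(N log N).
    public static int kthLargest(int[] nums, int k) {
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        for (int num : nums) {
            minHeap.offer(num);
            if (minHeap.size() > k) {
                minHeap.poll(); // evict the smallest; heap keeps the k largest
            }
        }
        return minHeap.peek(); // the smallest of the k largest
    }
}
```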
Union-find efficiently tracks connectivity among elements with near-constant amortized time using union by rank/size and path compression.
Use cases:
Detect cycles in undirected graphs, count connected components, Kruskal’s MST, and grouping problems (accounts merge, provinces, islands).
Watch for off-by-one id mapping and always compress paths in find() to achieve near O(α(N)) performance.
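A minimal union-find sketch with path compression and union by size, as described above:

```java
public class UnionFind {
    private int[] parent, size;

    public UnionFind(int n) {
        parent = new int[n];
        size = new int[n];
        for (int i = 0; i < n; i++) { parent[i] = i; size[i] = 1; }
    }

    // Path compression: point visited nodes directly at the root.
    public int find(int x) {
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    // Union by size: attach the smaller tree under the larger one.
    public void union(int a, int b) {
        int ra = find(a), rb = find(b);
        if (ra == rb) return;
        if (size[ra] < size[rb]) { int t = ra; ra = rb; rb = t; }
        parent[rb] = ra;
        size[ra] += size[rb];
    }

    public boolean connected(int a, int b) { return find(a) == find(b); }
}
```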
Tries store strings by character path. They support prefix queries in O(L) where L is the word length, independent of the number of stored words.
Use cases:
Autocomplete, spell-check, word search on grids (prune by prefix), longest common prefix, and IP routing (radix tries).
Typical trade-off:
Fast lookups vs higher memory; compress with a radix tree or store counts/terminal flags to enable prefix analytics.
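A minimal trie sketch over lowercase a-z (each node has up to 26 children, one per character, plus a terminal flag marking word ends):

```java
public class Trie {
    static class Node {
        Node[] children = new Node[26];
        boolean isWord; // terminal flag: a stored word ends here
    }

    private Node root = new Node();

    public void insert(String word) {
        Node node = root;
        for (char c : word.toCharArray()) {
            int i = c - 'a';
            if (node.children[i] == null) node.children[i] = new Node();
            node = node.children[i];
        }
        node.isWord = true;
    }

    // O(L) in the prefix length, independent of how many words are stored.
    public boolean startsWith(String prefix) {
        Node node = root;
        for (char c : prefix.toCharArray()) {
            node = node.children[c - 'a'];
            if (node == null) return false;
        }
        return true;
    }
}
```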
Get hands-on with coding interviews.
With thousands of potential questions to account for, preparing for the coding interview can feel like an impossible challenge. Yet with a strategic approach, coding interview prep doesn’t have to take more than a few weeks. Stop drilling endless sets of practice problems, and prepare more efficiently by learning coding interview patterns. This course teaches you the underlying patterns behind common coding interview questions. By learning these essential patterns, you will be able to unpack and answer any problem the right way — just by assessing the problem statement. This approach was created by FAANG hiring managers to help you prepare for the typical rounds of interviews at major tech companies like Apple, Google, Meta, Microsoft, and Amazon. Before long, you will have the skills you need to unlock even the most challenging questions, grok the coding interview, and level up your career with confidence. This course is also available in JavaScript, Python, Go, and C++ — with more coming soon!
Recursion is the practice in which a function calls itself directly or indirectly. The function that calls itself is known as the recursive function. While recursion is often thought of as an algorithm, it may help to understand it as a methodology for solving a problem.
So why is recursion useful? Or what even is it? Let’s look at how we can use recursion to calculate factorials as an example.
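A minimal sketch of the factorial function in Java (the class and method names are illustrative):

```java
public class Factorial {
    // Recursive case: n! = n * (n - 1)!
    // Base case: when n reaches 1, return 1 to stop the recursion.
    public static int factorial(int n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }
}
```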
In the example above, the function starts at a number n. When the function is called, it calls itself with n - 1. Say the value is 4; the function will compute 4 × 3 × 2 × 1.
Once n reaches 1, the function will return 1, and we get 4! equal to 4 × 3 × 2 × 1, which is 24.
Here, you can see the power of recursion. It’s a widely used practice of solving a complex problem by breaking it into smaller instances of the problem until we can solve it. Using recursion, you can simplify a lot of complex problems that would be difficult otherwise.
Bubble sort is a simple sorting algorithm that swaps adjacent elements if they are in an incorrect order. The algorithm will iterate through an array multiple times until the elements are in the correct order.
Say, we have an array as seen below.
As the algorithm scans the array from left to right for the first iteration, starting at index 0, it compares the value at each index i with the value at index i + 1. At index 1, it will see that 11 is greater than 6 and swap the two.
As the algorithm continues scanning for the first iteration, it will see that 13 is greater than 10 and swap the two.
Next, it will go through the array for its second iteration. It will swap the values in index 2 and 3 when it sees that 11 is greater than 10.
The algorithm will scan the array for the third iteration, and because it does not need to make any more swaps for the third iteration, the algorithm will end.
As you can see, bubble sort performs poorly when working with many elements, making it primarily an educational tool. It has a runtime complexity of O(n²).
Implementing bubble sort:
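A minimal sketch, using an early-exit flag so the algorithm stops once a full pass makes no swaps (as in the walkthrough above):

```java
public class BubbleSort {
    // Repeatedly sweep the array, swapping adjacent out-of-order pairs.
    public static void sort(int[] arr) {
        boolean swapped = true;
        while (swapped) {
            swapped = false;
            for (int i = 0; i < arr.length - 1; i++) {
                if (arr[i] > arr[i + 1]) {
                    int temp = arr[i];
                    arr[i] = arr[i + 1];
                    arr[i + 1] = temp;
                    swapped = true;
                }
            }
        }
    }
}
```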
Selection sort is an algorithm that splits a collection of elements into sorted and unsorted. During each iteration, the algorithm finds the smallest element in the unsorted group and moves it to the end of the sorted group.
Let’s look at an example:
At first, all elements are unsorted. For the first iteration, the algorithm will go through each element and will identify 4 as the smallest value. The algorithm will swap 11, the first element in the unsorted group, with 4, the lowest element in the unsorted group.
Now, the sorted group is index 0, and the unsorted group is index 1 to index 3. For the second iteration, the algorithm will start at index 1 and scan the array, identifying 6 as the lowest value in the unsorted group. It will swap 11 and 6.
The sorted group is now index 0 to index 1, and the unsorted group is index 2 to index 3. For the third iteration, the algorithm will start at index 2, and find 11 as the lowest value. Because 11 is already in the right index, it will not move. With this, the algorithm ends.
Similar to bubble sort, selection sort is often a slow algorithm. It also has a runtime complexity of O(n²).
Implementing selection sort:
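A minimal sketch of the steps described above:

```java
public class SelectionSort {
    // Grow a sorted prefix: each pass finds the smallest element
    // in the unsorted suffix and swaps it into position i.
    public static void sort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[j] < arr[minIndex]) minIndex = j;
            }
            int temp = arr[i];
            arr[i] = arr[minIndex];
            arr[minIndex] = temp;
        }
    }
}
```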
Insertion sort is a simple sorting algorithm that builds the final array by sorting an element one at a time. How does it work?
Examines each element and compares it to the sorted elements on its left
Inserts the element in the correct position among the sorted elements
Let’s look at an example.
The algorithm starts at index 0 with the value 11. Because there are no elements to the left of 11, it stays where it is. Now, onto index 1. The value to the left of it is 11, which means we swap 11 and 4.
Again, the algorithm looks to the left of 4. Because there is no element to the left of 4, it stays where it is. Next, onto index 2. The element with the value 6 looks to the left. Because it's less than 11, the two switch.
The element 6 looks to the left again, but because 4 is less than 6, it stays where it is. Next, we go to the last element. The algorithm looks to the left, but because 11 is less than 13, it stays where it is. Now, the algorithm is done.
Insertion sort is almost always more efficient than bubble sort and selection sort, so it's used more often when working with a small number of elements. Similar to the two other sorting algorithms, insertion sort also has a quadratic runtime of O(n²).
Implementing insertion sort:
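A minimal sketch of the steps described above:

```java
public class InsertionSort {
    // For each element, shift the larger sorted elements on its left
    // one slot to the right, then insert it into the gap.
    public static void sort(int[] arr) {
        for (int i = 1; i < arr.length; i++) {
            int current = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > current) {
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = current;
        }
    }
}
```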
Binary search is one of the most efficient algorithms for finding an element in a sorted collection. The algorithm works by comparing the middle element of an array or list to the target element. If the values are the same, the index of the element will be returned. If not, the list will be cut in half.
If the target value were less than the middle value, the new list would be the left half. If the target value were greater than the middle value, the new list would be the right half.
This process continues where you keep splitting a list and searching one of the halves until the search algorithm finds the target value and returns the position. The runtime complexity of this algorithm is O(log n). It's important to note that binary search only works if the list is already sorted.
To visualize a binary search, let's say that you have a sorted array with ten elements, and you are searching for the value 33.
The middle value of this array is 16, and the algorithm compares it to 33. 33 is greater than 16, so the algorithm splits the array and searches in the second half.
The new middle value is 28. Because 33 is greater than 28, the algorithm searches in the second half of the array.
After the array is split once again into the right half, the new middle value is 33. The algorithm sees that the middle value and the target value are the same and returns the position of the element.
Implementing binary search:
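A minimal sketch; the example array below is illustrative, chosen so the middle values match the walkthrough above (16, then 28, then 33):

```java
public class BinarySearch {
    // Returns the index of target in a sorted array, or -1 if absent.
    // Each comparison halves the search range: O(log n).
    public static int search(int[] sorted, int target) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // avoids int overflow
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) low = mid + 1;  // search the right half
            else high = mid - 1;                      // search the left half
        }
        return -1; // not found
    }
}
```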
Memorizing every trick is impossible. Instead, map problems to patterns:
Two pointers / fast-slow (cycle detection, remove duplicates, window compaction)
Sliding window (longest/shortest substring with constraints, anagram windows)
Heap + hash map (top-K, frequent elements, streaming)
Union-find (connectivity, grouping)
Monotonic stack/queue (next greater element, sliding window max)
Prefix sums / difference arrays (range queries/updates)
Backtracking + pruning (combinatorics, word search, N-queens)
Once you spot the pattern, the right data structure often becomes obvious, and your solution aligns with the data structures and algorithms every developer must know.
Keep in mind that the concepts we have covered today are simply an introduction. It’s important that you take a deeper dive into the concepts and practice coding questions. Below are other topics we suggest you familiarize yourself with:
Heap sort
Quick sort
Merge sort
AVL tree
Doubly linked list
Dijkstra’s algorithm
Tries
To get practice with these advanced concepts, check out Educative’s beloved course Decode the Coding Interview in Java: Real-World Examples. You’ll get hands-on practice with these concepts by solving real-world problems. By the end, you’ll have the practical experience to confidently answer any question that comes your way in your interviews.