Types of Data Structures
Explore the fundamental types of data structures including primitive and composite categories. Understand the differences between linear, nonlinear, hash-based, static, and dynamic structures to build a foundation for efficient programming and problem-solving with Python.
Primitive vs. composite structures
All data structures in computer science belong to one of two families. Primitive structures are the basic types built directly into the programming language: integers, floats, characters, and booleans. Composite structures are built on top of those primitives to organize multiple values into something with shape and access rules.
This course focuses entirely on composite structures. These are the ones you design, choose between, implement, and reason about as an engineer.
Linear vs. nonlinear
Composite structures are further divided by how their elements relate to each other.
Linear: Elements arranged in a sequence
In a linear structure, each element has at most one predecessor and one successor. You can traverse the whole structure in a single pass from start to finish. These map naturally to sequential memory.
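As a quick sketch, a Python list makes the linear property concrete: every element has at most one neighbor on each side, and one pass visits everything.

```python
# A Python list is a linear structure: elements sit in a sequence,
# and a single start-to-finish pass visits each element exactly once.
tasks = ["parse", "validate", "store"]

for i, task in enumerate(tasks):
    prev_task = tasks[i - 1] if i > 0 else None               # at most one predecessor
    next_task = tasks[i + 1] if i < len(tasks) - 1 else None  # at most one successor
    print(f"{prev_task} <- {task} -> {next_task}")
```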
Nonlinear: Elements arranged hierarchically or as a network
In a nonlinear structure, elements can have multiple predecessors or successors. You cannot traverse the whole thing in one sequential pass. These structures model real-world relationships like hierarchies, priorities, and networks.
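A minimal tree sketch (using a made-up org chart as the data) shows why a single sequential pass is not enough: a node can have several successors, so traversal needs recursion or an explicit stack.

```python
# A hypothetical org hierarchy as an adjacency mapping: each node maps to
# its children. A node with two children has two successors, so there is
# no single linear order baked into the structure itself.
tree = {
    "CEO": ["CTO", "CFO"],
    "CTO": ["Dev Lead", "QA Lead"],
    "CFO": [],
    "Dev Lead": [],
    "QA Lead": [],
}

def traverse(node, depth=0):
    """Depth-first walk; the recursion replaces the simple loop a list allows."""
    print("  " * depth + node)
    for child in tree[node]:
        traverse(child, depth + 1)

traverse("CEO")
```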
Hash-based: Key-value access via hash functions
Hash-based structures use a hash function to map keys to storage locations, providing near-constant-time access on average, regardless of how many elements are stored.
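Python's built-in `dict` is the everyday example of a hash-based structure; a small sketch of key-based access:

```python
# A dict is a hash table: the key's hash decides where the value is
# stored, so access is by key rather than by position, and stays
# near O(1) on average no matter how many entries exist.
inventory = {"apples": 12, "pears": 7}

inventory["plums"] = 3            # insert: hash("plums") picks the slot
print(inventory["pears"])         # lookup by key, not by index
print("apples" in inventory)      # membership test is also hash-based
```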
How they compare at a glance
Here's a high-level comparison across the major structures. Do not worry about memorizing it now. Treat this as a reference you will return to as each structure gets its own chapter.
| Structure | Access | Insert | Delete | Search | Best For |
|---|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) | Index-based access |
| Linked List | O(n) | O(1) | O(n) | O(n) | Frequent insert and delete |
| Stack | O(1)\* | O(1) | O(1) | O(n) | LIFO processing |
| Queue | O(1)\* | O(1) | O(1) | O(n) | FIFO scheduling |
| Hash Table | O(1)† | O(1)† | O(1)† | O(1)† | Fast key-value lookup |
| BST | O(log n) | O(log n) | O(log n) | O(log n) | Sorted data, range queries |
| Heap | O(1)\* | O(log n) | O(log n) | O(n) | Priority queues |
| Graph | varies | O(1) | O(V+E) | O(V+E) | Relationships, paths |

Note:
\*Top or front element only.
†Average case; worst case O(n) due to hash collisions.
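To make the table's search column tangible, here is a small sketch contrasting a list (linear scan, O(n)) with a set (hash lookup, O(1) average). Both answer the same membership question; the set simply does far less work per query.

```python
# Same question, very different work: the list scans element by element,
# while the set jumps straight to one hashed slot.
items_list = list(range(100_000))
items_set = set(items_list)

print(99_999 in items_list)  # linear scan through ~100k elements
print(99_999 in items_set)   # single hash lookup
```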
Static vs. dynamic structures
Structures are also split by whether their size is fixed at creation or can grow and shrink at runtime.
Static structures like fixed-size C arrays have their memory allocated once. They are memory-efficient but require you to know the maximum size upfront.
Dynamic structures like linked lists and Python lists grow and shrink as needed. They offer flexibility, at the cost of slightly more memory overhead per element.
Python's built-in list is a dynamic array. It behaves like a simple array but automatically increases its capacity when it reaches its current limit. This abstraction hides the underlying resizing strategy and associated trade-offs.
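You can glimpse that hidden resizing strategy with `sys.getsizeof`: a list's byte size jumps in steps rather than growing one slot per append, because CPython over-allocates capacity ahead of need. (The exact step sizes are an implementation detail and vary across versions.)

```python
import sys

# Append 20 items and report only when the list's allocated size changes.
# The jumps reveal the dynamic array's over-allocation: most appends are
# "free" because spare capacity was reserved by an earlier resize.
nums = []
last_size = sys.getsizeof(nums)
print(f"len= 0  bytes={last_size}")

for i in range(20):
    nums.append(i)
    size = sys.getsizeof(nums)
    if size != last_size:
        print(f"len={len(nums):2d}  bytes={size}")
        last_size = size
```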