
Parallel Arrays

Explore how using parallel arrays and minimizing object sizes in C++ enhances iteration efficiency through better cache usage. Understand performance impacts of data layout by comparing large and small objects and learn techniques to refactor complex classes for faster data access. This lesson equips you to optimize high-performance data structures by balancing design and speed.


Parallel arrays primer

We will finish this chapter by talking about iterating over elements and exploring ways to improve performance when iterating over array-like data structures. We have already mentioned two important factors for performance when accessing data: spatial locality and temporal locality. When iterating over elements stored contiguously in memory, keeping our objects small increases the probability that the data we need next is already in the cache, thanks to spatial locality. As we will see, this has a significant impact on performance.

Recall the cache-thrashing example, shown at the beginning of this chapter, where we iterated over a matrix. It demonstrated that we sometimes need to think about the way we access data, even if we have a fairly compact representation of the data.
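To make the access-pattern point concrete, here is a minimal sketch of that idea (the dimension, function names, and flat-vector representation are illustrative, not taken from the earlier example): the matrix is stored in row-major order, so traversing row by row walks memory contiguously, while traversing column by column jumps a full row's worth of elements on every step and misses the cache far more often.

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t kDim = 512;  // illustrative matrix dimension

// Row-major storage: element (r, c) lives at index r * kDim + c.
long sum_row_major(const std::vector<int>& m) {
  long sum = 0;
  for (std::size_t r = 0; r < kDim; ++r)
    for (std::size_t c = 0; c < kDim; ++c)
      sum += m[r * kDim + c];  // contiguous: walks memory in order
  return sum;
}

long sum_column_major(const std::vector<int>& m) {
  long sum = 0;
  for (std::size_t c = 0; c < kDim; ++c)
    for (std::size_t r = 0; r < kDim; ++r)
      sum += m[r * kDim + c];  // strided: jumps kDim ints per step
  return sum;
}
```

Both functions compute the same sum; only the traversal order differs, yet the row-major version typically runs several times faster on large matrices.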

Next, we will compare how long it takes to iterate over objects of different sizes. We will start by defining two structs, SmallObject and BigObject:

C++
#include <array>    // std::array
#include <cstdlib>  // std::rand

struct SmallObject {
  std::array<char, 4> data_{};
  int score_{std::rand()};
};

struct BigObject {
  std::array<char, 256> data_{};
  int score_{std::rand()};
};

SmallObject and BigObject are identical, except for the size of the initial data array. ...