This is used for algorithms where only a few of the operations are slow. We need to prove that the worst case of the entire algorithm is lower than the worst case of the single slow operation multiplied by the number of operations.
Consider this algorithm: we start with an array of capacity 2, and each operation appends one element to the array; we perform this operation N times. If the array is full, take its current size, say sz, allocate 2*sz memory, and copy the array to the new location so we have space for the new element. So each operation is one of:
- Appending to the array if it's not full: O(1)
- Copying an array of size sz to a new location: O(sz)
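The doubling scheme described above can be sketched as a small class. This is a minimal illustration, not a definitive implementation; the name `DynamicArray` and the element-by-element copy loop are our own:

```python
class DynamicArray:
    """Array that starts with capacity 2 and doubles its capacity when full."""

    def __init__(self):
        self.capacity = 2
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, value):
        if self.size == self.capacity:
            # Slow path: allocate 2*sz slots and copy sz elements over.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.data = new_data
        # Fast path: a single O(1) write.
        self.data[self.size] = value
        self.size += 1
```

After 10 appends the capacity has doubled from 2 to 4, 8, and finally 16, matching the steps listed below.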
The second operation is obviously slower. If it ran on every one of the N insertions, the worst case would be O(N * N) = O(N^2). Yes! But it's easy to see that the second operation is not that frequent: since the capacity doubles on each copy, it happens only O(log N) times. That means we can arrive at a stricter upper bound than O(N^2).
Let’s list the steps:
| Array size / Max size | Operation | Cost |
| --- | --- | --- |
| 2 / 2 | copy array of size 2 to new location of size 4, then insert | 2 + 1 |
| 4 / 4 | copy array of size 4 to new location of size 8, then insert | 4 + 1 |
| 8 / 8 | copy array of size 8 to new location of size 16, then insert | 8 + 1 |
| ... | ... | ... |
Total number of operations:

N inserts + (2 + 4 + 8 + ... + N) copies <= N + 2N = 3N => O(N)

So the complete algorithm runs in O(N) time, which means each individual operation takes O(1) amortized time.
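We can check this bound by counting element writes directly. A small sketch, assuming an initial capacity of 2 and doubling whenever the array is full (the function `count_operations` is our own illustration):

```python
def count_operations(n):
    """Count element writes performed by n appends into a doubling array."""
    capacity, size, ops = 2, 0, 0
    for _ in range(n):
        if size == capacity:
            ops += size       # copying `size` elements to the new location
            capacity *= 2
        ops += 1              # the insert itself
        size += 1
    return ops

print(count_operations(10))   # 24: 10 inserts + (2 + 4 + 8) copied elements
```

For every n, the count stays below 3N, confirming the O(N) total and the O(1) amortized cost per operation.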
In the next lesson, we will compare different runtimes and visualize growth.