Trusted answers to developer questions

Educative Answers Team

**Overflow** and **underflow** are both errors resulting from a shortage of space. On the most basic level, they manifest in data types like *integers* and *floating points*.

Unlike numbers in the physical world, a number stored in a computer occupies a fixed number of digits. When a calculation produces an extra digit, we cannot simply append it to the result, so we get an overflow or underflow error.

**Overflow errors** come up when working with integers and floating points, and **underflow errors** are generally just associated with floating points.

Overflow indicates that we have performed a calculation whose result is larger than the largest number we can represent. Let’s look at an example involving unsigned integers.

Let’s assume we have an unsigned integer stored in $1$ byte. The largest number we can store in one byte is $255$, which is $11111111$ in binary, so let’s take that. Now, suppose we add $2$ ($00000010$) to it. The result is $257$, which is $100000001$ in binary. The result has $9$ bits, whereas the integers we are working with consist of only $8$.

What does a computer do in this scenario? It discards the extra *most-significant bit (MSB)* and keeps the rest.

The stored value is essentially equal to $r \,\%\, 2^n$, where $r$ is the true result, $n$ is the number of bits available, and $\%$ is the modulo operator. In our example, $257 \,\%\, 2^8 = 1$.

Underflow is a bit trickier to understand because it has to do with precision in floating-point numbers. Again, due to the discrete nature of storage in computers, we cannot store an arbitrarily small number either. The floating-point convention provides techniques for representing fractional numbers, but when a calculation produces a number smaller than the smallest value we can represent, we again exceed our designated space. Without going into the details of floating-point representation, we can see how this problem manifests with a decimal example.

Suppose we are given designated boxes in which to write decimal numbers: one box to the left of the decimal point and three boxes to the right. We can easily represent $0.004$. Now, we want to perform a calculation, $0.004 \times 0.004$. The answer is $0.000016$, but we simply do not have that many places available to us. So we discard the least-significant digits and store $0.000$, which is quite obviously an erroneous answer.


Copyright ©2022 Educative, Inc. All rights reserved
