# What is error seeding?


Error seeding (also known as fault seeding, defect seeding, or bebugging) is a technique used in software development to measure how effectively tests detect errors and to estimate the number of undetected errors remaining in the system. These metrics help gauge the quality of the source code.

The main concept behind error seeding is to deliberately insert, or “seed,” known errors into the code, run the tests, and count how many of the seeded errors are detected. The detection rate for seeded errors is then used to estimate how many genuine errors remain in the system after testing.

## Fault modeling

Fault modeling is the first step in error seeding. We need to artificially model errors that will be introduced in the system. This can be done by consciously changing function calls, expressions, or branch conditions.
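As an illustration, a branch condition can be modeled as a fault by flipping a comparison operator. The function names below are hypothetical; the seeded variant misclassifies the boundary case:

```python
# Hypothetical example of fault modeling: a deliberately altered
# branch condition that a good test suite should catch.

def is_adult(age):
    """Original: adults are 18 or older."""
    return age >= 18

def is_adult_seeded(age):
    """Seeded fault: '>=' changed to '>', so the boundary
    case age == 18 is now misclassified."""
    return age > 18
```

A boundary test such as `assert is_adult(18)` passes on the original but fails on the seeded variant, which is exactly the signal error seeding relies on.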

## Fault injection

In the next stage, fault injection, the modeled errors are introduced into the source code. The number of errors injected depends on the situation and requirements. For example, a large code base of a couple of thousand lines would require more injected errors than a small code base of a couple of hundred lines.
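One way to automate injection is to mutate source lines at random. The sketch below is a simplified, hypothetical injector; the mutation table and the way faults are chosen are illustrative assumptions, not a standard tool:

```python
import random

# Hypothetical sketch: inject a fixed number of seeded faults by
# swapping comparison operators in source lines.
MUTATIONS = {" == ": " != ", " < ": " >= ", " > ": " <= "}

def inject_faults(source_lines, n_faults, rng=random):
    """Return a copy of source_lines with up to n_faults operator swaps."""
    lines = list(source_lines)
    # Only lines containing a mutable operator are candidates.
    candidates = [i for i, line in enumerate(lines)
                  if any(op in line for op in MUTATIONS)]
    for i in rng.sample(candidates, min(n_faults, len(candidates))):
        for op, mutant in MUTATIONS.items():
            if op in lines[i]:
                # Swap only the first occurrence to seed exactly one fault.
                lines[i] = lines[i].replace(op, mutant, 1)
                break
    return lines
```

In practice the injector would also record which lines were mutated, so the seeded faults can be removed again after the measurement.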

## Error rate calculation

Once the tests have been run, we can calculate the error rate. The key assumption is that seeded errors and genuine errors are equally likely to be detected:

$\frac{\text{seeded errors found}}{\text{total seeded errors}} = \frac{\text{actual errors found}}{\text{total actual errors}}$

Rearranging the above equation gives the approximate total number of actual errors in the system: $\text{total actual errors} \approx \text{total seeded errors} \times \frac{\text{actual errors found}}{\text{seeded errors found}}$. Subtracting the actual errors already found then estimates how many errors remain undetected.
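The rearranged estimate can be sketched as a small helper. The function name and the sample numbers are illustrative assumptions:

```python
# Hypothetical sketch of the error-seeding estimate: assuming seeded
# and actual errors are equally likely to be detected,
#   total actual errors ~= total seeded * actual found / seeded found.

def estimate_total_errors(total_seeded, seeded_found, actual_found):
    """Estimate the total number of actual errors in the system."""
    if seeded_found == 0:
        raise ValueError("no seeded errors detected; cannot estimate")
    return total_seeded * actual_found / seeded_found

# Example: 20 errors seeded, 16 of them found, and 40 genuine errors
# found by the same tests. The estimate is 20 * 40 / 16 = 50 total
# genuine errors, i.e. roughly 10 still undetected.
```

A low detection rate on the seeded errors (here 16/20) inflates the estimate, signaling that the test suite is likely missing real defects too.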
