Error seeding (also known as fault seeding, defect seeding, or bebugging) is a technique used in software development to estimate the rate at which tests detect errors and the number of errors that remain undetected in the system. These metrics help gauge the quality of the source code.
The main concept behind error seeding is to insert, or “seed,” known errors into the code, then count how many of them testing detects and use that count to estimate how many errors remain in the system after testing.
Fault modeling is the first step in error seeding: we define the artificial errors that will be introduced into the system. This can be done by deliberately changing function calls, expressions, or branch conditions, as in the sketch below.
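As a purely illustrative sketch (the function name and logic are hypothetical, not taken from the text above), a seeded fault might flip a single comparison operator in a branch condition:

```python
# Original function: checks whether a user is old enough to register.
def is_eligible(age):
    return age >= 18          # correct branch condition

# Seeded variant: the comparison operator is deliberately changed,
# introducing an off-by-one fault that a good test suite should catch.
def is_eligible_seeded(age):
    return age > 18           # seeded fault: ">" instead of ">="
```

Whether the test suite flags the seeded variant feeds into the detection counts used later in the calculation.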
In the next stage, fault injection, the modeled errors are introduced into the source code. The number of errors to inject depends on the situation and requirements; for example, a large code base of a few thousand lines would typically need more seeded errors than a small code base of a few hundred lines.
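It also helps to record every injected fault so that each one can be tracked during testing and removed afterwards. The sketch below is one assumed bookkeeping scheme; the file names, fields, and fault descriptions are made up for illustration:

```python
# Hypothetical registry of injected faults (all values are illustrative).
seeded_faults = [
    {"id": 1, "file": "billing.py",  "line": 42, "change": "flipped >= to >",      "detected": False},
    {"id": 2, "file": "checkout.py", "line": 87, "change": "wrong function called", "detected": False},
]

def mark_detected(fault_id):
    """Mark a seeded fault as found by the test suite."""
    for fault in seeded_faults:
        if fault["id"] == fault_id:
            fault["detected"] = True

def detection_counts():
    """Return (seeded errors detected, total seeded errors)."""
    detected = sum(1 for fault in seeded_faults if fault["detected"])
    return detected, len(seeded_faults)
```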
Once the tests have been run, we can use the following equation to calculate the error detection rate.
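A common way to express this relationship, using descriptive names rather than any particular notation, is:

$$\text{error detection rate} = \frac{\text{number of seeded errors detected}}{\text{total number of errors seeded}}$$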
By rearranging the above equation, we can easily determine the approximate number of total errors in the system.
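Assuming the tests detect seeded and real errors at roughly the same rate, the estimate takes the following form (again with descriptive names rather than fixed symbols):

$$\text{estimated total real errors} \approx \frac{\text{number of real errors detected}}{\text{error detection rate}}$$

For example, if 100 errors are seeded and the tests find 80 of them, the detection rate is 0.8; if the same tests also find 40 real errors, the estimated total number of real errors is about 40 / 0.8 = 50, suggesting roughly 10 real errors remain undetected.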