Types of Tests
Get an overview of the different types of tests in software engineering.
Why testing matters
It’s well known that no amount of testing can guarantee that software is completely bug-free. Even so, developers usually put more effort into writing production code than into writing test code. The most common, and often misplaced, justification is that writing production code feels more productive than writing tests.
Consequently, tests are often written as the last step of the development cycle. At that point, we’re usually in a hurry; we just want to deliver the shiny new feature we’ve been working on for days or even weeks. So we write tests as fast as possible, often sacrificing design and architecture. After all, only the production code needs to be well designed, readable, and maintainable, right? In fact, these software qualities apply just as much to tests.
If you’re here, about to take this course, you probably disagree with this mindset. The good news is you’re right! You’re probably here because you saw (or maybe wrote, as we did) some spaghetti test code and want to learn some best practices to improve your testing style.
Software testing is an essential part of the software development process, and recognizing the importance of testing is the first step to writing cleaner and more reasoned tests.
We, as developers, should always write tests for the following reasons (although there are many more):
Finding bugs early: Tests help developers find and fix bugs early in the development process, reducing the cost and time required to fix them later.
Ensuring quality: Tests help ensure that the software meets the expected quality standards and works as intended. This is especially true for functional and acceptance tests, as we’ll see soon.
Facilitating refactoring: Tests act as a safety net. Without them, changing or updating application code is risky, because there’s no quick way to verify that existing behavior is preserved.
Improving collaboration: Tests serve as a communication tool between developers, ensuring that all team members understand the software’s behavior and functionality. It’s not uncommon to see the documentation of open-source libraries pointing to the tests to show users how to accomplish a particular task.
We should write our tests with the same, if not more, care we put into writing production code, for several reasons.
Maintainability: Tests need to be maintained over time, just like production code. If the tests are not well designed, they can become difficult to maintain, leading to a significant time investment in the future. Well-written tests are easier to modify, refactor, and update, leading to less maintenance overhead in the long run.
Readability: Tests should be readable by other developers on the team, as well as by external developers approaching your project. If tests are poorly written or confusing, they can lead to delays in development while other developers spend more time understanding them—or worse, fixing them.
Scalability: As the software grows, the number of tests required increases. If the tests are not well designed, adding new tests or extending the existing ones might become a nightmare.
Confidence: Tests provide confidence that the software is working as intended. If the tests are not well designed, there can be gaps in the test coverage, leading to lower confidence in the software’s functionality.
Types of tests
Over the years, the testing community has come up with many different terms to identify types of tests. Broadly speaking, we can organize tests into the following categories:
Unit tests are used to check the functionality of a small piece of code, typically a single function or method. They’re usually automated and ensure that each component of the software works as intended.
Integration tests are used to check the interactions between different components of the software. These tests ensure that the different components work together as expected.
Functional tests are used to test the software’s functionality from the user’s perspective, simulating user actions and checking if the software behaves as intended.
Performance tests are used to test the software’s performance under different conditions, such as high load or prolonged use. These tests ensure that the software performs well even under heavy loads and stressful conditions.
Security tests are used to test the software’s security features and vulnerabilities. They simulate different types of attacks and check if the software is resistant to them.
Acceptance tests are used to test the software’s readiness for deployment, checking if it meets the user’s requirements and is ready for use.
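To make the first category concrete, here is a minimal sketch of a unit test written with Python’s standard unittest module. The apply_discount function is a made-up example for illustration, not part of any real codebase:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # A unit test exercises a single function in isolation,
    # without touching databases, networks, or other components.
    def test_applies_percentage(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Note that each test case checks one behavior (the happy path and the error path), which keeps failures easy to diagnose.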
The following picture compares the different test types based on testing frequency (i.e., how often the test is run) and testing time (i.e., how long the test takes to run).
We should run unit tests as often as possible. They’re meant to be fast because they test single components without any interaction with other components. Then we have integration tests, which validate the interaction between two or more components. They’re a bit more complex and might take longer to run.
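As a sketch of the difference, an integration test wires two components together and verifies their interaction. The InMemoryRepository and UserService classes below are hypothetical stand-ins for a real storage layer and a service that depends on it:

```python
import unittest

class InMemoryRepository:
    """A tiny storage component (stand-in for a real database layer)."""
    def __init__(self):
        self._users = {}

    def save(self, user_id: str, name: str) -> None:
        self._users[user_id] = name

    def find(self, user_id: str):
        return self._users.get(user_id)

class UserService:
    """A component that depends on the repository."""
    def __init__(self, repo: InMemoryRepository):
        self._repo = repo

    def register(self, user_id: str, name: str) -> None:
        if self._repo.find(user_id) is not None:
            raise ValueError("user already exists")
        self._repo.save(user_id, name)

class UserRegistrationIntegrationTest(unittest.TestCase):
    # Unlike a unit test, this exercises both components together.
    def test_register_then_find(self):
        repo = InMemoryRepository()
        service = UserService(repo)
        service.register("u1", "Ada")
        self.assertEqual(repo.find("u1"), "Ada")

    def test_duplicate_registration_fails(self):
        repo = InMemoryRepository()
        service = UserService(repo)
        service.register("u1", "Ada")
        with self.assertRaises(ValueError):
            service.register("u1", "Grace")

if __name__ == "__main__":
    unittest.main()
```

Because the test crosses a component boundary, it catches wiring mistakes (for example, the service never calling the repository) that unit tests of each class in isolation would miss.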
Functional tests are not always present; we usually introduce them only when the application directly faces users—for example, through a graphical user interface (GUI). Because they simulate user actions, they’re usually slower and run only once a feature is complete. Acceptance tests are similar, but they validate the application’s use cases. For example, in an application programming interface (API)-only app, acceptance tests might ensure that the endpoints behave as expected, whereas unit and integration tests check smaller, isolated parts of the inner workings of a given endpoint.
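For an API-only app, an acceptance test might drive an endpoint end to end and check only what the client observes: the status code and the response body. The health_endpoint handler below is a made-up stand-in for a real route, kept framework-free for the sake of the sketch:

```python
import json

def health_endpoint(method: str) -> tuple:
    """Hypothetical route handler: returns (status_code, body)."""
    if method != "GET":
        return 405, json.dumps({"error": "method not allowed"})
    return 200, json.dumps({"status": "ok"})

# An acceptance test validates the behavior the client observes,
# not the internal implementation of the endpoint.
def test_health_check_returns_ok():
    status, body = health_endpoint("GET")
    assert status == 200
    assert json.loads(body) == {"status": "ok"}

def test_health_check_rejects_post():
    status, _ = health_endpoint("POST")
    assert status == 405

if __name__ == "__main__":
    test_health_check_returns_ok()
    test_health_check_rejects_post()
```

In a real project, these tests would issue actual HTTP requests against a running instance of the application, which is part of why they run slower than unit tests.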
Security and performance tests are not always in place. They take a long time to run, so they’re often scheduled to run periodically to verify that the software still performs as expected and is free of known security vulnerabilities.
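As a very simplified sketch of the performance category, a test can assert that an operation completes within a time budget. The slow_lookup function and the one-second budget are made up for illustration; real performance tests use dedicated load-testing tools and production-like workloads:

```python
import time

def slow_lookup(items: list, target: int) -> bool:
    """Hypothetical function under test: linear membership check."""
    return target in items

def test_lookup_is_fast_enough():
    items = list(range(100_000))
    start = time.perf_counter()
    assert slow_lookup(items, 99_999)
    elapsed = time.perf_counter() - start
    # A generous budget keeps the test from being flaky on slow machines.
    assert elapsed < 1.0

if __name__ == "__main__":
    test_lookup_is_fast_enough()
```

Timing assertions like this are deliberately coarse: a tight threshold would fail intermittently depending on the machine running the test suite.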
The literature defines many more types of tests, including regression testing, A/B testing, and so on. It’s outside this course’s scope to comprehensively categorize the various test types.