Unit Testing with JUnit 6
Explore the principles and practices of automated unit testing in Java using JUnit 6. Understand how to write, structure, and run tests that verify code correctness with assertions and lifecycle annotations. Gain confidence in refactoring and maintaining code through fast, isolated tests executed within your development environment.
Up until now, we have verified our code by running a main method and manually checking the output printed to the console. While this works for small scripts, it becomes unsustainable as our applications grow. If we change a piece of logic in one module, we cannot easily know if we broke something elsewhere without manually running every scenario again.
Automated unit testing solves this. In Java test automation, unit tests form the foundational layer of the broader testing pipeline. They give us a permanent safety net, allowing us to modify and add features with confidence that we have not introduced bugs into existing functionality.
Why do we write unit tests?
Writing a unit test in Java means creating automated tests that validate the behavior of a specific unit of code. In practice, we isolate the smallest testable unit of the application, typically a function, method, or class, and assert that it produces the expected output for given inputs. Unit tests should be fast and isolated. They should not depend on external systems such as databases, network services, or file systems.
Manual testing creates a slower feedback loop and depends heavily on human repetition and attention to detail. We write code, build it, start the application, manually trigger the feature, and inspect the results. With automated unit tests, we can validate a class’s expected behavior across defined scenarios in milliseconds.
This tight feedback loop supports the Red-Green-Refactor cycle: we write a failing test (Red), implement the minimal code to pass it (Green), and then safely refactor the implementation to clean it up.
Because the tests act as a safety net, refactoring becomes a low-risk process as long as the tests remain green.
Writing your first JUnit 6 test
As of 2026, the industry standard for testing in Java is JUnit 6. Released in late 2025, JUnit 6 unifies the versioning of the platform and sets the baseline requirement to Java 17+. While it introduces powerful new internal features, the code you write remains largely compatible with earlier versions, making it easy to learn.
A JUnit test is simply a Java class containing methods annotated with @Test. A true unit test must self-validate; it should either pass (Green) or fail (Red) without human intervention. We achieve this using Assertions, specifically the org.junit.jupiter.api.Assertions class.
To see this in action, imagine we have a simple Calculator class that we need to verify.
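A minimal sketch of such a class might look like this (the divide method is included here as an assumption, because a later example in this lesson relies on it):

```java
public class Calculator {

    // Returns the sum of two integers.
    public int add(int a, int b) {
        return a + b;
    }

    // Used in the exception-testing example later in this lesson.
    public int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("Cannot divide by zero");
        }
        return a / b;
    }
}
```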
Now, let’s write a test class to ensure add works correctly using assertEquals.
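Reconstructed from the walkthrough that follows, a minimal sketch might look like this (the names CalculatorTest and testAddition are illustrative, and a small Calculator class is included at the bottom only so the file compiles on its own):

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Assertions;

class CalculatorTest {

    @Test
    void testAddition() {
        Calculator calculator = new Calculator();
        int result = calculator.add(2, 3);

        // Expected value first, then the actual value,
        // then an optional failure message.
        Assertions.assertEquals(5, result, "2 + 3 should equal 5");
    }
}

// Minimal Calculator, included so this file compiles on its own.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}
```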
Lines 1–2: We import the necessary classes from the JUnit Jupiter API. @Test is the annotation used to mark methods as test cases, and Assertions provides the static methods (like assertEquals) we need to verify our code's behavior. Note that even in JUnit 6, these core package names (org.junit.jupiter.api) remain consistent with previous versions.
Line 6: We apply the @Test annotation to the method. This is the critical signal to the JUnit 6 test runner that this specific method is a test case that must be executed. Without this annotation, the method would be ignored by the testing framework.
Line 7: We define the test method itself. In JUnit, standard test methods are always void (they do not return values) and typically take no parameters.
Line 8: We instantiate the System Under Test (SUT). Here, we create an instance of the Calculator class so we can call its methods and verify its behavior.
Line 9: We execute the specific action we want to test. We call the add method with inputs 2 and 3 and capture the returned value in the result variable.
Line 13: We perform the assertion. Assertions.assertEquals(5, result, ...) compares our expected value (5) against the actual value returned by the code (result). If they do not match, the test fails, and the optional message "2 + 3 should equal 5" is displayed in the test report.
Verifying behavior with assertions
While assertEquals is the most common check, we often need to verify other conditions, such as boolean states or null checks. The Assertions class provides methods for these scenarios.
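A sketch matching the walkthrough below might look like this (the class name SystemStatusTest and the sample values of isServerUp and databaseName are illustrative assumptions):

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Assertions;

class SystemStatusTest {

    @Test
    void testSystemState() {
        boolean isServerUp = true;
        String databaseName = "users_db";

        // Passes because the condition is true.
        Assertions.assertTrue(isServerUp, "Server should be up");

        // Passes because 5 > 10 evaluates to false.
        Assertions.assertFalse(5 > 10, "5 should not be greater than 10");

        // Passes because the reference points to a real object.
        Assertions.assertNotNull(databaseName, "Database name should exist");
    }
}
```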
Line 12: We use Assertions.assertTrue to validate a condition that must be true for the test to pass. In this case, it checks the variable isServerUp. If isServerUp were false, the test would fail immediately with the message "Server should be up".
Line 15: We use Assertions.assertFalse to verify that a condition is not true. Here, the expression 5 > 10 is evaluated (which results in false). Since false is what we expect, the assertion passes. If we had written 5 < 10, the result would be true, causing assertFalse to fail the test.
Line 18: We use Assertions.assertNotNull to ensure that an object reference actually points to an object in memory and is not null. This is a crucial check before calling methods on an object to avoid a NullPointerException. If databaseName were null, the test would fail with the message "Database name should exist".
Testing for exceptions
Robust software must handle invalid input gracefully. We need to verify that our code throws the correct exceptions when things go wrong.
Suppose our Calculator throws an IllegalArgumentException if we try to divide by zero. We cannot use a standard try-catch block effectively here because catching the exception is actually the success condition. Instead, we use assertThrows.
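A sketch matching the walkthrough below might look like this (the names CalculatorExceptionTest and testDivideByZero are illustrative; a minimal Calculator with divide is included at the bottom only so the file compiles on its own):

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Assertions;

class CalculatorExceptionTest {

    @Test
    void testDivideByZero() {
        Calculator calculator = new Calculator();

        // Passes only if the lambda throws the expected exception type.
        IllegalArgumentException exception = Assertions.assertThrows(
                IllegalArgumentException.class,
                () -> calculator.divide(10, 0)
        );

        // Check that the failure happened for the right reason.
        Assertions.assertEquals("Cannot divide by zero", exception.getMessage());
    }
}

// Minimal Calculator with divide, included so this file compiles on its own.
class Calculator {
    int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("Cannot divide by zero");
        }
        return a / b;
    }
}
```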
Lines 11–14: We use Assertions.assertThrows to verify that our code fails as expected. This method takes two key arguments: the class type of the exception we expect (line 12: IllegalArgumentException.class) and an executable lambda containing the code that should cause the error (line 13: () -> calculator.divide(10, 0)). If the code inside the lambda throws the specified exception, the test passes, and the exception object is returned and stored in the variable exception. If the code runs successfully (does not throw) or throws the wrong type of exception, the test fails.
Line 17: We perform a secondary check on the captured exception object. We use assertEquals to verify that the error message inside the exception exactly matches "Cannot divide by zero". This ensures that the code failed for the correct reason, not just with the correct type of error.
Managing test lifecycle
Often, we need to set up a clean state before every test (e.g., creating a new object, resetting a list) or clean up resources afterwards. JUnit 6 provides lifecycle annotations to handle this efficiently.
@BeforeEach: Runs before every individual test method.
@AfterEach: Runs after every individual test method.
There are also @BeforeAll and @AfterAll annotations, which run once per class. While JUnit 6 unifies the behavior of these annotations across different test engines, for most unit tests, per-test setup (@BeforeEach) is safer to ensure tests do not interfere with each other.
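A sketch matching the walkthrough below might look like this (the class name InventoryTest and the sample items "Sword", "Shield", and "Potion" are illustrative assumptions):

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Assertions;
import java.util.ArrayList;
import java.util.List;

class InventoryTest {

    private List<String> inventory;

    @BeforeEach
    void setUp() {
        // Runs before every test: start from a fresh list.
        inventory = new ArrayList<>();
        inventory.add("Sword");
        inventory.add("Shield");
    }

    @Test
    void testAdd() {
        inventory.add("Potion");
        Assertions.assertEquals(3, inventory.size());
    }

    // setUp runs again here, so the "Potion" from testAdd is gone.
    @Test
    void testRemove() {
        inventory.remove("Sword");
        Assertions.assertFalse(inventory.contains("Sword"));
        Assertions.assertEquals(1, inventory.size());
    }

    @AfterEach
    void tearDown() {
        // Runs after every test, whether it passed or failed.
        inventory.clear();
    }
}
```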
Lines 2–3: We import the BeforeEach and AfterEach annotations from the JUnit Jupiter API package.
Lines 12–18: The @BeforeEach annotation instructs JUnit to execute the setUp() method before every single test method in the class. Inside this method, we re-initialize the inventory list and populate it with default items. This guarantees that every test starts with a fresh, isolated state, preventing data from one test from “leaking” into another.
Line 21: This line defines the first test method, testAdd(). When the test runner reaches this line, we know that the setUp() method has just finished executing, so the inventory is guaranteed to contain exactly two items.
Line 28: This line defines the second test method, testRemove(). This is the critical demonstration of isolation: even though testAdd added an item previously, that change is discarded. The setUp() method runs again before this line, resetting the list to its original state before testRemove begins.
Lines 34–38: The @AfterEach annotation tells JUnit to execute the tearDown() method after every test finishes, regardless of whether the test passed or failed. We use this to perform cleanup, such as clearing the list. This pattern is essential when testing resources such as file streams or database connections that must be closed manually.
Improving test readability
As test suites grow, finding a specific failure becomes difficult if method names are vague. We can use @DisplayName to give our tests human-readable descriptions. JUnit 6 renders display names more consistently across runners, which makes output easier to scan.
We can also use @Disabled to temporarily disable a test known to be broken, allowing us to fix other tests without deleting the broken code.
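A sketch matching the walkthrough below might look like this (the method names testUserCreation and testAccountDeletion, and the email-check body, are illustrative assumptions):

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Assertions;

@DisplayName("User Account System Tests")
class DisplayNameTest {

    @Test
    @DisplayName("Should create user when email is valid")
    void testUserCreation() {
        String email = "user@example.com";
        Assertions.assertTrue(email.contains("@"), "Email should be valid");
    }

    @Disabled("Feature pending implementation")
    @Test
    void testAccountDeletion() {
        Assertions.fail("Not implemented yet");
    }
}
```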
Lines 2–3: We import DisplayName and Disabled to modify how tests are reported and executed without changing the actual test logic.
Line 6: We apply @DisplayName to the entire class. This class will appear as “User Account System Tests” rather than the raw class name DisplayNameTest. This is excellent for grouping related tests under a clear, non-technical heading.
Line 10: We apply @DisplayName to a single test method. Instead of seeing testUserCreation in the report, we can see the full sentence “Should create user when email is valid”. This documents the test’s intent directly in the output, making failures easier for non-developers and teammates to understand.
Line 16: We use the @Disabled annotation to prevent this specific test from running. We include a reason string ("Feature pending implementation") to explain why it is skipped. The test runner will ignore this method during execution but will typically mark it as “skipped” or “ignored” in the final report.
In this lesson, we learned the fundamentals of unit testing with JUnit 6. We moved from manual checks to automated verification using assertions like assertEquals and assertThrows. We also explored how to structure tests using lifecycle methods to ensure a clean state for every execution. These tests are typically executed directly from our development environment, giving us immediate feedback as we work and ensuring our code remains reliable as it evolves.