The complete interview can be found on InfoQ.
Adrian Bolboacă, Organizational and Technical Coach and Trainer at Mozaic Works, was interviewed by Ben Linders for InfoQ about different types of tests, writing sufficient and good acceptance tests, criteria for deciding when to automate a test, and how to apply test automation to create executable specifications.
Testing techniques like Equivalence Partitioning, Boundary Value Analysis, and Risk-based Testing can help you decide what to test and when to automate a test. When you are developing a new product, it might be better to initially go low on automation, argues Adrian Bolboacă.
When you’re testing an established product, he suggests writing more automated tests for areas where bugs have appeared.
InfoQ: What should a good test look like?
Adrian Bolboacă: A test should be very clear. From its name to its contents, everyone should understand why we need it, how it can help us, and its purpose in life. For this reason it should be short and ideally should not contain technical words. For example, instead of naming a variable “exception”, it is better to name it “error”.
Having many small, isolated tests helps us understand where a problem occurs. But for that we need to write small, atomic tests that each focus on just one behavior. If a test checks more than one behavior, it is not a unit test; it may be an integration test, an acceptance test, an integrated test, an end-to-end test, or some other type of test. And of course, a good unit test should have exactly one verification.
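These qualities can be illustrated with a minimal sketch. The `Cart` class and its methods below are hypothetical, invented purely to show a clearly named test that checks one behavior and ends in exactly one verification:

```python
# Hypothetical shopping-cart class, invented for illustration only.
class Cart:
    def __init__(self):
        self._items = []

    def add_item(self, name):
        self._items.append(name)

    def item_count(self):
        return len(self._items)


# The test name describes one behavior in plain language (no
# technical jargon), and the test ends in exactly one verification.
def test_adding_one_item_makes_the_cart_contain_one_item():
    cart = Cart()
    cart.add_item("book")
    assert cart.item_count() == 1


test_adding_one_item_makes_the_cart_contain_one_item()
```

If this test fails, its name alone tells anyone on the team which single behavior broke, without reading the test body.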
InfoQ: What are the differences between integrated tests and acceptance tests or end-to-end tests?
Adrian: I will start with the last one. End-to-end tests are meant to check whether several modules of the system work well together. They shouldn’t focus on small behaviors within the modules. They are technical tests that help us understand whether we have the right setup, correct security settings, a working database connection, correct links to web services, etc. The audience for end-to-end tests is the technical team.
Acceptance tests are focused on features, and their main audience is the product people. They need to show that the features work well. Product people can use these tests to accept, or reject, the features before deploying them to production. Acceptance tests can sit at a module level, pass through several modules, or sit at a system level; it depends on what we want to accept and how our architecture looks. The bigger they are, the harder they are to maintain, and the cost of having acceptance tests increases with their size. I recommend having acceptance tests focused on modules, and just using some end-to-end tests to see if the modules work well together.
Integrated tests are tests that pass through more than one module and are used to check the small behaviors in several modules. They are the worst, because they change a lot, being dependent on every small detail in each module.
Let’s say we have several modules, and each one of them has some behaviors we need to check.
Module 1: 16 behaviors to check
Module 2: 21 behaviors to check
Module 3: 36 behaviors to check
If we wanted to cover all the behaviors with integrated tests, we would need to vary one behavior at a time while keeping the rest unchanged, which means covering every combination. A simple calculation gives us 16 * 21 * 36 = 12096 integrated tests. These tests are also slow, because they use the GUI, real databases, and real systems.
My alternative approach is to isolate acceptance tests in each module, and then write just a couple of end-to-end tests to make sure the setup, the “gluing”, is correct. A simple calculation gives us 16 + 21 + 36 = 73 isolated acceptance tests plus 2-10 end-to-end tests. My advice: never use integrated tests!
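The arithmetic behind the two strategies can be sketched in a few lines, using the behavior counts from the example above:

```python
# Behaviors to check per module, taken from the example above.
behaviors = [16, 21, 36]

# Integrated tests: every combination of behaviors across the
# modules must be exercised, so the counts multiply.
integrated_tests = 1
for count in behaviors:
    integrated_tests *= count

# Isolated acceptance tests: one test per behavior per module,
# so the counts simply add up.
isolated_tests = sum(behaviors)

print(integrated_tests)  # 12096
print(isolated_tests)    # 73
```

The multiplicative growth is why the integrated-test count explodes as modules gain behaviors, while the isolated-test count grows only linearly.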