Want to come work with me? We’re hiring a junior tester!
Leading on from the Science! bit, I want to talk about testability.
So testability is how testable a product or system is by a given person, in a given context. Having software that’s testable makes testing quicker, and not only quicker: we can also be confident that our testing has been effective.
Testability requires certain things:
We need a definition of correct behaviour, so we can form a test plan or strategy. If we don’t know how the system is meant to work, we can’t check that it’s working as expected.
We need to put some work into defining features separately, so each can be discussed in isolation (or as close to isolation as possible). This iterative process means testing can happen as early as possible; otherwise we’re in a position where all we can test is everything in one go, which makes testing harder.
There are some things that are only really testable in certain circumstances – whether that’s a quirk of the system that means we can only monitor the results when live, or features whose effect plays out over a long period of time. I recently had a feature that tracked results over seven days, and it wouldn’t have been feasible to hold off deploying to the test environment for seven days to see whether it worked. So there was some hackery involved so I could check that it removed data older than seven days as expected, things like that. It wasn’t ideal, but it was good enough.
And this is an issue that I run into occasionally. Where we can, we figure out a way to test as close to reality as possible, and put those frameworks or systems in place.
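The seven-day hackery above usually comes down to making time injectable, so a test can fabricate backdated records instead of waiting a week. A minimal sketch of the idea (the names `purge_old_results` and `recorded_at` are hypothetical – the post doesn’t describe the actual code):

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 7

def purge_old_results(results, now=None):
    """Drop results older than the retention window.

    `now` is injectable precisely so that tests don't have to
    wait seven real days to exercise the cutoff.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in results if r["recorded_at"] >= cutoff]

# Fabricate one recent and one stale record around a fixed "now".
now = datetime(2024, 1, 15)
results = [
    {"id": 1, "recorded_at": now - timedelta(days=1)},  # kept
    {"id": 2, "recorded_at": now - timedelta(days=8)},  # purged
]
kept = purge_old_results(results, now=now)
print([r["id"] for r in kept])  # → [1]
```

It’s the same trade-off as in the post: not a test of the real clock in the real environment, but good enough to show the cutoff behaves as expected.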
Linked to this is being taught how a feature works. Sometimes devs will put notes on a ticket to point me in the right direction, and then provide release notes for the client. One lead dev recently asked whether it would be worth putting the notes they’d pass on to the client onto the ticket when handing the story over for testing, so I can test both the feature and the release notes. Now, this is an idea I want to implement, because again, the devs already know how to use the feature, so the instructions may be either too broad, or use terminology that’s not well known to the client (referencing entities, MIME types, etc.).
I prefer just having an idea of what the feature should do, or should allow me to do, so I can see how intuitive it is and make sure I’m not only following the smooth path but also looking for edge cases. If it’s a complex piece of back-end functionality that we would provide instructions for, then I’ll use those to see if they work as expected.
This way, I’m testing both the instructions and the feature at the same time.
There’s more to testability than usability though. The definition is testing by a given person in a given context. Everyone tests differently, and everyone uses different tools to do so. Testability in this sense can be raised by someone helping out with testing, or by a test strategy, or by product, system, or domain knowledge; anything that means the tester can test more, or feel more comfortable and confident testing, increases testability. And gaining knowledge of the product and the client will give a greater understanding of risk, both for the specific product and for the system you use more broadly.
For example, generally speaking, I test Drupal sites. Drupal does categorisation in the form of taxonomies and vocabularies out of the box. Specific categories have to be set up, but the framework is there. That takes roughly two minutes for me to look over while I’m testing something else, because I know it doesn’t really need testing. I feel comfortable taking that risk.
And managing risk, and closing the distance between what we currently know and what we need to know, is really the cornerstone of testing.