Test plans, cases, charters, etc.
I honestly thought these were all different things in some way? It was surprising to find out that they're all basically the same thing.
I generally do a mix of planning tests out while writing AC and thinking through how I'd test the feature, to make sure I hit most of the edge cases. As I've said before, I also use this time to figure out how the client will use the feature, so we can deliver a good product to them.
This is also where I make notes for that specific feature: if I need anything from the devs on deployment, or any test data from the client, it goes in at the start.
When the feature comes to me, I'll have more context. Depending on where it is in the build, I may know more about the project and the client, have more interacting features to consider, and have information from testing other parts of the system. All of that comes into play when I sit down to test.
I follow the AC as I test, to make sure the basics are covered, but then I test for the best solution we can offer – accessibility, usability, how it looks, whether the flow makes sense.
Then there’s making sure it fits with the client, and with the rest of the product.
This also works as a vague form of prioritisation:
- Does it work as we have planned it to?
  - If I can't use it at all, can I tell if it's fit for purpose?
- Does it fit the need we've made it for?
  - Regardless of whether it passes AC, does it actually work as needed? Is rework needed?
- Can it be used by users with different devices and software?
- Can it be used easily by all users?
- Does it look right?
  - Per designs/wireframes
  - Subjectively (tricky, but there can be consensus if something's obviously off, or it's a good opportunity to get UX/Design/client feedback)
And I think being able to prioritise, even vaguely, helps. The issue with testing, when you test rather than check, is that there's very rarely a single right answer. You have to figure out what the best option is, taking into account all sorts of things: budget (time and money), how complex the issue is, and what the client wants vs what I think is best vs what the devs think. Then you formulate the best option or solution from there. Sometimes you can push it back to the client – if there's no obviously better answer, and no real effort needed to change it, we may just give the client the options and see what they come back with.
So you have to engage with the product to test it, and I think this on-the-fly method of planning and testing, bringing in all the experience and context you've built up over the course of the project, works really well. Having AC gives me a framework to work around, and I can test outwards from that core functionality.
I think it helps that I test manually. I think, or assume, that if I did more automated testing I'd do a bit more planning out of how I'm going to build/code/modify tests in order to hit all the cases. Maybe. Feel free to tell me I'm wrong.
I've started going through the Black Box Software Testing course[1], which is supplemental to the Rapid Software Testing course that James Bach puts on[2] – something I definitely want to go on (I can't make the one in the UK in October, unfortunately. Next time he's somewhere close, maybe!).
As an aside, reading those slides makes me glad for my biochem degree; I've got a decent background in experimental and scientific thinking to apply to testing.
As an aside to that aside, I did a lab-based dissertation, and despite writing out all my lab protocols and following them to the letter, my experiment didn't work. We tried various things and made changes to the protocol; it still wouldn't work. Eventually my supervisor shrugged and said it was too close to the deadline to try again, and that he had no clue why the bacteria weren't playing nice – they just weren't. So I had to write a dissertation on why my project failed.
I am used to planning things out and it going wrong for no discernible reason.
I can see how rapid software testing could be a bit worrying, though. If you don't write it down or plan it out, how do you know you've not missed anything? That's why I like having AC, and why I make notes about what works (and what doesn't), so there's a paper trail of what I've done. I just think that having that flexibility can be useful.