Ep 30 – Oh, Arse

Listeners!

Firstly, there is a donate button over yonder -> If you can help me get to my first ever test conference next year, I will be incredibly grateful.

Secondly! Next week will be the last episode of the year, and I will be announcing the guest that will be on the show for the first episode back in the new year! Exciting!

Okay, so this week I finally watched the first Whiteboard Testing video, which was about the FART model (Flawed Approach to Regression Testing) [1].

The idea is that you can’t necessarily rely on automated checking for your regression testing – you should be doing hands-on testing as well, as part of your regular testing.

He also mentioned the RCRCRC mnemonic [2], which means taking the following into account when testing:

R: Recent – how recent was the change?
C: Core – what central functionality has to stay the same/be unaffected?
R: Risky – how risky is the change? Where does the risk lie with this change?
C: Configuration – what environmental configuration do I need to be aware of when testing this? Are there any config factors that will be present when the code is live that I need to take into account?
R: Repaired – What has been repaired here? What functionality has been changed and what possible effects could it have on other parts of the system?
C: Chronic – What (if any) issues keep popping up? How can I test them? Can I add any specific automated checks to catch the most frequent issues?

This is a fantastic mnemonic that I’d not heard before, and I think it’s going to be incredibly useful to me – I mostly test new features in existing projects, so regression testing is what I spend a lot of my time doing. I do this anyway, but it’s nice to have something to help tick off areas so I miss less.
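As a rough illustration of what I mean by ticking off areas (this is just my own hypothetical sketch, not anything from the video or the PDF), you could jot RCRCRC notes against a change as a little checklist:

```python
# Hypothetical sketch: noting answers against the RCRCRC headings before a
# regression-testing session. The headings come from Karen N. Johnson's
# mnemonic; the structure and example answers are made up.

RCRCRC_HEADINGS = [
    "Recent",         # how recent was the change?
    "Core",           # what central functionality must stay the same?
    "Risky",          # where does the risk lie with this change?
    "Configuration",  # what environmental config matters when this is live?
    "Repaired",       # what was repaired, and what could it knock on to?
    "Chronic",        # which issues keep popping up, and can checks catch them?
]

def unanswered(notes: dict) -> list:
    """Return the headings I haven't thought about yet for this change."""
    return [heading for heading in RCRCRC_HEADINGS if not notes.get(heading)]

# Example notes for an imaginary change to a checkout discount.
notes = {
    "Recent": "Merged yesterday",
    "Risky": "Touches the shared pricing module",
    "Chronic": "Rounding bugs keep reappearing here",
}

print("Still to consider:", unanswered(notes))
# -> Still to consider: ['Core', 'Configuration', 'Repaired']
```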

We have some automated checks – the devs add new checks for new features and update them when features change – but on complex code bases, especially ones with custom features or out-of-the-box (OOB) modules that have been extended, there’s so much that can go wrong that we can’t assume this is enough.

The automated checks capture the base, centralised knowledge of the system – ours are based on the acceptance criteria, so they cover the functionality as it stands. What’s missing is the context of the project: we need this to work in IE8 because that’s one of the browsers our users favour, or we can’t assume the users will know what this means because they’re not tech-y. Things that can’t necessarily be coded, that require knowledge of the project as a whole, not just the system.
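To make that concrete with a made-up example (the acceptance criterion, numbers and function names here are all hypothetical, not from any real project): a check that encodes an AC can pass happily while saying nothing about browser support or whether the wording makes sense to a non-technical user.

```python
# Hypothetical AC: "Orders over £50 get a 10% discount."
# These checks encode exactly that and nothing more.

def discounted_total(order_total: float) -> float:
    """Apply a 10% discount to orders over £50 (made-up business rule)."""
    return round(order_total * 0.9, 2) if order_total > 50 else order_total

def test_discount_applied_over_threshold():
    assert discounted_total(60.00) == 54.00

def test_no_discount_at_or_below_threshold():
    assert discounted_total(50.00) == 50.00

# Both checks pass, but neither can tell us whether the discount banner
# renders in IE8, or whether the on-screen wording means anything to a
# user who isn't tech-y - that knowledge lives with the tester.
```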

This also means that regressions around these features may be missed if the AC-based checks still pass. Yes, a story may technically pass its AC, but that isn’t the be-all and end-all (much to the annoyance of at least one developer I’ve worked with) – I can and will send something back, even if it passes the AC, if it needs work elsewhere, and the same applies here.

Knowledge is needed for testing, and knowledge is constantly updating and changing as the project does – and while a lot of it can be codified, not all of it can be turned into checks.

And as Richard says, if you spend all your time adding to your checks, where do you get new knowledge?

Footnotes

[1]https://www.youtube.com/watch?v=P2PUXqasvGI&feature=youtu.be
[2]http://karennicolejohnson.com/wp-content/uploads/2012/11/KNJohnson-2012-heuristics-mnemonics.pdf
