I’m going to split this up into two sections: ‘bad’ testing and delayed testing.
So ‘bad’ testing, for me, is testing where I’ve not had the time, space, or ability/skill to fully test a feature. This could be caused by many things:
No time to cross-browser test
Especially if I need to test specific browsers for the stakeholders – versions of IE for example.
No time to cross device test
No time to do exploratory testing – only testing to the AC
Sometimes exploration can happen while testing the AC, but if I don’t get time to fully explore and just pass the story on the basics, then it’s not really been tested properly. The balance between exploring and signing off is skewed too far towards quick testing.
The insidious lack of care that I mentioned a couple of weeks ago
If the devs don’t care, then I won’t feel motivated to point out issues either
Not having the context I need to feel fully connected to a project, so I can’t tell whether the feature meets the needs of the client, fits in with their branding, and so on
Not having the ability or skill to test fully: this could be missing knowledge of the system, or of areas like accessibility testing, performance testing, and so on
Delayed testing is when a story is shipped without testing, the testing is moved to a separate task, or it is simply pushed back to the next sprint. Delayed testing can also build up when a sprint contains both new stories and the integration testing from previous sprints, all in the same amount of time.
Sprint one: 30 stories
Sprint two: 30 stories (each with integration testing)
Sprint three: see above.
Most of the time this is doable, especially with automated regression checks and by testing alongside the development process, working closely with developers rather than in a silo. However, it does add up, and it’s something to take into account.
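As a rough illustration of the kind of automated regression check that keeps this accumulation manageable, here is a minimal pytest-style sketch; the `apply_discount` function and its rules are hypothetical stand-ins for whatever feature shipped in an earlier sprint, not anything from a real project:

```python
# Minimal regression-check sketch. The feature under test
# (apply_discount) is a hypothetical stand-in for real code
# delivered in a previous sprint.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rounded to 2 decimal places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Regression checks: expectations pinned down when the feature
# first passed testing, re-run automatically every sprint after.
def test_basic_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_no_discount():
    assert apply_discount(50.0, 0) == 50.0

def test_full_discount():
    assert apply_discount(80.0, 100) == 0.0

if __name__ == "__main__":
    test_basic_discount()
    test_no_discount()
    test_full_discount()
    print("all regression checks passed")
```

With checks like these running on every build (via pytest or CI), the integration testing carried into later sprints shrinks to the genuinely new interactions, rather than re-verifying everything by hand.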
One way of tackling ‘bad’ testing debt is getting a feel for what adds value to a project – what the business wants from the feature or the project as a whole, and what the stakeholders see as important – and then focusing testing around those areas. This has a dual effect: testing brings more value, because you’re covering the highest-risk or most important areas first and so covering the most ground efficiently, and even if you are pressed for time, you still feel like you’ve contributed something to the project.
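One informal way to make that prioritisation concrete is a simple risk score per area, such as likelihood of failure multiplied by impact. This sketch is purely illustrative – the area names and scores are made up, and real risk assessment is a conversation with stakeholders, not a formula:

```python
# Illustrative sketch: ranking areas to test by a simple
# risk score (likelihood x impact), so the highest-risk
# areas get tested first when time is short.
# All names and numbers here are invented examples.

areas = [
    {"name": "checkout flow", "likelihood": 4, "impact": 5},
    {"name": "marketing banner", "likelihood": 2, "impact": 1},
    {"name": "login", "likelihood": 3, "impact": 5},
]

for area in areas:
    area["risk"] = area["likelihood"] * area["impact"]

# Highest risk first: test these areas before the rest.
ranked = sorted(areas, key=lambda a: a["risk"], reverse=True)
print([a["name"] for a in ranked])
# → ['checkout flow', 'login', 'marketing banner']
```

Even a back-of-the-envelope ranking like this gives you a defensible answer to “what did you test first, and why?” when the sprint runs out of time.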
Testing debt is the one I’m very familiar with; I’m growing more confident in my skills, but still learning, and there are huge gaps in my knowledge, especially around automation. I find it really hard to test a project I have no context for, and even if I don’t miss anything, I still feel like I’ve only tested the feature or issue shallowly. But I’m learning, and learning how to stay out of debt as I become a proper tester.