Testing can never be 100% done. Websites in general, in fact, never really reach a state of ‘done’ – which can sometimes be hard to get across to clients. There will always be parts you’ve not touched, or not put as much effort into; it’s a trade-off against a fixed timeline, budget, or whatever other constraint you have.
Some clients get it – they’ll be putting ideas into a backlog for the retainer or a second phase of development. Other clients will want everything ‘done’ before something goes live. A definition of done helps, as well as lots of work up front in the planning stage.
We lay down what we’ll provide, including device and browser support, and level of accessibility.
Of course the client wants perfect. The site is an extension of their brand, a way of bringing revenue and people to them. And we don’t want to give them shoddy work; we just have to be realistic about what we can provide in the time given, along with any other constraints we may have (the system we’re using, the systems the client may need us to integrate with, the designs, the functionality needed, etc.).
Sprint-based testing, with automated regression testing, means I mostly focus on the sprint and story in front of me. I do exploratory testing, and focus on the parts that carry the highest risk, including integration with other parts of the system. But that still means I might miss something.
It’s impossible to test all scenarios, all data, in all contexts, and even if it was, it wouldn’t be sensible.
Testing cannot show the absence of bugs. Finding no bugs doesn’t mean there aren’t any – just that we’ve not found them. You can’t prove an absence. It becomes a risk assessment: a list of priorities of what needs to be tested, then what should be tested, and some exploratory testing based on what the first two expose.
Over the course of a project you can learn more about what the client wants from the project, their biases and how they affect their view and usage of the site, and that can feed into how you test; the parts you focus on.
We can also guide clients towards what we follow as best practice, and what we think is best for them, and we’ll mould our way of working to theirs. We manage their expectations (which is a terrible phrase). We make sure nothing is a surprise: they know that 1) they will probably find some issues, 2) not all of them will be bugs, but some will be, and 3) we will push back on some of them if we think it’s necessary.
And there are definitely some clients who think the site should be perfect and complete, whose reaction to finding a bug is to point the finger and question our abilities. We shrug and explain and move on, and it makes me so, so glad that my employer and my teams don’t react the same way.
Quality is not testing. Or at least, not only testing. Quality is a team effort; it has to be. Every member of the team brings their own speciality, bias, and knowledge to a project, and their inputs are what make a quality project. Quality is built into the project, not tested into it. Testing mops up the edges, makes sure we’ve not lost sight of any of the details whilst putting the site together, makes sure we’re hitting the compatibility and accessibility targets we need to hit.
Testing is not perfect.