I’m crowdfunding my trip to TestBash next year! Donation link in the sidebar, if you can give. I’m also getting stickers sorted for the ‘cast, AND starting to organise another guest! Exciting times in LTATB towers.
Today though, I want to discuss usability testing, and what to do if you can’t test with actual users.
As a QA I’m partly customer-facing – I have to be. I’ve spoken before about how we’re a bridge between users (or POs) and devs, and as such we need to interact with both.
This means that we can start to test for usability.
When I spoke to Chris in episode 11, he mentioned the user testing lab they have at the BBC and I’m ridiculously jealous. I’d love to watch people testing things in the wild, so to speak, that I’ve tested, just to see what I’m missing.
The closest I’ve personally come is services like user testing1, a company that essentially has a catalogue of people who will test your site or app. The people are from various demographics, and you can also get heatmaps/videos/audio feedback on your product.
The issue here is that people are being paid to test. They feel like they have to give feedback, and it’s not the most natural form of testing, compared to just letting users use the product as they would normally.
Thoughtworks wrote a blog on guerrilla testing, which is getting people off the street to test your product as a way of getting that candid user feedback and testing2. It’s an interesting concept, and possibly the best way to do user testing, but it requires both logistics and some handholding to get useful feedback (“I expected x”, not “I think this should be a darker blue”).
One of our projects is currently in public beta, with a link to a feedback survey, before going live towards the end of the year, which is one way of doing this kind of testing.
But how do you tackle this when you can’t do user testing for whatever reason?
The team I work with are pretty good at this. We’re happy to go to another member of the test team and ask them to look at a feature, to see if it makes sense to someone coming in cold to the site and feature.
We’ve also had apps tested by the majority of the office at the same time, to do real-time load testing and to find where the bugs are across platforms and users.
But that’s only good for finding where things aren’t quite intuitive enough. It’s not good for specifically testing whether what you’ve built actually meets the needs of the user. That’s what the workshops and domain knowledge are for.
But what if your user is the public? You’ve got myriad different contexts, abilities, and knowledge bases to take into account, and that’s before you start thinking about accessibility, devices, and breakpoints – the technical or physical attributes you need to cater to your users. There’s a whole bunch of other contexts to think about when going through the build process. So you workshop, and some clients have information on browsers/devices used, and personas; we can take that information into account when testing, but it’s a first approximation at best.
So, you’ve got UAT, which means your product owners and stakeholders are testing it. They may have a decent-to-good idea of who their users are likely to be, and, possibly more usefully, they bring their own set of contexts and quirks to the testing (this is not always a good thing, but it’s interesting). Having sprint-based UAT means that testing happens early and often, so you can catch usability issues early on.
We have internal walkthroughs every sprint, where the lead dev, front-end dev, QA, design, and PM will sit down and go through that sprint’s work, to look at it as a whole before passing it over to the client. This sometimes catches things, and is a useful marker of how far along in the project we are.
You can’t test everything, but you can plan, get your quality baked in, and grab enough different eyes on a project to catch as many issues as possible.