Ep 15: So Cunning You Could Pin A Tail On It And Call It A Fox

Test plans, cases, charters, etc.

I honestly thought all of these were distinct things in some way? It was surprising to find out that they're all basically the same thing.

I generally do a mix of planning tests out while writing AC and thinking through how I'd test the feature, to make sure I'll hit most of the edge cases. As I've said before, I also use this time to figure out how the client would use the feature, so we can deliver a good product to them.

This is also where I make any notes for that specific feature: if I need anything from the devs on deployment, or any test data from the client, things like that go in at the start.

When the feature comes to me, I will have more context. Depending on where it is in the build, I may know more about the project and the client, there may be more features that interact with each other to consider, and I'll have information from testing other parts of the system. All of this comes into play when sitting down to test.

I follow the AC as I test, to make sure the basics are covered, but then I test for the best solution we can offer – checking accessibility, usability, how it looks, and whether the flow makes sense.

Then there’s making sure it fits with the client, and with the rest of the product.

This also works as a vague form of prioritisation:

  • Does it work as we have planned it to?
      • If I can't use it at all, can I even tell if it's fit for purpose?
  • Does it fit the need we've made it for?
      • Regardless of whether it passes AC, does it actually work as needed? Is rework needed?
  • Can it be used by users with different devices and software?
  • Can it be used easily by all users?
  • Does it look right?
      • Per designs/wireframes
      • Subjectively (tricky, but there can be consensus if something's obviously off, or it's a good opportunity to get UX/Design/client feedback)

And I think being able to prioritise, even vaguely, helps. The issue with testing – when you test rather than just check – is that there's very rarely a single right answer. That means you have to figure out what the best option is, taking into account all sorts of things: budget (time and money), how complex the issue is, and what the client wants versus what I think is best versus what the devs think. Then you formulate the best option or solution from there. Sometimes you can push it back to the client – if there's no obviously better answer, and no real effort needed to change it, we may just talk to the client, give them the options, and see what they come back with.

So you have to engage with the product to test it, and I think this on-the-fly method of planning and testing – bringing in all the experience and context you've built up over the course of the project – works really well. Having AC gives me a framework to work around, and I can test outwards from that core functionality.

I think it helps that I test manually. I assume that if I did more automated testing I'd do a bit more planning out of how I'm going to build, code, or modify tests in order to hit all the cases. Maybe. Feel free to tell me I'm wrong.

I've started going through the Black Box Software Testing course[1], which is supplemental to the Rapid Software Testing course that James Bach puts on[2] – something I definitely want to go on (I can't make the one in the UK in October, unfortunately. Next time he's somewhere close, maybe!).

As an aside, reading those slides makes me glad for my biochem degree – I've got a decent background in experimental and scientific thinking to apply to testing.

As an aside to that aside, I did a lab-based dissertation, and despite writing out all my lab protocols and following them to the letter, my experiment didn't work. We tried various different things and made changes to the protocol; it still wouldn't work. Eventually my supervisor shrugged and said it was too close to the deadline to try again, and that he had no clue why the bacteria weren't playing nice – they just weren't. So I had to write a dissertation on why my project failed.

I am used to planning things out and it going wrong for no discernible reason.

I can see how Rapid Software Testing could be a bit worrying, though. If you don't write it down or plan it out, how do you know you've not missed anything? That's why I like having AC, and why I make notes about what works (and what doesn't), so there's a paper trail of what I've done. I just think that having that flexibility can be useful.

Footnotes

[1]http://www.testingeducation.org/BBST/
[2]http://www.satisfice.com/rst.pdf

Ep 12: Give Us A Clue

Or the importance of context and domain knowledge.

This episode was inspired by a blog post on uTest that came into my inbox this week, and I wanted to chat about and expand on the points made there[1].

As a tester, not having any context bugs me (hah!).

I need some insight into the product or the client to feel comfortable testing. Having the spec or AC is not necessarily enough (though it can certainly be useful for basic functionality testing and checking). But for the other types of testing, where the lines between QA and testing blur – testing fitness for purpose, and the actual solution to the issue itself – I need to know who we've produced this work for, and why.

What industry? B2B? B2C? Not commercial? What users are they aimed at? Is the functionality I’m testing just for the CMS users or is it for visitors or customers? What are they hoping to achieve with what we’re giving them?

All these things inform how I test things, the questions I ask, and the feedback I give.

I've spoken before about how my background informs my testing – I know I make certain assumptions about a website's or app's functionality, because I use websites and apps all the time and am a techy person. That's something I have to keep in mind when I'm testing, and plan around. I've started putting together a basic 'test a website' mindmap, and I'm hoping to add steps to remind me to test things like the user journey as well as the functionality I'm actually testing.

Having more tech experience than the users we're testing for, but less domain knowledge, is the worst of all worlds – I can't even begin to figure out the best way to test the system. I try to mitigate the former by staying aware of my own knowledge and of best UX and accessibility practices, but the latter needs context.

I'm not saying I need to know everything about the industry – especially from an agency point of view, I don't necessarily need to be an expert in every sector we build for (currently ranging from charity and government sites through to various ecommerce sites, selling everything from gardening equipment and plants to scientific consumables) – but I do need to know the basics, at least for testing and QA.

I also need context about the project itself – what's out of the box, and what's custom functionality I might not have seen before? If I'm doing manual testing I need to be able to use the site effectively, as a user would, in order to give you the best feedback possible.

I also test better if I'm connected to a project – if it's something I feel like I know.

I have been challenged on this – while discussing this episode, Mark Jones asked me whether having domain knowledge – especially for an ecommerce or other marketing-driven project built to bring in visitors – would actually hinder my testing, as I'd already know what the product owner expected me to do. If I came to the site completely without any knowledge of what was expected of me as a user, would I do more effective exploratory testing, relying on the site to tell me everything I needed to know? Which is an interesting idea. I'd like to think I can divorce myself from my knowledge (as I do with tech knowledge) and be able to say 'yes, I can do this this way, but I'd expect to be able to do it that way as well, or instead'. But it's something to keep an eye on, to see if this desire to know about the product is influencing my testing.

Mark also suggested that, in addition to my mind map, I write down all my testing steps, to slow myself down and ensure I'm looking at things properly. This might be something I have to start doing as part of an exploratory testing session – take copious notes and see what comes out of it.

So, I prefer having domain knowledge – I think it helps me more than it hinders, especially if I stay aware of my biases, knowledge, and expectations of how websites work – but too much knowledge may be a bad thing. What do you think, listeners? Do you have a preference? How much domain knowledge do you have? As someone who's done far more agency work than SaaS or other project work, I'd be interested in how testers in those settings find their domain knowledge affects (or doesn't affect) their testing.

Footnote

[1]Domain Knowledge – Is it Important to Testers?

Ep 9: This. Is. Dataaaa

The pain I went through for this ep. GarageBand has a grudge. It lost half the ep after I'd edited it all, and then I had to record the new outro (finally) about three times. But I fought, and I prevailed, to bring you this episode. This. Is. Dataaaa (or Dataaagh). After listening to this/reading the notes, 'data' will no longer feel like a real word. Be warned.

Firstly, I need to talk about episode 7 and its title, which comes from Starship Troopers. If you've not watched it (and you're not squeamish), you should definitely go watch it.

Secondly: I’m going to have another guest on the show! BBC Dev and co-founder of Manchester Tech Nights, Chris Northwood will be joining me on ep 11 😀

Gathering and managing test data can be part of the planning process. Test data can come from multiple parts of a project – when making a website there can be transactional data (orders coming in, order data being exported to a third-party stock system), user data, subscription data, blog data – all sorts of different kinds of data from various sources, all interacting with each other in multiple ways.

Test data can also refer to data produced whilst testing – outputs of the system in response to various inputs for example.

Understanding the data the system uses, how it uses it, and what data it produces is crucial to understanding the system – and it's only when you understand the system that you can test (and build!) it.

Sometimes you can use junk, or otherwise unreal, data. I have a folder on my computer of placeholder images that are just used to test how systems handle images (does including them in a content item mess up the layout, for example), and I also have some placeholder documents of various types to test their placement. And everyone's seen the tweet: 'QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv.'[1]
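That tweet is basically a boundary-value checklist. Just to illustrate (this is my own sketch, not from any real project – place_order is entirely made up), here's what those inputs might look like as a parameterised check in pytest:

```python
# A minimal sketch of the "QA engineer walks into a bar" inputs as a
# parameterised pytest check. place_order is hypothetical; swap in
# whatever the system under test actually exposes.
import pytest

def place_order(quantity):
    """Stand-in for the real order endpoint: accepts 1-100 beers."""
    if not isinstance(quantity, int):
        raise TypeError("quantity must be a whole number")
    if not 1 <= quantity <= 100:
        raise ValueError("quantity out of range")
    return {"item": "beer", "quantity": quantity}

@pytest.mark.parametrize("quantity", [1, 2, 99])
def test_sensible_orders_succeed(quantity):
    assert place_order(quantity)["quantity"] == quantity

@pytest.mark.parametrize("quantity", [0, -1, 999999999])
def test_out_of_range_orders_fail(quantity):
    with pytest.raises(ValueError):
        place_order(quantity)

@pytest.mark.parametrize("quantity", ["lizard", "sfdeljknesv", None])
def test_junk_orders_fail(quantity):
    with pytest.raises(TypeError):
        place_order(quantity)
```

The nice thing about parameterising is that the next weird input someone thinks of is a one-line addition.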

And that's fine for the basics, right? Easy stuff. It's when you're testing the more complex stuff – how the parts of the system interact with each other, how data travels through the system – that you need to consider sensible, real data.

A fair few of the projects I've worked on involve third-party integration, which introduces another level of complexity to the whole project, but especially to any sort of testing. You've essentially got another set of test data there to produce and check.

For something like this, live data – either for processing, or for comparing results against – is the quickest and easiest way to get the best data possible.
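As a rough illustration of what I mean (all the names and the response shape here are made up), you can capture a response from the live third-party system once, sanitise it, and replay it as a fixture, so the integration code gets exercised against known, realistic data even when the third party isn't available:

```python
# Minimal sketch: replay a recorded live response from a hypothetical
# third-party stock API, so integration code can be tested with known,
# realistic data. sync_stock_levels and the response shape are made up.
import json

# Captured once from the live system (sanitised as needed), then stored
# alongside the tests as a fixture.
RECORDED_STOCK_RESPONSE = json.loads("""
{
  "products": [
    {"sku": "SPADE-01", "stock": 14},
    {"sku": "ROSE-RED", "stock": 0}
  ]
}
""")

def sync_stock_levels(fetch):
    """Stand-in for the real sync logic: maps SKU -> in-stock flag."""
    data = fetch()
    return {p["sku"]: p["stock"] > 0 for p in data["products"]}

def test_out_of_stock_products_are_flagged():
    result = sync_stock_levels(lambda: RECORDED_STOCK_RESPONSE)
    assert result == {"SPADE-01": True, "ROSE-RED": False}
```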

Things to consider when getting live data (assuming you can get live data):

Age and Relevance: You need the data to be useful for the new functionality you're building, or for the functionality you're replicating if you're copying current functionality to a new site. Old, irrelevant data is bad. Relatedly: will you need to edit the live data? If so, can you change just the parts that need changing without affecting any other data? Will changing this data affect the tests in any way apart from the way you want it to?
Going off on a tangent, sometimes running live data that you know should fail is useful, because you know what the failure should look like, so you can make sure it does fail in that way.

Security: Will you need to mask any sensitive data? If you do get and store live data, what data protection and security will you need to take into account? Will you have to ensure it's masked, used, and deleted within a certain time period? That may affect when, where, and how you get live data, so you can be efficient when doing your due diligence. (There's a rough masking sketch after this list.)

Size: How much live data do you need? How easy is it to extract parts of the data? How long does processing take? How about storage and access to people who need it (see also: Security)?

Ease of access: Is it easy to get a hold of or generate? Does it require a third party to give that access or data? How about refreshing or updating it if needed?

Is it worth it?: Weigh all of the above against how useful the live data will actually be.
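On the Security point, here's a minimal sketch of what masking might look like before live data gets anywhere near a test environment. The column names and the hashing approach are my own assumptions – the real rules come from whatever data protection requirements apply:

```python
# Minimal sketch of masking sensitive columns in a live-data CSV before
# it's used for testing. Column names and the masking approach are
# assumptions; the real rules come from your data protection requirements.
import csv
import hashlib

SENSITIVE = {"name", "email", "phone"}

def mask(value: str) -> str:
    """Deterministic stand-in: the same input always masks the same way,
    so relationships between rows survive, but the original is gone."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def mask_csv(src_path: str, dest_path: str) -> None:
    with open(src_path, newline="") as src, open(dest_path, "w", newline="") as dest:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dest, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for field in SENSITIVE & set(row):
                row[field] = mask(row[field])
            writer.writerow(row)

# mask_csv("live_orders.csv", "test_orders.csv")
```

Deterministic masking keeps the data realistic enough to exercise the system – duplicate customers still look like duplicates – while the sensitive values themselves are gone.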

I am all for using systems as they will be used in the wild – I think it's an efficient way of bug hunting and testing – so I prefer live data for complex testing whenever possible. Sometimes you don't really have to consider any of the above: we recently got a csv of product details and prices from one of our clients, which was simple, easy, and not really subject to any security measures. But customer details, or anything that's confidential or sensitive in any way, really needs a test data management plan.

There are plenty of programs and methods to manage and even generate your test data, and I'm definitely interested in hearing your experiences with them! I've never used them – I mostly work on small projects that don't require huge amounts of data and data management – so this is an area I've enjoyed looking into, and will definitely be checking out more!
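For what it's worth, here's the sort of thing I mean by generating test data – a quick sketch using the Python Faker library, which is one of many options (and, as I say, not something I've used in anger):

```python
# Quick sketch of generating fake-but-plausible customer records with the
# Faker library (pip install Faker). One of many tools for the job.
from faker import Faker

fake = Faker("en_GB")  # locale-appropriate names, addresses, etc.
Faker.seed(42)         # seeded, so the same "random" data comes out each run

customers = [
    {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "joined": fake.date_this_decade().isoformat(),
    }
    for _ in range(5)
]

for customer in customers:
    print(customer["name"], "-", customer["email"])
```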

Footnotes

[1]https://twitter.com/sempf/status/514473420277694465?lang=en
Other reading:
http://www.cmcrossroads.com/sites/default/files/article/file/2012/XDD3202filelistfilename1_0.pdf

Ep 6: Access All Areas?

Testing for accessibility. It's a biggie, and something I always feel I personally could do better at – though looking at and using some of the tech out there, I think the majority of people could do better. When building websites, you can ensure the site and its structure (if using a CMS) meet best practices.

We can point out accessibility issues to clients – things like brand colours that don't contrast enough – and ensure that title/alt/caption text fields are present when the product goes live, but there's only so much you can do when you're not necessarily providing the content, just the site (though some clients will ask us for usability or accessibility input when we work on the site).

Things I do

  • Listen to the site/try to navigate using VoiceOver. This is especially good for picking up on things like 'Click Here' link text with no context, and a lack of 'skip to main content' links. Incidentally, this took ages for me to get to grips with – it seemed really counterintuitive, though I'm not sure if that's because it wasn't designed for me and how I interact with websites. For example, I have to close app notifications when I use it, as they take focus as soon as they appear, moving me away from the site
  • Don't use the mouse; try to access all areas with the keyboard. Tabbing for the win. Use in conjunction with VoiceOver for maximum testing
  • Use the WAVE[1] Toolbar (it checks contrast, turns off styles, and checks for content issues as well). It flags up a lot of things – a good tool, and quite simple to use.
  • Check for alt text on images that need it (there's some disagreement over whether all images need alt text, or only ones with a purpose need explanatory text; I generally go for important images having alt text, with others getting it if the client feels it's needed, but I'm not sure what's best for people).
  • Semantic HTML. Make sure the HTML is semantically correct: headings actually use h elements, strong and emphasis markup are used for meaning, and lists are used correctly. All these things can be checked (see the rough script after this list) and help make the site as accessible as possible
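Some of the checks in that list can be roughly automated as a first pass. Here's a small sketch (my own illustration – absolutely not a replacement for WAVE or actually using a screen reader) using requests and BeautifulSoup:

```python
# Rough first-pass sketch of a few of the checks above (missing alt text,
# vague link text, heading structure) using requests + BeautifulSoup.
# Not a substitute for tools like WAVE or real screen reader testing.
import requests
from bs4 import BeautifulSoup

def quick_a11y_pass(url: str) -> None:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    # Images with no alt attribute at all (alt="" is a deliberate choice
    # for decorative images, so only flag it when it's missing entirely).
    for img in soup.find_all("img"):
        if img.get("alt") is None:
            print("Missing alt:", img.get("src"))

    # Link text that gives no context on its own.
    for link in soup.find_all("a"):
        if link.get_text(strip=True).lower() in {"click here", "here", "read more"}:
            print("Vague link text:", link.get("href"))

    # Heading levels that skip (h1 straight to h3, say).
    levels = [int(h.name[1]) for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:
            print(f"Heading jump: h{prev} -> h{cur}")

# quick_a11y_pass("https://example.com")
```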

Mobile is another thing. There are usability issues like the size of active areas, and making sure links are highlighted and not too close together (all of which you should be testing on mobile devices anyway). Again, iOS devices come with VoiceOver for visually impaired users, and Android now comes with TalkBack and haptic feedback.

I feel like I'm missing something more often than not, but I also feel like the sites I send out there are pretty accessible, especially compared to others out there (though I know that's not the metric we should judge our work by). I still feel I'm missing techniques and tools for accessibility testing, or missing insight into the user experiences that determine whether people can actually use the sites we put out. So, tell me: what are your favourite tools or techniques for accessibility testing? I'd love to hear more, and I'll compile a list of accessibility awesomeness.

[1]https://wave.webaim.org/toolbar/