Ep 30 – Oh, Arse

Listeners!

Firstly, there is a donate button over yonder -> If you can help me get to my first ever test conference next year, I will be incredibly grateful.

Secondly! Next week will be the last episode of the year, and I will be announcing the guest that will be on the show for the first episode back in the new year! Exciting!

Okay, so this week I finally watched the first Whiteboard Testing video, which was about the FART model (Flawed Approach to Regression Testing)[1].

The idea is that you can’t necessarily rely on automated checking for your regression testing – you should be doing testing as well, as part of your regular work.

He also mentioned the RCRCRC mnemonic[2], which means taking the following into account when testing (I’ve sketched it as a little checklist after the list):

R: Recent – how recent was the change?
C: Core – what central functionality has to stay the same/be unaffected?
R: Risky – how risky was the change? Where does the risk lie with this change?
C: Configuration – what environmental configuration do I need to be aware of when testing this? Are there any config factors that will be present when the code is live that I need to take into account?
R: Repaired – What has been repaired here? What functionality has been changed and what possible effects could it have on other parts of the system?
C: Chronic – What (if any) issues keep popping up? How can I test them? Can I add any specific automated checks to catch the most frequent issues?
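
As a purely personal aide-memoire – a toy sketch of my own in Python, not anything from the video or the linked PDF – the mnemonic maps quite naturally onto a little checklist generator you could paste into session notes. The prompts below are just my paraphrasing of the points above:

```python
# Toy RCRCRC checklist generator - illustrative only, my own paraphrasing of the mnemonic.
RCRCRC = {
    "Recent": "How recent was the change?",
    "Core": "What central functionality has to stay the same / be unaffected?",
    "Risky": "How risky was the change, and where does the risk lie?",
    "Configuration": "What environmental configuration do I need to be aware of?",
    "Repaired": "What has been repaired, and what else could that affect?",
    "Chronic": "Which issues keep popping up, and can I add checks to catch them?",
}

def checklist(change_summary: str) -> str:
    """Render the RCRCRC prompts as a session-notes template for a given change."""
    lines = [f"Regression charter for: {change_summary}", ""]
    for heuristic, prompt in RCRCRC.items():
        lines.append(f"[{heuristic[0]}] {heuristic}: {prompt}")
        lines.append("    Notes:")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical change, just to show the output shape.
    print(checklist("New discount-code field on checkout"))
```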

This is a fantastic mnemonic that I’d not heard before, and I think it’s going to be incredibly useful to me – I mostly test new features added to existing projects, so regression testing is what I spend a lot of my time doing. I do this anyway, but it’s nice to have something to help tick off areas so I miss less.

We have some automated checks – the devs add new checks for new features, and update the checks if features are changed – but for complex code bases, especially with custom features or OOB modules that have been extended, there’s so much that can go wrong that we can’t assume this is enough.

The automated checks cover the base, centralised knowledge of the system – ours are based on the acceptance criteria, so they check the functionality as it stands. The context of the project is missing: we need this to work in IE8 because that’s one of the browsers used most often, or we can’t assume the users will know what this means because they’re not tech-y. Things that can’t necessarily be coded, that require knowledge of the project as a whole, not just the system.

This also means that regressions around these features may be missed if the AC-based checks still pass. Yes, something may technically pass the AC, but that’s not the be-all and end-all of a story (much to the annoyance of at least one developer I’ve worked with), and I can and will send something back, even if it does pass the AC, if it needs work elsewhere – and it’s the same here.
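
To make that concrete, here’s a minimal, invented sketch (not our real suite – the form, the function names and the wording are all made up) of an AC-derived check and the contextual things it can’t see:

```python
# A minimal sketch of an AC-derived check (invented example).
# AC: "When the form is submitted with valid data, a confirmation message is shown."

def submit_contact_form(name: str, email: str) -> dict:
    """Stand-in for the system under test: returns the page state after submit."""
    return {"status": "ok", "message": "Submission persisted to CRM queue."}

def test_confirmation_shown_after_valid_submit():
    page = submit_contact_form("Gem", "gem@example.com")
    # This is all the AC asks for, so this is all the check asserts.
    assert page["status"] == "ok"
    assert page["message"]  # some confirmation text exists

# What the check cannot tell us (the project context that lives in people's heads):
#  - does the confirmation render at all in IE8, which these users still rely on?
#  - is "Submission persisted to CRM queue" meaningful to a non-technical visitor?
# Both can regress while this check stays green.
```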

Knowledge is needed for testing, and knowledge is constantly updating and changing as a project does, and while much of it can be codified, not all of it can be turned into checks.

And as Richard says, if you spend all your time adding to your checks, where do you get new knowledge?

Footnotes

[1]https://www.youtube.com/watch?v=P2PUXqasvGI&feature=youtu.be
[2]http://karennicolejohnson.com/wp-content/uploads/2012/11/KNJohnson-2012-heuristics-mnemonics.pdf

Ep 28: You’re like me, I’m never satisfied

Firstly, this week’s title comes from Hamilton, a Broadway musical about US founding father Alexander Hamilton. I had no special interest in his life until I listened to the music from the show, and now I’m obsessed[1].

So, last week I went to my first testers meetup. I almost chickened out, but I made it there.

One of the first people I spoke to was someone who was also there for the first time, which was kind of nice – neither of us knew anyone, so we bonded over that.

As the room started to fill up I noticed that 1) the crowd was a diverse lot, which was nice, and 2) people knew each other. Whilst that was intimidating, the environment was very open and friendly, and joining in conversations felt easy.

There was pizza and drinks and chat before the main event, during which I met and spoke to James Lyndsay, who is great.

He showed us some challenges – puzzles that he had created to help with exploratory testing. He’s got a few of them on his site[2], but we specifically used ones put together on a page for the meetup – these are a smaller set of the ones found on the main site[3].

After tackling the first two individually, we split into groups of 3 or 4 (with some combination of at least one tester, one suggester, and one observer) for puzzle 15. The idea was to look at how we were testing, and our reactions to the puzzles, not necessarily what the answers to the puzzles were.

That said, there are two I’ve not figured out properly yet, and it’s bugging me. I need to find time to look at them properly. I’m not at work next week, so I’ll make time to figure them out. If you know, don’t tell me! I won’t tell you. But if you want to compare notes, maybe send me a message.

I have a tendency to rush into things – and puzzle 15 proved that when it took me a couple of attempts to break down exactly what was happening. I needed pen and paper to write down what happened and what triggered it, but I didn’t even have my laptop, so I was using my phone, which made everything awkward (I realised I should have brought a laptop the second I sat down).

The puzzles are simple, but figuring out the correct pattern isn’t always simple – our brains love patterns, and that means they’ll find one, sometimes the wrong one rather than the right one, and then we need to work to look past it to see the correct pattern. We’ll also try to prove ourselves right by performing the same actions, which may lead us to not think about what else we could do.

The puzzles are pretty ingenious – simple enough that you don’t get bogged down in the functionality, but with just enough twists to keep your attention (I totally missed a twist to puzzle 22, which I found out after someone at the meetup let it slip – kicking myself for not thinking about testing that).

After the puzzles we cleared up a bit and went for a quick drink. The conversations were excellent, and I’m disappointed I had to leave early to get home. I’m really looking forward to the next one.

Again, being around people who were passionate about testing was brilliant, and it reinforced how much I want to be a career tester (and QA). I plan to spend a fair amount of next week making more podcasts and getting back into testing (my work has been a lot of admin/organisation work recently, with less time for testing. I miss testing!) – just getting back to the thing I want to do with my life.

Footnotes

[1]https://www.youtube.com/watch?v=JrbCFR1FsZk
[2]http://blackboxpuzzles.workroomprds.com/
[3]http://nwtg.workroomprds.com/

Ep 27: #FML

So, what happens when it all goes wrong?

As much as quality assurance is the job of everyone in a team, and getting QA involved at the planning phases means that QA is a process, not a step, work still goes through QA on the way to the client or user.

I still feel a level of ownership, which yeah, is my job, I’m supposed to own this, it’s what I signed up for. It’s my job to work with the developers to make everything the best we can, and flag up issues. It’s my job to find other people’s failures, in a way. And sometimes it’s hard. It’s hard to go ‘well, actually, there’s an issue here, we can’t go live with this’, but it’s the job.

But what happens when it goes wrong anyway? When the deadline isn’t hit, or the product isn’t up to scratch come the deadline?

I’m used to telling developers that I’ve found an issue with their work, that it doesn’t work quite right, or they’ve missed some part of the AC, but sometimes a bug is found after it’s gone through me – either the client picks it up, or a developer or someone else involved in the project does and I’ve missed it. And it sucks. But, you’ve got to use it as a way to learn.

A product hitting the client late, or with bugs that should have been found is always a blow to everyone, but how do you find out the cause and learn from that to move forward?

I think first of all you need the right atmosphere – looking for a reason and a process that maybe needs fixing is different from pointing the finger. You need the kind of atmosphere where people are – not comfortable being wrong, but comfortable admitting when something wasn’t right. I’ve been in retrospectives where developers have admitted they felt they missed too many issues before passing work to me, and that was noted but just accepted, and the developer moved on, knowing they wanted to improve. I’ve done the same. I’ve spoken to PMs about feeling like I can’t handle the workload, and I’ve been upfront where there have been issues.

I think this goes a long way towards preventing failures like missed deadlines.

You also have to be able to look at the processes. Were they followed correctly? If so, is that the issue – is there a shortfall, or something in the project that needed a more custom process? If not, why not? If all of the above doesn’t yield any issues from the process, was it a communication issue, a client issue?

There needs to be a balance between assuming there is an issue with the process and assuming the failure is down to human error. And of course, allowing for the ‘shit happens’ defence.

The most important thing, I think, is to not immediately try to find someone to blame. You want reasons, and maybe responsibility if there’s an ownership gap, but you don’t want to play the blame game – that makes people immediately defensive.

A couple of months ago, at the North West Tester Gathering, there was a meetup themed ‘Failure is Not a Dirty Word’[1], and there were three talks about failure, and it was really interesting to hear people be so open about failing. And it’s useful, because hearing other people talk openly about failure and what they learned from it encourages other people to do the same, and admitting and learning from failure is more useful than being defensive about it.

My points of failure tend to be:

I rush a lot. And sometimes I do miss things I should catch because of this. Generally I’m good at catching myself and going back and looking over the task again, because I know this about myself, but my first reaction to pressure is to go faster – a legacy from working in pharmacy.

Communication. Sometimes my anxiety means that I find it hard to communicate in a timely manner, especially if it’s bad news. And procrastinating on that can mean the bad news turns into worse news.

And talking about failure is important because we need to face it in order to learn and grow from it. So let’s share our fails today!

Footnote

[1]https://www.youtube.com/watch?v=_DWl4Wtf5_U&feature=youtu.be

Ep 26: Nobody’s Perfect

Testing can never be 100% done. Websites in general, in fact, never really reach a state of ‘done’, which is something that’s hard to get across to clients sometimes. There will always be parts you’ve not touched, or not put as much effort into – it’s a trade-off against a fixed timeline or budget or whatever.

Some clients get it – they’ll be putting ideas into a backlog for the retainer or a second phase of development. Other clients will want everything ‘done’ before something goes live. A definition of done helps, as well as lots of work up front in the planning stage.

We lay down what we’ll provide, including device and browser support, and level of accessibility.

Of course the client wants perfect. The site is an extension of their brand, a way of bringing revenue and people to them. And we don’t want to give them shoddy work, we just have to be realistic about what we can provide in the time given, along with any other constraints we may have (the system we’re using, the systems the client may need us to integrate with, designs, functionality needed, etc.).

Sprint based testing, with automated regression testing means I mostly focus on the sprint and story in front of me. I do exploratory testing, and focus on the parts that are the highest risk, including integration with other parts of the system. But that still means I might miss something.

It’s impossible to test all scenarios, all data, in all contexts, and even if it was, it wouldn’t be sensible.

Testing cannot show the absence of bugs. Finding no bugs doesn’t mean there aren’t any – just we’ve not found them. You can’t prove an absence. It becomes a risk assessment, a list of priorities of what needs to be tested, then what should be tested, and some exploratory testing based off what the first two bits expose.

Over the course of a project you can learn more about what the client wants from the project, their biases and how they affect their view and usage of the site, and that can feed into how you test; the parts you focus on.

We can also guide clients to what we follow as best practice, and what we think is best for them, and we’ll mould our way of working to theirs. We manage their expectations (which is a terrible phrase). We make sure nothing is a surprise, that they know that 1) they will probably find some issues, 2) not all of them will be bugs, but some of them will, and 3) we will push back on some of them if we think it’s necessary.

And there are definitely some clients who think the site should be perfect and complete, whose reaction to finding a bug is to point the finger and question our abilities. We shrug and explain and move on, and it makes me so so glad that my employer, my teams, don’t react the same way.

Quality is not testing. Or at least, not only testing. Quality is a team effort; it has to be. Every member of the team brings their own speciality, bias, and knowledge to a project and their inputs are what make a quality project. Quality is built in to the project, not tested into it. Testing mops up the edges, makes sure we’ve not lost sight of any of the details whilst putting the site together, makes sure we’re hitting the compatibility/accessibility targets we need to hit.

Testing is not perfect.

Ep 23: You can’t teach a fish to climb a tree

Or, talking about that ‘oh’ moment

“Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.”

This episode came out of me writing about usability testing and teaching people to give the type of feedback that is useful. You want ‘this took longer than I expected’ not ‘I’d prefer if this logo was a lighter blue’.

People aren’t sure how to give feedback, and people very rarely realise they are testing things every time they use them, so they overthink things.

I once worked at a company that tried outsourced user testing – a service where people were paid to test websites, with the screen and audio feedback from the test session recorded – and the audio feedback we got was really awkward. The user felt the need to narrate everything they did, and not in a useful way. It was more ‘this is a website, this is a menu’, as opposed to ‘this is what I expected’ or ‘this is not what I expected’. And I think some guidance is needed for users who are testing your product, or you’ll have to spend time filtering out the terrible stuff.

You don’t want to tell people how to test, or influence them. You may want to tell them what they should try to do – ‘I want you to use this app to purchase these tickets’ – and then help them give useful feedback.

You want people to think out loud[1]. Thinking out loud is how you arrive at an answer, regardless of the actual answer. The answer isn’t always important, especially in user testing/UX, but the journey is.

The idea is to ask people something like how many windows there are in their house, then tell them to talk while they think of the answer. Make it clear that you’re not so much interested in the answer, but in the method of getting there. Then they can recount how they got there – moving through their house mentally from room to room, counting windows.

This then makes them more likely to have that moment when, during testing, you go ‘oh’. The moment where the system surprises you. This is not always a bad oh, but it’s something to pay attention to.

If someone finds something that makes them go oh, you need to figure out:

  • What is the surprising thing?
  • What about it surprised them?
  • What would they expect instead?

Then you can build up the issue – how and why it affects them, and whether it’s something that needs changing.

And it’s that moment – the oh – that you want to cultivate. And that requires people thinking about what they are doing. The web is huge and ubiquitous, and so, in agency work, there are things you take for granted; the burger menu, the way menus change across breakpoints, things that we don’t think twice about, but users might not understand, or might understand differently. So thinking aloud helps you understand where the feedback you are receiving is coming from. And people tend not to self-censor as much when they’re thinking aloud, compared to recording or writing something in a formalised setting.

So, these are kind of two separate things you need to bring together when talking to people about testing – you can’t tell them how to test, and you don’t want them to overthink testing, but you need them to verbalise their thinking process in order to get that feedback.

We have a site that’s currently in public beta prior to go-live – there’s a survey at the top for feedback on how users feel about the new site. As with all surveys, you have to ensure you’re not asking leading or loaded questions, balance objective versus subjective questions, things like that. Which is difficult when you’re eliciting feedback. There is a list of questions with range answers and then some free-text fields, which is a pretty good way to get open feedback that you can trawl through – and that trawling, I’m guessing, is always a thing you have to take into account when eliciting this kind of feedback. I wonder if asking questions like you get from cashiers – ‘did you find everything you were looking for today?’ – would be a way of getting people to think about the actual journey, as opposed to the scenery?

Footnote

[1]http://www.uxmatters.com/mt/archives/2012/03/talking-out-loud-is-not-the-same-as-thinking-aloud.php

Ep 22: Stranger In A Strange Land

PHPNW was this weekend! It’s a PHP conference in Manchester that’s in its eighth year. This is my second year, and it was fab. I’m not a developer or techy or anything, but it’s still a good conference to go to. I spent most of Saturday morning in the unconference track, listening to talks on how the Cub Scout motto can apply to code, how to measure code complexity, and text fields in MySQL/Postgres. The talks are always interesting and passionate, even if it does go over my head.

It’s a bit weird, as a tester going to a developer conference – and I’m a tester that’s not really interested in becoming a developer, but I think it’s worth it.

Firstly, they work hard to keep the cost low, which is something I am immensely grateful for. Secondly, the Manchester tech crowd is a good one, and this is a great opportunity to catch up with people. The talks are always high quality, and there’s enough of a mix of technical and non-technical talks that I don’t feel that I’m wasting my money going there.

And sometimes I like hearing the technical talks, even if they’re over my head. It’s generally interesting, to see how developers put things together, and the standards they use.

The atmosphere is what I really go there for though. Everyone is so passionate, and the energy in the talks and the breaks and the social is excellent. I am welcomed without question, and people are interested in what others have to say.

Sometimes just being in a room with passionate people is enough to motivate you to do all sorts of things. In between talks, I wrote two and a bit podcast episodes, just bringing in things I’d heard and new perspectives brought in from comments in the talks.

Sometimes it feels like I’m in a bit of a David Attenborough role: ‘Here we see the developer in the wild…’ – but mostly I’m there with them.

In particular, I want to talk about the first keynote.

The title was Stealing People Lessons from Artificial Intelligence, which excited me – I find the idea of machine learning fascinating, and I love applying lessons from there to other places[1].

Meri’s talk was hilarious, insightful, and taught me things about management, motivation, and water polo. There is a list of books that she recommends, and I’m going to add them to my Christmas wishlist this year (along with a TestBash 2016 ticket!). There’s also a great sketchnote video that she links to[2].

The idea is that money actually doesn’t motivate people when they have to do anything other than the most mechanical of tasks. Anything that involves thinking, creativity, engaging your cognitive skills? You need a whole other set of motivational tools around autonomy, mastery, and purpose. It’s a fascinating subject area.

And it was nice to see someone being so enthusiastic about managing people and managing them well and growing people, moving them through a career path. It was inspiring.

And that’s why I go to PHPNW: despite being a dev conference, it gives space to non-dev talks, and those talks are good and really useful to pretty much everyone. I’d recommend it to anyone who’s in the area.

Footnote

[1]http://blog.geekmanager.co.uk/2015/10/03/phpnw2015-keynote-stealing-people-lessons-from-artificial-intelligence/
[2]https://www.youtube.com/watch?v=u6XAPnuFjJc

Ep 21: Best laid plans

Or lack thereof.

So, we had an email from a client today, asking why our AC doesn’t contain the edge cases. We’ve sent a reply talking about what AC is for, why we don’t put edge cases in AC, and how we test around AC.

I’ve never come across this before, and we generally don’t document test plans, but use session based testing. Now, I have got lax in documenting things, and this pulled me up on that, so I can do better there. Other than that, I’m unsure how best to convey, in a documented way, that we do push the system when we test. Guidelines will either be too vague to be useful if they’re shared across projects, or too long and in-depth if they’re written per project.

So, really, this episode discusses the situation, how we’re solving it, and asks how other people deal with this – how to gain the trust of a client when doing more exploratory, session based testing, and not using a spec or in-depth test plans.

Ep 19: Oracles

Let’s talk about Oracles.

Oracles determine whether a program or piece of software has passed or failed a test. In mythology they are also just that: mythological. And whilst, unlike seers who divined via interpretation of signs, oracles were spoken to directly by gods, they didn’t share their knowledge often, were oversubscribed, and seemed to be bribable with animal sacrifices, especially if they were Greek oracles.

Oracles were often misinterpreted – for example, in the Greek tragedy of Oedipus, the original motherfucker, King Laius is told by an Oracle that he will die at the hands of his son, so he tells his wife (Queen Jocasta) to kill their baby son. She tells a servant to do it, and through various means, depending on the version, Oedipus ends up being adopted. Oedipus hears a rumour of his adoption and asks an Oracle who his real parents are. Instead he is told he will kill his father and mate with his mother. Oedipus flees from his home, taking this to mean his adoptive parents are his real parents.

He meets King Laius on the road and kills him, and then, via a run-in with a Sphinx, ends up with Queen Jocasta’s hand in marriage. Thus the prophecy is fulfilled, albeit in a roundabout kind of way.

(Yes, this is mostly an excuse to talk about Greek mythology, something I am a huge fan of. Relatedly, if you’re also interested in Greek mythology and want a comic rec, there is a great comic called ODY-C, which is a gender-swapped retelling of Homer’s Odyssey, set in space, and it’s amazing.)

A fitting name, I think. Making decisions is hard. As humans we are horrifically bad at making decisions – there is a whole range of ways we can mess up a simple decision (side note: I finally got the audiobook version of Thinking, Fast and Slow, which is on basically every tester’s list of reading recommendations I’ve ever seen – I feel like listening to it is some form of initiation) – but let’s take the two most likely here:

False Positive: The check reports a failure, but the program is actually fine (a false alarm)
False Negative: We think all is well because the check passes, when actually the program has failed somewhere (a miss)

Generally speaking, these pitfalls affect automated checks more than humans.
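
A small invented example (mine, not from the episode) of how each pitfall shows up in an automated check – the first has an oracle that’s too strict, the second one that’s too weak:

```python
import datetime

def get_welcome_banner() -> str:
    """Stand-in for the system under test."""
    return f"Welcome! Today is {datetime.date.today():%d %B %Y}."

# False positive (false alarm): the check fails, but the software is fine.
# The oracle is too strict - it expects the exact string captured on one particular day,
# so this will fail on any other day even though the banner is perfectly healthy.
def test_banner_exact_text():
    assert get_welcome_banner() == "Welcome! Today is 01 December 2015."

# False negative (a miss): the check passes, but a real problem could slip through.
# The oracle is too weak - any non-empty string keeps it green, even "undefined".
def test_banner_present():
    assert get_welcome_banner() != ""
```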

So, maybe instead of viewing oracles as well – oracles – mouthpieces of the Almighties, maybe we should come at this from a heuristic, divination sense. Seers, if you will.

When we start a project, chances are the client wants something new, a new system, or at least a new design. This means that our heuristics are quite basic – we know what the client wants to achieve – and what they want their clients or users to achieve – but we have very few oracles to go off.

This stretches the definition of oracle a little, which is based on actual testing of software, but we have to test the plan first – is what we plan to deliver going to pass the tests we set? Only then can we build the basics, and start testing from there. I put together an incomplete list of the oracles we can get when starting to plan a project.

Company branding and house style
Previous versions; what they liked, what they didn’t
What examples of competitors or other similar functionalities and apps/sites they like
Third party integration they need/want, and the documentation there

These can be used to help us decide if our ideas and plans meet the needs of the client and their users.

Then we can get more specific, though still incomplete by necessity:

A set of preconditions that specify the state of the software and system when you start the test (The scenario)
A set of procedures that specify what you do when you do the test (When I do this)
A comparison of what the software under test does with a set of postconditions: the predicted state of the system under test after you run the test. This set of postconditions makes up the expected results of the test. (Then I should see this vs. what the test actually produces – there’s a small code sketch of this shape after the list)
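
Written as code, those three parts map pretty directly onto the shape of an automated check. Here’s a minimal sketch, assuming a made-up basket example (the Basket class, the discount code behaviour, and the prices are all invented for illustration):

```python
class Basket:
    """Toy system under test: prices kept in whole pence so the arithmetic stays exact."""
    def __init__(self):
        self.items_pence = []
        self.discount_percent = 0

    def add(self, price_pence: int):
        self.items_pence.append(price_pence)

    def apply_discount(self, percent: int):
        self.discount_percent = percent

    def total_pence(self) -> int:
        subtotal = sum(self.items_pence)
        return subtotal - (subtotal * self.discount_percent) // 100

def test_discount_code_reduces_total():
    # Preconditions ("Given" / the scenario): a basket holding one item at £20.00.
    basket = Basket()
    basket.add(2000)
    # Procedure ("When" / what you do in the test): apply a 10% discount.
    basket.apply_discount(10)
    # Postconditions ("Then" / the expected result): the predicted total,
    # compared against what the software under test actually produces.
    assert basket.total_pence() == 1800
```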

Oracles are, or probably kind of have to be, heuristic as opposed to prescriptive – they enable people to discover or learn, they’re not a script. Michael Bolton has a long series of posts on this which helped me understand these concepts a bit better[1]. This is because context applies to software and testing – things have to be correct and fit for purpose, not definitively right (see essentially any other episode – I’ve spoken about this a lot).

If you apply a test (or an oracle) to a piece of software and you review the results, you can then decide if the result is correct or not. Is it a problem? Is it what you expect? This is the true value of oracles – giving you information that you can use to test more and better.

So where am I going with this? Oracles help you decide what is a problem. Because, as I said, humans are terrible at decisions. Even if you end up applying the wrong oracle, you can still learn from applying it – you get information from the results and you can hone from there.

So oracles are terrible at telling the future, okay at telling you when a piece of software has passed the test, but best at helping you decide if any problem is actually a problem.

Also, if you’re in ancient Greece, they are bribable with animal sacrifices.

Footnote

[1]http://www.developsense.com/blog/2012/04/heuristics-for-understanding-heuristics/

Ep 18: What a Tool

What is a tester without their tools? As a manual tester I generally don’t use the coding tools, but that doesn’t mean my toolbelt isn’t nifty.

But first, apparently, the 9th September was National Software Testers’ Day, because on the 9th September, 1947:

While she [Admiral Grace Hopper] was working on a Mark II Computer at Harvard University, her associates discovered a moth stuck in a relay and thereby impeding operation, whereupon she remarked that they were “debugging” the system.[1]

And while the terms bug and debugging were around before that, it is this incident that is credited with popularising the term. So I hope you all had a good software testers’ day, free of moths in your code.

Moving on to the show, let’s talk about the tools of the trade.

Firstly, we use the Atlassian suite of products.

  • So, Confluence is our document repository, our haven of information. I try to update it whenever I notice something that’s not quite right, so I can contribute to the incontrovertible truth that is our Confluence. Meeting notes go on there, and you can add actions and mention people by username and they get a notification. I think the only thing missing is multiple people editing at once, like on GDocs. I don’t think you can do that on Confluence, but it’s functionality that I very rarely need, so I don’t miss it much.
  • We use JIRA to track the bugs, organise sprints and stories, track time, plan time. Pretty much everything.

    I like being able to organise sprints there and plan the developers’ time by dragging and dropping the sprint into their planner, so everyone can see what resource is allocated to which project.

  • Jira is also where I manage my test sessions, make notes, link them to tickets, create bugs, things like that.

    JIRA capture is a nice browser plugin that makes logging bugs on JIRA a lot easier. You can take and edit screenshots, and link bugs/tasks to the relevant stories. You can use templates so all bugs in a project are automatically added with the correct sprint, version, component, things like that.

  • I keep showing clients JIRA Capture to try and get them to add annotated screenshots to tickets to help us all, and some have taken this to heart; even if they don’t use Capture they add screenshots with annotations to tickets, which is great.
  • HipChat! Useful and fun. We have lots of rooms, one per project, and one for teams, and pretty much anything people want. I use this all the freaking time and I love it. I like being able to check in with people and make sure I’m not interrupting something before going and asking them something – good for communicating with devs and people who are not in the office for whatever reason. The downside is that I have it on my phone and I’m far too quick to reply to things when I’m out of the office – I need to get some willpower!
  • Sublime Text 2: Text editor
    When I do need a text editor, I use Sublime. It meets my needs simply, and it’s nice and clean. Good for looking at XML files if needed.
  • VirtualBox/BrowserStack. I work on a Mac, and so I need virtual machines for various versions of IE (and Edge soon), to test in those browsers.
  • Ghostlab. I use Ghostlab when I need to test a site on multiple devices and browsers. It’s nice for things that need testing for glitches or compatibility issues as opposed to close feature testing. You can sync multiple devices and browsers, so doing something on one reflects that action across all of the devices and browsers. It saves a ridiculous amount of time.
  • Paper journal. This has actually partly taken over from Trello for me at the moment. I’ve found a great system called Bullet Journal[2], and it’s a way of organising a to-do list that is structured enough to make it nice and easy to follow and keep consistent, but not too structured. I don’t utilise all of the methods as I don’t need to, but the basics are nice. I also keep at least a page a day for scribbling notes and plans down if I need to get my thoughts clear.
  • Google Calendar. I stalk my colleagues’ calendars like no-one’s business. I organise a lot of meetings as scrum master, and so knowing people’s schedules is vital.
  • My tools are far more linked to organisation than testing, but then sometimes it feels like I do far more organising than testing. My manual testing requires fewer specific tools than scripting or automated testing would – mostly ones for organising and taking notes.

    As always, I am interested to know what tools and processes people use, as this is how you learn new things to make you work harder, better, faster, stronger. So please do let me know your favourite testing tools! 😀

Footnotes

[1]https://en.wikipedia.org/wiki/Debugging#Origin
[2]http://bulletjournal.com/

Ep 12: Give Us A Clue

Or the importance of context and domain knowledge.

This episode was inspired by a blog post on uTest that came into my inbox this week, and I wanted to chat about and expand on the points made there[1].

As a tester, not having any context bugs me (hah!).

I need some insight into the product, or the client, to feel comfortable testing. Having the spec or AC is not necessarily enough (though it can certainly be useful for basic functionality testing and checking). But for the other types of testing, where the lines between QA and testing blur – around whether it’s fit for purpose, and whether the solution actually solves the issue – I need to know who we’ve produced this work for, and why.

What industry? B2B? B2C? Not commercial? What users are they aimed at? Is the functionality I’m testing just for the CMS users or is it for visitors or customers? What are they hoping to achieve with what we’re giving them?

All these things inform how I test things, the questions I ask, and the feedback I give.

I’ve spoken before about how my background informs my testing – I know I make certain assumptions about a website’s or app’s functionality, because I use them all the time and am a techy person. That is something I have to keep in mind when I’m testing, and plan around. I’ve started mindmapping a basic ‘test a website’ mindmap, and I’m hoping to put some steps together to remind me to test for things like the user journey as well as the functionality I am actually testing.

Having more tech experience than the users we are testing for, but less domain knowledge, is the worst of all worlds – I can’t even begin to figure out the best way to test the system. I try to mitigate the former by being aware of my own knowledge and of best UX and accessibility practices, but the latter needs context.

I’m not saying I need to know everything about the industry – especially from an agency point of view, I don’t necessarily need to be an expert in all the sectors we build for (currently ranging from charities to government sites, through to various ecommerce sites – from gardening equipment and plants to scientific consumables) – but I need to know the basics, at least, for testing and QA.

I also need context about the project itself – what’s out of the box and what’s a custom functionality that I might not have seen before? If I’m doing manual testing I need to be able to use the site effectively, and as a user would, in order to get you the best feedback possible.

I also test better if I’m connected to a project – if it’s something I feel like I know.

I have been challenged on this – while discussing this episode, Mark Jones asked me if actually having domain knowledge – especially for an ecommerce or other project that will be marketing-driven to get visitors – would hinder my testing, as I’d know what the product owner would expect me to do. If I came to the site completely without any knowledge of what was expected of me as a user, would I do more effective exploratory testing, as I’d be relying on the site to tell me all I needed to know? Which is an interesting idea. I’d like to think that I can divorce myself from my knowledge (like I do with tech knowledge) and be able to say ‘yes I can do this this way, but I’d expect to be able to do it that way as well or instead’. But it’s something to keep an eye on, to see if this desire to know about the product is influencing my testing.

Mark also suggested that in addition to my mind map I also write down all my testing steps to slow myself down and ensure I’m looking at things properly. This might be something I have to start doing as part of an exploratory testing session, and just take copious notes, and see what comes out of it.

So, I prefer having domain knowledge – I think it helps me more than it hinders, especially if I try to keep aware of my biases, knowledge, and expectations of how websites work – but too much knowledge may be a bad thing. What do you think, listeners? Do you have a preference? How much domain knowledge do you have? As someone who’s done far more agency work than SaaS or other project work, I’d be interested in how testers in those settings find their domain knowledge affects (or doesn’t affect) their testing.

Footnote

[1]Domain Knowledge – Is it Important to Testers?