Ep 49: Leading the Witness

Firstly, an announcement: LTATB is moving to fortnightly episodes. I need to level up my content game, and I can’t do that in weekly slots, so there will be an episode every two weeks. The next episode will be on 19th May.

People are really bad at telling you what they want. Really bad. They think they know, but I guarantee you they don’t. And it’s not because they’re stupid; if anything, it’s because they know their business and processes really well. Or rather, they know their business and processes as they currently stand really well. Translating that to a new system can be difficult.

How well do you know your commute to work? Or how to cook your favourite meal? Or the layout of your phone screen or desktop? With the things you use all the time, you know what you’re doing well enough that you may not even think about every little step. You may not even realise if you’re doing something slightly (or not so slightly) inefficiently. Or you may realise, but there’s a reason for it (I occasionally walk a longer way to/from work because it’s prettier and there’s more chance of seeing dogs being walked, for example).

Or, another example: changing computers. You get a new or different computer, and you start transferring and re-downloading files and programs. How many times after that initial setup do you realise you’re missing a program? I’ve easily had a week go by before realising something is missing.

These little things turn out to be really integral to a system. Not the main points (your browser of choice, or the pasta in a lasagna), but the stuff that just makes everything smoother (setting up keyboard shortcuts, or adding basil or oregano). Technically you can get away without them, but everything becomes just a little harder, or less great, and the people using the system will miss them and be less willing to engage with what you’ve built. So, how do you figure these things out? Ideally, you watch people interact with the system as it stands, and have a play yourself. I spoke last week about inheriting legacy systems, and some of those techniques apply here.

Another way of doing this is going through user journeys with the product owner and team.

People are really good at telling you what they don’t want. There comes a point in a discussion about a system where you can tell that the client isn’t sure which part you’re not getting, so I’ll go through my assumption of the user journey. Suddenly, when I get to the bit I’m pretty sure I’m wrong on, they’ll re-engage and point out where my assumptions are wrong. It’s easier to say ‘no, not like that’ than it is to say ‘this and this, and then this, except, shit, I missed a step here’.

However, this assumes that you’re wording things in the right way. Leading the witness is when a lawyer asks a leading question: a question that puts words into the witness’s mouth. In this line of work, it could be as simple as assuming something and phrasing it as ‘and then you do x’ as opposed to ‘and after that, what happens? X? Or something else?’. The idea is that you prompt them, but don’t fill in the gaps for them. In a situation where people are tired after a long meeting, or a bit nervous, or overwhelmed by techspeak, a leading question could simply be agreed to, so you want to balance making the discussion go smoothly against telling clients what they’re going to get. We’ve implemented some things that on the surface make no sense, but for the context we’re working in make perfect sense (you wouldn’t ever do it for a public-facing site, but for the client’s internal processes, it was right). And we’ve worked on a few government, publicly funded, or charity sites, where the processes are bigger than anything we can change, so we have to make the system we build fit them, not try to get the client to change their processes for the new system.

The best and smoothest projects I’ve ever worked on are where the whole team has that understanding: we’re the experts on the tech side, but the PO knows their team and users better than we do. So they say ‘we have this need, maybe we can have this?’ and we go either ‘sure’ or ‘yes, maybe, but how about this?’, and then it works amazingly well.

Further Reading


Ep 44 – Now for the Science bit

I felt a bit sciency, so let’s discuss the scientific method.

We all use it, whether we realise it or not. The basics:

  1. Oh, this is odd. Is it because of this? (That’s your hypothesis.)
  2. Design an experiment and test it.
  3. Update your hypothesis based on the experimental results.
  4. Keep going until you have nailed down a cause and effect.
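One everyday testing version of that loop is narrowing down which input triggers a failure by halving the search space: each halving is a hypothesis (‘the culprit is in this half’) that the experiment confirms or refutes. A minimal sketch, where the function names and the failing record are made up for illustration:

```python
def find_culprit(items, fails):
    """Binary-search for the single input that triggers a failure.
    Each iteration is one round of hypothesise -> experiment -> refine."""
    lo, hi = 0, len(items)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Hypothesis: the culprit is in the first half.
        if fails(items[lo:mid]):
            hi = mid  # experiment supports it: narrow to the first half
        else:
            lo = mid  # refuted: narrow to the second half
    return items[lo]

# Hypothetical failure: any batch containing the record "bad" fails.
records = ["a", "b", "bad", "c", "d"]
print(find_culprit(records, lambda batch: "bad" in batch))  # prints "bad"
```

In five records that takes at most three experiments rather than five, and the same shape of loop scales to bisecting commits or config changes.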

As testers, we can’t prove there are no bugs, we can only say we’ve not discovered or encountered any, backed up with the evidence of the tests we’ve done.

And then you have to gather the evidence of bugs you have found, and maybe use that evidence to advocate for those bugs. Not all bugs will be fixed, or even can be fixed, and so a decision has to be made. As well as what makes sense technologically or for the budget, we also need to choose what bugs to fix on the basis of what will be most useful to the users or clients, and the way to gauge this is to have evidence about the bug.

So the first step, once you’ve got a bug, is information gathering. Sometimes this is easy, and if you know the system you may already know what the bug is (a permissions issue, something missing, fairly obvious stuff), or you may have to do some digging to find out the details. At minimum I like to include steps to reproduce, what I expected, and what actually happens, with screengrabs if appropriate. If I need to explain why I expected what I expected, I’ll add that. That report is my findings and, essentially, my justification for why I think this bug should be fixed. Then I can either make the bug a blocker to the story or not.
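As a sketch, those minimum fields might look something like this. The field names and the example bug are mine, not a standard; adapt them to whatever your tracker uses.

```python
# A minimal bug report with the fields mentioned above. Everything
# here is a hypothetical example, not a real project's bug.
bug_report = {
    "title": "Saving a draft wipes the uploaded attachment",
    "steps_to_reproduce": [
        "Create a new article",
        "Attach a PDF",
        "Click 'Save draft'",
    ],
    "expected": "The draft saves with the attachment intact",
    "actual": "The draft saves but the attachment list is empty",
    "evidence": ["before.png", "after.png"],  # screengrabs
    "blocker": True,  # does this block the story?
}
```

The point isn’t the structure itself; it’s that anyone reading it can reproduce the bug and see why you expected something different.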

This then goes to the devs, and if a conversation about the bug is needed, it can happen with all the information documented in a (hopefully) clear and helpful manner. The exception is when I’m not sure whether the bug is actually a bug, or whether it should be a blocker, in which case I may talk to the lead dev and go from there, sometimes talking to the client if we want their opinion.

So that’s the basics. What else can we learn from the scientific community with regard to testing?

Scientific papers go through peer review before publishing. The idea is that the work is independently reviewed before it’s published (deployed to live), and then the world gives feedback (UAT).

In the software dev world, this is provided both by the tech and code review by another dev, and then testing by the testers*.

There is one part of the scientific method that testing (and exploratory testing especially) deviates from: the experiment design, or plan. A lot of work goes into an experiment plan, including setting the statistical significance level, p-values against the null hypothesis, or confidence intervals (these are all ways of determining how confident you need to be that what you’re seeing is not chance, essentially). And all this has to be done before you start, because it can determine your sample size.
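For a feel of what a p-value is, here’s a toy calculation with invented numbers: if a flaky test’s true failure rate really were 5%, how surprised should we be to see it fail 4 times in 20 runs?

```python
from math import comb

def binomial_p_value(n, k, p0):
    """One-sided p-value: the probability of seeing k or more failures
    in n runs if the true failure rate were p0 (the null hypothesis)."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 4 failures in 20 runs, null hypothesis of a 5% rate.
p = binomial_p_value(20, 4, 0.05)
print(round(p, 4))  # prints 0.0159: under a typical 0.05 threshold,
                    # so this probably isn't chance
```

In an experiment plan, you would pick the threshold (and from it the sample size) before running anything, precisely so you can’t move the goalposts afterwards.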

None of this is really done in testing outside of a lab, and even a formal test plan isn’t really done in exploratory testing; or it’s there, but not as rigorous.

That’s not to say we don’t have a plan. I use the AC (acceptance criteria) plus my knowledge of the system and the feature I’m testing, plus any feedback or results I get from the feature as I’m testing. We know the areas that carry the most risk and can focus on those. At the end of the session, we can review our notes, see if there are any areas we’ve skipped over, and go back and do further testing if needed.

What happens, essentially, is that we build a robust plan as we go along through the testing process, building and adding as we go. We may start with a framework, but we end up with a full test plan and execution report to use as evidence to support our hypothesis, which is that there are no bugs that we can find in the system we are testing.

*Let’s not talk about the failings of peer review in the scientific community, or the worrying trend of burying results that don’t fit the hypothesis. If you’re interested, read Ben Goldacre’s stuff; it’s fascinating and angering in equal measure: