Ep 46: Reasonable Doubt

A couple of things:

TESTBASH MANCHESTER OMGGGG. Come to Manchester! It’s awesome, I swear. Who needs a beach when you have the angry Beetham Tower?

ANYWAY, I will be there and I am doing a thing on Saturday! As well as a session that Richard Bradshaw kindly invited me to do, I really want to do something podcasty: either a recording session, or a lean coffee style thing for podcasting/YouTubing/blogging. I’ll build the idea up over the coming months, but get in touch if you want in!

Do you want to talk to me? Do you want to be on the ‘cast but only have a short message, or are you in another part of the world? I now have a Skype account! I also have voice messaging switched on, so you can leave me a message even if I’m not online and I can put it into an episode. The username is letstalkabouttests, because brand synergy is important, so yeah. Let’s actually talk about tests!

I’ve been doubting myself a lot recently – both professionally and personally. Nothing like being told someone thinks you’re good enough to do a thing to make you insist you’re the opposite.

So let’s talk about doubt. Doubt can have many forms and many causes:

• Imposter syndrome

• Being a newbie/feeling like your skills aren’t extensive

• Not having contextual knowledge


Doubt is risky, no matter what the cause. Causes can be split into two broad categories:

‘Reasonable’ doubt, where you know you don’t have the resources you need to do your work to the best of your ability – be that time, tools, knowledge, etc.

This doubt is sortable – the testing community is huge and chock-full of information: blog posts, podcasts, Slack channels, meetups, conferences, everything. You may not know where to start, but chances are you’ll either find a post somewhere that covers it, or find someone who’s willing to point you in the right direction.

The doubt can also apply situationally. If you inherit a site with a complex piece of functionality, you need to feel, as a team, that you understand it all. Sometimes that means you start testing without any test data or plan, just to see what you find.

This is where you need to manage your testing in a session, with a set time. You may not have a goal other than ‘find out more’, which is nebulous but also pretty easy to hit as goals go. You can even structure the session to help map it out. (Incidentally, mind maps: I can’t use them, I just find them really hard to do compared to writing a list. I am clearly a weirdo, but if anyone can point me in the direction of other note-taking/brainstorming formats, that would be useful.)
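For anyone who, like me, gets on better with lists than mind maps, here’s a minimal sketch of how a time-boxed session could be captured as a simple note structure. This is just my own made-up shape for it, not a template from any particular tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestSession:
    """A minimal, hypothetical note template for a time-boxed exploratory session."""
    charter: str                      # the (possibly vague) goal, e.g. "find out more about checkout"
    timebox_minutes: int = 60         # set the time limit up front
    areas_touched: List[str] = field(default_factory=list)  # what you actually covered
    questions: List[str] = field(default_factory=list)      # things to follow up on
    bugs: List[str] = field(default_factory=list)           # issues found, to write up later

session = TestSession(charter="Find out more about the inherited discount engine")
session.areas_touched.append("percentage discounts on bundles")
session.questions.append("Who actually uses stacked discounts?")
```

Even something this rough gives you a record to review at the end of the timebox.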

But sometimes you still can’t find all the information you need – you feel like you don’t know enough, which for me usually means I don’t know why this functionality was built, who uses it, and why. You may be able to get this from documentation, or from the client or stakeholders. If all of that fails, you could pick up bits and pieces from what stakeholders report as issues or request as new features, and build up a picture of what they need from there.

Another kind of doubt is doubt in you, and in your work. People who are new to exploratory testing, or who want test cases and test plans, may not feel that you can ‘prove’ your testing when using exploratory or session-based testing. That requires education, and maybe presenting your testing notes to people so you can show your work (all of ours are saved on the session, which is associated with the ticket).

This type of doubt is sortable though, even if it is hard. Next week I want to talk about unreasonable doubt.

Ep 44 – Now for the Science bit

I felt a bit sciency, so let’s discuss the scientific method.

We all use it, whether we realise it or not. The basics:

  1. Oh, this is odd – is it because of this? (Form a hypothesis.)
  2. Run an experiment and update your hypothesis based on the results.
  3. Keep going until you have nailed down a cause and effect (there’s a little sketch of this loop below).
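
To make that loop a bit more concrete, here’s a tiny, hypothetical sketch applied to a bug hunt – guess a cause, run an experiment that isolates it, and update the hypothesis based on the result. The candidate causes and the reproduces() check are invented for illustration:

```python
def reproduces(change):
    """Hypothetical experiment: apply one isolated change and report whether the bug appears."""
    # ... run the app with just this change and observe ...
    return change == "user lacks edit permission"   # pretend result, for the sketch

candidate_causes = ["stale cache", "user lacks edit permission", "empty optional field"]

for cause in candidate_causes:        # 1. "is it because of this?"
    if reproduces(cause):             # 2. run the experiment, update the hypothesis
        print(f"Cause nailed down: {cause}")
        break                         # 3. stop once cause and effect line up
else:
    print("No candidate reproduced it - time for new hypotheses")
```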

As testers, we can’t prove there are no bugs, we can only say we’ve not discovered or encountered any, backed up with the evidence of the tests we’ve done.

And then you have to gather the evidence of bugs you have found, and maybe use that evidence to advocate for those bugs. Not all bugs will be fixed, or even can be fixed, and so a decision has to be made. As well as what makes sense technologically or for the budget, we also need to choose what bugs to fix on the basis of what will be most useful to the users or clients, and the way to gauge this is to have evidence about the bug.

So the first step, once you’ve got a bug, is information gathering. Sometimes this is easy – if you know the system you may know what the bug is (permissions issue, something missing, fairly obvious stuff) – or you may have to do some digging to find out the details. At minimum I like to include steps to reproduce, what I expect, and what actually happens, with screengrabs if appropriate (there’s a rough example below). If I need to explain my reasoning as to why I expected what I expected, I’ll add that. That report is my findings and, essentially, my justification as to why I think this bug should be fixed. Then I can either make the bug a blocker to the story or not.
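As a rough illustration of that minimum (the field names and values here are entirely made up, not from any particular tracker), a bug write-up might look something like this:

```python
bug_report = {
    "title": "Discount not applied when basket contains a bundle",
    "steps_to_reproduce": [
        "Log in as a standard customer",
        "Add the 'starter bundle' to the basket",
        "Apply discount code SAVE10",
    ],
    "expected": "10% discount applied to the basket total",
    "actual": "Total unchanged; no error shown",
    "evidence": ["basket-before.png", "basket-after.png"],   # screengrabs if appropriate
    "reasoning": "AC for the story says codes apply to all product types",
    "blocker": True,   # my call on whether it blocks the story
}
```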

This then goes to the devs, and if a conversation is needed about the bug, it can happen with all the information documented in a (hopefully) clear and helpful manner. The exception is if I’m not sure whether the bug is actually a bug, or whether it should be a blocker, in which case I may talk to the lead dev and go from there, sometimes talking to the client if we want their opinion on the bug.

So those are the basics. What else can we learn from the scientific community with regard to testing?

Scientific papers go through peer review before publishing. The idea is that the work is independently reviewed before it’s published (deployed to live), and then the world gives feedback (UAT).

In the software dev world, this is provided both by the tech and code review by another dev, and then testing by the testers*.

There is one section of the scientific method that testing (and exploratory testing especially) does deviate from: the experiment design, or plan. A lot of work goes into an experiment plan, including setting the statistical significance level, p-values against the null hypothesis, or confidence intervals (these are all ways of determining the level of confidence needed that what you’re seeing is not down to chance, essentially). And all of this has to be done before you start, because it can determine your sample size.
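
Just to show the sort of up-front arithmetic experimenters worry about (and testers mostly don’t), here’s a hedged sketch using scipy: a p-value against a null hypothesis about a failure rate, and a rough normal-approximation 95% confidence interval. The numbers are invented:

```python
from math import sqrt
from scipy.stats import binomtest, norm

failures, runs = 12, 200                 # invented results: 12 failures in 200 runs
observed_rate = failures / runs

# p-value against a null hypothesis that the true failure rate is 5%
p_value = binomtest(failures, runs, p=0.05).pvalue

# rough 95% confidence interval (normal approximation)
z = norm.ppf(0.975)
margin = z * sqrt(observed_rate * (1 - observed_rate) / runs)
print(f"rate={observed_rate:.3f}, p={p_value:.3f}, "
      f"95% CI=({observed_rate - margin:.3f}, {observed_rate + margin:.3f})")
```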

None of this stuff is done in testing outside of a lab, really, and even a formal test plan isn’t really done in exploratory testing, or it’s there but not as rigorous.

That’s not to say we don’t have a plan. I use the AC plus my knowledge of the system and the feature I’m testing, plus any feedback or results I get from the feature as I’m testing. We know the areas that have the most risk and can focus on those. At the end of the session, we can review our notes and see if there are any areas we’ve skipped over, and go back and do further testing if needed.

What happens, essentially, is that we build a robust plan as we go through the testing process, building and adding as we go. We may start with a framework, but we end up with a full test plan and execution report to use as evidence to support our hypothesis, which is that there are no bugs we can find in the system we are testing.

*Let’s not talk about the failings of peer review in the scientific community, or the worrying trend of burying results that don’t fit the hypothesis – if you’re interested, read Ben Goldacre’s stuff, it’s fascinating and angering in equal measure:
http://www.badscience.net/
http://www.amazon.co.uk/Bad-Pharma-How-Medicine-Broken/dp/000749808X

Ep 41: Friends don’t let friends use IE6 and Ep 40 revisited

I want to revisit mobile testing, as I realised I focused a lot on choosing the correct devices and tools and then was just… ‘and then you test stuff? Websites are easy to test LOL’, which is a little reductive, so I’m gonna talk about actual testing. I think the reason I skipped over it is that my input into mobile stuff generally comes in at requirement capture. We’ve had more than one client completely forget that you can’t hover on a mobile device, so if they want that feature, we’ll have to consider how to implement it on a mobile device in a way that makes sense.

There are also things to consider like how user journeys and expectations change on mobile devices. You need to make sure links aren’t too small or too close together, and you need to make sure you don’t rely on hovers, dragging, or anything gesture-y that might either go against what gestures mean on a phone or that people would have trouble carrying out on a phone screen. Swiping is about as complex as you should go outside of games, I think.

I also missed some great tools you get from using emulators – the emulation of the touch screen is one of them. If you can’t test on devices, you can use Chrome’s dev tools, and the cursor will be replaced by a small circle to show the touch area.

I also like the network and location emulation. You can enter a latitude and longitude and the emulated device will act like you’re in that spot, for any geolocation-dependent features you may need (good for looking at shipping on shop sites).
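
If you want to script that emulation rather than clicking through dev tools, something along these lines works by driving Chrome through Selenium – a sketch, assuming Selenium 4 with a Chromium-based driver; the coordinates, throttling numbers, and URL are just examples:

```python
from selenium import webdriver

driver = webdriver.Chrome()

# Pretend we're in Manchester for any geolocation-dependent features
driver.execute_cdp_cmd("Emulation.setGeolocationOverride", {
    "latitude": 53.4808,
    "longitude": -2.2426,
    "accuracy": 100,
})

# Rough 3G-ish network conditions (a Chrome-only Selenium extension)
driver.set_network_conditions(
    offline=False,
    latency=100,                     # ms of added latency
    download_throughput=750 * 1024,  # bytes/sec
    upload_throughput=250 * 1024,
)

driver.get("https://example.com/shop")
```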

Footnotes:
https://dojo.ministryoftesting.com/series/mobile-software-testing

Part two, and we hit the desktop versions.

You can’t fully support every browser; it’s like trying to hit WCAG AAA – it’s impossible for all areas of a site or application, and even then you’ll still miss some users. So you support as many as you sensibly can, and then try to degrade as gracefully as possible. As long as people can use it on IE7, does it matter if it’s not as pretty as on IE11, or Firefox?

Cross-browser testing is important because people have their own weird set-ups (one of my coworkers installed Fedora on his Mac because he couldn’t get on with OS X, for example), and so you want to make sure as many people as possible can access and use your site or app.

We’ve been revisiting our browser support policy in light of Microsoft dropping most versions of IE out of support, and from reviewing the stats it does look like most IE users are on IE11. Oddly enough, IE8 is the next most popular IE version, but that’s only 2% market share and no longer supported, so it’s definitely in the ‘gracefully degrade’ category of browsers.

Global Stats
http://gs.statcounter.com/#browser_version_partially_combined-ww-monthly-201501-201601

And market share is how you define the browsers you’re willing to support. You can check on a global scale and, if you’ve got access to a current site with Google Analytics, you can pull data that’s directly relevant to the client to put a proposal together (see the sketch below).
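
One quick way to turn that into numbers is to export the browser and version report as a CSV and total it up – a sketch assuming a hypothetical browsers.csv with ‘browser’, ‘version’, and ‘sessions’ columns:

```python
import pandas as pd

# Hypothetical export from the client's analytics: browser, version, sessions
df = pd.read_csv("browsers.csv")

share = (df.groupby(["browser", "version"])["sessions"].sum()
           .div(df["sessions"].sum())
           .mul(100)
           .sort_values(ascending=False))

# Anything above ~2% might go in the 'fully supported' column of the proposal
print(share[share > 2].round(1))
```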

There’s more than just browser versions and browser/OS combinations to think of – there are things like HTML standards: which browsers support which parts of CSS3/HTML5? What about JS? Web fonts? OH GOD. It’s a pain in the arse, but it’s got to be done.

So once you’ve got a policy covering the browsers you’ll fully support, and the ones where you’ll ensure the system works and is usable but may not look as good, you can base your testing on that policy.

You can save more time by only testing on the latest versions of Chrome and Firefox, as these are usually upgraded automatically, and even then, unless someone has held onto a really old version, the differences are minimal. It’s only Safari and IE where we have to take note of the version.

Again, Ghostlab can be used to test multiple browsers in one go. I have some issues using it with a VM, so testing IE on my Mac is a pain with it, but even if I just test Opera/Chrome/Firefox/Safari in one go and then do IE separately, it still reduces the workload a good amount.

I have two strategies for cross-browser testing. I test each story individually on browsers – it’s rare that anything other than visual bugs comes out of this testing – and then, towards the end of the project, I do a couple of test sessions to give the site a full integration check, and I’ll do cross-browser and device testing then. This is just to catch the bits of integration or visual bugs that may have been missed while testing individual stories, and it makes sure everything is more polished.

Like with device testing, I tend to screenshot and make notes as I go, then create issues and write up afterwards.

Footnotes:
http://apps.testinsane.com/mindmaps/uploads/Cross%20Browser%20Compatibility%20Testing%20Basics%20-%20TestInsane%20-%20Santhosh%20Tuppad.png

Ep 12: Give Us A Clue

Or the importance of context and domain knowledge.

This episode was inspired by a blog post on uTest that came into my inbox this week, and I wanted to chat about and expand on the points made there[1].

As a tester, not having any context bugs me (hah!).

I need some insight into the product or the client to feel comfortable testing. Having the spec or AC is not necessarily enough (though it can certainly be useful for basic functionality testing and checking). But for the other types of testing, where the lines between QA and testing blur – testing fitness for purpose, and the actual solution to the issue itself – I need to know who we’ve produced this work for, and why.

What industry? B2B? B2C? Not commercial? What users are they aimed at? Is the functionality I’m testing just for the CMS users or is it for visitors or customers? What are they hoping to achieve with what we’re giving them?

All these things inform how I test things, the questions I ask, and the feedback I give.

I’ve spoken before about how my background informs my testing – I know I make certain assumptions about a website’s or app’s functionality, because I use them all the time and am a techy person. That is something I have to keep in mind when I’m testing, and plan around. I’ve started looking into mindmapping a basic ‘test a website’ mind map, and I’m hoping to put some steps together to remind me to test for things like the user journey as well as the functionality I am actually testing.

Having more tech experience than the users we are testing for, but less domain knowledge, is the worst of all worlds – I can’t even begin to figure out the best way to test the system. I try to mitigate the former by being aware of my knowledge and of best UX and accessibility practices, but the latter needs context.

I’m not saying I need to know everything about the industry – especially from an agency point of view, I don’t necessarily need to be an expert in all the sectors we build for (currently ranging from charity and government sites through to various ecommerce sites, from gardening equipment and plants to scientific consumables) – but I need to know at least the basics for testing and QA.

I also need context about the project itself – what’s out of the box and what’s custom functionality that I might not have seen before? If I’m doing manual testing I need to be able to use the site effectively, and as a user would, in order to give you the best feedback possible.

I also test better if I’m connected to a project – if it’s something I feel like I know.

I have been challenged on this – while discussing this episode, Mark Jones asked me whether actually having domain knowledge – especially for an ecommerce or other marketing-driven project built to pull in visitors – would hinder my testing, as I’d know what the product owner would expect me to do. If I came to the site completely without any knowledge of what was expected of me as a user, would I do more effective exploratory testing, as I’d be relying on the site to tell me all I needed to know? Which is an interesting idea. I’d like to think that I can divorce myself from my knowledge (like I do with tech knowledge) and be able to say ‘yes, I can do this this way, but I’d expect to be able to do it that way as well, or instead’. But it’s something to keep an eye on, to see if this desire to know about the product is influencing my testing.

Mark also suggested that in addition to my mind map I also write down all my testing steps to slow myself down and ensure I’m looking at things properly. This might be something I have to start doing as part of an exploratory testing session, and just take copious notes, and see what comes out of it.

So, I prefer having domain knowledge – I think it helps me more than it hinders, especially if I try to stay aware of my biases, knowledge, and expectations of how websites work – but too much knowledge may be a bad thing. What do you think, listeners? Do you have a preference? How much domain knowledge do you have? As someone who’s done far more agency work than SaaS or other project work, I’d be interested in how testers in those settings find their domain knowledge affects (or doesn’t affect) their testing.

Footnote

[1] Domain Knowledge – Is it Important to Testers?