Episode 81: It’s the small things that count

This week I talk to Andrew Morton about unit tests!

We cover the basics of unit tests: what they are and what they do. We also cover unit tests as documentation, pitfalls to avoid, and some tools you might see in the wild.
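
To make the ‘unit tests as documentation’ idea concrete, here’s a minimal Python sketch (the `apply_discount` function and its discount rules are invented for illustration): well-named tests spell out the business rules, so a newcomer can read the suite like a spec.

```python
# Hypothetical function under test: orders over 100 get a 10% discount.
def apply_discount(total):
    """Return the order total after any applicable discount."""
    return round(total * 0.9, 2) if total > 100 else total

# Each test name states one business rule, so the suite reads as documentation.
def test_orders_over_100_get_a_10_percent_discount():
    assert apply_discount(200) == 180.0

def test_orders_of_exactly_100_are_not_discounted():
    assert apply_discount(100) == 100

def test_small_orders_are_unchanged():
    assert apply_discount(50) == 50
```

Run with a test runner such as pytest; even without reading the function body, the test names tell you what the code is supposed to do.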

References and resources we mention:

https://github.com/testingchef/bad-units – A repo of bad unit tests
https://www.youtube.com/watch?v=q8I8-hVS5JI – Andrew’s Whiteboard Testing video on unit tests as documentation

Ep 80: Who lives, who dies, who tells your story

Manchester is my adoptive city, and I am even more in love with it after this week. Strange to feel pride and be heartbroken at the same time. I was hoping to get a bee tattoo but all the tattoo parlours have been overwhelmed by demand, so I may get one at a later date, and just make a donation to the fund.

Life goes on, and so do we.

I went to an event on Thursday about storytelling called What Makes You, You? I went primarily to get tips for my other podcast (Inner Pod) and to help people come up with their own stories, but I actually found a lot that connected with me on a testing level.

So, testers being storytellers isn’t a new thing. People have spoken about it before (on this podcast, even!), and Huib Schoots is running a workshop on it at the London Tester Gathering Workshops. But the guy who ran the talk/workshop (Andrew Thorp) really solidified the crossover between testers and storytelling.

Okay, so firstly, there were parts of this that were a bit wanky, like the concept of storyselling (selling something using storytelling). But he did admit he usually gives this talk to corporate types who sell things for a living, not to creative types looking for a way to sell themselves, or to get better at public speaking or interviewing. Overall I found it incredibly useful, and I now have some pointers in my arsenal when it comes to building a good story and helping people build theirs. If you’re in the area and interested, he’s doing the same talk again on 13th June.

The most common thing I hear when asking people to come on the show is something like ‘I’ve got nothing new to say’ or ‘I’m not interesting enough’, which, firstly, is bullshit: why else would I be asking you to come on the show? But more importantly, Andrew had three principles for telling a good story, and they are basically three principles for being a good tester.

Hoovering: Hoover up experiences. Not just your own, other people’s as well. Think about observational comedy and how those kinds of comedians can turn something small into something significant. They can see the importance in these things.

Testers should do the same. It’s only by learning about others’ experiences that I, as a web tester, can learn how people interact with the sites and apps they use. It’s by learning about new technology that I can see how it’ll connect with my job going forward. It’s noticing the small things, and the effect they may have, that means testers can catch cases others may have missed.

Be interested: Being interested is great on many levels; interested people tend to be interesting. Firstly, interested people tend to practise active listening, where the listener fully concentrates, understands, responds, and then remembers what is being said. It’s not reflective listening, where you repeat what the speaker said to drive home shared understanding, and it’s definitely not the false listening I find myself falling into occasionally: jumping in and slightly talking over people because I’m excited, or, even worse, just waiting for someone to stop speaking so I can speak. We all fall into the trap – we think faster than we can speak or listen – but we need to learn to listen.

How this relates to testing is fairly obvious: we should be listening to our co-workers, our stakeholders, our clients, everyone. Our job is just as much taking in other people’s stories as it is telling our own. In fact, one of my biggest challenges is getting clients to tell us their stories. People know they need a website, but smaller businesses, or even more corporate brands, may not know what message, what story, they are trying to sell.

Be willing to open up the bonnet: Curiosity! Curious people have things to say, and testers are curious people – not just curious in their work but outside it as well. Being curious not only facilitates the previous two points but also allows you to craft your own story. If you’re not curious, how do you know what excites you, or angers you, or just leaves you apathetic? These, across contexts, will allow you to craft your own story.

I’m giving but a small sliver of my Testbash talk at Liverpool Tester Gathering on 15th June. Come along! https://www.meetup.com/Liverpool-Tester-Gathering/events/240216573/

More links

Andrew’s free PDF on storytelling/storyselling: http://mojoyourbusiness.com/wp-content/uploads/2017/03/School-of-Mojo-Manifesto-21.03.17.pdf

Ep 79: It Takes Two (Rebroadcast)

Back in October last year, I spoke to Maaret Pyhäjärvi about pair testing. We did a session of pair testing then recorded our debrief/thoughts on the session. It is a great episode, and the shownotes are full of great resources, including the mindmap we used during our session.

However, the editing and sound quality were not amazing. Maaret was quite quiet compared to me, and it was difficult to listen to. I’ve been wanting to re-edit and rebroadcast it for a while, and circumstances finally came together to do so. The sound still isn’t perfect, but I think it’s a lot clearer, and the volume levels make more sense. I’ve also made the outro a bit quieter so there isn’t suddenly a big jump in sound.

I hope you all enjoy the new improved episode 61!

Ep 78: May the Testbash be with you

So this week we finally discuss Testbash Brighton and the freshly announced Testbash Manchester!

First, Testbash Brighton. Testbash was awesome as always. Brighton moved to a new venue but all the normal Testbash awesomeness applied!


Conference Day Takeaways

  • Nice to see a focus on empathy
  • Empathy for developers
  • Empathy for customers – Tito customer feedback/Ethics in testing
  • Good balance of tech/non-tech
  • Range of topics! Starting as a new tester in agile, AI and testers, ethics, how testing has changed, API testing, Toolsmiths.
  • Not all speakers were testers: there was a professor + 2 Devs turned CEOs!
  • Testers creating better Turing tests kind of blew my mind, despite it being bloody obvious now that I think about it
    • Testers as the ideal middleman between understanding how people work and how machines work.
  • Lots of juicy ideas to take into the new job I’ve just started – Amy Philips’ and Del Dewar’s talks covered starting on a new project and what good leadership really means.

Open Space Takeaways

  • A nice, low pressure environment to share
    • Can be a discussion or a question, not necessarily a talk or workshop (though they can be done as well)
    • Dan Billing did a demo of Burp Suite and a whirlwind tour of Ticket Magpie (see episode 77)
  • Matt did a session on bringing testing into education (schools, uni, etc – see episode 75 for our talk on a testing syllabus)
  • I did a session on mental health with Mike Talks, which was so awesome
  • A session popped up during the last break of the open space where we shared stories about people in our lives we were grateful for, and why – it was a genuine love-in.

Testbash Takeover!

So if you check out the lineup for Testbash Manchester you should see a couple of familiar names! Matt is doing a workshop on API testing, aimed at people wanting to get into API testing who may not have done so before.

I am giving a talk called Anxiety Under Test. With as few spoilers as possible, I’m going to be talking about my anxiety, and how I’ve applied the strategies I’ve learned to help cope with my anxiety to testing, with a quick look at how you can check in with your own mental health and keep on top of it.

The rest of the lineup looks A+ and we’re looking forward to an amazing few days!

If you can’t make the conference there will be the pre- and post-Testbash meetups as always. These are free to attend, so come along and hang out with a bunch of great people!

Ep 77: The sonic screwdriver won’t get me out of this one

This week I talk to Dan Billing! (Check out his podcast: Screen Testing (co-hosted with another friend of the show Neil Studd!))

We cover how to get into Security Testing, a brief look into the mindset of security testing, and share resources to allow you to start Security Testing ethically, legally, and without making your Sys Admins angry.

Topics covered:

  • Resources and tools!
  • Being legal and ethical
    • Check your local laws – in the UK/EU/US it’s illegal to hack a production site
    • There are deliberately vulnerable practice sites to train on safely, such as Dan’s own Ticket Magpie
    • If you’re bringing this testing into your workplace, seek permission first
      • Talk to your system admins/security team/technical team/line manager
    • Get a quarantined environment to work on
    • Take a backup of the environment first
    • Warn your sys admin team before you start crawling sites/running reports – they may have logging and be alerted to suspicious behaviour (and do you ever really want to piss off your sys admins?)

Thanks to Dan for being on the show, and thanks for reading/listening. If you want to support the show you can rate and review us on iTunes or check out the Patreon!

Ep 76: You don’t own the bug

This week I talk about Bug Advocacy! This came swirling into my mind after seeing a couple of older blog posts about the matter and it turns out I have feelings on Bug Advocacy!

Definition: Simply put, bug advocacy is just that: advocating for a bug – advocating that it should be prioritised and fixed.

There are different ways to do this:
– A decent bug report. A bug report in and of itself is bug advocacy, because you’re saying ‘hey, I found a thing, and this is the effect it has’. If you give enough information, you’ll be conveying the importance of the bug and its effects.
– Advocacy during prioritisation meetings. Telling people that you think this bug is important, that it should be fixed.

One of these is definitely something testers should do. Bug reports should contain enough information that people can see and understand the effect and impact a bug or defect will have on the system and its users. This information is crucial so that the bug can be prioritised correctly against the team’s priorities.

Sometimes you might not agree with how the team has prioritised your bug. They might not think it’s actually a blocking bug, or that it’s that big a deal. Maybe they don’t think it’s a bug but instead a change request.

So what do you do here?
You can make sure that the PO or lead dev truly understands what the bug is and what effect it has. Maybe demo the bug to them, or talk them through it. If they explain their reasoning to you, you can either counter it or accept it (however begrudgingly).

You can fight for it, and this is where people start to be anti-advocacy. If a tester’s job is to present information only, and let other team members take that information and use it, then fighting for a bug to be fixed goes against that.

There are also plenty of reasons a bug might not be prioritised in a way you agree with: ignorance, incompetence, lack of time or resources, or knowledge of the business or of user cases and functions that you don’t have.

If the team doesn’t think it’s worth fixing or fixing any time soon, you have to let it go. Just because you raised the bug, doesn’t mean you own the bug.

Further Reading

Ep 75: What is the plural of syllabus?

A few weeks ago I took part in Weekend Testing Europe, and I enjoyed it so much we did a whole episode on it! There is a blog post on the site that covers the scenario and discussions: http://weekendtesting.com/?p=4496

I was in Group C (the C stood for Cool): http://weekendtesting.com/wp-content/uploads/2017/02/WTEU73-Group-C.pdf

Our mission was to imagine we’d been granted permission to present a testing module within a university Computer Science course for a single semester (12 weeks of 1-hour classes), plus homework/assignments.

This didn’t give us a lot of time. I found there were a few immediate difficulties in coming up with a syllabus.

Where do you start?

Do we start with the philosophical stuff – “What is testing?”, “What is a tester?” – or jump straight into the nitty gritty (techniques, heuristics, etc.)?

Honestly, if someone had said ‘quality is value to some person’ to me back then, I wouldn’t have known what it meant. Obviously it makes sense to me now, but I’m not sure it would have at the start. YMMV and all that. To balance that, you don’t want to go too deep and detailed too early; an overview and introduction are needed.

What don’t you include?

By choosing what to include you implicitly exclude some things, but there are also things you make a deliberate choice not to include. Do we give a flavour of the types of testing you can specialise in (security testing, performance testing, etc.)?

There’s not enough time to cover all of it in any type of detail but a high level look at how varied testing is might be enough to give people an idea of what to look into more.

What do you include?

Everyone has a different testing role, and path into testing, which makes boiling testing down into a series of lectures difficult, but I think there are some core skills we can set down:

  • Communication
  • Thinking
  • The various flavours of testing
  • The role of automation
  • The role of manual/exploratory testing
  • Shift left and right

Things I didn’t include but considered:

  • Learning
    • They’ll be getting some of these skills by being at uni.
  • Code skills
    • This is part of a comp sci degree. Yes there will be differences in how to code tests but I feel the degree will give a good background.
  • Note taking.
    • While this can be done as part of an assignment, I feel, again, that some of these skills come from uni.
  • Homework:
    • Practical
      • How to provide a good practical experience
        • Test real apps?
        • Test other programming students’ project work?
      • What experience to offer?
        • App?
        • Website?
        • API?
        • Mobile
    • Bug reports?
    • Testing notes
    • Non-practical
  • Resources out of the wazoo
    • Clear
    • Concise
    • Useful
    • Relevant

Challenges with this idea

There are several challenges with trying to create a module like this in reality:

  • How do we keep the module up-to-date? The industry moves extremely fast.
  • How do we balance keeping the content general but interesting? How do we improve awareness of specialised areas without having to go into detail?
  • How do we help improve awareness of testing? Modules such as this would only help those that attend university, and many testers don’t attend university.
  • Do we even need a degree or module? We wouldn’t want to see such a module become a pre-requisite for testing and turn people away. Maybe it’s even a strength of testing that people become testers from lots of different backgrounds.

Ep 74: Around the World in 80 Tests

Firstly: This month is #TryPod, a month where podcasters, and people who love podcasts, bring podcasts to those who do not yet listen to them. I share the love for some of my latest subscriptions here.

Secondly: Matt and I will both be at Testbash Brighton! Come say hi if you’re there 😀

And now, onto the episode. This week we’re discussing localisation:

Here are our talking points, in both mindmap and notes form:

  • Cultures
    • language
      • error messages
      • context for translations
      • play on words
    • expectations
    • Local holidays
    • Personal name and title conventions
    • Aesthetics
    • Colour symbolism
    • Ethnicity
    • Socioeconomic status of people
    • Local customs and superstitions
    • Differences in address, post code and telephone numbers
    • Measurements
    • Voltage and current standards
  • Translations
    • Video or audio – dubbing or subtitles?
    • Printed materials?
    • Logos or graphics?
    • Layout issues with the length of translation or character sizes
    • Dialect? Register? Variety?
    • Format of numbers?
    • Date and time format?
    • Different calendars?
    • Do you need to translate at all? (Common phrases or graphics may not need it)
    • Consistency heuristic
  • Regulatory compliance
    • Laws
    • Taxes
    • Restrictions
    • Disclaimers
    • Great Firewall of China
  • Third party service availability e.g. Google Maps or PayPal
  • Device differences
  • Languages
    • Complex text layout where characters change depending on context
    • Writing direction (left to right or right to left)
    • Different text sorting rules
    • Numeral systems
    • Different pluralisation rules
    • Different punctuation
    • Keyboard shortcuts
    • Accessibility/Screen reading
  • Technical challenges
    • Translating once is easy – but maintaining translations as new text and fixes land?
    • Building software that is easy to localise
    • Separating locale-dependent parts
    • Deployments
    • Timezones
    • Daylight savings
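
A few of the points above – locale-dependent date formats, different pluralisation rules, and separating locale-dependent parts from application code – can be sketched in Python. This is a toy example with an invented locale table (real projects would lean on a library such as gettext or ICU/CLDR data); the point is that locale rules live in data, so adding a locale means adding an entry, not changing code. Polish shows why a simple singular/plural switch isn’t enough: it has three plural forms.

```python
from datetime import date

# Invented locale table: each locale carries its date format and a
# pluralisation rule, kept separate from the application logic.
LOCALES = {
    "en_GB": {"date": "{d:%d/%m/%Y}",
              "plural": lambda n: "item" if n == 1 else "items"},
    "en_US": {"date": "{d:%m/%d/%Y}",
              "plural": lambda n: "item" if n == 1 else "items"},
    "pl_PL": {"date": "{d:%d.%m.%Y}",
              # Polish: 1 -> singular; 2-4 (but not 12-14) -> one plural
              # form; everything else -> another.
              "plural": lambda n: (
                  "przedmiot" if n == 1
                  else "przedmioty" if 2 <= n % 10 <= 4 and not 12 <= n % 100 <= 14
                  else "przedmiotów")},
}

def format_date(d, locale):
    """Render a date in the locale's preferred format."""
    return LOCALES[locale]["date"].format(d=d)

def describe_count(n, locale):
    """Pick the correct plural form for a count of items."""
    return f"{n} {LOCALES[locale]['plural'](n)}"
```

So `format_date(date(2017, 6, 13), "en_US")` renders the month first while `en_GB` renders the day first, and `describe_count` picks a different Polish plural for 3 than for 5 – exactly the kind of difference a consistency heuristic alone won’t catch.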

Ep 73: There is no spoon!

This week on the show we’re talking to Maaike Brinkhof about cognitive biases! There are so many biases, but we talk about a few here, and share some resources for learning more.

Why are we biased?

Four categories

  • Too much information → We filter it
  • Not enough meaning → We fill in the gaps
  • The need to act fast → We want to feel in control
  • What should we remember? → We pick out what stands out

Is it a bad thing or a blessing in disguise?

As with all things, it can be both. Biases are evolutionary – our brains can’t process everything – but they can also be a crutch: mental shortcuts that cause issues.

If you view biases purely as a bad thing, you’re missing the point. You can choose to view them as something to learn more about, and as a way of getting to know yourself better – such as learning to recognise when you might be biased and trying to adjust your behaviour.

Biases in testing – it’s not just about your own biases

Often when we are testing, we are looking for problems caused by biases from people around us, such as the biases of developers or project managers. Often people will write their own biases into products without realising it.

We also have to be aware of our own biases guiding us, hiding or obscuring information. We might like to think we are objective in our thinking, but we are not perfect either.

Understanding biases can also help you explain and justify your testing, questions, problems or information that you’re providing to people.

Let’s talk about some biases!

Confirmation bias: the biggest bias of all, and one that can be broken down into several ‘smaller’ biases.

How do you deal with biases?

  • Work alone less and pair or mob more
  • Focus and de-focus
  • Self-awareness of your own concentration levels and behaviours
  • Awareness of biases
  • Referring to and using heuristics
  • Stepping back and examining your thinking, concentration, and working patterns to avoid relying on biases to take shortcuts.

Books and courses we’ve mentioned

Ep 72: Mayday! Mayday!

Today, in the first of what will probably be a semi-regular series, ‘How is [career] like testing?’, we cover Air Crash Investigations.

Investigating what has caused an accident/bug

  • First action?
    • Find the black boxes! (logs/monitoring)
    • Collect all of the evidence (effects of the bug, clues in data that led to it, eye witness accounts)
    • Come up with hypotheses with the data and attempt to disprove
    • Simulate the problem/Steps to reproduce
    • Sometimes it’s not possible to identify the exact problem; you can only give a summary of findings and suggest possible improvements

Implement new procedures

  • Providing evidence and explanation for new procedures
  • A tester/investigator only provides recommendations, they do not implement them

Post-incident investigation and action

  • Mentality
    • Focus on facts
    • Thinking outside of the box, outside of the immediately obvious
    • Gathering as much information as possible – even if it appears irrelevant – then assessing with all the information
    • Coming up with hypotheses and attempting to disprove them
    • Being unafraid of uncovering the truth, despite politics
  • Needing to work fast
    • Evidence may disappear over time
    • A need to quickly discover if there are any fatal design flaws that could affect other similar planes
  • Always considering human factors
    • Considering the mental state of those involved
      • What were they thinking?
      • Were they stressed or under pressure? Did they have other motivations which drove their decisions?
      • What was their experience?
    • Psychology, sociology
      • Seniority affecting juniors?
      • Popularity (“no one questions this person”)
    • Cultural or language barriers
      • Tenerife accident – “OK” meaning both confirmation of receiving and giving clearance.
      • Korean Air Cargo Flight 8509 – communication breakdown because junior officers wouldn’t take control from the captain.

Not like testing?

  • It’s only investigating incidents, not really being involved directly in the process of building
  • We have to explore what hasn’t happened yet and try to create the problems, as well as triage and investigate them.

Some anecdotes…

Test environment not the same as production

Atlantic Southeast Airlines Flight 2311

  • Propellers got stuck in a position that caused drag
  • Was tested for this scenario, but only on the ground in a lab
  • Accident happened because it behaved differently at altitude, in the air
  • The investigator (tester) felt sure this was the case, but had to prove it with a live test that shocked the engine suppliers

Checklists – how difficult it is to write them

1999 South Dakota Learjet crash

  • Checklist for depressurisation warning started with diagnosing the problem
  • By the time the crew got to the check about donning their oxygen masks, they had already lost their ability to think
  • Checklist should start with donning masks no matter what

The interaction of humans with automation – people tend to assume software is smarter than it really is

Asiana Airlines Flight 214

  • Plane crashed due to landing short of the runway
  • Caused by the pilots inputting the wrong setting into the autopilot
  • Pilots were trained to let the automation assist them
  • This broke down when they gave the automation the wrong command and they didn’t understand its behaviour
  • Lesson here on abstracting important procedures and relying on automation
  • Automation cannot think or second-guess, but people can believe it’s more intelligent than it really is