Ep 77: The sonic screwdriver won’t get me out of this one

This week I talk to Dan Billing! (Check out his podcast, Screen Testing, co-hosted with another friend of the show, Neil Studd!)

We cover how to get into security testing, take a brief look at the security testing mindset, and share resources to help you start security testing ethically, legally, and without making your sys admins angry.

Topics covered:

  • Resources and tools!
  • Being legal and ethical
    • Check your local laws – in the UK/EU/US it’s illegal to hack a production site without permission
    • There are deliberately vulnerable practice sites you can train on
    • If you’re bringing this testing into your workplace, seek permission first
      • Talk to your system admins/security team/technical team/line manager
    • Get a quarantined environment to work on (there’s a small sketch after this list of one way to enforce that in your own tooling)
    • Take a backup of the environment first
    • Warn your sys admin team before you start crawling sites/running reports – they may have logging and be alerted to suspicious behaviour (and do you ever really want to piss off your sys admins?)
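On that last point: you can bake the caution into your own tooling. Here’s a purely illustrative Python sketch (not something from the episode) of a guard that refuses to send test traffic to any host that isn’t on an explicit allowlist – the hostnames, endpoint, and payload are all hypothetical.

    # Illustrative sketch: refuse to probe anything that isn't an
    # explicitly permitted practice/quarantined host.
    from urllib.parse import urlparse

    import requests

    # Only hosts you have explicit permission to test.
    ALLOWED_HOSTS = {"localhost", "127.0.0.1", "staging.example.internal"}

    def probe(url, payload):
        """Send a single test request, but only to an allowlisted host."""
        host = urlparse(url).hostname
        if host not in ALLOWED_HOSTS:
            raise RuntimeError(f"{host} is not allowlisted - get permission first!")
        response = requests.get(url, params={"q": payload}, timeout=10)
        # Crude reflection check: did our payload come back in the page?
        return response.status_code, payload in response.text

    # e.g. against a local practice app:
    print(probe("http://localhost:3000/search", "<script>alert(1)</script>"))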

Thanks to Dan for being on the show, and thanks for reading/listening. If you want to support the show you can rate and review us on iTunes or check out the Patreon!

Ep 73: There is no spoon!

This week on the show we’re talking to Maaike Brinkhof about cognitive biases! There are so many biases, but we talk about a few here, and share some resources for learning more.

Why are we biased?

Four categories

  • Too much information → We filter it
  • Not enough meaning → We fill in the gaps
  • The need to act fast → We want to feel in control
  • What should we remember? → We pick out what stands out

Is it a bad thing or a blessing in disguise?

As with all things, it can be both. Biases are evolutionary – brains can’t process everything – but they can also be a crutch: mental shortcuts that can cause issues.

If you view biases only as a bad thing, then you’re missing the point. You can choose to view them as something to learn more about, and a way of getting to know yourself better – such as learning to recognise when you might be biased and trying to adjust your behaviour.

Biases in testing – it’s not just about your own biases

Often when we are testing, we are looking for problems caused by the biases of the people around us, such as developers or project managers. People will often write their own biases into products without realising it.

We also have to be aware of our own biases guiding us, hiding or obscuring information. We might like to think we are objective in our thinking, but we are not perfect either.

Understanding biases can also help you explain and justify your testing, questions, problems or information that you’re providing to people.

Let’s talk about some biases!

Confirmation bias (the biggest bias of all, which can be broken down into several ‘smaller’ biases):

How do you deal with biases?

  • Work alone less and pair or mob more
  • Focus and de-focus
  • Self-awareness of your own concentration levels and behaviours
  • Awareness of biases
  • Referring to and using heuristics
  • Stepping back and examining your thinking, concentration, and working patterns to avoid relying on biases to take shortcuts.

Books and courses we’ve mentioned

Ep 56: And our survey says

An interview with Rosie Hamilton!

This year Rosie did a large survey of software testers and then did the number crunching to produce a snapshot of testers in 2016. All the data and the scripts she used are up on GitHub for transparency, and for others to do their own analysis if they wish.

Rosie has now made a web app for people to explore the data she found without having to run scripts or learn R. She has just published a blog post about how she did this, with a link to the app.

I’ve been interested in the makeup of testers for a while now – where we come from, and when and how we decide to become testers – and I knew when I saw the survey that I wanted to invite Rosie on to the show to chat about it.

We talk about the survey process from inception to analysis; what the survey taught her about testers and about surveys; where testers come from; and the plans for the next iteration of the survey.

Summarised transcription:

The inception, the decision making, the whys, the whens, etc. Did you have it all planned out from the beginning, with the analyses you wanted to do decided before you formulated the questions, or was it more ad hoc than that?

  • It kind of happened accidentally. A lot of people had been asking me the same questions. I tried to answer from personal experience but couldn’t, and reaching out to some friends on social media for their views didn’t get me any closer to answers.
  • Had a few doubts about asking people to fill out a form, mostly I wasn’t sure how I was going to analyse the results to find answers, but decided to go for it.
  • I spent about 4 weeks planning the survey. I had a list of questions I wanted answered and I worked backwards from there. I tested the survey on the testers I work with. The main concern which was raised while testing the survey was keeping it totally anonymous. Some of the questions got changed before it was unleashed on the community.
  • While the survey was open and collecting data I started planning how I was going to deal with the results. I started learning R which was a very steep learning curve at first.

Lessons learned, both from the survey and from doing the survey?

What I learned about surveys:

  • Open-ended text boxes are a real pain – cleaning up and making sense of those results takes ages.
  • I should have asked more demographic questions, which would have allowed the results to be split into more groups. From the start I didn’t want to divide the sample by gender, as I thought gender was irrelevant, but I wish I had asked which country or continent people were working in, as it would have been nice to look for patterns there.
  • The questions which were crystal clear, with closed answers like yes/no or true/false, gave the best results.
  • The more data collected the better. You can never have too much data.
  • I learned that R is REALLY powerful. It lets you start writing the analysis before all the results are in. You can just keep feeding it data as the collection of responses grows (there’s a sketch of that workflow after this list).
  • I think being transparent from the start about why the data was being collected, what it was going to be used for and also making the results publicly available was what has made this study quite unique. I know there are other surveys about testing like the ‘state of testing’ but the raw data from that one is not in the public domain, only someone’s interpretation of the results is available.
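Rosie’s analysis was in R, but the ‘keep feeding it data’ workflow translates directly to other stacks. Here’s the same idea sketched in Python with pandas, purely as an illustration – the CSV and column names are hypothetical stand-ins, not the real survey fields:

    # Illustrative only: re-run this as responses.csv grows.
    import pandas as pd

    def summarise(csv_path="responses.csv"):
        df = pd.read_csv(csv_path)
        print(f"{len(df)} responses so far")
        # Closed yes/no questions summarise cleanly...
        print(df["studied_computing"].value_counts(normalize=True))
        # ...open-ended text boxes need cleaning before they tell you anything.
        print(df["why_testing"].str.strip().str.lower().value_counts().head(10))

    if __name__ == "__main__":
        summarise()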

What I learned about testers:

  • We are a really diverse group and we all have our own stories.
  • I spoke to one tester while the survey was going on, and they said they had struggled to answer the ‘what made you apply for your first testing job?’ question. They came to work one day and were told ‘you are a tester now, your old job doesn’t exist’.
  • I feel testing still has a bit of a stigma around being an unglamorous job. It is very hard to hire testers. I also feel that testing is not really an aspirational career choice, given how few testers wanted to test while in education.
  • One of the results which surprised me was that 2 out of 3 testers who started testing in the last two years didn’t study computing. I think students who study computing are mostly moving into development jobs, which means junior testing jobs are being filled by people without computing backgrounds.
  • There was one person who completed the survey who, when asked what they liked about testing, answered ‘none of the above’. One in every four testers isn’t happy in their job, so I think testing isn’t for everyone. I also think there are some really bad testing jobs out there. In the worst workplaces we have testers getting blamed for missing bugs, and management not understanding what testers do and forcing testers to sign disclaimers stating there are no bugs.

Feedback from other people?

The feedback has been really positive. I’ve also realised even though the blog posts were written for a testing audience, other groups have been reading about the survey.

Future plans?

I have this crazy plan to turn all the data from this survey into a web application. I think this will be good because it will provide an interface for people to explore the data themselves. If I manage to create this web app that I’m dreaming about, I am certainly going to blog about it.

I definitely think I am going to survey testers again next summer, which should give me enough time to design a much better survey for 2017.

Ep 55: FOMO

Or, fear of missing out.

Quick news:

  • I’m at Liverpool Tester Gathering on Thursday 5th August. Come say hi if you’re there!
  • Next week I’m interviewing Rosie Hamilton!
  • The google form for telling me your origin story will be closed on Monday 1st August, so get going if you’ve not filled it in already!

This was not the episode I planned to record today. I couldn’t get that episode to flow properly, so instead I’m talking about FOMO. Now, this text is not as close to a transcript as normal, as I recorded this mostly on the fly, so it’s a bit rambly and not actually written down, but I’ve got the main points below.

On Tuesday, a group of us took part in the first #TuesdayNightTesting, which was a remote lean coffee evening. It was a lot of fun, and one of the questions was about getting testers in a slump into the community. It got me thinking about the community, FOMO, and people who don’t want to be in the community – something I’ve been mulling over for a while – and I’ve basically got a list of things I try to do to limit, control, and maximise my testing community involvement.

  1. Have a to-do list
     Use Bullet Journal, use Trello, use whatever tool you want, but write or type out a to-do list. Even if you remember everything you need to do, visualising it will help you see how much space you have for new things.

  2. Find a niche
     I am a magpie of testing; I’ve spoken about this before – I can find it hard to focus because everything looks so cool that I want to do it all. I think it’s easier for me to choose a niche for contributing, because that depends so much on what I enjoy, as opposed to what works well for my context, the problem I have, etc. So pick a contribution (blogs, Twitter, podcasting, speaking, code/tool development), and roll with it.

  3. Have a plan
     I am formulating a plan for the podcast. It has taken me a year to realise I need one, but I’m going to write down what the podcast is, what I want it to be, and how I’m going to continue it, so I know what I want to do. It doesn’t have to be detailed, but I think if you’re serious about doing something in a big way, you need a plan.

  4. Say no / Defeat FOMO
     Say no. You’re going to have to at some point, so get used to it with small things. Or say ‘let me check’ and go check fully before saying yes or no. If seeing other people do and tweet and blog about things you’re missing out on is going to bother you, pull back a bit.

  5. Take a break
     Related to the above: take a break. Self-care is important, and self-care whilst doing stuff and being part of tech is something I’m really interested in.

Ep 54: Origin Stories

A series of unconnected things:

  • I’ll be at the Liverpool Tester Meetup on 4th August! Come say hi!
  • I’ve got more interviews coming up and that is my plan for August – start reaching out for interviews.
  • 30 Days of Testing! This is a thing that Ministry of Testing is doing, and though I got off to a strong start, I’ve started to flag a bit. I need to get back on course!
  • I’ve started working with Selenium! I’ve picked Python, as I assumed a language I knew would be less of a learning curve, but that may change as we figure out what works for us. (There’s a tiny example of the kind of thing I mean after this list.)
  • I’ve been listening to a lot of creepy podcasts – The Magnus Archives, Archive 81. Highly recommended if you like creepy fiction!
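Since I mentioned Selenium above, here’s a minimal sketch of the kind of script I mean, using the Python bindings. The target page and assertion are illustrative only; you’ll need the selenium package and a matching browser driver installed.

    # A tiny Selenium-with-Python example (illustrative, not from a real project).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com")
        # Grab an element and make a simple check against the page.
        heading = driver.find_element(By.TAG_NAME, "h1")
        assert "Example Domain" in heading.text, "unexpected heading text"
        print("Page title:", driver.title)
    finally:
        driver.quit()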

I want to talk about getting into testing.

My session at TestBash will touch on getting into testing, and learning to test, in less than ideal circumstances, but today I just want to discuss getting into testing.

So many testers say they just ‘fell’ into testing – that they either didn’t see testing as their career, or didn’t even realise testing was a career – and I’m intrigued by it. I am a tester who did the same, and I’m sure there are other careers people just fall into, but as a tester, it’s testing I’m interested in.

I’d be interested in collecting experiences of how people got into testing; not necessarily their background, but maybe the moment you realised this is what you wanted to do, or the moment you shifted to testing almost full time – whatever you think of as the moment, or series of moments, you got into testing, I want to hear about it.

I got into testing by saying yes. I was working as a customer support person in a digital agency. My co-worker, who was our only tester, had moved into release management, which meant there was less time for him to test (we were ostensibly a start-up, so profit margins were slim). As the person bugs were reported to, and the person with the most free time (relatively speaking), I was the obvious place for some of that work to land. I was still new to the job, and pitching in and helping out had never really gone wrong for me before, so why not?

My testing was weak, relatively speaking. I mean, I have a science degree and can be pedantic about things, so you know, I wasn’t completely in the wild, but looking back I know I missed some things I would’ve caught now.

We were working off spreadsheets of test cases, which I actually found a useful starting-off point, because they gave me the foundations I needed to feel comfortable. There was nothing saying I couldn’t go off script, but we wanted to make sure we had everything down (incidentally, I think I only ever used the ones handed over to me, never ones I’d made myself – it seemed like a lot of faff for little reward).

I don’t remember the moment I became a tester, but I remember a moment I realised maybe I could do this.

I wasn’t happy with an implementation. It’s hard to describe without jumping into details, but it was wrong. I was proud of myself that I’d built up enough knowledge of the system to know it was wrong. We pointed out the issue and got it fixed. However, the problem was twofold (maybe threefold). First, we had offshore devs who had no real interaction with the site and no onboarding onto it. They were treated like code monkeys, and this led to a lack of investment in, and knowledge of, the system. Second, the feature request had had no real discussion. It came from the client, through the account manager, straight to said offshore dev, with no other input. So the feature hadn’t been technically planned, there were no real acceptance criteria on the ticket, nothing. Just ‘make this happen’.

That was the failure in my eyes – there was no guidance for the developer, and there weren’t clear lines of communication on this stuff, so he didn’t ask for clarification. He was new to the system, so he didn’t have the time or knowledge to really figure out what knock-on effects there were. So I started stepping in a little bit, trying to make sure we had a list of requirements and edge cases.

And I started to google. I wanted to see what other people did, how other companies handled this stuff (at this point our tester had left, leaving me as the go-to tester – did I mention the company was not so slowly going bankrupt?), and I found the testing community.

I think those two moments – realising I’d picked up a bug in the system and then finding the testing community (or the bits I inhabit anyway), were the moments I decided this is what I wanted to do.

And I think a lot of people get pulled into testing because ‘anyone can do it’; for me, it was realising that it goes deeper than that which brought me into the career properly.

I would absolutely love people to let me know how they got into testing. I can’t promise any fancy R scripts and graphs like Rosie Hamilton did, but I can promise at least a blog post on it!

Link to the form

Ep 53: You have no control who lives who dies who tells your story

Thiiings:

  • I have 3 potential interviews coming up for the podcast and I am excited! More details when I get them.
  • I am panicking about TestBash Manchester, as I said I’d do a practice talk at the local Drupal user group in August and lols, I’ve not even written the thing yet.
  • Ministry of Testing is doing a series of webinars, facilitated by Maaret Pyhäjärvi, which I’ve asked to take part in, so I need a topic for that as well.

Sooo, I want to talk about storytelling.

I’ve always been fascinated by stories. I was a reader as a child; I got lost in words more often than anything else. I read the fifth Harry Potter book the night before my Physics GCSE, staying up until the early hours because the alternative (not reading it) was unthinkable. I always thought storytelling was some kind of magic. Later, hearing stories became my obsession, through TED talks, podcasts, and conference talks, and it’s still my obsession now.

Stories keep you engrossed and invite you on a journey. You’re not just telling someone something; you’re informing them, showing them, bringing them along with you.

When I started hearing about testers as storytellers I sat up, and paid attention. I don’t have the imagination to write fiction – I’ve tried and I can’t hold a plot with two hands and a bucket, but writing has always brought me joy, and I am always trying to write and speak better, tell my own stories in a way that’s useful and engrossing.

And I realised, as I was reading about storytelling in testing and software development, just how much space there is for storytelling in our work. I knew about and had used personas before – how a given persona interacts with a product. What I hadn’t thought about is how writing bug reports is storytelling.

Part of a tester’s job is to advocate for bugs. We report the bugs, and in our reports are the whys and hows of raising them, and why we think they should be fixed. Sometimes this is nothing major: this doesn’t work at all, or doesn’t work in this environment. Sometimes it’s a bit more nuanced – “I would expect this here”, “I shouldn’t see this because of this reason” – and getting the details across fully and in a compelling way is important, because it will get the bug across to the people in charge of fixing it, or scheduling the fix.

Sometimes it’s not a bug as much as it’s something that’s not been thought of, or something has been missed, and you need to explain why we need to include the change you’re asking for. Stories can help advocate for things.

So how can we make our stories compelling? First, a story needs structure: a beginning, a middle, and an end. This person went here, did this, and this happened. This is what we thought was going to happen, this is what should happen, this is what we need to happen, and this is why we think it should happen.

Tell the report to someone. This is one reason I’m inclined to talk to developers first when it comes to an awkward bug, or a bug that’s not obvious. I can talk through things and get a feel for what details are needed. If I can’t, or it doesn’t make sense to, I’ll write the bug out in bullet points, with an introduction and end to structure the bug. I make the title relevant (no puns in ticket titles, no matter how tempting. I save all my puns for you, listeners <3), and I make sure I include a desired resolution. Never leave a story on a cliffhanger, people will hate you. A pox on bug reports that are literally ‘the menu doesn’t work’.

Choose your evidence – annotate and refer to the images, videos, and numbers you include; don’t just include them without explanation (something I am guilty of a lot). Otherwise it’s just decoration, something that makes the ticket bigger.

Exposition is allowed, and is preferable to too light a touch when it comes to details. At the same time, we’re not Charles Dickens, and none of us is being paid by the word (that wasn’t actually true of Dickens either, but I like the idea of it). Choose your details wisely.

Write first, then edit before sending it to a developer. Choose what details make sense when you review the entire report.

When the bug comes back to you, detail the resolution – say x was done, or on reflection, y was done because of reasons a, b, c. Finish the story, close the loop. Start again.

We should tell clients more stories. Instead of saying “this will do x and y”, say “this will allow you/your customers to do x and y, but not a or b”, or “we chose to implement this because of these advantages”.

And we should listen and help clients form their own stories. Point out plot holes, and suggest how to tighten the plot up. Offer different opinions, viewpoints, and expertise (you’d send a novel to a copy editor before printing, right?). Help them guide their clients or users or stakeholders through the journey of the site or the app, and make that journey part of a story. When something becomes part of your story it becomes something you care about, and engage with, which is important when it comes to developing successful software. Speak in a common language, and make sure the goal is the same on all sides.

Your user story statement should in fact tell people the story of the feature. As this person, this character, I want to do this thing to reach this goal. Break with the format if needed, but make sure the story elements are there.

Practice. I really like internal sprint walkthroughs. These happen prior to the sprint demo to the client, and they mean the team as a whole gets to look at the work we’ve done. We take each feature ticket and demo how we meet its criteria. It’s practice, for one, and the lead dev can find the most sensible way to demo the entire sprint (to tell the story of the sprint, maybe?). It also gives the team a chance to review progress as a whole, and make sure everything fits together well.

Hell, storytelling could be a good way of supplementing metrics with useful information. Metrics don’t tell you much of anything on their own, but you can at least supplement them with words. ‘X% of stories were rejected with bugs, because of x, y, and z’ is much better than ‘x% of stories were rejected’. Even better would be ‘we had issues with a few stories because of these reasons’, and then moving forward – but that doesn’t fit into a graph nicely.

There’s a million things I haven’t done – I’ve not spoken about what happens when the story is taken out of your hands, or talking to people you don’t work with (interviewers, other testers, people who aren’t techy), but I wanted to focus on the sprint cycle.

The whole agile process is an exercise in storytelling, and I think we need to get better at telling stories, and at helping other people develop theirs. Stories are fundamental to human nature – humans are rooted in narrative; we form lives and memories around stories. There’s no reason we can’t continue this in software development, and bring a bit more of the human into the software.

Further reading

https://www.import.io/post/8-fantastic-examples-of-data-storytelling/
http://janetgregory.ca/im-right-im-wrong-all-the-time/
http://www.gmgauthier.com/advocacy-observation-and-the-future/
http://thetesteye.com/blog/2011/10/software-testing-storytelling/
http://firstround.com/review/Lessons-from-Pixar-Why-Software-Developers-should-be-Story-Tellers/

http://visible-quality.blogspot.co.uk/2016/06/resurrecting-signature-series-of.html

Ep 52: And I say, Zangief you are bad guy, but this does not mean you are *bad* guy

Okay, I want to talk about something that rips me up when my mental health is being particularly hard on me, and when deadlines are looming.

I want to talk about being the bad guy. This was mentioned in Nicola Sedgwick’s fantastic TestBash talk on testers being human; a talk that is well worth a watch if you can – it’s on the Dojo[1].

It’s hard to be a tester and go with the flow, be under the radar. You’ve got to speak up because that’s the job – you have to point things out, and sometimes get people to explain their assumptions or decision making. I mean, you have to do that to yourself as well, but no one sees that. They see a contrarian person who wants to know why and how and when and everything else.

You’ve got to speak up and say that something’s wrong, or that you think something might not be right, or that there’s a scenario people haven’t thought of, or ask ‘what about on mobile?’, etc. Most of the developers I work with see me asking the same questions of clients as well, so I think that helps. I’m a dick to everyone!

Sometimes being that person, that dick, genuinely feels difficult.

Don’t get me wrong, if there’s a bug blocking the story, then it’s blocking the story, simple as. But that doesn’t mean it isn’t disheartening to reject someone’s work, especially if there are deadlines looming, or you know it’s been a tough piece of work to get through. Or this isn’t the first time you’ve sent a piece of work back to them. Or you know everyone is stressed enough as it is. Or you’re going to have to defend the bug – which is fine most of the time; I would much rather get called on shit and then come to a greater understanding, or get other people to understand my thinking, than not – but there are times when that is a difficult conversation. And sometimes you want the smooth conversation.

This is the job we’ve signed up for, but it’s not always enjoyable.

I’ve finally got out of the ‘no bugs raised = bad testing’ mindset, but there is some satisfaction in finding and pinning down a nasty, weird, or just plain juicy bug.

When my mental health is bad, or I’m stressed, or I’ve got multiple deadlines or anything, I want to put my head down and get on with work. If I find a bug, chances are I can’t do that. The majority of small bugs I’ll file without talking to a developer, but a big bug warrants a conversation. I want to double check that the bug actually exists and isn’t an environmental issue or a user issue, and I want to do that before I file the bug if possible so I can keep admin down. Or I want to show the developer, make sure they understand my notes and repro steps; I find it’s useful to get that information on a ticket before sending it over.

Then there are scrums where I have to give updates on my testing. I want to make my testing visible[2], and I need to give the team a heads up if something is blocking me or potentially blocking the sprint. I have to take the good with the bad.

And making my bugs known is an important part of the development process. Flagging up that I’ve had to send a story back to a developer is important information for everyone to have at the start of the day, as 1) they may not have seen it, and 2) they might need to rearrange their work plan. I just go for matter of fact, just like I’d mention any other fact.

Okay, so strategies!

I do try to keep my head down when I can. Headphones, moving away from my desk to work somewhere else, generally means that I can focus and keep my head down a bit, get back to where I need to be head-wise.

Find the lesson. I’ve had a week of fuckups recently, so I took a day when I wasn’t in work, away from the project, to evaluate and figure out what I could’ve done differently. This can also be done in retrospectives. For me, I need to balance time and thoroughness: a series of deadlines meant that I was too focused on quick and not enough on thorough. I’m forcing that time by writing notes by hand for each story I test instead of typing them. I find handwriting forces me to think more, and I’m more likely to remember things I’ve written down. I can then review these notes when I type them up after the testing session, which means I think about them again and can make sure I’m covering the bases I can.

Mindfulness. I’m out of practice; in fact I’m pretty sure my last session was in 2015. So I’ve started doing 10-minute sessions before I go to bed. I don’t think it’s a coincidence that I managed to finish this episode the night after a mindfulness session: it’s been hanging around for about a month and a half while I tried to find the words to describe what I find hard.

Find the good. I’ve focused on fuckups but sharing praise can also be a good way to minimise feeling like a bad guy. It also encourages other people to share praise, which I think is such an important part of team building.

Honestly, most of the time I love my work, I do. There are just times it highlights the cracks in my mental health, so I need to update my coping strategies to make sure I can still do the best job I can.

Footnotes

[1] https://dojo.ministryoftesting.com/lessons/do-testers-need-a-thick-skin-or-should-we-admit-we-re-simply-human-nicola-sedgwick
[2] http://katrinatester.blogspot.co.uk/2016/03/use-your-stand-up-to-make-testing.html

Further reading

Dr. StrangeCareer or: How I Learned to Stop Worrying and Love the Software Testing Industry

Ep 50: Vocab or What *do* you do?

Podcast recommendations!

I’ve just started listening to The Bright Sessions, which is a fantastic podcast about therapy sessions for ‘unusual individuals’. Also, if you’ve not listened to The Black Tapes or Tanis yet, and you love your supernatural/paranormal/creepy mysteries, then you’re missing out!

We testers bloody love vocab, don’t we? Some of the debates and comments I’ve seen on blogs and in Slack are really high level – sometimes I feel I need a firmer background in philosophy and formal logic to even begin to join in. While that’s intimidating at times, it’s also incredibly interesting. I’ve mentioned before how much I love the passion of the testing community. We love what we do, and are constantly striving to be better. As a community, we’re also (unsurprisingly) specific in our wording. Which sometimes leads to debates about wording to the detriment of the actual discussion going on (I’m sure I’m not the only one who’s seen a discussion on Twitter shut down by someone replying with ‘you mean checking’ and then patting themselves on the back – so clever! That is another rant, though. For the most part, we’re a friendly bunch).

One of the debates that is always going on is the subject of vocabulary. The testing-not-checking debate. The controversy around certifications, and what counts as ‘professional testing’ anyway? The one I want to talk about is one that I struggle with a lot: the subject of job titles.

When people ask me what I do, I say I’m a tester. My job title is QA tester. I do like the idea of testing the quality assurance that our company does, and in theory, I do do that.

When you’re testing, you’re not just testing the functionality in front of you. You’re testing assumptions, both your own and that of the developer who built the work. You’re testing how the feature fits in with what has come before it. One of the reasons I am so insistent on meeting the client as early as possible and being in workshops for requirements gathering is to get a feel for the client, their business, and what their priorities are. For example, we have a client that is really visual. Most of the time we need to wireframe features, or sketch them out as he finds it hard to fully visualise something using words alone, so we know to be more vigilant on the front end side of things. But it’s only through meeting the client and gauging their priorities that we know where to focus.

QA is the whole team’s responsibility for the most part, from the start of the project to the end (there are some contexts where one person being QA makes sense, but it tends to be more regulatory than testing). There are plenty of people who talk about this better than me, and I’ll link to them in the shownotes.

When I introduce myself, I say I am a tester, though that feels like I’m mis-selling myself. Hell, most of the podcast is less about the nitty gritty side of testing and is instead about requirements gathering, and building the right product. That’s where my priorities lie. I love testing, exploratory testing is seriously one of my favourite things, but I much prefer doing that on something I know (to the best of our abilities, and in the context of the project) fits the client’s needs.

However, I’m also aware that without the term tester? I wouldn’t be here, doing this. I wouldn’t be in Slack, I wouldn’t have gone to Brighton. The community we’ve built under this ridiculously broad umbrella is wonderful, and without the term, would this community exist? Yes, a rose by any other name, etc., but would the community have found another umbrella term?

I don’t think I’ve come to any great conclusions here – I still have no idea how to describe my job function. I love the idea of testing QA, and then UAT can be the client testing our QA process as well, but other than that? I have no idea. I’m grateful we all identify broadly as testers, regardless of how ill-fitting that term is, because I love talking to, reading, listening to, and meeting you all!

More reading

http://visible-quality.blogspot.co.uk/2016/05/boxing-tester.html
I’m QA. It’s in QA. It’s being QA’ed. You sure about that?

Ep 49: Leading the Witness

Firstly, an announcement: LTATB is moving to fortnightly episodes. I need to level up my content game, and I can’t do that in weekly slots, so there will be an episode every two weeks. The next episode will be on 19th May.

People are really bad at telling you what they want. Really bad. They think they know, but I guarantee you they don’t. And it’s not because they’re stupid, if anything, it’s because they know their business and processes really well. Or, they know their business and processes as they stand really well. Translating that to a new system can be difficult to do.

How well do you know your commute to work? Or how to cook your favourite meal? Or the layout of your phone screen or desktop? With the things you use all the time, you know what you’re doing well enough that you may not even think about every little step. You may not even realise if you’re doing something slightly (or not so slightly) inefficiently. Or you may realise, but there’s a reason for it (I occasionally walk a longer way to/from work because it’s prettier and there’s more chance of seeing dogs being walked, for example).

Or, another way, changing computers. You get a new or different computer, and you start transferring/re-downloading files and programs. How many times after that initial set up do you realise you’re missing a program? I’ve had a week go by easily before realising there is something missing.

These little things are going to be the things that actually turn out to be really integral to a system. The stuff that isn’t the main points (browser of choice, or the pasta in a lasagna) but the stuff that just makes everything smoother (setting up key shortcuts, or adding basil or oregano). Technically you can get away without them, but it makes things just a little harder, or less great, and the people using the system will miss it and be less willing to engage with what you’ve built. So, how do you figure these things out? Ideally, you watch people interact with the system as it stands, and have a play yourself. I spoke last week about inheriting legacy systems, and some of those techniques apply here.

Another way of doing this is going through user journeys with the product owner and team.

People are really good at telling you what they don’t want. There comes a point in a discussion about a system where you can kind of tell that the client isn’t sure which part you’re not getting, so I’ll go through my assumption of the user journey. Suddenly, when I get to the bit I’m pretty sure I’m wrong on, they’ll re-engage and point out where my assumptions are wrong. It’s easier to go ‘no, not like that’ than it is to go ‘this and this, and then this, except, shit, I missed a step here’.

However, this assumes that you’re wording things in the right way. Leading the witness is when a lawyer asks a leading question, a question that puts words into the witness’s mouth. In this line of work, it could be as simple as assuming something and phrasing it as ‘and then you do x’ as opposed to ‘and after that, what happens? X? Or something else?’. The idea is that you prompt them, but don’t fill in the gaps for them. In a situation where people are feeling tired after a long meeting, or a bit nervous, or overwhelmed by techspeak, a leading question could simply be agreed to, so you want to balance making things go smoothly against telling clients what they are going to get. We’ve implemented some things that on the surface make no sense, but for the context we’re working with, make perfect sense (you wouldn’t ever do it for a public-facing site, but for the internal processes of the client, it made sense). And we’ve worked on a few government/publicly funded or charity sites, where the processes are larger than anything we can do or effect change on, so we have to make them fit the system we build, not try to get them to change their processes for the new system.

The best and smoothest projects I’ve ever worked on are those where the whole team has that understanding: we’re the experts on the tech side, but the PO knows their team and users better than we do, so they say ‘we have this need, maybe we can have this?’ and we go either ‘sure’ or ‘yes, maybe, but how about this?’, and then it works amazingly well.

Further Reading

https://softwaretestingnotesblog.wordpress.com/2016/04/10/ignorance-as-a-tool-to-frame-better-questions

Ep 48: (Tron) Legacy

Gotta have a Wanz reference in there, though he may hate that it’s a reference to the new Tron…

We inherit a few legacy systems in our line of work. We have to get to grips with them across the board – PMs, QA, devs. We all need to know what the system does, how, and most importantly, why.

How do we do that?

Firstly, we do an audit of a site we’re inheriting, before it comes over to us. This helps the developers and system admins get to grips with any quirks of the code, or hosting needs. We can also do a security audit, make sure modules are up to date, and, if it’s Magento, make sure we know of any module licences that need sorting out, etc.

Then we start on the actual functionality:

At this point we’ve had interaction with the client, so we’ve got a sense of what the site is for, who it’s for, and what the business area is. We can get an idea of what the business needs are and how the site meets them (or doesn’t). But sometimes the client doesn’t have the in-depth knowledge of the site, functionality, or code that’s needed to support the site.

Documentation is always useful, even if you can only see a user guide or other internal documentation, because it gives you insight into which bits of the system are used most often and what features or information stakeholders need.

If documentation isn’t present or is old, then the code is another form of documentation. You can talk to the devs about what they’ve found, or even sit with them while they figure it out.

Finally, there’s the galumphing, see-what-you-can-find option. Paired with either of the previous techniques, it’s a good way to get to grips with the system and start to test it; even without anything to test against, you can test your assumptions.

If you need to do it by itself – if that’s how you have to find out about a system – then it’s not going to be as comprehensive, but it’s still useful. While you may not have any requirements, you can still give your test session a basic structure, so you can time-box and manage it properly.

So a basic structure may be something like:

  • Figure out inputs (valid, invalid, multiple, etc.). Even if you know nothing about what you’re testing, if there’s a UI you’ll have cues about which types of input are accepted, and you may be able to guess at valid/invalid inputs from there.
  • Figure out outputs (for all the inputs above). These can take the form of reports, data, messages – all sorts of outputs based on what you put in.
  • Map dependencies and interactions. From the above, can you see the flow of information? Can you see what happens if something fails along the way? Is it possible for the system to fail in that way?
  • Hypothesise about what you’ve learned: the input/output connections you’ve found, and any other pertinent information.
  • Repeat the above until you’ve narrowed down the expected results.

Take notes. This can be the start of your documentation if needed. You can write up your findings and then talk to the team (and I include stakeholders in the team here), see where the gaps in your knowledge are.

You may be able to start writing regression tests from there, if there are no tests present, or not enough. All new functionality should have tests, and if there are obvious tests that can be added, add them when you can. Each sprint should have a section for updating tests as needed.
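One low-cost way to start is a characterisation test: capture what the system does today, and fail when that behaviour changes. A hedged Python sketch – the endpoint and file name are hypothetical, and you’d run it under something like pytest:

    # Characterisation ("golden master") test for an inherited system:
    # record current behaviour, then flag any drift from it.
    import json
    import pathlib

    import requests

    GOLDEN = pathlib.Path("golden_product_listing.json")
    URL = "http://legacy.example.internal/api/products?page=1"

    def test_product_listing_matches_golden_master():
        current = requests.get(URL, timeout=10).json()
        if not GOLDEN.exists():
            # First run: record today's behaviour as the baseline.
            GOLDEN.write_text(json.dumps(current, indent=2, sort_keys=True))
            return
        baseline = json.loads(GOLDEN.read_text())
        # A difference is a conversation, not automatically a bug -
        # but you want to hear about it before your users do.
        assert current == baseline

The first run records the baseline; every later run tells you whether the legacy system has drifted from it.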

Worst case, the code isn’t up to the standards of your agency/developers, or it’s been under-maintained. You may or may not be able to refactor it to your standards or liking; either way, you can add tests as soon as possible to build quality in and start to improve things as needed.

Legacy systems may be awkward, and I’ve focused on the most awkward here, but they can be interesting, and you can learn a lot from picking up an old system and seeing where you can run with it.