Ep 60: Go Soft or Go Home

This week I talk to Matt Bretten about Humans vs. Tools. Matt’s gonna be a voice you’ll be hearing more of moving forward. Stay tuned for more news!

This is an unfairly stacked deck, I admit. You can’t really take the position that tools are more important/better than the human side of testing. However, we did touch on myriad fascinating topics:

  • “A bad workman always blames his tools”
  • Psychology and its importance to software development
  • Writing code and using tools is the easy part; it’s harder to know when to make use of them and when not to, and to communicate and work together effectively
  • You could describe ideas, knowledge and skills as tools – for example I consider pairing a tool I find useful for certain situations
  • Boeing report – 80% of aircraft incidents caused by human error
  • The ELIZA effect

Ep 59: Hit Record (and play)

Quick news:

I am going to be at Leeds Testing Atelier on 20th September, which is a free one-day testing conference, and I’m very excited to be there [1]. There are going to be some amazing talks, and some people I’m excited about meeting IRL at last.

Obviously I am also going to be at TestBash here in my adoptive city of Manchester, which is also very exciting! Plus more podcast interviews and other awesomeness.

This week, I want to talk about Selenium IDE.

Selenium has two parts: the script management/automated testing bit (WebDriver) and a Firefox add-on (IDE) that you can use to record browser actions and then play them back. The IDE is often denigrated by the testing community, and for some very good reasons:

Recorded scripts are brittle and flaky (but then again, so is every automation script that relies on find_element_by_id, as far as I can tell). If nothing else, you’re only testing the UI, or through the UI [2], and that can cause issues (Mark Winteringham talks about this briefly in this blogpost here, and I saw him do a talk about this in Liverpool this month – it was very good, highly recommended, and the video is on YouTube [3]). Some of these tests should be pushed down to the API/service level, not the UI level, and that isn’t really possible using the IDE add-on.

They get mis-sold. There are a lot of ‘record and play automation: so easy anyone can do it’ adverts and that annoys testers and leaves a bad taste in a lot of mouths (and rightly so). Testing is skilled work, and the idea that an automated record and play system can replace the work of a tester – even if that tester is solely an automation tester – is ridiculous. Writing them requires a human, and a tester at that. Someone who can bring all the skills of a tester; the curiosity, the need to gather information, things like that.

There are many reasons IDE is not a good choice for building an automation suite. But I’ve used IDE, and I found it useful, so I’m going to share my use case with you today.

I’ve literally just started Selenium, in bits and pieces, between other bits of work and podcasting, etc., and I had no idea where to start. I could find the header of the script from the documentation (where you call the webdriver and other bits you need from Selenium), but after that I wasn’t sure. So I recorded a brief session around the site I wanted to get some automation on. The IDE has an option to export the scripts into code. So I exported the script into Python, which is the language I was planning to use, opened it in Sublime, and started to review the script.

The commands are slightly different, but it gave me a good starting point. From there I could make changes to make things more robust. When I wasn’t sure how to do something, having the IDE commands in front of me meant I could google them and find out what they did, what they were called in WebDriver, and whether I could improve them in any way (IDE uses implicit waits by default, for example, and I could change these to explicit waits).
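For illustration, an exported script tidied up along those lines might look something like this – the URL, element IDs, and values here are invented for the example, and it assumes the selenium package and a Firefox driver are installed:

```python
# Sketch only: the URL and element IDs are made up for illustration,
# and this assumes the selenium package plus a Firefox driver.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")
    # Explicit wait: poll for up to 10 seconds for the field to appear,
    # instead of relying on the IDE's default implicit waits.
    field = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "username"))
    )
    field.send_keys("test-user")
    driver.find_element_by_id("login-button").click()
finally:
    driver.quit()
```

The try/finally means the browser gets closed even if a step fails partway through, which the raw IDE export won’t do for you.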

In the world of Selenium based automation, it gave me a starting point. I converted the script to a ‘better’ webdriver script, and saved that out for repeated use. And once I got that down, it was a lot easier to start again, from scratch, to do another test. And yeah, it was hard and I still have no idea why people like coding, but I did it, and I felt I had a good grounding to do it and google about for help (read: copy and paste shit off stackoverflow as I am a true developer 😉 ). And now I’ve proven the concept, I’m doing more and it’s starting to form an automation strategy, which is incredibly exciting.

So IDE is a tool, and like any other it can be used or misused. For me, it’s a great starting point for Selenium, where I would probably still be bashing my head against a wall if I tried to build this from scratch. I have a lot to learn, but this was certainly a good springboard.


[1] http://leedstestingcommunity.co.uk/
[2] http://www.mwtestconsultancy.co.uk/cross-browser-checking-anti-pattern/
[3] https://www.youtube.com/watch?v=VGNxv9ilFbQ

Ep 58: Hot tub time machine

Or: The future of testing!

This super bonus episode is the second of my interviews with Leigh Rathbone! We make several references to the previous episode (see here), so you may want to catch up on that episode first.

In this episode we discuss:

  • James Whittaker and the Hot Tub being the future of the internet of things
  • The changing landscape of testing and adding value
  • Crowdsourcing testing
  • Testing is dead, long live testing

Ep 57: Don’t go chasing waterfalls

This week is a guest episode! While I was in Liverpool for Liverpool Tester Gathering (http://www.meetup.com/Liverpool-Tester-Gathering/), I managed to grab some of Leigh Rathbone’s time for a couple of episodes! This is the first one and is about transitioning from Waterfall to Agile, with a focus on testers and test teams.

We cover the pros and cons of the different ways of working, and how to alleviate some of those cons. We reference the Agile Testing Quadrants:

The Agile Testing Quadrants, from The Agile Testing Book

And SMURFS: https://www.agileconnection.com/article/instead-mvps-maybe-we-should-be-releasing-smurfs

Ep 56: And our survey says

An interview with Rosie Hamilton!

This year Rosie did a large survey of software testers and then did the number crunching to produce a snapshot of testers in 2016. All the data and the scripts she used are up on GitHub for transparency, and for others to do their own analysis if they wish.

Rosie has now made a web app for people to explore the data she found without having to run scripts or learn R. She has just published a blog post about how she did this, with a link to the app.

I’ve been interested in the makeup of testers – where we come from, and when and how we decide to become testers – for a while now, and I knew when I saw the survey that I wanted to invite Rosie on to the show to chat about it.

We talk about the survey process, from inception to analysis; what the survey taught her about testers, and about surveys, where testers come from, and the plans for the next iteration of the survey.

Ep 55: FOMO

Or, fear of missing out.

Quick news:

  • I’m at Liverpool Tester Gathering on Thursday 5th August. Come say hi if you’re there!
  • Next week I’m interviewing Rosie Hamilton!
  • The google form for telling me your origin story will be closed on Monday 1st August, so get going if you’ve not filled it in already!

This was not the episode I planned to record today. I couldn’t get that episode to flow properly, so instead I’m talking about FOMO. Now, this text is not as close to a transcript as normal, as I recorded this mostly on the fly, so it’s a bit rambly and not actually written down, but I’ve got the main points below.

On Tuesday, a group of us took part in the first #TuesdayNightTesting, which was a remote lean coffee evening. It was a lot of fun, and one of the questions was about getting testers in a slump into the community. And it got me thinking about the community, FOMO, and people that don’t want to be in the community. It’s been something I’ve been thinking about, and I’ve basically got a list of things I try to do to limit, control, and maximise my testing community involvement.

  1. Have a to-do list

Use a bullet journal, use Trello, use whatever tool you want, but write or type out a to-do list. Even if you remember everything you need to do, visualising it will help you see how much space you have for new things.

  2. Find a niche

I am a magpie of testing – I’ve spoken about this before – in that I can find it hard to focus, because everything looks so cool that I want to do it all. I think it’s easier for me to choose a niche for contributing, because it depends so much on what I enjoy, as opposed to what works well for my context, the problem I have, etc. But pick a contribution (blogs, Twitter, podcasts, speaking, code/tool development), and roll with it.

  3. Have a plan

I am formulating a plan for the podcast. It has taken me a year to realise I need one, but I’m going to write down what the podcast is, what I want it to be, and how I’m going to continue it, so I know what I want to do. It doesn’t have to be detailed, but I think if you’re serious about doing something in a big way, you need a plan.

  4. Say no/Defeat FOMO

Say no. You’re going to have to at some point, so get used to it with small things. Or say ‘let me check’ and go check fully before saying yes or no. If seeing other people do and tweet and blog about things you’re missing out on is going to bother you, pull back a bit.

  5. Take a break

Related to the above. Take a break. Self care is important, and self care whilst doing stuff and being part of tech is something I’m really interested in.

Ep 54: Origin Stories

A series of unconnected things:

  • I’ll be at the Liverpool Tester Meetup on 4th August! Come say hi!
  • I’ve got more interviews coming up and that is my plan for August – start reaching out for interviews.
  • 30 days of testing! This is a thing that Ministry of Testing is doing, and though I got off to a strong start, I’ve started to flag a bit. I need to get back on course!
  • I’ve started working with Selenium! I’ve picked Python, as I assumed a language I knew would mean less of a learning curve, but that may change as we figure out what works for us.
  • I’ve been listening to a lot of creepy podcasts – The Magnus Archives, Archive 81. Highly recommended if you like creepy fiction!

I want to talk about getting into testing.

My session at Testbash will touch on getting into testing in less than ideal circumstances and learning testing in those circumstances, but today I just want to discuss getting into testing.

So many testers say they just ‘fell’ into testing, that they either didn’t see testing as their career, or that they didn’t even realise testing was a career, and I’m intrigued by it. I am a tester that did the same, and I’m sure there are other careers where people also just fall into it, but as a tester, it’s testing I’m interested in.

I’d be interested in collecting experiences of how people got into testing; not necessarily their background, but maybe the moment you realised this is what you wanted to do, or the moment you shifted to testing almost full time – whatever you want to think as the moment or series of moments you got into testing, I want to hear them.

I got into testing by saying yes. I was working as a customer support person in a digital agency. My co-worker, who was our only tester, had moved into release management but that meant there was less time for him to test (we were ostensibly a start up, so profit margins were slim). As the person who had bugs reported to them, and the person who had the most free time, relatively speaking, it seemed obvious that some of that work would come to me. I was still new to the job, and pitching in and helping out had never really gone wrong for me before so why not?

My testing was weak, relatively speaking. I mean, I have a science degree and can be pedantic about things, so you know, I wasn’t completely in the wild, but looking back I know I missed some things I would’ve caught now.

We were working off spreadsheets of test cases, which I actually found a useful starting point, because they gave me the foundations I needed to feel comfortable. There was nothing saying I couldn’t go off script, but we wanted to make sure we had everything down (incidentally, I think I only ever used the ones handed over to me, never ones I’d made myself – it seemed like a lot of faff for little reward).

I don’t remember the moment I became a tester, but I remember a moment I realised maybe I could do this.

I wasn’t happy with an implementation. It’s hard to describe without jumping into details, but it was wrong. Firstly, I was proud of myself that I’d gained enough knowledge of the system to know it was wrong. We pointed out the issue and got it fixed. However, the problem was twofold (maybe threefold). First, we had offshore devs who had no real interaction with, or onboarding onto, the site. They were treated like code monkeys, and this led to a lack of investment in and knowledge of the system. Second, the feature request had had no discussion, really. It came from the client, through the account manager, straight to said offshore dev, with no other input. So the feature hadn’t been technically planned, no real acceptance criteria were on the ticket, nothing. Just ‘make this happen’.

That was the failure in my eyes – there was no guidance for the developer, and there weren’t clear lines of communication on this stuff, so he didn’t ask for clarification. He was new to the system, so he didn’t have the time or knowledge to really figure out what knock-on effects there were. So I started stepping in a little bit, trying to make sure we had a list of requirements and edge cases.

And I started to google. I wanted to see what other people did, how other companies handled this stuff (at this point our tester had left, leaving me as the go to tester person – did I mention the company was not so slowly going bankrupt?), and I found the testing community.

I think those two moments – realising I’d picked up a bug in the system and then finding the testing community (or the bits I inhabit anyway), were the moments I decided this is what I wanted to do.

And I think a lot of people get pulled into testing because ‘anyone can do it’, and for me it was realising that it goes deeper than that which brought me into the career properly.

I would absolutely love people to let me know how they got into testing. I can’t promise any fancy R scripts and graphs like Rosie Hamilton did, but I can promise at least a blog post on it!

Link to the form

Ep 53: You have no control who lives who dies who tells your story

Thiiings: I have 3 potential interviews coming up for the podcast and I am excited! More details when I get them. I am panicking about TestBash Manchester, as I said I’d do a practice talk at the local Drupal user group in August and lols, I’ve not even written the thing yet. Ministry of Testing is doing a series of webinars, facilitated by Maaret Pyhäjärvi, which I’ve asked to take part in, so I need a topic for that as well.

Sooo, I want to talk about storytelling.

I’ve always been fascinated by stories. I was a reader as a child; I got lost in words more often than anything else. I read the fifth Harry Potter book the night before my Physics GCSE, staying up until the early hours because the alternative – not reading it – was unthinkable. I always thought storytelling was some kind of magic. Later, hearing stories became my obsession, through TED talks, podcasts, and conference talks, and it’s still my obsession now.

Stories keep you engrossed, invite you on a journey. You’re not just telling someone something, you’re informing them, showing them, bringing them with you on something.

When I started hearing about testers as storytellers I sat up, and paid attention. I don’t have the imagination to write fiction – I’ve tried and I can’t hold a plot with two hands and a bucket, but writing has always brought me joy, and I am always trying to write and speak better, tell my own stories in a way that’s useful and engrossing.

And I realised, as I was reading about storytelling in testing and software development, just how much space there is for storytelling in our work. I knew of and had used personas before, and thought about how a persona interacts with a product. What I hadn’t thought about is how writing bug reports is storytelling.

Part of a tester’s job is to advocate for bugs. We report the bugs, and in our reports are the whys and hows of raising them, and why we think they should be fixed. Sometimes this is nothing major – this doesn’t work at all, or doesn’t work in this environment. Sometimes it’s a bit more nuanced – “I would expect this here”, “I shouldn’t see this because of this reason” – and getting the details across fully and in a compelling way is important, because it will get the bug across to the people in charge of fixing, or scheduling the fix.

Sometimes it’s not a bug as much as it’s something that’s not been thought of, or something has been missed, and you need to explain why we need to include the change you’re asking for. Stories can help advocate for things.

So how can we make our stories compelling? First, a story needs to have a structure: a beginning, a middle, and an end. This person went to here and did this, and this happened. This is what we thought was going to happen, this is what should happen, this is what we need to happen, and this is why we think this should happen.

Tell the report to someone. This is one reason I’m inclined to talk to developers first when it comes to an awkward bug, or a bug that’s not obvious. I can talk through things and get a feel for what details are needed. If I can’t, or it doesn’t make sense to, I’ll write the bug out in bullet points, with an introduction and end to structure the bug. I make the title relevant (no puns in ticket titles, no matter how tempting. I save all my puns for you, listeners <3), and I make sure I include a desired resolution. Never leave a story on a cliffhanger, people will hate you. A pox on bug reports that are literally ‘the menu doesn’t work’.

Choose your evidence – annotate and refer to the images, videos, numbers you include, don’t just include them without explaining (this is something I am guilty of a lot). Otherwise it’s just decoration and something making the ticket bigger.

Exposition is allowed, and when it comes to details, it is preferred to a light touch. At the same time, we’re not Charles Dickens, and none of us are being paid by the word (that wasn’t actually true of Dickens either, but I like the idea of it). Choose your details wisely.

Write first, then edit before sending it to a developer. Choose what details make sense when you review the entire report.

When the bug comes back to you, detail the resolution – say x was done, or on reflection, y was done because of reasons a, b, c. Finish the story, close the loop. Start again.

We should tell clients more stories. Instead of saying ‘this will do x and y, say, this will allow you/your customers to do x and y, but not a, or b.” or, “we chose to implement this because of these advantages.”

And we should listen and help clients form their own stories. Point out plot holes, and suggest how to tighten the plot up. Offer different opinions, viewpoints, and expertise (you’d send a novel to a copy editor before printing, right?). Help them guide their clients or users or stakeholders through the journey of the site or the app, and make that journey part of a story. When something becomes part of your story it becomes something you care about, and engage with, which is important when it comes to developing successful software. Speak in a common language, and make sure the goal is the same on all sides.

Your user story statement should in fact tell people the story of the feature. As this person, this character, I want to do this thing to reach this goal. Break with the format if needed, but make sure the story elements are there.

Practice. I really like internal sprint walkthroughs. These happen prior to the sprint demo to the client, and it means that the team as a whole gets to look at the work we’ve done. We take each feature ticket, and demo how we meet that criteria. It’s practice for one; the lead dev can find the most sensible way to demo the entire sprint (to tell the story of the sprint, maybe?). It gives the team a chance to review progress as a whole, and make sure everything fits together well.

Hell, storytelling could be a good way of supplementing metrics with useful data. Metrics don’t tell you much of anything on their own, but you can at least supplement them with words. “X% of stories were rejected with bugs, and this is because of x, y, and z” is much better than “x% of stories were rejected”. Even better would be “we had issues with a few stories because of these reasons”, and then moving forward, but that doesn’t fit into a graph nicely.

There’s a million things I haven’t done – I’ve not spoken about what happens when the story is taken out of your hands, or talking to people you don’t work with (interviewers, other testers, people who aren’t techy), but I wanted to focus on the sprint cycle.

The whole agile process is an exercise in storytelling, and I think we need to get better at telling stories, and at helping other people develop them. Stories are fundamental to human nature – humans are rooted in narrative; we form lives and memories around stories. There’s no reason we can’t continue this in software development, and bring a bit more of the human into the software.


Ep 52: And I say, Zangief you are bad guy, but this does not mean you are *bad* guy

Okay, I want to talk about something that rips me up when my mental health is being particularly hard, and when deadlines are looming.

I want to talk about being the bad guy. This was mentioned in Nicola Sedgwick’s fantastic TestBash talk on testers being human; a talk that is well worth the watch if you can – it’s on the Dojo [1].

It’s hard to be a tester and go with the flow, be under the radar. You’ve got to speak up because that’s the job – you have to point things out, and sometimes get people to explain their assumptions or decision making. I mean, you have to do that to yourself as well, but no one sees that. They see a contrarian person who wants to know why and how and when and everything else.

You’ve got to speak up and say that something’s wrong, or you think something might not be right, or that there’s a scenario that people haven’t thought of or what about on mobile etc. Most of the developers I work with see me asking the same question of clients as well, so I think that helps. I’m a dick to everyone!

Sometimes being that person, that dick, genuinely feels difficult.

Don’t get me wrong, if there’s a bug blocking the story, then it’s blocking the story, simple as. But that doesn’t mean it isn’t disheartening to reject someone’s work, especially if there are deadlines looming, or you know it’s been a tough piece of work to get through. Or this isn’t the first time you’ve sent a piece of work back to them. Or if you know everyone is stressed enough as it is. Or if you’re going to have to defend the bug – which is fine most of the time; I would much rather get called on my reasoning and come to a greater understanding, or get other people to understand my thinking, than not. But there are times when that is a difficult conversation, and sometimes you want the smooth conversation.

This is the job we’ve signed up for, but it’s not always enjoyable.

I’ve finally got out of the ‘no bugs raised = bad testing’ mindset, but there is some satisfaction in finding and pinning down a nasty, weird, or just plain juicy bug.

When my mental health is bad, or I’m stressed, or I’ve got multiple deadlines or anything, I want to put my head down and get on with work. If I find a bug, chances are I can’t do that. The majority of small bugs I’ll file without talking to a developer, but a big bug warrants a conversation. I want to double check that the bug actually exists and isn’t an environmental issue or a user issue, and I want to do that before I file the bug if possible so I can keep admin down. Or I want to show the developer, make sure they understand my notes and repro steps; I find it’s useful to get that information on a ticket before sending it over.

Then there are scrums where I have to give updates on my testing. I want to make my testing visible [2], and I need to give the team a heads up if something is blocking me or potentially blocking the sprint. I have to take the good with the bad.

And making my bugs known is an important part of the development process. Flagging up that I’ve had to send a story back to a developer is important information for everyone to have at the start of the day, as 1) they may not have seen it, and 2) they might need to rearrange their work plan. I just go for matter of fact, just like I’d mention any other fact.

Okay, so strategies!

I do try to keep my head down when I can. Headphones, or moving away from my desk to work somewhere else, generally mean that I can focus, keep my head down a bit, and get back to where I need to be head-wise.

Find the lesson. I’ve had a week of fuckups recently, so I took a day when I wasn’t in work, away from the project, to evaluate and figure out what I could’ve done differently. This can also be done in retrospectives. For me, I need to balance time and thoroughness: a series of deadlines meant that I was too focused on quick and not enough on thorough. I’m forcing that time by handwriting some notes for each story I test instead of typing them. I find handwriting forces me to think more, and I’m more likely to remember things if I’ve written them down. I can then review these notes when I type them up in my testing session, which means I think about them again and make sure I’m covering the bases I can.

Mindfulness. I’m out of practice – in fact, I’m pretty sure my last session was in 2015. So I’ve started doing 10-minute sessions before I go to bed. I don’t think it’s a coincidence that I managed to finish this episode the night after a mindfulness session: it’s been hanging around for about a month and a half while I tried to find the words to describe what I find hard.

Find the good. I’ve focused on fuckups but sharing praise can also be a good way to minimise feeling like a bad guy. It also encourages other people to share praise, which I think is such an important part of team building.

Honestly, most of the time I love my work, I do. There are just times it highlights the cracks in my mental health, so I need to update my coping strategies to make sure I can still do the best job I can.


[1] https://dojo.ministryoftesting.com/lessons/do-testers-need-a-thick-skin-or-should-we-admit-we-re-simply-human-nicola-sedgwick
[2] http://katrinatester.blogspot.co.uk/2016/03/use-your-stand-up-to-make-testing.html

Further reading

Dr. StrangeCareer or: How I Learned to Stop Worrying and Love the Software Testing Industry

Ep 51: Leave No Trace

If you’ve been following the soccer/football news last month [1], you’ll have seen that the last match of the season at Old Trafford was suspended due to a suspicious package found in a toilet. After a controlled explosion, it was discovered that the suspicious package was one that had been left in the stadium after a training exercise. Now, I can think of at least three points of failure here (the training people not counting devices in and out of exercises, the people being trained not finding the device, and any of the other staff who missed it between the training session and that match day). And it happened on the last match day of the season.

It was a clusterfuck, and it was a clusterfuck that reminded me of the importance of tidying up after yourself.

We try to silo our test environments away from the client – we have an internal test environment, and a staging one that the client works on. The stage site is where UAT happens, and possibly content population prior to go live depending on set up. We also tell the client to play with the site we’re building on stage – this is their opportunity to get to grips with the site before they have to worry about it going live.

The dev site is essentially a bit of a playground – we know the client can’t see it, so we can try things out that might break the environment, or things the client might never see but that we want to look at anyway.

But we are still careful about what content goes on there. Example content should be at the very least bog-standard lorem ipsum. Occasionally we’ll put a copy of live content on the staging site, if available, or we’ll make some relevant example content. This is because, firstly, it is a lot easier to test the usability and look and feel of the site with live data, and secondly, it gives context to the client, who may find looking at a bare or unfinished site difficult. It’s so easy to get distracted by the bareness of the site that what’s there doesn’t get a proper look over. You’ll note I said content there, not data. And that’s for a reason:

You’ve got to be careful regarding test data. If you need to use live data, and the live data is customer details, then you’ve potentially got some legal issues to contend with. If you’re using credit card information, then PCI DSS (the Payment Card Industry Data Security Standard) will apply to your test environment, and that is a pain in the arse, from my brief dealings with PCI compliance. The UK Data Protection Act states that you have to tell people upfront specifically why you want to collect their data and what you plan to do with it. So unless you tell people you’ll be using their data for testing, you can’t do that.

The law sees no difference between live and test environments when it comes to data protection and breaches, and according to one source I found, 70% of data breaches are inside jobs [2].

So what do you do?

Sometimes you need live data in order to get the job done, whether that’s finding a bug or load testing. You can mask or anonymise data – scrub the sensitive bits out. If you’re not using credit card details, that takes a lot of the sensitive data out of the way, and if you are, you shouldn’t be able to access it anyway. So that leaves names, addresses, emails, etc. The issue with that method is that it can remove any usefulness from the data.

You can generate data, and this can usually be automated. The issue there can be the relevance or context of the data. Generating data that makes sense within a specific context can be anything from really hard to impossible, which might make the data useless.
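As a rough sketch of the masking idea – the function name and salt here are my own invention, and real masking would need to cover every sensitive field – deterministic hashing at least keeps scrubbed records consistent with each other, so relationships between records survive:

```python
import hashlib

def mask_email(email: str, salt: str = "test-env-salt") -> str:
    """Deterministically pseudonymise an email address.

    The same input always maps to the same fake address, so referential
    integrity between records survives the scrubbing.
    """
    digest = hashlib.sha256((salt + email).encode()).hexdigest()[:12]
    return f"user-{digest}@example.com"

# Mask a record without touching the non-sensitive fields.
record = {"id": 42, "name": "Jane Doe", "email": "jane@realcustomer.example"}
masked = {**record, "name": "TEST USER", "email": mask_email(record["email"])}
```

Hashing with a salt, rather than picking random replacements, means two rows that shared an email before masking still share one afterwards, which keeps the data useful for things like duplicate-account bugs.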

Or, you make sure your test environments are up to scratch, privacy wise, and see if you can add a clause into your Terms and Conditions to mention testing using data.

Or do what the majority of companies seem to do and just… use the data and hope for the best? Not the best plan, I don’t think.

My motto is always: don’t test with any content you wouldn’t want a client to see. So I use Bug Magnet to fill things with lorem ipsum, email addresses, etc. Most of the time it’s not needed, but I’d rather the worst thing on there be something mildly confusing than anything problematic.
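Bug Magnet does this from the browser; for scripted test data, a rough stdlib stand-in (these helper names are made up for the example) that stamps everything as unmistakable test content might look like:

```python
import itertools

# One shared counter, so every generated value is unique within a session.
_counter = itertools.count(1)

def fake_string(label: str = "TEST") -> str:
    """Content that unmistakably screams 'this is test data'."""
    return f"*** {label} {next(_counter):03d} - safe to delete ***"

def fake_email(domain: str = "example.com") -> str:
    """An obviously fake, but valid-looking, email address."""
    return f"test-{next(_counter):03d}@{domain}"
```

The counter makes each value unique, so when a piece of test content turns up somewhere unexpected, the number tells you exactly which input it came from.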

So I feel that, as testers, we should try to adhere to the Leave No Trace [3] set of ethics where it makes sense to:

    1. Plan ahead and prepare

I assume we all do this anyway, but it doesn’t hurt to have a plan, even if it’s just in your head, of what content or inputs you need to use for this testing. Plan for what you need to do when you need live data or something close to live data.

    2. Travel and camp on durable surfaces

Check your environment and/or devices are ready for you to test. There’s nothing like starting to test, then realising the developers hadn’t finished deploying, and losing a load of work – or, as in my case more than once, freaking out because the site looks wrong, then realising I’ve still got JavaScript turned off from testing my last story.

    3. Dispose of waste properly

If you need to enter invalid data to test something, clear it out. If you’ve got live data on a test environment, scrub it and/or delete it after use. Unless:

    4. Leave what you find

If you find a bug, sometimes leaving the content or data that caused it (if you can) can be useful for the dev to review. Try not to delete anything you didn’t make, in case someone else is testing something.

    5. Minimize campfire impacts

If you do need to stress test, load test, or security test something, make sure you don’t break everything. Or at least give people a heads up so they know what to expect.

    6. Respect wildlife

Yeah, I got nothing. I guess if you have to test on a production environment, don’t change anything that might fuck up the live environment or content?

    7. Be considerate of other visitors

I don’t often test at the same time as other testers or users, but when we have, we’ve let each other know and collaborated to ensure we don’t trip over each other while testing things. I try to make it as obvious as possible that I’m making test data and who I am, so people don’t get weirded out by random content or changes being made.

If I do have to do something on live, I try to keep it as inconspicuous as possible (most of the sites we build have the ability to view content without actually publishing it), and I ensure the username and content all scream THIS IS A TEST. And we inform the client beforehand, so everyone knows what to expect. That is rare in my line of work though, so I can’t speak to techniques here.

It’s something I’ve never really considered before, not in this depth, so I thank the three levels of fuckery that inspired this episode, as it’s surprisingly interesting.