Ep 73: There is no spoon!

This week on the show we’re talking to Maaike Brinkhof about cognitive biases! There are so many biases, but we talk about a few here, and share some resources for learning more.

Why are we biased?

Four categories

  • Too much information → We filter it
  • Not enough meaning → We fill in the gaps
  • The need to act fast → We want to feel in control
  • What should we remember? → We pick out what stands out

Is it a bad thing or a blessing in disguise?

As with all things, it can be both. Biases are evolutionary, as brains can’t process everything, but they can also be a crutch: a mental shortcut that can cause issues.

If you view biases as a bad thing, then you’re missing the point. You can choose to view them as something to learn more about, and a way of getting to know yourself better – such as learning to recognise when you might be biased and trying to adjust your behaviour.

Biases in testing – it’s not just about your own biases

Often when we are testing, we are looking for problems caused by biases from people around us, such as the biases of developers or project managers. Often people will write their own biases into products without realising it.

We also have to be aware of our own biases guiding us, hiding or obscuring information. We might like to think we are objective in our thinking, but we are not perfect either.

Understanding biases can also help you explain and justify your testing, questions, problems or information that you’re providing to people.

Let’s talk about some biases!

Confirmation bias: the biggest bias of all, and one that can be broken down into several ‘smaller’ biases.

How do you deal with biases?

  • Work alone less and pair or mob more
  • Focus and de-focus
  • Self-awareness of your own concentration levels and behaviours
  • Awareness of biases
  • Referring to and using heuristics
  • Stepping back and examining your thinking, concentration, and working patterns to avoid relying on biases to take shortcuts.

Books and courses we’ve mentioned

Ep 52: And I say, Zangief you are bad guy, but this does not mean you are *bad* guy

Okay, I want to talk about something that rips me up when my mental health is particularly bad, and when deadlines are looming.

I want to talk about being the bad guy. This was mentioned in Nicola Sedgwick’s fantastic TestBash talk on testers being human; a talk that is well worth the watch if you can – it’s on the Dojo [1].

It’s hard to be a tester and go with the flow, be under the radar. You’ve got to speak up because that’s the job – you have to point things out, and sometimes get people to explain their assumptions or decision making. I mean, you have to do that to yourself as well, but no one sees that. They see a contrarian person who wants to know why and how and when and everything else.

You’ve got to speak up and say that something’s wrong, or you think something might not be right, or that there’s a scenario that people haven’t thought of or what about on mobile etc. Most of the developers I work with see me asking the same question of clients as well, so I think that helps. I’m a dick to everyone!

Sometimes being that person, that dick, genuinely feels difficult.

Don’t get me wrong, if there’s a bug blocking the story, then it’s blocking the story, simple as. But that doesn’t mean it isn’t disheartening to reject someone’s work, especially if there are deadlines looming, or you know it’s been a tough piece of work to get through. Or this isn’t the first time you’ve sent a piece of work back to them. Or if you know everyone is stressed enough as it is. Or if you’re going to have to defend the bug – which is fine most of the time; I would much rather get called on my shit and come to a greater understanding, or get other people to understand my thinking, than not. But there are times when that is a difficult conversation, and sometimes you want the smooth conversation.

This is the job we’ve signed up for, but it’s not always enjoyable.

I’ve finally got out of the ‘no bugs raised = bad testing’ mindset, but there is some satisfaction in finding and pinning down a nasty, weird, or just plain juicy bug.

When my mental health is bad, or I’m stressed, or I’ve got multiple deadlines or anything, I want to put my head down and get on with work. If I find a bug, chances are I can’t do that. The majority of small bugs I’ll file without talking to a developer, but a big bug warrants a conversation. I want to double check that the bug actually exists and isn’t an environmental issue or a user issue, and I want to do that before I file the bug if possible so I can keep admin down. Or I want to show the developer, make sure they understand my notes and repro steps; I find it’s useful to get that information on a ticket before sending it over.

Then there are scrums where I have to give updates on my testing. I want to make my testing visible [2], and I need to give the team a heads up if something is blocking me or potentially blocking the sprint. I have to take the good with the bad.

And making my bugs known is an important part of the development process. Flagging up that I’ve had to send a story back to a developer is important information for everyone to have at the start of the day, as 1) they may not have seen it, and 2) they might need to rearrange their work plan. I just go for matter of fact, just like I’d mention any other fact.

Okay, so strategies!

I do try to keep my head down when I can. Headphones, or moving away from my desk to work somewhere else, generally mean that I can focus and keep my head down a bit, and get back to where I need to be head-wise.

Find the lesson. I’ve had a week of fuckups recently, so I took a day when I wasn’t in work, away from the project, to evaluate and figure out what I could’ve done differently. This can also be done in retrospectives. For me, I need to balance time and thoroughness: a series of deadlines meant that I was too focused on quick and not on thorough. I’m forcing that time by handwriting some notes for each story I test instead of typing them. I find handwriting forces me to think more, and I’m more likely to remember things if I’ve written them down. I can then review these notes when I type them up in my testing session, which means I can think about them again to ensure I’m covering the bases I can.

Mindfulness. I’m out of practice; in fact, I’m pretty sure my last session was in 2015. So I’ve started doing 10-minute sessions before I go to bed. I don’t think it’s a coincidence I managed to finish this episode the night after I did a session of mindfulness: it’s been hanging around for about a month and a half while I try to find the words to describe what I find hard.

Find the good. I’ve focused on fuckups but sharing praise can also be a good way to minimise feeling like a bad guy. It also encourages other people to share praise, which I think is such an important part of team building.

Honestly, most of the time I love my work, I do. There are just times it highlights the cracks in my mental health, so I need to update my coping strategies to make sure I can still do the best job I can.

Footnotes

[1] https://dojo.ministryoftesting.com/lessons/do-testers-need-a-thick-skin-or-should-we-admit-we-re-simply-human-nicola-sedgwick
[2] http://katrinatester.blogspot.co.uk/2016/03/use-your-stand-up-to-make-testing.html

Further reading

Dr. StrangeCareer or: How I Learned to Stop Worrying and Love the Software Testing Industry

Ep 49: Leading the Witness

Firstly, an announcement: LTATB is moving to fortnightly episodes. I need to level up my content game, and I can’t do that in weekly slots, so there will be an episode every two weeks. The next episode will be on 19th May.

People are really bad at telling you what they want. Really bad. They think they know, but I guarantee you they don’t. And it’s not because they’re stupid, if anything, it’s because they know their business and processes really well. Or, they know their business and processes as they stand really well. Translating that to a new system can be difficult to do.

How well do you know your commute to work? Or how to cook your favourite meal? Or the layout of your phone screen or desktop? With the things you use all the time, you know what you’re doing well enough that you may not even think about every little step. You may not even realise if you’re doing something slightly (or not so slightly) inefficiently. Or, you may realise, but there’s a reason for it (I occasionally walk a longer way to/from work because it’s prettier and there’s more chance of seeing dogs being walked, for example).

Or, another way: changing computers. You get a new or different computer, and you start transferring/re-downloading files and programs. How many times after that initial setup do you realise you’re missing a program? I’ve easily had a week go by before realising there is something missing.

These little things are going to be the things that actually turn out to be really integral to a system. The stuff that isn’t the main points (browser of choice, or the pasta in a lasagna) but the stuff that just makes everything smoother (setting up key shortcuts, or adding basil or oregano). Technically you can get away without them, but it makes things just a little harder, or less great, and the people using the system will miss it and be less willing to engage with what you’ve built. So, how do you figure these things out? Ideally, you watch people interact with the system as it stands, and have a play yourself. I spoke last week about inheriting legacy systems, and some of those techniques apply here.

Another way of doing this is going through user journeys with the product owner and team.

People are really good at telling you what they don’t want. There comes a point in a discussion about a system where you can kind of tell that the client isn’t sure what part you’re not getting, so I’ll go through my assumption of the user journey. Suddenly, when I get to the bit I’m pretty sure I’m wrong on, they’ll re-engage and point out where my assumptions are wrong. It’s easier to go ‘no, not like that’ than it is to go ‘this and this, and then this, except, shit, I missed a step here’.

However, this assumes that you’re wording things in the right way. Leading the witness is when a lawyer asks a leading question: a question that puts words into the witness’ mouth. In this line of work, it could be as simple as assuming something and phrasing it as ‘and then you do x’ as opposed to ‘and after that, what happens? X? Or something else?’. The idea is that you prompt them, but don’t fill in the gaps for them. In a situation where people are feeling tired after a long meeting, or a bit nervous, or overwhelmed by techspeak, something like that could simply be agreed to, so you want to balance making things go smoothly and easily against telling clients what they are going to get. We’ve implemented some things that on the surface make no sense, but for the context we’re working with, make perfect sense (you wouldn’t ever do it for a public-facing site, but for the internal processes of the client, it made sense). And we’ve worked on a few government/publicly funded or charity sites, and their processes are larger than anything we can change, so we have to make them fit the system we build, not try to get them to change their processes for the new system.

The best and smoothest projects I’ve ever worked on are where the whole team has that understanding; we’re the experts in the tech side, but the PO knows their team and users better than we do, and so they say ‘we have this need, maybe we can have this?’ and we go either ‘sure’ or ‘yes, maybe, but how about this?’ and then it works amazingly well.

Further Reading

https://softwaretestingnotesblog.wordpress.com/2016/04/10/ignorance-as-a-tool-to-frame-better-questions

eps1.38_d3bug.mp3

Has everyone watched Mr Robot? It’s a great Amazon Original series about a hacker who works for a security company and he [SPOILERS]. I’m going to assume you have. If not, you should go watch it now, and all you need to know for this episode is that in episode three the protagonist monologues [1] throughout the episode about bugs and the nature of them, and because originality is for losers, I’m gonna use part of that monologue here to do some of my own monologuing. Monologue is no longer a word. MONOLOGUE.

The blog post will have the whole monologue but I’m not going to read it out. A lot of this is for dramatic effect, plot, and the nature of the character, but there are some points there.

A bug is never just a mistake.
It represents something bigger.

Okay, so sometimes a bug is just a mistake – mistakes happen, people are human, etc. However, sometimes it’s worth just checking in and seeing if there is an issue. Sometimes the issue is just a typo, a distraction, something simple. Sometimes it could be a symptom of something deeper – fatigue, inexperience, maybe even something more serious, like a health issue. It’s worth the time to check in, even if it’s with yourself if you find you’re making mistakes. It could mean anything from taking a break and getting a drink or some fresh air to more substantial action.

When a bug finally makes itself known, it can be exhilarating, like you just unlocked something. A grand opportunity waiting to be taken advantage of.

I assume this is actually more like finding the cause of a bug for a developer, right? I mean, finding a bug isn’t a huge revelation to me, but figuring out the pattern of a weird bug, or getting to grips with a particularly complex piece of functionality, feels like a win. And having a piece of work come to me and work as I expected it to also feels like a win.

It did take me a while to divorce not finding any bugs from productivity in my mind – if I’m not finding bugs, how do I prove my worth? But I’ve realised that the work I do upfront reduces bugs in a way, or at least reduces ambiguity, which leads to fewer issues from the client, and that’s part of my worth.

Bugs are useful, or they can be, but I don’t think finding bugs is the only way I can be valuable.

The bug forces the software to adapt, evolve into something new because of it. Work around it or work through it.
No matter what, it changes.
It becomes something new.
The next version.
The inevitable upgrade.

I just really enjoy this section – the idea that the bug forces the software to evolve, to work around, like Darwinism in binary. The inevitable upgrade – code is never finished, it’s only ready to ship, then we continue working and ship, work and ship, a constant building, improving, changing of code. And that’s why we test, right? To ensure that the constant evolution is a good thing, not something likely to end up on WTF Evolution [2].

I just like the idea of bugs being good things; even if I don’t celebrate finding them. I like the philosophical approach, even if the cause does end up being a simple, silly mistake (because this isn’t tv, not everything has a deep meaning, nor is it burdened with glorious purpose).

It’s a good episode of a good series, and I highly recommend it.

Footnotes

[1] http://www.springfieldspringfield.co.uk/view_episode_scripts.php?tv-show=mr-robot-2015&episode=s01e03
[2] http://wtfevolution.tumblr.com/

Ep 37: Making the most of it

This job would be great if it wasn’t for clients, amiright?

One of the first challenges on a client project is scoping out what the client wants. What counts as a must have to go live, and what counts as a nice to have in the future. We start with a minimum viable product, and then go from there, based on budget, time, etc.

To do this we need to get to know the client, the business, the stakeholders. Their day to day needs as well as their plans for the future. We have the technological background and ideas we can bring to the table, but we need to ensure our suggestions are relevant and useful.

We build a relationship in order to build and hand over a good project to them. We do this in a number of ways. When the commercial team hands a project over to production, we get a handover that gives us an idea of how the commercial team sees the project, and any relevant documentation: pitches, etc. We get an overview of who our contact points are, and a feel for what the client expects and what the relationship is like.

We then set up workshops with the client. These are whole or almost whole day affairs that bring the production staff and client together to meet and get to know each other.

The hardest part of getting to know a client’s business is getting the details that the client feels are unneeded or obvious. They sometimes feel that they don’t need to state what they see as obvious. Most of the time we catch it, because often it’s obvious to us too, but sometimes it’s not, and things get a little hairy.

Talking about all this, plus an intro to how we work and who the project team are, can take all day, so after all this, we go away. We design a homepage, do some wireframes of internal pages and any particular functionality (checkout pages, landing pages etc), start populating the backlog, and get the first sprint of basic stories together. We can do this fairly early as the basics are fairly generic depending on the type of project – the base install and theme always have to happen, then content types, the ecommerce side, things like that go into sprint one.

Then we have our second workshop, reviewing design, wireframes, and our plan. We email and talk between these, but the workshop is where we expect to make the most headway.

The wireframes and designs almost always change but we do them so we’ve got a jumping off point. We can illustrate our thinking and see where the gaps are. The wireframes are good to illustrate high level functionality, and help the client envision how information will look and flow. They’re a visual prompt for questions and discussion. This helps us get more granular information and fill in the gaps between our ideas and their needs.

At this point we expect to be in a good place to refine the backlog and tickets more, and start the process of signing off AC for the first sprint, and start planning the second sprint. The client knows the team they’ll be working with, and we’ve started to build that relationship of trust that’s needed when building a project for someone.

We plan a lot upfront, because it’s worth it: we get good relationships with our clients, and the whole team has input into a project as early as possible. This means we’ve all bought into the project, and are invested in it. The client and stakeholders aren’t faceless, and neither are we. We’re a team, making the best product we can.

Ep 36: Life Is A Lemon and I Want My Money Back

https://www.youtube.com/watch?v=BF1wVv8OnfE

The final bit of debt I want to talk about is testing debt. Part one. Part Two

I’m going to split this up into two sections: ‘bad’ testing, and delayed testing.

So ‘bad’ testing, for me, is testing where I’ve not had the time, space, or ability/skill to fully test a feature. This could be caused by many things:
  • No time to cross-browser test – especially if I need to test specific browsers for the stakeholders, versions of IE for example
  • No time to cross-device test
  • No time to do exploratory testing – only testing to the AC. Sometimes exploration can happen while testing the AC, but if I don’t get time to fully explore and just pass it on the basics, then it’s not really tested properly; the balance between exploring and signing off is skewed too much towards quick testing
  • The insidious lack of care that I mentioned a couple of weeks ago – if the devs don’t care, then I won’t feel motivated to point out issues to them
  • Not having the context I need to feel fully connected to a project, so I can’t fully tell if the feature meets the needs of the client, or fits in with their branding etc.
  • Not having the ability/skill to test fully: this could be missing knowledge of the system, or things like accessibility testing, performance testing etc.

Delayed testing is when the story is shipped without testing, the testing is moved to a separate task, or simply pushed back to the next sprint. Delayed testing can also happen when you’ve got all the integration testing, as well as a sprint’s worth of stories, to do in the same amount of time:

Sprint one: 30 stories
Sprint two: 30 stories (each with integration testing)
Sprint three: see above.

Most of the time this is doable, especially with automated regression checks, testing alongside the development process, and working closely with developers as opposed to siloed testing. However, it does add up, and it’s something to take into account.
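
To make ‘it adds up’ concrete, here’s a back-of-the-envelope sketch – the numbers are invented for illustration, not from the episode – showing how a flat story count still means a growing testing load once regression/integration testing over everything already shipped is included:

```python
# Hypothetical numbers: 30 stories per sprint, an hour of story testing
# each, plus a regression/integration pass that grows with everything
# already shipped (say 5 minutes per shipped story).
stories_per_sprint = 30
story_hours = 1.0
regression_hours_per_shipped_story = 5 / 60

shipped = 0
for sprint in (1, 2, 3):
    load = stories_per_sprint * story_hours + shipped * regression_hours_per_shipped_story
    print(f"Sprint {sprint}: {load:.1f} hours of testing")
    shipped += stories_per_sprint

# Sprint 1: 30.0 hours, sprint 2: 32.5, sprint 3: 35.0 - the same number
# of stories, but the testing load quietly grows every sprint.
```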

One way of tackling this debt is getting a feel for what adds value to a project – what the business wants to get from the feature or project as a whole, and what the stakeholders see as important. You can then focus testing around those areas. This has a dual effect: it brings more value from testing, because you’re testing what’s highest risk or most important and so covering the most bases efficiently; and even if you are pressed for time, you still feel like you’ve contributed something to the project.

Testing debt is the one I’m very familiar with; I’m more confident in my skills, but still learning, and there are huge gaps in my knowledge, especially around automation. I find it really hard to test a project I have no context for, and even if I don’t miss anything, I still feel like I’ve only tested the feature or issue shallowly. But I’m learning, and learning how to stay out of debt, as I become a tester proper.

Footnotes

http://thetesteye.com/blog/2010/11/turning-the-tide-of-bad-testing/

Ep 35: Put Your Money Where Your Mouth Is

So, part two of my debt series is about conceptual debt, or product design debt, or UX debt. This is a subset of technical debt, but specifically related to the design of the product, or the UX. See part one here

Most iterative development cycles form a minimum viable product, which is then improved upon with each release. We experiment with design and features, and then either add or change based on feedback. The issue, of course, is that if the feature is successful, design and UX refactoring rarely happens – we move onto the next feature.

The prime example of this is the crowded homepage, especially for online stores, in my opinion. Navigation gets more complex as more lines, sales, and categories are added, but, if it’s an established site, changing the navigation may be detrimental to the traffic that goes through to various parts of the site.

Offers, signups, and social media blocks are all added to the homepage, and services like Optimizely allow users to easily do A/B testing by changing various parts of the site. (A/B testing is where you make a change to a site, but only show the change to a certain percentage of users – you can target users by device, or by browser, or just at random – then compare the results of the change.)
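
As a minimal sketch of how a percentage split like that can work – this illustrates the general technique, not how Optimizely actually implements it – you can hash the user’s ID so each user is deterministically and repeatably assigned to a variant:

```python
import hashlib

def ab_variant(user_id: str, experiment: str, percent_b: int = 10) -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing (rather than picking at random on every visit) means the
    same user sees the same variant each time they come back.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash onto 0-99
    return "B" if bucket < percent_b else "A"

# e.g. show the new homepage offer block to 10% of users:
print(ab_variant("user-42", "homepage-offer-block"))  # prints "A" or "B"
```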

I separate technical debt from UX debt as UX debt is sometimes harder to quantify – it doesn’t necessarily affect performance or functionality like technical debt might – or might not be as visible (slightly clumsy user journey vs. a small bug in a feature). That doesn’t mean it’s not as important, it’s just different, and requires a different approach.

Fixing UX debt may also be a large change that is too scary or time-consuming for the stakeholders to stomach, and so it stays as it is, or only gets fixed slightly. Refactoring code to reduce the technical debt I mentioned last week does not (or should not) affect the functionality.

A blog post I’ll link uses Vegas as an example of product design debt – there’s so much stuff that’s been built close to other existing things to get more attention, or add more features that it becomes a confusing and confused looking mess. And fixing that is not going to happen.

So, how do you avoid, or minimise this type of debt?

Firstly, not going into it in the first place, or going into it with a plan. This is easier said than done, obviously, but careful planning and future-proofing will take time, but also save time.

This really changes depending on whether you’re in-house, a SaaS company, or an agency. I’ve worked in a couple of these, but I have more experience in agency work, and it’s a different kettle of fish.

As a team, you need to interact with the stakeholders to get their overall vision, their brand, what they want from their product and what their users want. From there you can build up ideas of the product, design features and the UX, and start building.

Once you start building you need to refactor as you go, and this includes the UX if needed. If you add a new feature that needs to go in the menu and on the homepage, you and the stakeholders might need to consider that feature in the wider scale of the project.

UX is a little harder on an agency project, because you might have two layers of separation from the users – there’s the team building the product, the stakeholders, then the users. Sometimes the stakeholders are users, but often there are users that don’t interact with the build team, so a lot of debt might be unintentional, and inadvertent, which is why regular refactoring is needed.

Again, there’s no way to avoid debt: shit happens, clients change their minds, features turn out to need different user paths on reflection. You need to build time into your sprints to refactor this, just like you would build in time to refactor code.

Footnotes

https://medium.com/@nicolaerusan/conceptual-debt-is-worse-than-technical-debt-5b65a910fd46#.j98euowli
http://andrewchen.co/product-design-debt-versus-technical-debt/
https://medium.com/@vijayssundaram/user-experience-debt-c9bd265d521b#.zhdjekfy6

Ep 34: Bills bills bills

Can you pay my bills, can you pay my telephone bills?

Technical debt is essentially the consequences of creating a system. Shortcuts are taken in the course of a sprint, and if there isn’t time to fix them, then this is the technical debt of that sprint. It can cover anything from hardcoded values to missing tests, missing documentation, or a lack of experience on the team leading to the system not being developed efficiently.
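
As a tiny, hypothetical illustration of the ‘hardcoded values’ kind of debt – the shortcut ships fine today, but every later change means editing code rather than configuration:

```python
# The shortcut: a hardcoded rate. Quick to ship, but it's debt - any
# rate change or second region means a code change and a redeploy.
VAT_RATE = 0.20

def price_with_vat(net: float) -> float:
    return round(net * (1 + VAT_RATE), 2)

# The less indebted version takes the rate as a parameter (or reads it
# from config), at the cost of a little more upfront work.
def price_with_vat_flexible(net: float, vat_rate: float) -> float:
    return round(net * (1 + vat_rate), 2)

print(price_with_vat(10.00))                 # 12.0
print(price_with_vat_flexible(10.00, 0.05))  # 10.5
```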

I’m going to talk about 3 types of technical debt over the following weeks: technical debt in the ‘traditional’ sense, conceptual or UX debt, and finally, testing debt.

Technical debt can be prudent (we’ll have to ship and deal with issues as they arise) or reckless (we don’t need tests); deliberate (a compromise based on educated knowledge of how the system will work) or inadvertent (shit happens). While the cause may be evaluated and fixed, the result – the actual debt – is what’s important for the work that’s in front of you right now.

As an aside, it’s not always something that can be fixed. I got my developer partner to look over this for technical correctness (the best kind of correctness!) and he said that debt can’t always be fixed: times change, and things that seemed good at the time turn out not to be. Building in flexibility that turns out not to be needed complicates the code; not building for flexibility (for simplicity/speed) can hurt when it turns out to be needed later.

Developers often have to fix debt, and are best placed to know about said debt, but what, as part of the Test/QA team, can I do about it?

Good communication between developers and testers always pays off, and if you, as a tester, can learn about any technical debt present in the system, this may help with testing. For example, if the devs know they’ve taken a shortcut, then some bugs won’t come as too much of a surprise, or future development will be harder to do – and this is good information to have.

Depending on the project setup, I, as a tester, may be used as a client/PO proxy. The inclusion of deliberate technical debt may be based on a decision I make, so that’s something I need to take into account. For example, a developer might ask me what to expect from a feature and give me a couple of options: ‘we can do x, but it will take longer; I can take a shortcut, but we’ll have to deal with it down the line; or we can do y, which is less ideal, but doable in the time and with no debt’. That’s the information needed to make an educated call on where the priority lies.

Testing after refactoring. Refactoring is when a developer ‘tidies up’ the code, for want of a better word. When developing iteratively, like in sprints, code is added on top of code. When the project is feature complete, there may be time for the developers to sit down and restructure the code, not making any changes to the functionality (in theory), but making the code better: tidier, more efficient, nicer. If a developer refactors to reduce debt, they may introduce different issues, and it means the developers aren’t adding new features or fixing bugs, so you need to weigh up the benefits here. Your automated checks may catch some issues, but visual testing and some exploratory testing may need to take place depending on what code has been altered and how risky it is, so being aware of refactoring and what’s been done is essential.
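
A minimal sketch of the idea – hypothetical function and check, not from the episode – where an automated check pins the behaviour down, so a refactor that changes the internals but not the output still passes, and one that changes the output gets flagged for testing:

```python
def total_with_discount(prices, discount):
    # Before refactoring: an explicit loop with an accumulator.
    total = 0
    for price in prices:
        total += price
    return total * (1 - discount)

def total_with_discount_refactored(prices, discount):
    # After refactoring: tidier, but the behaviour must be identical.
    return sum(prices) * (1 - discount)

# A regression check both versions must satisfy; if the refactor changes
# the observable behaviour, this fails and tells you where to look.
assert total_with_discount([10, 20], 0.5) == total_with_discount_refactored([10, 20], 0.5) == 15.0
```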

Documentation of tests and/or any prudent/deliberate debt taken on. This means that there is a base for future development to work off.

Technical debt is also a nightmare when you’re on tight deadlines. In the Agilefall/ScrumBut clusterfuck, testers get all the stories in one batch three days before sprint end and scrape by by the skin of their teeth (or testing is done next sprint); any bugs and fixes are then handled in an even more compressed timeline, leading to more debt.

There is a post I’ll link that has a quadrant diagram of debt, with inadvertent/deliberate and reckless/prudent as the combinations, and the post discusses prudent and inadvertent debt, which is what happens on good teams. The idea is that, after a product is shipped successfully, the team has learned so much that they’re not happy with the code they shipped, as they now know better ways they could’ve handled it.

And it highlights that it’s impossible to avoid debt; even the best teams have technical debt come the end of a project. And shit happens – a late client request, unexpected resource loss, etc. So you need to reduce and mitigate debt, and make sure the risks are as understood as possible before going forward, to ensure you’re not enmeshed in debt forever.

Footnotes

http://www.logigear.com/magazine/agile/technical-debt-a-nightmare-for-testers/
https://conference.eurostarsoftwaretesting.com/2013/agile-tester-vs-technical-debt/
http://martinfowler.com/bliki/TechnicalDebtQuadrant.html

Ep 33: These ARE the testers you’re looking for

This week I talk to Amy Newton. Amy is Head Of Testing Practice at Vertical IT. She’s incredibly passionate about testers, and the testing community in the North West.

We discuss:

Ep 31: Tales Of Derring-Do Bad and Good Luck Tales

Or: Duck Tales, Woo-oo
Or: Rubber Duckie, you’re the one

Which children’s tv show to pay homage to? Such a tough choice!

You know the rubber duck debugging thing, right? The idea that you should explain a problem, or some code, to a rubber duck (imaginary or otherwise) on your desk; that forces you to think about the problem out loud and explain your thinking, and suddenly you realise you’ve missed something, things fall into place, there’s a chorus of angels, etc.

I do not have the rubber duck. I have the unfiled bug report. So many bug reports get closed halfway through typing when I realise I’m being an idiot, or that maybe I should try x first to see if the issue isn’t that it doesn’t work, but that it works in a way I wasn’t expecting or needs better help/UX text. These discoveries may well be filed as bugs but, importantly, they’re very different to the bug I was in the middle of filing.

Sometimes I think out what I’ll say to a developer if I go over to ask – I tend to plan out my half of conversations in my head anyway, so I generally have a ‘well, if I do this and this, this doesn’t happen (or does happen)’ and that 1) makes sure I’ve got my steps down – especially if there’s a weird or complex issue (or issue on a complex feature) and 2) makes sure what I’m saying makes sense.

Forcing yourself to formulate your thoughts into full sentences means your brain is getting information in a different way. It’s why you can say something and only realise what you’ve said after you’ve said it. You have to hear the words for your brain to click, as it’s a different way of getting the information. Your brain is great at smoothing over cracks when it’s made the leaps subconsciously.

Typing words out is a similar exercise. You’ve got to slow down to write out the connecting bits your brain doesn’t specify, so you get the information slower, in a different format, and with all the details, even the unnecessary parts (which may turn out to be necessary). I often go through the steps that caused the bug again but stopping to write each step down as I go. Again, this forces me to consider why I’ve done that step and whether that could be the issue rather than the system itself.

The issue with your brain is that occasionally it lets you get away with your own bullshit, or manufactures bullshit, and you end up in a mess. The duck, or a similar proxy inanimate object, means you can clear it out without having to actually get up or make too much of an arse of yourself.

I prefer to use a computer or a mythical brain!developer at first, as real people are unpredictable and derail into all sorts of possibly unrelated issues; plus I’d be distracting them from what they should be doing, or interrupting them if they’re in the zone.

Of course, there are times where I do all this and I still have to go over to a dev or file a bug, but at that point I am prepared and can explain/file a good bug so the developers know what’s going on as easily as possible.

It’s also good for documenting tests, so the test session has notes about what I’ve done. I often reference the AC in the bug/test session, so it’s laid out as ‘given this, when this, then this’, which is a fairly clear way to write out issues in a cause-and-effect kind of way.