Ep 46: Reasonable Doubt

A couple of things:

TESTBASH MANCHESTER OMGGGG. Come to Manchester! It’s awesome, I swear. Who needs a beach when you have the angry Beetham Tower?

ANYWAY, I will be there and I am doing a thing on Saturday! As well as a session that Richard Bradshaw kindly invited me to do, I really want to do something podcasty – either a recording session, or a lean coffee-style thing for podcasting/youtubing/blogging. I’ll build the idea more over the months, but get in touch if you want in!

Do you want to talk to me? Do you want to be on the ‘cast but only have a short message, or are you in another part of the world? I now have a Skype! I also have voice messaging switched on, so you can leave me a message even if I’m not online, and I can put that into an episode. The username is letstalkabouttests, because brand synergy is important, so yeah. Let’s actually talk about tests!

I’ve been doubting myself a lot recently – both professionally and personally. Nothing like being told someone thinks you’re good enough to do a thing to make you insist you’re the opposite.

So let’s talk about doubt. Doubt can have many forms and many causes:

• Imposter syndrome

• Being a newbie/feeling like your skills aren’t extensive

• Not having contextual knowledge

Doubt is risky, no matter what the cause. Causes can be split into two broad categories:

‘Reasonable’ doubt: you know you don’t have the resources you need to do your work to the best of your ability, be that time, tools, knowledge, etc.

This doubt is sortable – the testing community is huge and chock-full of information: blog posts, podcasts, Slack channels, meetups, conferences, everything. You may not know where to start, but chances are you’ll either find a post somewhere that covers it, or find someone who’s willing to point you in the right direction.

The doubt can also apply situationally. If you inherit a site with a complex piece of functionality, you need to feel, as a team, that you understand it all. Sometimes that means you start testing without any test data or plan, just to see what you find.

This is where you need to manage your testing in a session, with a set time. You may not have a goal other than ‘find out more’, which is nebulous but also pretty easy to hit as goals go. You can even structure the session to help mind map it out (incidentally, I cannot use mindmaps – I just find them really hard to do compared to writing a list. I am clearly a weirdo, but if anyone can point me in the direction of other note-taking/brainstorming-style formats, that would be useful).
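For what it’s worth, here’s a minimal sketch of what tracking one of those timeboxed ‘find out more’ sessions could look like in code – the field names, the 90-minute default, and the example charter are all my own illustration, not any real tool:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ExploratorySession:
    """A timeboxed exploratory testing session (illustrative only)."""
    charter: str                                # the (possibly nebulous) goal
    timebox: timedelta = timedelta(minutes=90)  # the set time for the session
    started: datetime = field(default_factory=datetime.now)
    notes: list = field(default_factory=list)   # timestamped findings

    def log(self, note: str) -> None:
        """Record a finding along with the time it was made."""
        self.notes.append((datetime.now(), note))

    def time_left(self) -> timedelta:
        """How much of the timebox remains."""
        return self.timebox - (datetime.now() - self.started)

session = ExploratorySession(charter="Find out more about the inherited search page")
session.log("Empty search input returns a 500 - worth a ticket")
```

Even something this small gives you a charter, a hard stop, and a trail of timestamped notes to debrief from – which is most of what a session needs.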

But sometimes you still can’t find all the information you need – you feel like you don’t know enough, which for me usually means I don’t know why this functionality was built, who uses it, and why. You may be able to get this from documentation, or from the client/stakeholders. I guess if all this fails, you could pick up bits and pieces from what the stakeholders report as issues or request as new features, and build up a picture of what they need from there.

Another kind of doubt is doubt in you, and in your work. People who are new to exploratory testing, or who want test cases and test plans, may not feel that you can ‘prove’ your testing when using exploratory or session-based testing. That requires education, and maybe presenting your testing notes to people so you can show your testing (all of ours are saved on the session, which is associated with the ticket).

This type of doubt is sortable though, even if it is hard. Next week I want to talk about unreasonable doubt.

Ep 39 – I Can Teach You, But I’d Have To Charge

Okay, a few things before we get started:

I am going to be at the pre- and post- testbash meetups, come at me testing folks!
I will be bringing my mic along to testbash, if people want to be on the show! I can do full interviews or shorter vox-pop style segments, so hit me up if you want in on it! I’m also talking to Mark Tomlinson and others regarding a podcast mashup, so I’ll keep you up to date on things in the works there!
I finally got Selenium (both WebDriver and IDE) installed, so I’m going to start playing around with that and get started on my 2016 goal of getting some automation under my belt. I’m armed with some resources from Richard Bradshaw and I am ready to go!

And now, on to the show!

At the moment I’m doing some brief preparation to have a couple of apprentice developers shadow me for a while as part of their training. It’s hard to know what to show them, and I want to make sure that I’ve got both planning and testing work I can do to show them the breadth of work the QA/test team does – and this will give them an overview of how we use Jira.

But they’re very new to all this stuff, so I want to make this more broadly useful for them – not just showing them what I do and what tools I use, but how and why, and therefore help them to become good developers who do some of this themselves and also understand the worth of QA/test beyond breaking shit. Though I will also show them how I apply the knowledge I have to the system to try and break it.

My first port of call was Katrina’s pathway for non-testers. I’ll send this link to the department head in charge of their training, but for my session I’ll use two or three specific parts, depending on how much time I have and how long we spend discussing these ideas.

The first thing I want to use is a blog post around the concept of testing being a team responsibility. At our place, the devs own the automated checks – they add acceptance checks (based on the AC) for all stories, and run those before a story comes to me. If they don’t pass, they fix them; then it’s ready for me to test. This means there are very few circumstances where I reject a story based on AC – either something has been missed, because people are human after all, or there’s something in the AC that is 1) too complex to write automated checks for, or 2) something the automated checks can’t cover (visual stuff, or really custom things).

This means that the devs have some ownership and investment in their code and the tests.
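To make that kind of check concrete, here’s a hedged sketch – the acceptance criterion, the `apply_discount` function, and the `SAVE10` code are all invented for illustration, not our actual system:

```python
# Hypothetical acceptance criterion: "A valid discount code reduces the
# order total by 10%; an unknown code leaves the total unchanged."

def apply_discount(total: float, code: str) -> float:
    """Toy stand-in for real checkout logic."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

# The acceptance checks a dev would write from the AC and run before handoff:
def check_valid_code_reduces_total():
    assert apply_discount(100.00, "SAVE10") == 90.00

def check_unknown_code_changes_nothing():
    assert apply_discount(100.00, "BOGUS") == 100.00

check_valid_code_reduces_total()
check_unknown_code_changes_nothing()
```

If either check fails, the story never reaches the tester – which is exactly the ‘fix before it comes to me’ loop described above.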

The next thing I’m going to use is a PDF that covers sources for test ideas, because ‘where do you even start testing?’ is something I’ve heard. This document is a list of things to consider when starting to test a feature, and covers everything from the obvious (the capabilities and purpose of the system) to the more obscure (rumours around what’s causing the dev team issues, domain knowledge, etc.).

This will also help explain why I’m involved in planning sessions, and in scrums that have no obvious relevance to me – in the first few scrums of a sprint I have nothing constructive to add, more often than not, because it’s rare for stories to have come to me by then. But I’m still part of a team, I’m still involved, and I learn a lot. Knowing what’s causing the devs issues means I know what work might need extra testing, and may be risky. It’s both a way for me to plan my work, based on when I’m told to expect to get stuff sent to me, and recon for what I might face come testing.

The last thing I want to talk to them about is testing in an agile environment, which I think is useful to all members of an agile team, especially people new to agile. It covers how agile applies to a tester specifically, but the idea of agile is to promote team cross-functionality, so knowing this stuff is useful for everyone. It’s also good for them to know what to expect from working with testers beyond bug reports – like us acting as proxies for stakeholders and users.

I don’t think I’ll get through all of it, and it may be slightly overwhelming, but I’m hoping it will be mostly useful, and that I can give them resources to learn from and help them become good developers – and that’s what we’re here for.

I think that’s a decent enough intro for people who aren’t testers and presumably have little interest in becoming testers, but I’d be interested in hearing what people with more experience of this kind of thing think. Do you teach people differently? Have I missed anything?



Ep 33: These ARE the testers you’re looking for

This week I talk to Amy Newton. Amy is Head Of Testing Practice at Vertical IT. She’s incredibly passionate about testers, and the testing community in the North West.

We discuss:

Ep 5: Giving up the keys to the gate

If you asked me what I do, I’d say QA and testing. I wouldn’t say I do QC, though I almost definitely do – testing, after all, is part of QC. But I do far more QA than QC. QA is a process, from the start of the project all the way to the end, not a hurdle to be cleared to get the product out the door.

Sometimes, when working with developers, you get a very ‘over the wall, done’ feeling from them: they’ve done their bit and now they don’t have to think about it, because it’s with you. I generally put that down to a workload problem, or a communication and teamwork problem. I can’t do anything about the workload problem, but teamwork and communication? I can try to help out with that.

So ideally, when approaching a project, you need to go about team building – not in a ‘trust fall’ or ‘human knot’ or any other weird-activity way, but by getting the team to communicate and work together well.

Bringing the team together early in the project to plan, set up, and prepare – even if it’s just a short kickoff meeting – is the first step: communication can start to happen while the team may be apprehensive, but not yet stressed. And it sounds obvious, but I find a quick meeting of ‘this is the project, this is how the sprints are structured, this is the team (with intros if necessary), this is what we’ll be doing’ just breaks the ice a little bit.

Ideally, an intro to the client by the PM would happen as well, just to bring everyone into the loop so they recognise names and roles. Simple stuff, but it works.

Sprint planning is a great way to get everyone in on the act of QA – once the AC is written up, the devs (all of them, or those most relevant to the work, depending on team size) can look it over before it hits the client, to make sure that the devs are on the same page as the tech lead, and that all the devs are on the same page as QA.

This process also works when coming into a project that’s already started, or providing new work on a completed project. You can start your QA processes around a new sprint, and start planning there, which I think helps with establishing the QA as a team member, because sometimes that can be daunting, especially if you’re not a coder (remind me to tell you about my imposter syndrome sometime).

This week I sat down with some support devs to go through some of the new processes with them, because we’d finally got the processes down how we wanted them, and I wanted to make sure they got what we wanted to do, and why, with examples. It seemed to go well (though that might be partly related to the chocolate cheesecake brownies I baked for the meeting?). And I really enjoy good process, so I enjoy explaining it – I know that if you don’t know why you’re doing a thing, you’re likely either not to do it, or not to realise the process is there for a reason other than hoop-jumping, so you’ll do it grudgingly.

Another way of incorporating QA into the team is that we can act as proxies for the product owner in times when a quick, not business-critical decision needs to be made. This can be ‘use this and see if it’s intuitive enough or the help text makes sense’ or ‘what would you expect to see here’ or ‘we need a name for this’. Depending on the set up of the project QAs can also triage bugs coming in from the product owner and feedback there, or improve the bug report if needed.

The idea of QA is to oversee and implement processes that help the team make better products. It’s not a step that’s done at the end of the build process. Hell, QC and testing shouldn’t be done at the end of the project either – they should be as iterative as possible, though each round is still a step: ‘this has been built, so let’s test’. But I think getting everyone involved in the process is the best way to get everyone to buy in to QA, which leads to better work from everyone.