Ep 55: FOMO

Or, fear of missing out.

Quick news:

  • I’m at Liverpool Tester Gathering on Thursday 5th August. Come say hi if you’re there!
  • Next week I’m interviewing Rosie Hamilton!
  • The google form for telling me your origin story will be closed on Monday 1st August, so get going if you’ve not filled it in already!

This was not the episode I planned to record today. I couldn’t get that episode to flow properly, so instead I’m talking about FOMO. This text is not as close to a transcript as usual, as I recorded mostly on the fly, so the episode is a bit rambly and wasn’t fully written down, but I’ve got the main points below.

On Tuesday, a group of us took part in the first #TuesdayNightTesting, which was a remote lean coffee evening. It was a lot of fun, and one of the questions was about getting testers in a slump into the community. And it got me thinking about the community, FOMO, and people that don’t want to be in the community. It’s been something I’ve been thinking about, and I’ve basically got a list of things I try to do to limit, control, and maximise my testing community involvement.

  1. Have a to-do list
  Use a bullet journal, use Trello, use whatever tool you want, but write or type out a to-do list. Even if you remember everything you need to do, visualising it will help you see how much space you have for new things.

  2. Find a niche
  I am a magpie of testing; I’ve spoken before about how I can find it hard to focus because everything looks so cool that I want to do it all. I find it easier to choose a niche for contributing, because it depends so much on what I enjoy, as opposed to what works well for my context, the problem I have, etc. So pick a contribution (blogs, Twitter, podcasts, speaking, code/tool development) and roll with it.

  3. Have a plan
  I am formulating a plan for the podcast. It has taken me a year to realise I need one, but I’m going to write down what the podcast is, what I want it to be, and how I’m going to continue it, so I know what I want to do. It doesn’t have to be detailed, but I think if you’re serious about doing something in a big way, you need a plan.

  4. Say no/Defeat FOMO
  Say no. You’re going to have to at some point, so get used to it with small things. Or say ‘let me check’ and check fully before saying yes or no. If seeing other people do and tweet and blog about things you’re missing out on is going to bother you, pull back a bit.

  5. Take a break
  Related to the above: take a break. Self care is important, and self care whilst doing stuff and being part of tech is something I’m really interested in.

Ep 54: Origin Stories

A series of unconnected things:

  • I’ll be at the Liverpool Tester Meetup on 4th August! Come say hi!
  • I’ve got more interviews coming up and that is my plan for August – start reaching out for interviews.
  • 30 days of testing! This is a thing that Ministry of Testing is doing, and though I got off to a strong start, I’ve started to flag a bit. I need to get back on course!
  • I’ve started working with Selenium! I’ve picked Python, assuming a language I already know will be less of a learning curve, but that may change as we figure out what works for us.
  • I’ve been listening to a lot of creepy podcasts – The Magnus Archives, Archive 81. Highly recommended if you like creepy fiction!

I want to talk about getting into testing.

My session at Testbash will touch on getting into testing in less than ideal circumstances and learning testing in those circumstances, but today I just want to discuss getting into testing.

So many testers say they just ‘fell’ into testing, that they either didn’t see testing as their career, or that they didn’t even realise testing was a career, and I’m intrigued by it. I am a tester that did the same, and I’m sure there are other careers where people also just fall into it, but as a tester, it’s testing I’m interested in.

I’d be interested in collecting experiences of how people got into testing; not necessarily their background, but maybe the moment you realised this is what you wanted to do, or the moment you shifted to testing almost full time – whatever you want to think of as the moment or series of moments you got into testing, I want to hear them.

I got into testing by saying yes. I was working as a customer support person in a digital agency. My co-worker, who was our only tester, had moved into release management but that meant there was less time for him to test (we were ostensibly a start up, so profit margins were slim). As the person who had bugs reported to them, and the person who had the most free time, relatively speaking, it seemed obvious that some of that work would come to me. I was still new to the job, and pitching in and helping out had never really gone wrong for me before so why not?

My testing was weak, relatively speaking. I mean, I have a science degree and can be pedantic about things, so you know, I wasn’t completely in the wild, but looking back I know I missed some things I would’ve caught now.

We were working off spreadsheets of test cases, which I actually found a useful starting-off point, because they gave me the foundations I needed to feel comfortable. There was nothing saying I couldn’t go off script, but we wanted to make sure we had everything down (incidentally, I think I only ever used the ones handed over to me, never ones I’d made myself. Seemed like a lot of faff for little reward).

I don’t remember the moment I became a tester, but I remember a moment I realised maybe I could do this.

I wasn’t happy with an implementation. It’s hard to describe without jumping into details, but it was wrong. I was proud of myself that I’d gained enough knowledge of the system to know it was wrong. We pointed out the issue and got it fixed. However, the problem was twofold (maybe threefold). First, we had offshore devs who had no real interaction with or onboarding onto the site. They were treated like code monkeys, and this led to a lack of investment in and knowledge of the system. Second, the feature request had had no real discussion. It came from the client, through the account manager, straight to said offshore dev, with no other input. So the feature hadn’t been technically planned, there was no real AC on the ticket, nothing. Just ‘make this happen’.

That was the failure in my eyes – there was no guidance for the developer, and there weren’t clear lines of communication on this stuff, so he didn’t ask for clarification. He was new to the system, so didn’t have the time or knowledge to really figure out what knock-on effects there were. So I started stepping in a little bit and trying to make sure we had a list of requirements and edge cases.

And I started to google. I wanted to see what other people did, how other companies handled this stuff (at this point our tester had left, leaving me as the go to tester person – did I mention the company was not so slowly going bankrupt?), and I found the testing community.

I think those two moments – realising I’d picked up a bug in the system and then finding the testing community (or the bits I inhabit anyway) – were the moments I decided this is what I wanted to do.

And I think a lot of people get pulled into testing because ‘anyone can do it’, and for me it was realising that it goes deeper than that that brought me into the career properly.

I would absolutely love people to let me know how they got into testing. I can’t promise any fancy R scripts and graphs like Rosie Hamilton did, but I can promise at least a blog post on it!

Link to the form

Ep 49: Leading the Witness

Firstly, an announcement: LTATB is moving to fortnightly episodes. I need to level up my content game, and I can’t do that in weekly slots, so there will be an episode every two weeks. The next episode will be on 19th May.

People are really bad at telling you what they want. Really bad. They think they know, but I guarantee you they don’t. And it’s not because they’re stupid, if anything, it’s because they know their business and processes really well. Or, they know their business and processes as they stand really well. Translating that to a new system can be difficult to do.

How well do you know your commute to work? Or how to cook your favourite meal? Or the layout of your phone screen or desktop. The things you use all the time, you know what you’re doing well enough that you may not even think about every little step. You may not even realise if you’re doing something slightly (or not so slightly) inefficiently. Or, you may realise, but there’s a reason for it (I occasionally walk a longer way to/from work because it’s prettier and there’s more chance of seeing dogs being walked, for example).

Or, another way, changing computers. You get a new or different computer, and you start transferring/re-downloading files and programs. How many times after that initial set up do you realise you’re missing a program? I’ve had a week go by easily before realising there is something missing.

These little things are going to be the things that actually turn out to be really integral to a system. The stuff that isn’t the main points (browser of choice, or the pasta in a lasagna) but the stuff that just makes everything smoother (setting up key shortcuts, or adding basil or oregano). Technically you can get away without them, but it makes things just a little harder, or less great, and the people using the system will miss it and be less willing to engage with what you’ve built. So, how do you figure these things out? Ideally, you watch people interact with the system as it stands, and have a play yourself. I spoke last week about inheriting legacy systems, and some of those techniques apply here.

Another way of doing this is going through user journeys with the product owner and team.

People are really good at telling you what they don’t want. There comes a point in a discussion about a system where you can tell the client isn’t sure which part you’re not getting, so I’ll go through my assumption of the user journey. Suddenly, when I get to the bit I’m pretty sure I’m wrong about, they’ll re-engage and point out where my assumptions are wrong. It’s easier for them to say ‘no, not like that’ than it is to say ‘this and this, and then this, except, shit, I missed a step here’.

However, this assumes that you’re wording things in the right way. Leading the witness is when a lawyer asks a leading question; one that puts words into the witness’ mouth. In this line of work, it could be as simple as assuming something and phrasing it as ‘and then you do x’ as opposed to ‘and after that, what happens? X? Or something else?’. The idea is that you prompt them but don’t fill in the gaps for them. In a situation where people are feeling tired after a long meeting, or a bit nervous, or overwhelmed by techspeak, something like that could simply be agreed to, so you want to balance making things go smoothly against telling clients what they are going to get. We’ve implemented some things that on the surface make no sense, but for the context we’re working with make perfect sense (you wouldn’t ever do it for a public-facing site, but for the internal processes of the client, it made sense). And we’ve worked on a few government/public-funded or charity sites, whose processes are larger than anything we can change, so we have to make them fit the system we build, rather than trying to get them to change their processes for the new system.

The best and smoothest projects I’ve ever worked on are where the whole team has that understanding; we’re the experts on the tech side, but the PO knows their team and users better than we do, so they say ‘we have this need, maybe we can have this?’ and we go either ‘sure’ or ‘yes, maybe, but how about this?’, and then it works amazingly well.

Further Reading

https://softwaretestingnotesblog.wordpress.com/2016/04/10/ignorance-as-a-tool-to-frame-better-questions

Ep 48 – (Tron) Legacy

Gotta have a Wanz reference in there, though he may hate that it’s a reference to the new Tron…

We inherit a few legacy systems in our line of work. We have to get to grips with them across the board – PMs, QA, devs. We all need to know what the system does, how, and most importantly, why.

How do we do that?

Firstly, we do an audit of a site we’re inheriting, before it comes over to us. This will help the developers and system admins get to grips with any quirks of the code, or hosting needs. We can also do a security audit, make sure modules are up to date, and if it’s Magento, make sure we know of any module licenses that need sorting out, etc.

Then we start on the actual functionality:

At this point we’ve had interaction with the client, so we’ve got a sense of what the site is for, who it’s for, and what the business area is. We can get an idea of what the business needs are and how the site meets them (or doesn’t). But sometimes the client doesn’t have the in-depth knowledge of the site, functionality, or code that’s needed to support the site.

Documentation is always useful, even if you can only see a user guide or other internal documentation, because that gives you insight into what bits of the system are used most often and what features or information are needed by stakeholders.

If documentation isn’t present or is old, then the code is another form of documentation. You can talk to the devs about what they’ve found, or even sit with them while they figure it out.

Finally, there’s the galumphing and seeing what you can find option. Paired with either of the previous techniques, it’s a good way to get to grips with the system, and start to test it; even without anything to test against, you can test your assumptions.

If you need to do it by itself, if that’s the only way you can find out about a system, then it’s not going to be as comprehensive, but it’s still useful. While you may not have any requirements, you can still put a basic structure around your test session, so you can timebox and manage it properly.

So a basic structure may be something like the following (there’s a rough code sketch of this loop after the list):

  • Figure out inputs (valid, invalid, multiple, etc.). Even if you know nothing about what you’re testing, if there’s a UI you’ll have cues about the types of input that are accepted, and you may be able to make guesses about valid/invalid inputs from there.
  • Outputs (for all the inputs above). These can take the form of reports, data, messages – all sorts of outputs based on what you put in.
  • Dependencies and interactions. From the above, can you see the flow of information? Can you see what happens if something fails along the way? Is it possible for the system to fail in that way?
  • Hypothesise on what you’ve learned: what input and output connections you’ve found, and any other information that’s pertinent.
  • Repeat the above until you’ve narrowed down the expected results.
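
As a very rough illustration of that loop, here’s a sketch in Python. Everything in it is hypothetical – the URL, the field names, even doing it in code rather than on a session sheet – but it shows the shape of recording inputs, outputs, and hypotheses together:

```python
# Sketch of the inputs -> outputs -> hypothesis loop against a hypothetical
# legacy form endpoint. The URL and field names are made up for illustration.
import requests

ENDPOINT = "https://legacy-site.example/contact"  # hypothetical

candidate_inputs = [
    {"email": "user@example.com", "message": "hello"},       # looks valid
    {"email": "not-an-email", "message": "hello"},            # probably invalid
    {"email": "", "message": ""},                             # are fields required?
    {"email": "user@example.com", "message": "x" * 10_000},   # oversized input
]

session_notes = []
for payload in candidate_inputs:
    response = requests.post(ENDPOINT, data=payload, timeout=10)
    session_notes.append({
        "input": payload,
        "status": response.status_code,
        "output_snippet": response.text[:200],
        "hypothesis": "",  # filled in by hand: what does this suggest about the rules?
    })

# The notes become the raw material for the 'hypothesise and repeat' steps.
for note in session_notes:
    print(note["status"], note["input"])
```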

Take notes. This can be the start of your documentation if needed. You can write up your findings and then talk to the team (and I include stakeholders in the team here), see where the gaps in your knowledge are.

You may be able to start writing regression tests from there, if there are no tests present, or not enough. All new functionality should have tests, and if there are obvious tests that can be added, add them when you can. Each sprint should have a section for updating tests as needed.
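
If the project has (or is getting) browser-level checks, a first regression check in pytest plus Selenium might look something like this sketch – the URL, page title, and selector are placeholders, not anything from a real project:

```python
# A minimal regression check sketch using pytest + Selenium.
# BASE_URL, the expected title, and the CSS selector are all placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://staging.example.com"  # hypothetical staging site


@pytest.fixture
def browser():
    driver = webdriver.Firefox()
    yield driver
    driver.quit()


def test_homepage_loads(browser):
    browser.get(BASE_URL)
    assert "Example Site" in browser.title  # placeholder expected title


def test_search_returns_results(browser):
    browser.get(BASE_URL + "/search?q=news")
    results = browser.find_elements(By.CSS_SELECTOR, ".search-result")
    assert results, "expected at least one search result"
```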

Worst case, the code isn’t up to the standards of your agency/developers, or it could be under-maintained. You may or may not be able to refactor it to your standards or liking; either way, you can add tests as soon as possible to build quality in and start to improve as needed.

Legacy systems may be awkward, and I’ve focused on the most awkward here, but they can be interesting, and you can learn a lot from picking up an old system and seeing where you can run with it.

Ep 45 – Can we test it?

Yes we can! (Maybe?)

Want to come work with me? We’re hiring a junior tester!

Leading on from the Science! bit, I want to talk about testability.

So testability is how testable a product or system is by a given person, in a given context. Having software that’s testable makes testing quicker, and also lets us be confident that our testing has been effective.

Testability requires certain things:

We need to have a definition of right or correct behaviour, so we can form a test plan or strategy. If we don’t know how the system is meant to work, we can’t ensure it’s working as expected.

We need to put some work into defining features separately, so each can be discussed in isolation (or as much as is possible). This iterative process means testing can happen as early as possible; otherwise we’re in a position where all we can test is all of it in one go, which makes it harder to test.

There are some things that are only really testable in certain circumstances – whether that be a quirk of the system that means we can only monitor the results when live, or features that have an effect over a long period of time. I recently had a feature that tracked results over 7 days, and it wouldn’t be feasible for us to hold off deploying to the test environment for 7 days to see if it works, so there was some hackery involved so I could ensure that it removed data older than 7 days as expected, things like that. It wasn’t ideal, but it was good enough.
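
The general shape of that hackery is usually either seeding data with old timestamps or faking the clock. A hedged sketch of the first approach – purge_old_results and the store layout here are made-up stand-ins, not the real feature:

```python
# Sketch of testing "data older than 7 days is removed" without waiting 7 days.
# purge_old_results() and the store layout are hypothetical stand-ins.
from datetime import datetime, timedelta

RETENTION = timedelta(days=7)


def purge_old_results(store, now=None):
    """Drop entries older than the retention window (stand-in for the real code)."""
    now = now or datetime.utcnow()
    return [entry for entry in store if now - entry["recorded_at"] <= RETENTION]


def test_entries_older_than_seven_days_are_removed():
    now = datetime(2016, 8, 1, 12, 0)
    store = [
        {"id": 1, "recorded_at": now - timedelta(days=8)},   # should be purged
        {"id": 2, "recorded_at": now - timedelta(days=6)},   # should survive
        {"id": 3, "recorded_at": now - timedelta(hours=1)},  # should survive
    ]
    remaining = purge_old_results(store, now=now)
    assert [entry["id"] for entry in remaining] == [2, 3]
```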

And this is an issue that I run into occasionally. Where possible we try to figure out a way to test it as close to reality as possible, and put those frameworks or systems in place.

Linked to this is being taught how a feature works. Sometimes devs will put notes on a ticket to point me in the right direction, and then will provide release notes for the client. One lead dev recently asked whether it would be worth putting the notes they’d pass on to the client on the ticket when passing the story for testing, so I can test both the feature and the release notes. This is an idea I want to implement, because again, the devs know how to use the feature, so the instructions may be either too broad, or use terminology that’s not well known to the client (referencing entities, MIME types, etc.).

I prefer just having an idea of what the feature should do or should allow me to do, so I can see how intuitive the feature is, and make sure I’m not just following the smooth path but am also looking for edge cases. If it’s a complex piece of back-end functionality that we would provide instructions for, then I’ll use those to see if they work as expected.

This way, I’m testing both the instructions and the feature at the same time.

There’s more to testability than usability though. The definition is testing by a given person in a given context. Everyone tests differently, and everyone uses different tools to do so. Testability in this sense can be raised by someone helping out with testing, or by a test strategy, or by product, system, or domain knowledge; anything that means the tester can test more, or feel more comfortable and confident testing, increases testability. And gaining knowledge of the product and the client will give greater understanding of risk, both to the specific product and to the system you use more broadly.

For example, generally speaking, I test Drupal sites. Drupal does categorisation in the form of taxonomies and vocabularies out of the box. Specific categories have to be set up, but the framework is there. That takes roughly two minutes for me to look over while I’m testing something else, because I know it doesn’t really need testing. I feel comfortable taking that risk.

And managing risk, and reducing the distance between what we currently know and what we need to know is really the cornerstone of testing.

Footnotes

http://www.infoq.com/news/2016/02/testability-teams-faster
https://www.testingcircus.com/lessons-usability/
http://www.satisfice.com/tools/testable.pdf

Ep 30 – Oh, Arse

Listeners!

Firstly, there is a donate button over yonder -> If you can help me get to my first ever test conference next year, I will be incredibly grateful.

Secondly! Next week will be the last episode of the year, and I will be announcing the guest that will be on the show for the first episode back in the new year! Exciting!

Okay, so this week I finally watched the first Whiteboard Testing video (about testing), which was about the FART model (Flawed Approach to Regression Testing)1.

The idea is that you can’t necessarily rely on automated checking for your regression testing; you should be doing testing as well, as part of your regular regression work.

He also mentioned the RCRCRC mnemonic2, which means taking the following into account when testing (there’s a small checklist sketch after the list):

R: Recent – how recent was the change?
C: Core – what central functionality has to stay the same/be unaffected?
R: Risky – how risky was the change made? Where does the risk lie with this change?
C: Configuration – what environmental configuration do I need to be aware of when testing this? Are there any config factors that will be present when the code is live that I need to take into account?
R: Repaired – What has been repaired here? What functionality has been changed and what possible effects could it have on other parts of the system?
C: Chronic – What (if any) issues keep popping up? How can I test them? Can I add any specific automated checks to catch the most frequent issues?
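
To help it stick, I might turn the mnemonic into a little prompt sheet for my session notes; a throwaway sketch (the questions are just my paraphrase of the list above):

```python
# Throwaway sketch: print the RCRCRC prompts as a checklist for a regression session.
RCRCRC = {
    "Recent": "How recent was the change?",
    "Core": "What central functionality must stay unaffected?",
    "Risky": "How risky was the change, and where does the risk lie?",
    "Configuration": "What environment/config differences matter between test and live?",
    "Repaired": "What was repaired, and what could it knock on to?",
    "Chronic": "Which issues keep popping up, and could a check catch them?",
}

for heuristic, question in RCRCRC.items():
    print(f"[ ] {heuristic}: {question}")
```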

This is a fantastic mnemonic that I’d not heard before, and I think it’s going to be incredibly useful to me – I mostly test new features on existing projects, so regression testing is what I spend a lot of time doing. I do this anyway, but it’s nice to have something to help tick off areas so I miss less.

We have some automated checks – the devs add new checks for new features, and update the checks if features are changed – but for complex code bases, especially with custom features or OOB modules that have been extended, there’s so much that can go wrong that we can’t assume this is enough.

The automated checks concern base, centralised knowledge of the system – ours are based on the acceptance criteria, so they cover the functionality as it stands. The context of the project is missing – we need this to work in IE8 because that’s one of the browsers used most often, or we can’t assume the users will know what this means, they’re not tech-y. Things that can’t necessarily be coded, that require knowledge of the project as a whole, not just the system.

This also means that regressions around these features may be missed if the AC-based checks still pass. Yes, it may technically pass the AC, but that’s not the be-all and end-all of a story (much to the annoyance of at least one developer I’ve worked with), and I can and will send something back even if it does pass the AC if it needs work elsewhere, and it’s the same here.

Knowledge is needed for testing, and knowledge is constantly updating and changing as a project does, and while most of it can be coded and codified, not all of it can be turned into checks.

And as Richard says, if you spend all your time adding to your checks, where do you get new knowledge?

Footnotes

[1]https://www.youtube.com/watch?v=P2PUXqasvGI&feature=youtu.be
[2]http://karennicolejohnson.com/wp-content/uploads/2012/11/KNJohnson-2012-heuristics-mnemonics.pdf

Ep 29 – Next Best Thing?

I’m crowdfunding my trip to TestBash next year! Donation link in the sidebar, if you can give. I’m also getting stickers sorted for the ‘cast, AND starting to organise another guest! Exciting times in LTATB towers.

Today though, I want to discuss usability testing, and what to do if you can’t test with actual users.

As a QA I’m partly customer facing – I have to be – I’ve spoken before about how we’re a bridge between users or POs and devs, and as such we need to interact with both.

This means that we can start to test for usability.

When I spoke to Chris in episode 11, he mentioned the user testing lab they have at the BBC and I’m ridiculously jealous. I’d love to watch people testing things in the wild, so to speak, that I’ve tested, just to see what I’m missing.

The closest I’ve personally come into contact with is things like UserTesting1, which is a company that essentially has a catalogue of people who will test your site or app. The people are from various demographics, and you can also get heatmaps/videos/audio feedback on your product.

The issue here is people are being paid to test. They feel like they have to give feedback, and it’s not the most natural form of testing, compared to just letting users use the product as they would normally.

Thoughtworks wrote a blog on guerrilla testing, which is getting people off the street to test your product as a way of getting that candid user feedback and testing2. It’s an interesting concept, and possibly the best way to do user testing, but it requires both logistics and some handholding to get useful feedback (‘I expected x’ rather than ‘I think this should be a darker blue’).

One of our projects is currently in public beta, with a link to a feedback survey, before go-live towards the end of the year, which is one way of doing this kind of testing.

But how do you tackle this when you can’t do user testing for whatever reason?

The team I work with are pretty good at this. We’re happy to go to another member of the test team and ask them to look at a feature, to see if it makes sense to someone coming to the site and feature cold.

We’ve also had apps tested by the majority of the office at the same time, to do real-time load testing and to see where the bugs are across platforms and users.
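
If you can’t get a whole office hammering the thing at once, you can roughly approximate the ‘everyone at the same time’ part with something like the sketch below (the URL is hypothetical, and a proper tool like Locust or JMeter would do this far better):

```python
# Rough approximation of "everyone in the office hits it at once":
# fire a batch of concurrent requests at a hypothetical URL and count failures.
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://staging.example.com/"  # hypothetical
USERS = 30


def hit(_):
    try:
        return requests.get(URL, timeout=10).status_code
    except requests.RequestException as exc:
        return f"error: {exc}"


with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(hit, range(USERS)))

failures = [r for r in results if r != 200]
print(f"{len(failures)} of {USERS} requests failed: {failures}")
```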

But that’s only good for finding where things aren’t quite intuitive enough. It’s not good for specifically testing whether what you’ve built actually meets the needs of the user. That’s what workshops and domain knowledge are for.

But what if your user is the public? You’ve got myriad different contexts, abilities, and knowledge bases to take into account, and that’s before you start thinking about accessibility, devices, and breakpoints; the technical or physical attributes you need to take into account to cater to your users. There’s a whole bunch of other contexts that you need to think about when going through the build process. So you run workshops, and some clients have information on browsers/devices used, and personas, which we can take into account when testing, but that’s a first approximation at best.

So, you’ve got UAT, which means your product owners and stakeholders testing it; they may have a decent-to-good idea of who their users are likely to be, and, possibly more usefully, they bring their own set of contexts and quirks to the testing (not always a good thing, but it’s interesting). Having sprint-based UAT means that testing happens early and often, so you can catch usability issues early on.
We have internal walkthroughs every sprint, where the lead dev, front end dev, QA, design, and PM will sit down and go through that sprint’s work, to look at it as a whole before passing over to the client. This sometimes catches things, and is a useful marker of how far along in the project we are.

You can’t test everything, but you can plan, get your quality baked in, and grab enough different eyes on a project to catch as many issues as possible.

Footnotes

[1]https://www.usertesting.com/product
[2]http://www.uxbooth.com/articles/the-art-of-guerrilla-usability-testing/

Ep 26: Nobody’s Perfect

Testing can never be 100% done. Websites in general, in fact, never really reach a state of ‘done’ which is something that’s hard to get across to clients, sometimes. There will always be parts you’ve not touched, or not put as much effort into – it’s a trade off of a fixed timeline or budget or whatever.

Some clients get it – they’ll be putting ideas into a backlog for the retainer or a second phase of development. Other clients will want everything ‘done’ before something goes live. A definition of done helps, as well as lots of work up front in the planning stage.

We lay down what we’ll provide, including device and browser support, and level of accessibility.

Of course the client wants perfect. The site is an extension of their brand, a way of bringing revenue and people to them. And we don’t want to give them shoddy work, we just have to be realistic about what we can provide in the time given, along with any other constraints we may have (the system we’re using, the systems the client may need us to integrate with, designs, functionality needed, etc.).

Sprint based testing, with automated regression testing means I mostly focus on the sprint and story in front of me. I do exploratory testing, and focus on the parts that are the highest risk, including integration with other parts of the system. But that still means I might miss something.

It’s impossible to test all scenarios, all data, in all contexts, and even if it was, it wouldn’t be sensible.

Testing cannot show the absence of bugs. Finding no bugs doesn’t mean there aren’t any – just that we’ve not found them. You can’t prove an absence. It becomes a risk assessment: a list of priorities of what needs to be tested, then what should be tested, and some exploratory testing based on what the first two expose.

Over the course of a project you can learn more about what the client wants from the project, their biases and how they affect their view and usage of the site, and that can feed into how you test; the parts you focus on.

We can also guide clients towards what we follow as best practice and what we think is best for them, and we’ll mould our way of working to theirs. We manage their expectations (which is a terrible phrase). We make sure nothing is a surprise: they know that 1) they will probably find some issues, 2) not all of them will be bugs, but some of them will be, and 3) we will push back on some of them if we think it’s necessary.

And there are definitely some clients who think the site should be perfect and complete, whose reaction to finding a bug is to point the finger and question our abilities. We shrug and explain and move on, and it makes me so so glad that my employer, my teams, don’t react the same way.

Quality is not testing. Or at least, not only testing. Quality is a team effort; it has to be. Every member of the team brings their own speciality, bias, and knowledge to a project and their inputs are what make a quality project. Quality is built in to the project, not tested into it. Testing mops up the edges, makes sure we’ve not lost sight of any of the details whilst putting the site together, makes sure we’re hitting the compatibility/accessibility targets we need to hit.

Testing is not perfect.

Ep 17: Risky

Leading on from last week’s show, I want to talk about prioritising and risk assessment. I’ve spoken a little bit before about how I use AC as a base for my testing – that’s the highest priority, then I spread out from there.

I think there’s some merit to timeboxing things and focusing your time for a tight period, like 15 minutes of focused testing, and seeing where you get. I think this is true for security updates that might have an effect on various parts of the site as well (talking from a Drupal point of view, we recently had an update that may have affected some custom work we’d done, so I gave that custom work a test through). As an aside, I think the ten-minute rule for test plans1 is useful, and I have definitely found that AC take less time to write and review once I’m in that focused mindset, and it cuts the fluff that doesn’t need writing, which can be left to the test sessions.

But how do you prioritise what you test after AC? And how do you prioritise bugs when you report them, and define blocking bugs as opposed to ones that need fixing but don’t block the story from passing (if you even differentiate them)? I try to keep blocking bugs to ones I wouldn’t want the work going live with, but that doesn’t always map to the AC not being met. So sometimes that requires a discussion: is this a blocking bug, or a bug we’ll tackle as part of the bug-fixing part of the sprint? Should we log it but throw it to the client to prioritise? Which comes down to the following (with a rough sketch of the trade-off after the list):

• Risk of fixing it vs. Risk of keeping it as it is

• Complexity of the bug and fix

• The timings of the project
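
None of this is a formula we actually follow, but if I had to sketch the trade-off, it would look something like likelihood times impact on each side, weighted by how close the release is (all the numbers and names here are made up for illustration):

```python
# Not a process we follow formally; just a sketch of the trade-off:
# compare the risk of shipping the bug as-is against the risk of the fix,
# where risk is roughly likelihood x impact on a 1-5 scale.
def risk(likelihood, impact):
    return likelihood * impact


def worth_fixing_now(bug_risk, fix_risk, days_to_release):
    # The closer the release, the more a risky fix should worry you.
    deadline_pressure = 2 if days_to_release <= 3 else 1
    return bug_risk > fix_risk * deadline_pressure


# A small bug with a workaround vs. a risky fix on a busy site, close to release:
bug = risk(likelihood=2, impact=2)
fix = risk(likelihood=3, impact=4)
print(worth_fixing_now(bug, fix, days_to_release=2))  # False: live with the workaround
```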


For example, we work with Drupal, and one bug we came across recently had a fix that involved upgrading a module to the dev version. This was the version after the latest stable version of the module, and so there was some risk attached to it; it wasn’t widely used and hadn’t been crowd-tested like the stable version had. The site we had the bug on is a large site with lots of users. The bug was relatively small and had a workaround. We decided to spend some time investigating alternatives, but if we couldn’t easily find an alternative fix within the time we had, we could offer the workaround and wait until the dev version of the module became the stable release. It just wasn’t worth the risk of upgrading to a potentially unstable module version that could have unknown effects over a large site, and that we didn’t have the time to fully investigate, given we could offer a decent workaround to the client.

On the flip side, a client asked us to spend some time and effort looking into a complex change request – changing how out-of-the-box functionality worked and looked for them. For us it didn’t seem like a high priority, and it was neither trivial nor hugely complex, but the client prioritised it, so we did the work, and the client was happy. We were also able to contribute this back to the community, for other people to use if they found it useful.

Which I guess highlights that there will sometimes be conflicting priorities from the developer and client points of view, and sometimes concessions and compromises have to be made. This does depend on the client, as there will always be some who think everything is top priority all the time, but sometimes a discussion is needed.

I think conversation is one of the most important things when it comes to prioritising and risk assessment, and I do think I’m getting better at spotting which bugs I need to get developer input on before marking them as blocking or not.

And sometimes the conversation ends up like my second anecdote back there, where we ended up contributing to a larger community, and that’s always a good thing.

As an aside, I want to throw a podcast recommendation at you. All through recording this episode, I’ve had the soundtrack to a podcast called Risk!2 in my head. It’s the opposite of everything this podcast is. The tagline is: True Tales, Boldly Told, and it’s “where people tell true stories they never thought they’d dare to share in public”. It’s often not safe for work or children, but it’s also funny, or touching, or any number of other things. It’s well worth a listen, if you enjoy listening to stories.

Footnote

[1]http://googletesting.blogspot.co.uk/2011/09/10-minute-test-plan.html
[2]http://risk-show.com/

Ep 16: Creativity in Time and Space

Today I’ve got a sort of mix of an episode, covering two different but related topics that I tried to expand into two episodes but couldn’t make work. Then I read Simon Prior’s blog post on the Testing Mindset:
https://priorsworld.wordpress.com/2015/08/19/the-testing-mindset/ and realised that the two topics were interrelated and maybe only needed one episode.

In the blog post, Simon talks about QA and testing as a process, starting as soon as possible, and that, combined with my reading about exploratory testing, sparked this idea. I want to talk about testing and creativity, and testing as a creative process.

QA and testing often has a bit of a stiff, filling-in-boxes/checking-things-off-a-list image. Stuffy gatekeepers or killjoys saying things like ‘well, technically’. And don’t get me wrong, sometimes I revel in the checking part of it; it’s good for me to do while I’ve got something more complex in the back of my mind, or to familiarise myself with a feature before I jump into the meaty part of actually testing it.

Then the more creative part of testing can happen. Exploratory testing involves creative, investigative thinking. Features are very rarely ‘correct’, but they can be right or wrong for what you are building, and looking at that – at how it works and why – is the creative part of testing. There are many things that fall into this ‘rightness’, some of them subjective enough that some bugs may come back from the client, but it can also be things like consistency of user pathways and how they flow. Patterns in usability, and context for the feature I’m looking at: stuff that’s sometimes harder to write AC for, but important nevertheless.

There’s also looking at what you’re doing as a whole project. As well as testing each feature as it comes to you, you need to test how it fits into the project as a whole – the same principles of looking at flow and consistency apply, but on a different scale, and in different ways.

(As an aside, this kind of testing starts right at the beginning of the project.

I’ll test out the requirements, and the plans, even if it is just mentally, just to see what’s happening. It helps me get to grips with the project, provides some background and context. It might help the client refine requirements, or bring new ideas to the table. Then I’ll help write the AC, and I’ll be mentally testing them as well, all before we’re out of the initial planning).

This kind of links to another issue that’s been occupying my time at the moment, and that’s timeboxing my testing. Balancing priorities, making sure I’ve tested all the parts of the feature, and ensuring I’ve not spent far too long on testing can be difficult.

It’s hard to say ‘yes, this is done’ definitively. I often say to myself that I’ve found all the obvious bugs, but at the end of the day the user is going to know best how they’re going to use the system, so I generally expect usability changes/bugs to come from the way the client interacts with the system, both from an individual point of view and from an internal-process point of view.

But there’s always a niggle of doubt, of waking up at 3am with a scenario I haven’t tested in my head, which I then go on to test the next day (no lie, I have realised at 3am that I missed a scenario, gone to work, got back a story I had previously passed, tested my 3am scenario, and found a bug that was deemed valid enough to reject the story. I am such a newbie sometimes). I’m responsible for carrying out the testing, and yeah, while this isn’t a gatekeeper scenario, I am still a tester, on the QA team. There are expectations there. And I want to contribute to good code. I want to make people admire our work.

But I also need to not burn all the project’s budget by testing each feature too extensively, to the point of not being able to prioritise correctly, or focusing on the wrong priorities.

I generally stick to a rule of thumb of aiming to spend 20% of the time spent developing a feature on testing it. So twelve minutes of testing for every hour spent developing. Again, very vague, and it depends on a lot of things, but it helps. Because I could spend a lot of time testing, sometimes.

And there has to be a point when you decide that you’ve tested enough. You’ve gone through the most obvious test cases, and you’ve gone away and come back to it a few minutes later if needed.

This is where a definition of done, and risk assessment comes in, I guess, otherwise I think I’d never be done.