Ep 45 – Can we test it?

Yes we can! (Maybe?)

Want to come work with me? We’re hiring a junior tester!

Leading on from the Science! bit, I want to talk about testability.

So testability is how testable a product or system is by a given person, in a given context. Having software that’s testable makes testing better: not only can we test more quickly, we can also be more confident that our testing has been effective.

Testability requires certain things:

We need to have a definition of right or correct behaviour, so we can form a test plan or strategy. If we don’t know how the system is meant to work, we can’t ensure it’s working as expected.

We need to put some work into defining features separately, so each can be discussed in isolation (or as much as is possible). This iterative process means testing can happen as early as possible; otherwise we’re in a position where all we can test is everything in one go, which makes it much harder to test.

There are some things that are only really testable in certain circumstances – whether that’s a quirk of the system that means we can only monitor the results when live, or features that have an effect over a long period of time. I recently had a feature that tracked results over 7 days, and it wouldn’t be feasible for us to hold off deploying to the test environment for 7 days to see if it works, so there was some hackery involved so I could check that it removed data older than 7 days as expected, things like that. It wasn’t ideal, but it was good enough.
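One general trick for features like that is to make the clock injectable, so a test can pretend a week has passed instead of actually waiting for one. A minimal sketch in Python – the function and field names here are made up for illustration, not our actual system:

    from datetime import datetime, timedelta

    def purge_old_results(results, now=None, max_age_days=7):
        # Drop anything older than max_age_days, relative to an injectable 'now'
        now = now or datetime.utcnow()
        cutoff = now - timedelta(days=max_age_days)
        return [r for r in results if r["recorded_at"] >= cutoff]

    # In a test, fake 'now' rather than waiting 7 days on the test environment
    results = [
        {"id": 1, "recorded_at": datetime(2016, 3, 1)},
        {"id": 2, "recorded_at": datetime(2016, 3, 9)},
    ]
    kept = purge_old_results(results, now=datetime(2016, 3, 10))
    assert [r["id"] for r in kept] == [2]  # the 9-day-old result has been removed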

And this is an issue I run into occasionally. Where we can, we figure out a way to test as close to reality as possible, and put those frameworks or systems in place.

Linked to this is being taught how a feature works. Sometimes devs will put notes on a ticket to point me in the right direction, and then will provide release notes for the client. One lead dev recently asked whether it would be worth putting the notes they’d pass on to the client on the ticket when passing the story for testing, so I can test both the feature and the release notes. Now, this is an idea I want to implement, because again, the devs know how to use the feature, so the instructions may be either too broad, or use terminology that’s not well known to the client (referencing entities, MIME types, etc.).

I prefer just having an idea of what the feature should do or should allow me to do, so I can see how intuitive the feature is, and make sure I’m not just following the smooth path but am also looking for edge cases. If it’s a complex piece of back end functionality that we would provide instructions for, then I’ll use those to see if they work as expected.

This way, I’m testing both the instructions and the feature at the same time.

There’s more to testability than usability though. The definition is testing by a given person in a given context. Everyone tests differently, and everyone uses different tools to do so. Testability in this sense can be raised by someone helping out with testing, or by a test strategy, or by product, system, or domain knowledge; anything that means the tester can test more, or feel more comfortable and confident testing, increases testability. And gaining knowledge of the product and the client will give greater understanding of risk, both to the specific product and to the system you use more broadly.

For example, generally speaking, I test Drupal sites. Drupal does categorisation in the form of taxonomies and vocabularies out of the box. Specific categories have to be set up, but the framework is there. That takes roughly two minutes for me to look over while I’m testing something else, because I know it doesn’t really need testing. I feel comfortable taking that risk.

And managing risk, and reducing the distance between what we currently know and what we need to know is really the cornerstone of testing.

Footnotes

http://www.infoq.com/news/2016/02/testability-teams-faster
https://www.testingcircus.com/lessons-usability/
http://www.satisfice.com/tools/testable.pdf

Ep 44 – Now for the Science bit

I felt a bit sciency, so let’s discuss the scientific method.

We all use it, whether we realise it or not. The basics:

  1. Oh, this is odd – is it because of this?
  2. Update your hypothesis based on experimental results.
  3. Keep going until you’ve nailed down a cause and effect.

As testers, we can’t prove there are no bugs, we can only say we’ve not discovered or encountered any, backed up with the evidence of the tests we’ve done.

And then you have to gather the evidence of bugs you have found, and maybe use that evidence to advocate for those bugs. Not all bugs will be fixed, or even can be fixed, and so a decision has to be made. As well as what makes sense technologically or for the budget, we also need to choose what bugs to fix on the basis of what will be most useful to the users or clients, and the way to gauge this is to have evidence about the bug.

So the first step, once you’ve got a bug, is information gathering. Sometimes this is easy, and if you know the system you may know what the bug is (permissions issue, something missing, fairly obvious stuff), or you may have to do some digging to find out the details of the bug. At minimum I like to include steps to reproduce, what I expected, and what actually happened, with screengrabs if appropriate. If I need to put in my reasoning as to why I expected what I expected, I’ll add that. That report is my findings and, essentially, my justification as to why I think this bug should be fixed. Then I can either make the bug a blocker to the story or not.
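As a rough example, a bare-bones report might look something like this (the details are invented):

    Summary: Publish date shows tomorrow's date for articles saved after 11pm
    Steps to reproduce:
      1. Log in as an editor
      2. Create an article and save it after 23:00
      3. View the published article
    Expected: the publish date matches the date the article was saved
    Actual: the publish date shows the following day
    Environment: test server, Chrome 48 on OS X
    Screengrabs: attached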

This then goes to the devs and if a conversation is needed about the bug it can happen with all the information documented in a (hopefully) clear and helpful manner. The exception to this is if I’m not sure if the bug is actually a bug, or if I’m unsure if it should be a blocker, in which case I may talk to the lead dev and go from there, sometimes talking to the client if we want their opinion on the bug.

So that’s the basics – what else can we learn from the scientific community with regard to testing?

Scientific papers go through peer review before publishing. The idea is that the work is independently reviewed before it’s published (deployed to live), and then the world gives feedback (UAT).

In the software dev world, this is provided both by the tech and code review by another dev, and then testing by the testers*.

There is one part of the scientific method that testing (and exploratory testing especially) does deviate from: the experiment design, or plan. A lot of work goes into an experiment plan, including setting the statistical significance level, p values against the null hypothesis, or confidence intervals (these are all ways of determining the level of confidence needed that what you’re seeing is not chance, essentially). And all this has to be done before you start, because it can determine your sample size.

None of this stuff is done in testing outside of a lab, really, and even a formal test plan isn’t really done in exploratory testing, or it’s there but not as rigorous.

That’s not to say we don’t have a plan. I use the AC plus my knowledge of the system and the feature I’m testing, plus any feedback or results I get from the feature as I’m testing. We know the areas that have the most risk and can focus on those. At the end of the session, we can review our notes and see if there are any areas we’ve skipped over, and go back and do further testing if needed.

What happens, essentially, is that we build a robust plan as we go along through the testing process, building and adding as we go. We may start with a framework, but we end up with a full test plan and execution report to use as evidence to support our hypothesis, which is that there are no bugs that we can find in the system we are testing.

*Let’s not talk about the failings of peer review in the scientific community, or the worrying trend of burying results that don’t fit the hypothesis – if you’re interested, read Ben Goldacre’s stuff, it’s fascinating and angering in equal measure:
http://www.badscience.net/
http://www.amazon.co.uk/Bad-Pharma-How-Medicine-Broken/dp/000749808X

Ep 43 – Under the (Brighton) Dome

Testbassssh was last week and although Brighton apparently hates me (I suffered some minor mishaps), I had the best time!

I’ve never actually been to a conference that only has one track of talks, but the talks were so wonderfully curated that I’m glad there wasn’t the choice paralysis that occasionally occurs at multiple-track conferences; and I genuinely wouldn’t have wanted to miss any of the talks I saw.

The venue was great, Brighton is gorgeous and the atmosphere in the big theatre room that the talks were in was genuinely wonderful (especially when the raving started!)

I was bricking it, and spent the registration period pointedly not talking to anyone, but that soon abated after the first few talks. I got talking to people easily, both with me inserting myself next to people, and people coming up and talking to me, everyone was really friendly!

The talks were all filmed, so they will be out and available for watching, but the highlights for me were Michael Wansley (Wanz! From Thrift Shop!) talking about his work as a gatekeeper on Vista and the MS Scanner and Camera Wizard; Katrina’s great intro to test pairing; and Nicola Sedgwick’s fantastic talk on testers being human.

But, as always, my favourite bits were the bits in between – the people I met, the energy I found after the conference. That’s why it was particularly gutting that a migraine from hell hit on Friday afternoon, so I missed the last talks and the afterparty. As it was, I had a couple of breaks and lunch to meet people and share some information. I met Mark Tomlinson, finally, who has consistently been an encouraging voice during this podcast; I met people who wanted to know more about podcasting, and my podcast in particular. I met Leigh, whose talk for the Manchester meetup I’d seen a video of, and encouraged him to go start his own podcast, because podcasts are great.

And the effects linger, even from only a short time there – I’ve written this episode, I wrote the skeleton of a talk for testbash Manchester and submitted my proposal on Saturday. I have no idea whether I’ll get in or not; to be honest I’m trying to put it from my mind otherwise I’ll drive myself up the wall second guessing myself. If I don’t get it, I still have an idea I think I can develop further.

The energy I’m currently feeling is amazing, and that’s why I love going to events like this. I can’t wait til testbash comes to Manchester!

Ep 42: Always Look on the Bright Side of Life

Okay, so on the day this comes out I will be travelling down to testbash, armed with my mic, laptop, business cards, and stickers! I am so excited, I’ll be bouncing throughout the four-hour train journey down. I’ve also never been to Brighton before, so I’m hoping to get some walking on the beach and exploration done. Still hoping to get some field recording done, so come find me! I’ll be the short, wide-eyed girl with dark purple hair, wearing a geeky shirt.

I like to think I’m pretty good at pointing out the positives of the work I test, and the people I work with, as well as logging bugs or pointing out issues. I always feel that passing a story isn’t as good a thing as logging a bug is a bad thing, if that makes sense? Passing a story happens without fanfare unless I make fanfare. And I don’t want fanfare for every story passed – sometimes there are small ones that are routine, and that would take away from the larger work, but when something has been tricky, or when it works particularly well, or is something truly custom, then I’ll praise the work that’s been done.

I’ll also mention passing stories as well as bugs in the scrum and flag up anything particularly good there.

In fact, generally I try to put good points forward first, and this is because I know if I don’t I’ll get distracted by bugs and other things and end up just not vocalising these compliments, and I want to be positive, because no one likes being told only the bad things.

This is a thing I learned when I proofread fiction for a friend of mine – I’d get so lost in the things I’d picked up that I wouldn’t say what I liked, and that was not the best way to give feedback. So I try to start with the good.

I also don’t want to be overbearing and condescending, but even a ‘looks good, not seeing any obvious bugs’ is enough. I’m so used to that look of fear/resignation on devs’ faces when I walk over that sometimes it’s nice to meet that with: ‘no, this looks good, I just need to ask a question’.

And sometimes the devs compliment me! Just last week I was giving a status update and mentioned a small bug I’d found and the dev who’d done the work said ‘Ah, didn’t think of that – good catch’. And it’s a good feeling, and part of being a good team. You’re working together, not against each other.

Ep 41: Friends don’t let friends use IE6 and Ep 40 revisited

I want to revisit mobile testing as I realised I focused a lot on choosing the correct devices and tools and then was just …’and then you test stuff? Websites are easy to test LOL’, which is a little reductive, so I’m gonna talk about actual testing. I think the reason I skipped over it is that my input into mobile stuff generally comes in at requirement capture. We’ve had more than one client completely forget that you can’t hover on a mobile device, so if they want that feature, we’ll have to consider how to implement it on a mobile device in a way that makes sense.

There are also things to consider like how user journeys and expectations change on mobile devices. You need to make sure links aren’t too small or close together, and you need to make sure you don’t rely on hovers or dragging, or anything gesture-y that might either go against what gestures mean on a phone or that people would have issues carrying out on a phone screen. Swiping is about as complex as you want to go outside of games, I think.

I also missed some great tools you can get from using emulators – and emulation of the touch screen is one of them. If you can’t test on devices, then you can use Chrome’s dev tools, and the cursor will be replaced by a small circle to show the touch area.

I also like the network and location emulation. You can enter a latitude and longitude and the emulated phone will act like you’re in that spot, for any geo-location dependent features you may need (good for looking at shipping on shop sites).
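If you ever want to script those overrides rather than set them by hand in dev tools, Chrome will take the same DevTools commands from Selenium. A rough sketch, assuming a reasonably recent Selenium’s Python bindings plus ChromeDriver – the coordinates and URL are just examples, not a real site:

    from selenium import webdriver

    driver = webdriver.Chrome()

    # Pretend the browser is in Brighton for geo-location dependent features
    driver.execute_cdp_cmd("Emulation.setGeolocationOverride", {
        "latitude": 50.82,
        "longitude": -0.14,
        "accuracy": 100,
    })

    # Simulate a slow connection for the network side of things
    driver.set_network_conditions(offline=False,
                                  latency=300,                     # ms
                                  download_throughput=250 * 1024,  # bytes/sec
                                  upload_throughput=100 * 1024)

    driver.get("https://example.com/store-finder")  # hypothetical shop page
    # ... check shipping options / delivery estimates here ...
    driver.quit()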

Footnotes:
https://dojo.ministryoftesting.com/series/mobile-software-testing

Part two, and we hit the desktop versions.

You can’t fully support every browser; it’s like trying to hit WCAG AAA – it’s impossible for all areas of a site or application, and even then you’ll still miss some users. So you hit the most you sensibly can, and then try to degrade as gracefully as possible. As long as people can use it on IE7, does it matter if it’s not as pretty as on IE11, or Firefox?

Cross-browser testing is important because people have their own weird setups (one of my coworkers installed Fedora on his Mac because he couldn’t get on with OS X, for example), and so you want to make sure as many people as possible can access and use your site or app.

We’ve been revisiting our browser support policy in light of Microsoft kicking most versions of IE out of support, and from reviewing the stats it does look like most people using IE are on IE11. Oddly enough, IE8 is the next most popular IE version, but that’s only 2% market share, and no longer supported, so that’s definitely in the ‘gracefully degrade’ category of browsers.

Global Stats
http://gs.statcounter.com/#browser_version_partially_combined-ww-monthly-201501-201601

And market share is how you define the browsers you’re willing to support. You can check on a global scale, and, if you’ve got access to a current site with Google Analytics, you can pull data that’s directly relevant to the client to put a proposal together.

There’s more than just browser versions and browser/OS combinations to think of; there are things like HTML standards – which browsers support which parts of CSS3/HTML5? What about JS? Webfonts? OH GOD. It’s a pain in the arse, but it’s got to be done.

So once we’ve got a policy of browsers we will fully support, and ones where we’ll ensure the system works and is usable but may not look as good, then I can base my testing on that policy.
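As a toy illustration of turning market share into that kind of policy, you could split browsers on a usage threshold – the figures below are invented, not real stats:

    # Invented usage figures (%), keyed by browser/version
    usage = {"Chrome 48": 38.0, "Firefox 44": 12.0, "IE 11": 10.0,
             "Safari 9": 8.0, "IE 8": 2.0, "Opera 35": 1.5}

    THRESHOLD = 5.0  # full support above this, graceful degradation below

    fully_supported = sorted(b for b, share in usage.items() if share >= THRESHOLD)
    degrade_gracefully = sorted(b for b, share in usage.items() if share < THRESHOLD)

    print("Fully support:", fully_supported)
    print("Degrade gracefully:", degrade_gracefully)

In practice the client’s own analytics data would feed those numbers, not a global table.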

You can save more time by only testing on the latest versions of Chrome and Firefox, as these are usually upgraded automatically, and even then, unless someone has dug out a really old version, the differences are minimal. It’s only Safari and IE where we have to take note of the version.

Again, Ghostlab can be used to test multiple browsers in one go. I have some issues using it with a VM, so testing IE on my Mac with it is a pain, but even if I just test Opera/Chrome/Firefox/Safari in one go, then do IE separately, it still reduces the workload a good amount.

I have two strategies for cross-browser testing. I test each story individually on browsers – it’s rare that anything other than visual bugs comes out of this testing – and then, towards the end of the project, I do a couple of test sessions just to give the site a full integration check, and I’ll do cross-browser and device testing then. This is just to catch the bits of integration or visual bugs that may have been missed while testing individual stories, and it makes sure everything is more polished.

Like with device testing, I tend to screenshot and make notes as I go then create issues/write up afterwards.

Footnotes:
http://apps.testinsane.com/mindmaps/uploads/Cross%20Browser%20Compatibility%20Testing%20Basics%20-%20TestInsane%20-%20Santhosh%20Tuppad.png

Ep 40 – All the small things

Can I say it’s one of my least favourite parts of testing? Absolute pain in the arse. We’ve been revisiting our devices and device testing policy (we revisit both ad hoc, when something new is announced, and on a regular basis regardless). We’ve had some interesting bugs from users on an old Nokia Lumia, or clients using an older version of Android and a stock browser.

Not only do you have the various combinations of hardware, screen size, OS, and browser; on a mobile device you’ve also got things like networks and connectivity to consider.

There are tools like Browserstack which give you all the combos you could need, plus mobile emulators for all of the major mobile operating systems out there. Now, I always feel a bit weird testing in emulators. I know Browserstack says that the emulators are the same as actual devices [1], but they’re clean installs on devices. They aren’t devices that are used on a regular basis, visiting many sites and downloading things, so I do wonder how realistic they are. But I also realise that realism is hard to get in devices, so you can hit most issues using these tools.

There are also tools like Ghostlab, which allows you to hook up multiple devices and browsers, and have the site you’re testing displaying on all of them at the same time. They are linked, so selecting a link on the android phone you’ve got hooked up presses the same link on the other devices, and browsers you’ve got hooked up. This means you’ve got the snapshot of realism in testing a selection of real devices, but without needing a day to do device/cross browser testing.

Combined I think this is a decent testing strategy. But how do you decide what devices you need?

For phones it’s pretty easy. Apple plus Samsung is a hefty chunk of the smartphone market, then you can add some of the rarer devices in (an HTC Windows phone, for example), and you’re done on the phones. We’ve got a couple of iPhones and Samsungs for greater coverage.

Then a few tablets, following the same methods as above.

Android and iOS versions are another story. iOS has a decent matrix for what versions are supported on what devices [2], and Windows is similar, with IE linked to the OS version (though they’ve gone from version 8 to version 10 to match their Windows PC version numbers, which is annoying), but Android has different stock browsers, and different devices get version updates at different times. I have a Wileyfox that runs bloody CyanogenOS, so I’m part of the problem. You can do the market research to see what sort of devices are hitting your site, or use something like Browserstack’s guide to testing on the right devices [3].

We also get some feedback during UAT, as clients will test using whatever devices or browser/OS setup they have, and then we can decide whether it’s something that we missed and need to fix, or if it’s not in our list of setups we’ll support.

So that’s tools sorted, what about the actual testing?

We generally do wireframes and then component-based designs. So we’ll have a styletile that has examples of all the elements – menus, logos, blocks – and then wireframe the pages using desktop/mobile breakpoints to illustrate layout. This is what I’ll use to visually check the designs on different devices.

As I test websites, not apps, there is less for me to worry about functionality-wise. I don’t have to worry about security, or access to various parts of the mobile device, or push notifications, or things like that. I need to test whether the website looks and works as expected on a smaller viewport, with different interactions.

That’s not to say it’s easy, or unimportant – last year Ofcom released a report saying that smartphones were the most popular way to access the internet in the UK [4], so it’s important that users can get to your site on those platforms.

Once you’ve decided on what you’re going to test on, and how you’re going to get the devices (actual or in the cloud), you then need to decide how to actually test.

You can apply whatever heuristic you normally use for testing on the web to mobile web testing, with some extra stuff for touch and gyroscopic testing. Session-based testing works best with this – taking notes and screengrabs as you go along and then creating bugs as needed afterwards rather than as you go.

Footnotes

[1]https://www.browserstack.com/question/476
[2]http://iossupportmatrix.com/
[3]https://www.browserstack.com/test-on-the-right-mobile-devices
[4]http://www.theguardian.com/technology/2015/aug/06/smartphones-most-popular-way-to-browse-internet-ofcom

Ep 39 – I Can Teach You, But I’d Have To Charge

Okay, a few things before we get started:

I am going to be at the pre- and post- testbash meetups, come at me testing folks!
I will be bringing my mic along to testbash, if people want to be on the show! I can do full interviews or shorter vox-pop style segments, so hit me up if you want in on it! I’m also talking to Mark Tomlinson and others regarding a podcast mashup, so I’ll keep you up to date on things in the works there!
I finally got Selenium (both WebDriver and IDE) installed, so I’m going to start playing around with that and get started on my 2016 goal of getting some automation under my belt. I’m armed with some resources from Richard Bradshaw [1][2] and I am ready to go!
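For the curious, the kind of first script I’m aiming at is nothing fancy – something along the lines of opening a page in WebDriver and checking its title (the site here is just a placeholder):

    from selenium import webdriver

    driver = webdriver.Firefox()   # or webdriver.Chrome()
    driver.get("http://example.com")
    assert "Example Domain" in driver.title   # a very basic check
    driver.quit()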

And now, on to the show!

At the moment I’m doing some brief preparation to have a couple of apprentice developers shadow me for a while as part of their training. It’s hard to know what to show them, and I want to make sure that I’ve got both planning and testing work I can do to show them the breadth of work the QA/test team does; this will also give them an overview of how we use Jira.

But they’re very new to all this stuff, so I want to make this more broadly useful for them – not just showing them what I do and what tools I use, but how and why, and therefore help them to become good developers who do some of this themselves and also understand the worth of QA/test beyond breaking shit. Though I will also show them how I apply the knowledge I have to the system to try and break it.

My first port of call was Katrina’s pathway for non-testers [3]. I’ll send this link to the department head in charge of their training, but for my session, I’ll use two or three specific parts, depending on how much time I have and how long we spend discussing these ideas.

The first thing I want to use is a blog post around the concept of testing being a team responsibility [4]. At our place, the devs own the automated checks – they add acceptance checks (based on the AC) for all stories, and run those before a story comes to me. If they don’t pass, they fix them; then it’s ready for me to test. This means there are very few circumstances where I reject a story based on the AC – either something has been missed, because people are human after all, or there’s something in the AC that is 1) too complex to write automated checks for or 2) something the automated checks can’t cover (visual stuff, or really custom things).

This means that the devs have some ownership and investment in their code and the tests.
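As a rough illustration (not our actual checks), an automated acceptance check tied to a single AC line might be as small as this – the AC, site URL, and redirect path are hypothetical, though the login path is the usual Drupal one:

    from selenium import webdriver

    # AC (hypothetical): an anonymous user visiting /admin is sent to the login page
    driver = webdriver.Firefox()
    driver.get("https://test-site.example/admin")
    assert "/user/login" in driver.current_url, "anonymous user should be redirected to login"
    driver.quit()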

The next thing I’m going to use is a PDF that covers sources for test ideas [5], because ‘where do you even start testing?’ is something I’ve heard. This document is a list of things to consider when starting to test a feature, and covers everything from the obvious (the capabilities and purpose of the system) to the more obscure (rumours around what’s causing the dev team issues, domain knowledge, etc.).

This will also help explain why I’m involved in planning sessions, and in scrums that have no relevance to me – in the first few scrums of a sprint I have nothing constructive to add, more often than not, because it’s rare that stories have come to me by then. But I’m still part of a team, and I’m still involved, and I learn a lot. Knowing what’s causing the devs issues means I know what work might need extra testing, and may be risky. It’s both a way for me to plan my work, based on when I’m told to expect to get stuff sent to me, and recon for what I might face come testing.

The last thing I want to talk to them about is testing in an agile environment [6], which is useful, I think, to all members of an agile team, especially people new to agile. It covers how agile applies to a tester specifically, but the idea of agile is to promote team cross-functionality, and so knowing this stuff is useful. It’s also good to know what to expect from working with testers outside of bugs, like being proxies for stakeholders and users, which will be useful.

I don’t think I’ll get through all of it, and it may be slightly overwhelming, but I’m hoping it will be mostly useful, and I can give them resources to learn from and help them become good developers, and that’s what we’re here for.

I think that’s a decent enough intro for people who aren’t testers and presumably have little interest in becoming testers, but I’d be interested in hearing what people with more experience of this kind of thing think. Do you teach people differently? Have I missed anything?

Footnotes

[1]https://dojo.ministryoftesting.com/lessons/podcast-richard-bradshaw-test-automation
[2]http://www.thefriendlytester.co.uk/search/label/automation
[3]http://katrinatester.blogspot.co.uk/2015/11/testing-for-non-testers-pathway.html
[4]http://testobsessed.com/2011/09/testing-as-a-whole-team-activity/
[5]http://thetesteye.com/posters/TheTestEye_SourcesForTestIdeas.pdf
[6]https://dzone.com/articles/agile-testing-principles

eps1.38_d3bug.mp3

Has everyone watched Mr Robot? It’s a great Amazon Original series about a hacker who works for a security company and he [SPOILERS]. I’m going to assume you have. If not, you should go watch it now, and all you need to know for this episode is that in episode three the protagonist monologues [1] throughout the episode about bugs and the nature of them, and because originality is for losers, I’m gonna use part of that monologue here to do some of my own monologuing. Monologue is no longer a word. MONOLOGUE.

The blog post will have the whole monologue but I’m not going to read it out. A lot of this is for dramatic effect, plot, and the nature of the character, but there are some points there.

A bug is never just a mistake.
It represents something bigger.

Okay, so sometimes a bug is just a mistake – mistakes happen, people are human, etc. However, sometimes it’s worth just checking in and seeing if there is an issue. Sometimes the issue is just a typo, a distraction, something simple. Sometimes it could be a symptom of something deeper – fatigue, inexperience, maybe even something more serious, like a health issue. It’s worth the time to check in, even if it’s with yourself, if you find you’re making mistakes. It could mean anything from taking a break and getting a drink or some fresh air, to more substantial action.

When a bug finally makes itself known, it can be exhilarating, like you just unlocked something. A grand opportunity waiting to be taken advantage of.

I assume this is actually more like finding the cause of a bug for a developer, right? I mean, finding a bug isn’t a huge revelation to me, but figuring out the pattern of a weird bug, or getting to grips with a particularly complex piece of functionality, feels like a win. And having a piece of work come to me and work as I expected it to also feels like a win.

It did take me a while to divorce not finding any bugs from productivity in my mind – if I’m not finding bugs, how do I prove my worth? But I’ve realised that the work I do upfront reduces bugs in a way, or at least reduces ambiguity, which leads to fewer issues from the client, and that’s part of my worth.

Bugs are useful, or they can be, but I don’t think finding bugs is the only way I can be valuable.

The bug forces the software to adapt, evolve into something new because of it. Work around it or work through it.
No matter what, it changes.
It becomes something new.
The next version.
The inevitable upgrade.

I just really enjoy this section – the idea that the bug forces the software to evolve, to work around it, like Darwinism in binary. The inevitable upgrade – code is never finished, it’s only ready to ship, then we continue working and ship, work and ship, a constant building, improving, changing of code. And that’s why we test, right? To ensure that the constant evolution is a good thing, not something likely to end up on WTF Evolution [2].

I just like the idea of bugs being good things; even if I don’t celebrate finding them. I like the philosophical approach, even if the cause does end up being a simple, silly mistake (because this isn’t tv, not everything has a deep meaning, nor is it burdened with glorious purpose).

It’s a good episode of a good series, and I highly recommend it.

Footnotes

[1]http://www.springfieldspringfield.co.uk/view_episode_scripts.php?tv-show=mr-robot-2015&episode=s01e03
[2]http://wtfevolution.tumblr.com/

Ep 37: Making the most of it

This job would be great if it wasn’t for clients, amiright?

One of the first challenges on a client project is scoping out what the client wants. What counts as a must have to go live, and what counts as a nice to have in the future. We start with a minimum viable product, and then go from there, based on budget, time, etc.

To do this we need to get to know the client, the business, the stakeholders. Their day to day needs as well as their plans for the future. We have the technological background and ideas we can bring to the table, but we need to ensure our suggestions are relevant and useful.

We build a relationship to build and hand over a good project to them. We do this in a number of ways. When the commercial team hands a project over to production, we get a handover that gives us an idea of how the commercial team sees the project, and any relevant documentation; pitches, etc. We get an overview of who our contact points are, a feel for what the client expects, and what the relationship is like.

We then set up workshops with the client. These are whole or almost whole day affairs that bring the production staff and client together to meet and get to know each other.

The hardest part of getting to know a client’s business is getting the details that the client feels are unneeded or obvious. They sometimes feel that they don’t need to state what they see as obvious. Most of the time we catch it, because sometimes it is obvious, but sometimes it’s not, and things get a little hairy.

Talking about all this plus an intro to how we work, and who the project team are can take all day, so after all this, we go away. We design a homepage, do some wireframes of internal pages, and any particular functionality (checkout pages, landing pages etc), start populating the backlog and getting the first sprint of basic stories together. We can do this fairly early as the basics are fairly generic depending on the type of project – the base install and theme always has to happen, then content types, the ecommerce side, things like that go into sprint one.

Then we have our second workshop, reviewing design, wireframes, and our plan. We email and talk between these, but the workshop is where we expect to make the most headway.

The wireframes and designs almost always change but we do them so we’ve got a jumping off point. We can illustrate our thinking and see where the gaps are. The wireframes are good to illustrate high level functionality, and help the client envision how information will look and flow. They’re a visual prompt for questions and discussion. This helps us get more granular information and fill in the gaps between our ideas and their needs.

At this point we expect to be in a good place to refine the backlog and tickets more, and start the process of signing off AC for the first sprint, and start planning the second sprint. The client knows the team they’ll be working with, and we’ve started to build that relationship of trust that’s needed when building a project for someone.

We plan a lot upfront because it’s worth it: we get good relationships with our clients, and the whole team has input into a project as early as possible. This means we’ve all bought into the project, and are invested in it. The client and stakeholders aren’t faceless, and neither are we. We’re a team, making the best product we can.

Ep 36: Life Is A Lemon and I Want My Money Back

https://www.youtube.com/watch?v=BF1wVv8OnfE

The final bit of debt I want to talk about is testing debt. (Part one and part two covered the other kinds.)

I’m going to split this up into two sections: ‘bad’ testing, and delayed testing.

So ‘bad’ testing, for me, is testing where I’ve not had the time, space, or ability/skill to fully test a feature. This could be caused by many things:
No time to cross-browser test – especially if I need to test specific browsers for the stakeholders (versions of IE, for example).
No time to cross-device test.
No time to do exploratory testing, only testing to the AC – sometimes exploring can be done while testing the AC, but if I don’t get time to fully explore and just pass it on the basics, then it’s not really tested properly; the balance between exploring and signing off is skewed too much towards quick testing.
The insidious lack of care that I mentioned a couple of weeks ago – if the devs don’t care, then I won’t feel motivated to point out issues to them.
Not having the context I need to feel fully connected to a project, so I can’t fully tell if the feature meets the needs of the client, or fits in with their branding, etc.
Not having the ability/skill to test fully – this could be missing knowledge of the system, or things like accessibility testing, performance testing, etc.

Delayed testing is when the story is shipped without testing, the testing is moved to a separate task, or simply pushed back to the next sprint. Delayed testing can also be caused when you’ve got all the integration testing as well as stories in a sprint to do, in the same amount of time.

Sprint one: 30 stories
Sprint two: 30 stories (each with integration testing)
Sprint three: see above.

Most of the time this is doable, especially with automated regression checks and testing alongside the development process, and closely to developers, as opposed to siloed testing. However, it does add up, and it’s something to take into account.

One way of tackling bad debt is getting a feel for what adds value to a project – what the business wants to get from the feature or project as a whole, and what the stakeholders see as important. You can then focus testing around those areas. This has a dual effect: it brings more value from testing, as you’re testing what’s highest risk or most important, so you cover the most bases efficiently; and even if you are pressed for time, you still feel like you’ve contributed something to the project.

Testing debt is the one I’m very familiar with; I’m more confident in my skills, but still learning, and there are huge gaps in my knowledge, especially around automation. I find it really hard to test a project I have no context for, and even if I don’t miss anything, I still feel like I’ve only tested the feature or issue shallowly. But I’m learning, and learning how to stay out of debt, as I become a tester proper.

Footnotes

http://thetesteye.com/blog/2010/11/turning-the-tide-of-bad-testing/