Ep 72: Mayday! Mayday!

Today, in the first of what will probably be a semi-regular series, ‘How is [career] like testing?’, we cover Air Crash Investigations.

Investigating what has caused an accident/bug

  • First action?
    • Find the black boxes! (logs/monitoring)
    • Collect all of the evidence (effects of the bug, clues in data that led to it, eye witness accounts)
    • Come up with hypotheses from the data and attempt to disprove them
    • Simulate the problem/Steps to reproduce
    • Sometimes it’s not possible to identify the exact problem; in that case, give a summary of findings and suggest any possible improvements

Implement new procedures

  • Providing evidence and explanation for new procedures
  • A tester/investigator only provides recommendations, they do not implement them

Post-incident investigation and action

  • Mentality
    • Focus on facts
    • Thinking outside of the box, outside of the immediately obvious
    • Gathering as much information as possible – even if it appears irrelevant – then assessing with all the information
    • Coming up with hypotheses and attempting to disprove them
    • Being unafraid of uncovering the truth, despite politics

  • Needing to work fast
    • Evidence may disappear over time
    • A need to quickly discover if there are any fatal design flaws that could affect other similar planes
  • Always considering human factors
    • Considering the mental state of those involved
      • What were they thinking?
      • Were they stressed or under pressure? Did they have other motivations which drove their decisions?
      • What was their experience?
    • Psychology, sociology
      • Seniority affecting juniors?
      • Popularity (“no one questions this person”)
    • Cultural or language barriers
      • Tenerife accident – “OK” being used both to acknowledge receipt and to give clearance.
      • Korean Air Cargo Flight 8509 – communication breakdown because the junior officers wouldn’t take control from the captain.

Not like testing?

  • It’s only investigating incidents, not being directly involved in the process of building
  • We have to explore what hasn’t happened yet and try to create the problems, as well as triage and investigate them.

Some anecdotes…..

Test environment not the same as production

Atlantic Southeast Airlines Flight 2311

  • Propellers got stuck in a position that caused drag
  • Was tested for this scenario, but only on the ground in a lab
  • Accident happened because it behaved differently at altitude, in the air
  • The investigator (tester) felt sure this was the cause, but had to prove it with a live test that shocked the engine suppliers

Checklists – how difficult it is to write them

1999 South Dakota Learjet crash

  • Checklist for depressurisation warning started with diagnosing the problem
  • By the time the crew got to the check about donning their oxygen masks, they had already lost their ability to think
  • Checklist should start with donning masks no matter what

The interaction of humans with automation – people tend to assume software is smarter than it really is

Asiana Airlines Flight 214

  • The plane crashed, landing short of the runway
  • Caused by the pilots inputting the wrong setting into the autopilot
  • Pilots were trained to let the automation assist them
  • This broke down when they gave the automation the wrong command and they didn’t understand its behaviour
  • Lesson here on abstracting important procedures and relying on automation
  • Automation cannot think or second-guess, but people can believe it’s more intelligent than it really is

Ep 70: The Novella Of Testing

This week I am talking to Mel Eaden AKA Mel the Tester!

This is gonna be a two-parter as we spoke for a while about many, many things, so part one today, part two next week, then back to our normal fortnightly schedule the week after that. That’s right, three episodes in three weeks! That’s how much we love you, listeners <3

As well as being a tester, Mel organises the Ministry of Testing’s writing community, which is an awesome place for anyone who wants to write, edit, proofread or in some way contribute to the Dojo on the Ministry of Testing. If you’re interested, ping Mel on either the Testers.io or Ministry of Testing Slack, or email melthetester@gmail.com.

The basis of this episode was the tension between testers being storytellers and needing to be concise, and it grew out of this blogpost from Mel:

http://testingandmoviesandstuff.blogspot.co.uk/2016/10/my-november-goal-and-now-for-rest-of.html

My personal goal for November is to answer a question first and then ask if the person wants more of a story or explanation of why I gave the answer I did. This is my personal goal based on feedback. I was made aware of the roundabout way I approach my answers, thinking I have to give an explanation up front for everything I’m going to say when a person doesn’t have context.

This intrigued me – I often feel like I’m just spaffing on at people, and figuring out what’s too much info and what’s the info they actually need is something I think about a lot, and so I reached out to Mel to be on the show.

We also discuss when soapboxing is needed, testers as journalists and translators, and ‘technical testing’.

TMI?

  • Balance between context and not blinding people with details
  • The issue is how do you know if they have enough context?
    • Ask them if they need more info
    • Could point them towards a document/place/email with more context if possible?
  • Cultural habits – people get stuck in conversations they can’t seem to get out of. “If you stand there like you don’t have somewhere else to go, I’ll keep talking.”
  • Getting annoyed with the short version of the answer – I keep asking questions because I want to have the conversation and flesh it out completely when someone is explaining something to me. I like details, no matter how long it takes to get them.

Bug demos

  • Agreed there are some bugs that don’t need demoing
  • Hard to know which ones without domain knowledge sometimes, but you can learn where the line is on the job – see what prompts devs to come over to you to ask
  • Stopping the demo train – call a meeting? Demo to everybody at once? Ask what’s missing from the defect? Email chains?

Storytellers

  • I have a habit of overthinking decisions and so arming myself with lots of reasons why, which then turn into a massive response when someone asks ‘so what are you doing with x?’ when all they needed was my decision
  • Over-thinking the solution
  • Big picture thinking vs small picture thinking (Hammer/Nail problem for some people while for me it’s more “Why do I have a hammer?” or “Are there only nails?”)
  • Using examples and stories in writing (my current business card says Quality Storyteller)

Ep 69: Talking ‘Bout My Generation

AKA: Test Idea Generation

News! I have a Patreon. This covers both LTATB and my other, new podcast Inner Pod. If you do want to Patreon the show specifically you can choose the LTATB tier, and then the money will go firstly to hosting costs, then to equipment for me and Matt, travel, etc.

Testsphere cards are here! I’ve shown my pack to a few people and they’ve wanted to know where they can buy them. Well you can back them here and you should get the cards around March!

Now, onto the show!

This week we’re discussing test idea generation. Mindmap can be found here in various formats:

.docx
Mindmup

We talk about:

Where to start? Talk to people!

  • “Just play with the product” seems like an easy answer, but do you always know how to do this? Maybe it’s not the best place to start…
  • You can start by talking to people – specifically customers about how they use the product, and project managers about the risks they’re concerned about or what their expectations are. You could even talk to other testers who might already have a lot of test ideas to hand!
  • Talking to customer support can also help give you a feel for what problems are regularly plaguing customers – not to mention problems that cost the support team time too!

Where next?

  • Observe usage: watch an end user use the product, listen to sales calls and what people might be looking for, or look at logs and monitoring to see how people might be using the system.
  • Read user guides or documentation of the product, or try and write some! Writing documentation can help you think about further test ideas as you think about how to explain the product’s behaviour.
  • Maybe there are laws, regulations or standards that the product has to adhere to? These can generate ideas for tests, such as ideas around how the product might not comply with them or could be interpreted in different ways.
  • Existing test documentation such as test cases or mind maps can help provide ideas too!
  • Sometimes looking at existing bug reports or customer complaints can provide ideas that you may not have thought of.
  • Consider what kind of testing you are doing. Are you performing mainly functional testing? Could you apply a different lens and think from a different perspective, such as performance, security, usability or accessibility? What about localisation or compliance?
  • Considering boundary testing specifically can help generate ideas as we tend to overlook this quite often.

Test Community

  • You can find test ideas around the community such as great cheat sheets like this one.
  • There are also lots of testing mnemonics.
  • Attending meetups and conferences can help provide new ideas to try in your work.
  • Naturally, reading blogs and listening to podcasts can also help!

I’ve honestly tried everything I can think of? What else is there?

  • Maybe you can learn more about the technology that the product is built upon. Understanding how it works underneath can provide ideas (such as the boundary testing example, if you knew that the product relied on a database that was set to only allow 255-character strings – there’s a minimal sketch of this after this list).
  • Some ideas can come from the peculiarities or weaknesses of the technologies underneath. An obvious example of this is SQL injection with MySQL databases.
  • Different technology comes with different risks, understanding more about these definitely helps guide your testing too. For example, with a system that depends upon message queues, what happens when the queue is full? What happens if the queue is not available?
  • What about pair programming or code reviewing with developers? Understanding specifics regarding how the product has been developed can help inform you of areas of weakness and has other benefits too.
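
As a concrete version of that boundary example, here’s a minimal pytest sketch. Everything in it is hypothetical – save_comment stands in for whatever function sits in front of that 255-character database column:

```python
import pytest

# Hypothetical stand-in for the real persistence layer: pretend the
# database column is a VARCHAR(255) and reject anything longer.
def save_comment(comment: str) -> str:
    if len(comment) > 255:
        raise ValueError("comment exceeds 255 characters")
    return comment

@pytest.mark.parametrize("length", [0, 1, 254, 255, 256])
def test_comment_length_boundaries(length):
    """Probe the edges: empty, minimal, just below, at, and just above."""
    comment = "x" * length
    if length <= 255:
        assert save_comment(comment) == comment
    else:
        # Expect a clean validation error, not silent truncation.
        with pytest.raises(ValueError):
            save_comment(comment)
```

The interesting cases are 255 and 256 – right on and just over the limit – which is exactly where off-by-one bugs and silent truncation tend to hide.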

Ep 68: Automatic for the people

This week I talk to Angie Jones about automation!

We talk about:

Getting into Automation and the different mindsets

  • Automation seems like a black box at times; especially moving from testing a GUI (Graphical User Interface) to testing the underlying code, it feels like a mental shift. Even to back-end developers, automation can seem like a black box.
  • What “test automation” is depends on the context you’re in. Are you scripting scenarios? Are you building GUI test automation? Web service test automation? Mobile test automation?
  • Is there an automation mindset? It’s a bit of both, a hybrid of a tester’s mindset with a developer’s skillset. In building test automation, I still need to keep my tester’s hat on – figuring out how to best find information and provide that to the team. I need a developer’s skillset to understand how all the pieces work together and what tools are available to me to build automation.
  • Automation can’t explore; it mainly re-tests known functionality, like a sanity check.
  • Google Chrome dev tools are a great place to start for understanding what is going on under the GUI, such as being able to inspect the HTML and the elements that make up the page. This can help you build GUI test automation but also help your exploratory testing.
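
As a taster of what building on that looks like, here’s a minimal GUI automation sketch using Selenium in Python. The URL and selectors are made up for illustration – they’re exactly the kind of thing you’d find by inspecting the page in dev tools first:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Hypothetical login page; inspect the real page in dev tools
    # to find the actual element IDs and selectors.
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("test-user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    # A crude check that we landed somewhere sensible.
    assert "Welcome" in driver.page_source
finally:
    driver.quit()
```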

How to know where to start with automation

  • Managers tend to decide that they want to automate all of the regression testing – but how do you know where automation fits? There is a cost to automation in the same way there is for development code. Try to think more strategically: what should you automate? What is risky to you or your customers? Those are good things to build automation scripts around. Also, things that take your testers a long time to do, such as data setup, can be automated.
  • In encouraging people to shift left and get more involved with the writing of code or automation tests, we’re pushing everyone to be able to write code, and this isn’t necessarily everyone’s interest or strong point. There are so many things that go into automation engineering that testers can contribute to. Monitoring the test runs and whether they are passing is something testers can help with – testers already have skills in looking at logs and working out what is going on. Getting involved in looking at automation test results is taking that a step further.
  • You can start getting better at reading code and helping review developers’ code – starting to continuously test earlier in the process.
  • If testers were more involved in the triage and investigation of bugs with automated tests (as opposed to the product they are checking), they could help guide developers on which automation tests may need to be improved. For example, a test that regularly fails might need changing, or even throwing away if it is costing a lot of time in maintenance.
  • Testers can also gain information by investigating the failures that are occurring with automation. A test that regularly fails in a particular area of a product can be a useful signal of where to focus further exploration (see the sketch after this list).
  • You shouldn’t just rely on the licensed tools that you’re provided with at work; figure out what you need and look to get it. There are plenty of open source tools out there and there’s a great community who are vocal about their usage of tools and who are willing to help if you’re stuck. You can also contribute fixes to open source tools and help improve them for others. The testers.io slack team is a great place to start asking.
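
As a rough illustration of the triage idea above, here’s a toy Python sketch that flags flaky versus consistently failing tests from run history. The data is invented – in practice it would come from your CI system’s results:

```python
from collections import Counter

# Invented run history: test name -> recent pass/fail outcomes.
history = {
    "test_checkout_total": ["pass", "fail", "pass", "fail", "pass"],
    "test_login":          ["pass", "pass", "pass", "pass", "pass"],
    "test_search_filters": ["fail", "fail", "fail", "fail", "fail"],
}

for name, outcomes in history.items():
    failure_rate = Counter(outcomes)["fail"] / len(outcomes)
    if 0 < failure_rate < 1:
        # Intermittent failures suggest the test itself is flaky:
        # improve it, or throw it away if maintenance costs too much.
        print(f"{name}: flaky ({failure_rate:.0%} failures) - triage the test")
    elif failure_rate == 1:
        # Constant failures point at the product (or a dead test):
        # a good signal for where to explore next.
        print(f"{name}: always failing - investigate the product here")
```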

Should automation code be peer reviewed?

  • Should automation code be peer reviewed by developers or testers? Absolutely – automation code is still code, and it still needs to be done well. It makes a world of difference in terms of code quality. People don’t seem to treat their automation code as well as their production code, but they should treat it just as seriously, so that you don’t end up with flaky or false-positive tests.
  • Code reviews are also useful for mentoring, so you can provide feedback on how code can be improved or made cleaner or even re-used for other work.
  • By encouraging developers to review and care about automation code, it encourages empathy, which allows them to understand what you might be struggling with and how they can change the product to become easier to automate around.

Useful links

  1. Angie’s twitter
  2. Angie’s blog
  3. testers.io slack team
  4. Google Chrome dev tools

Ep 67: Diving Deep Into Domains

This week I talk to Lisa Crispin about domain knowledge, and the value testers can provide here.

I love getting deep into domain knowledge, whether that be internal or with external clients, and I really feel this is where testers can provide some great insights and value.

What?

Get deep knowledge about the area you’re working in – whether that’s specific to your place of work, or more broad. In my case, my domain knowledge is the CMS we work with, and general web dev knowledge as well as specific knowledge about the processes and tools used within our company.

How?

  • Being embedded in a team makes it easier to cross role boundaries
  • Another plus for testers being in the team, not silos
  • Working on planning/requirement gathering also makes it easier – have an eye on the business requirements and needs of biz stakeholders
  • Can find allies who are often left out of the process/overlooked and work together
  • Ask questions: can I sit with you for a bit and you help me with this/explain this to me?

Why?

  • Can find others the issue affects if trying to change things, and enlist them – show budget implications to the PM, usability/customer satisfaction implications to marketing people
  • Correcting misconceptions can be helpful (sales people finding out the actual process for example)
  • Less seriously, you get a reputation – I get told a lot that I’m the person who knows things at work, because I’m all over the place

Ep 66: When all you have is a hammer

So I am a manual tester. I’ve dipped my toes into automation, and I’ll continue to do so, but for now, I’m a manual gal. I enjoy exploring a system, figuring out what works and what doesn’t. But that can only take you so far.

Sometimes I feel like I’m so far behind everyone else. I hear people talk about automation in terms I can’t get my head around. I hear people talking about looking at logs and all this stuff I can’t do. Pair that with the shift towards technical testers, and shift right, and I started to panic a little. Panicking gets me nowhere – I have years of practice to know that – so I tried to take a step back and see what I could do. If I have a future in testing, I need to try and catch up.

I’m probably not going to be managing builds and releases to QA any time soon, but what I can do is dig into the system in front of me. I’m already pretty familiar with the systems we use, and what the most obvious issues are caused by, but there’s always more to be done, more to learn.

I saw a post a while ago – maybe even last year – on Chrome developer tools for testers. I was familiar with them already: viewing elements, using the emulator for mobile devices. But I knew there was more in there I could utilise. I decided to start with dev tools as they’re something I can learn by myself; I don’t need to request any structural changes to how we work in order to use them and learn what I can find out with them. Once I know what they can do, what I can use them for, and by extension, what else would be useful, then I can make further progress with more technical stuff.

Baby steps, is what I’m saying.

I’m starting with what I know, and what I have forgotten I know, and I’ll do a part two (or three) of this work as I learn more.

I’m hoping to do this more often next year: I pick a topic, do either an episode or a segment on what I know, and what I’ve chosen to learn, I learn it for 2 weeks, do a retrospective. If I think it’s worthwhile, or feel I need more time, then I carry on for two weeks. Rinse, repeat. Then I move onto another topic.

So, onwards into the world of dev tools. I’ll be talking Chrome here, as that’s what I use, but I know Firefox, IE, and other browsers have similar tools available.

Inspect

This is perfect for figuring out what that weird-ass element on the page is that’s causing issues. You can also live edit the page here: if you click on an element, you can change the HTML or the CSS.

Device emulator

When you’ve got dev tools open, there is an icon that looks like a phone in front of a tablet. Clicking this will start a device emulator. You can change the device type to see how the page looks on different devices, from BlackBerry phones to laptops with a touch screen. Now, this isn’t perfect by any means, but if you’ve not got access to devices or BrowserStack, it’s a good start. It’s also good for grabbing screenshots when getting one from a device to your computer is a pain in the arse.

Things you can also do here: internet throttling. Want to see what happens when your site or app is viewed on a device with 2G, or GPRS? Click the list of devices at the top of the screen, go to edit, then click throttling in the resulting settings screen. Boom, you don’t have to leave the office to figure out if the app’s not gonna work on a train.
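
If you later want to script the same thing, Selenium’s Chrome driver can apply network conditions too. A rough sketch, with illustrative numbers and URL:

```python
from selenium import webdriver

driver = webdriver.Chrome()
# Roughly 2G-ish conditions: high latency, tiny bandwidth.
driver.set_network_conditions(
    offline=False,
    latency=800,                    # extra round-trip latency in ms
    download_throughput=50 * 1024,  # bytes per second
    upload_throughput=20 * 1024,
)
driver.get("https://example.com")   # illustrative URL
driver.quit()
```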

Geolocation

Location spoofing in Chrome: a thing you can do!

So, under the Sensors tab, there is a Geolocation option: This will spoof your location to a number of preset ones around the world, and a location not found option. Useful for checking localisation without the need for an offshore person.
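
The same spoofing can be scripted: Selenium’s Chrome driver lets you send the underlying DevTools Protocol command directly. Coordinates here are illustrative (roughly Berlin):

```python
from selenium import webdriver

driver = webdriver.Chrome()
# Override the browser's reported location before loading the page.
driver.execute_cdp_cmd("Emulation.setGeolocationOverride", {
    "latitude": 52.52,
    "longitude": 13.405,
    "accuracy": 100,  # metres
})
driver.get("https://example.com")  # any page that asks for your location
driver.quit()
```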

Network

There’s a network tab, and this shows how long things take to load. Refresh the page you’re on and the network tab will fill with what’s being loaded on the page, and how long each item takes. Wondering why a page is suddenly taking an age and a half to load? The network tab is your friend.
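
The numbers the network tab visualises are also exposed to JavaScript via the Navigation Timing API, so you can pull them out programmatically. A minimal sketch (URL illustrative):

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # illustrative URL
# performance.timing holds the same timestamps the network tab shows.
timing = driver.execute_script("return window.performance.timing.toJSON()")
load_ms = timing["loadEventEnd"] - timing["navigationStart"]
print(f"Full page load took {load_ms} ms")
driver.quit()
```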

I’m gonna leave this here. As I said, I’ll be learning more over the coming weeks, I’ll report back if I find anything particularly nifty. If anyone knows any resources, ping them my way. I feel weird announcing that I basically know nothing about technical testing, but I figure I can’t be the only one feeling like this, so let’s learn stuff together.

Want to learn more? There’s a great course here: http://discover-devtools.codeschool.com/

Ep 65: A Gordian Knot of Numbers

Or: This week Matt and I talk ourselves in circles about metrics.

Opening thoughts:

  • Why measure? Because everyone wants to improve.
  • What to measure? Very very difficult to measure “soft skills” like testing.
  • Several perspectives on this.

Matt:

I’ve worked in places where they counted bugs and I’ve made the mistake of counting test cases.

Further thoughts:

  • Do we have the power to change the need for overly simplistic metrics?
  • Asked for metrics but allowed to choose what to measure?
  • Try to identify why they want those metrics and use critical thinking to explain why those metrics are flawed.
  • Counting bugs – project that is near completion vs a new project?
  • Counting test cases – no test case is equal, all written differently. What is a “test case”?
  • To the junior testers – help your test lead
  • To the test managers – don’t rely on metrics
  • Consider the danger of KPIs, what behaviours do you want to encourage?

Testing could be measured by:

  • Are customers happy?
  • Is the product selling?
  • Are there very few critical issues found in production?
  • Are budgets and schedules being met?
  • Are developers happy?
  • Are product managers happy?
  • Are testers happy?
  • If you have to count bugs then can you add some context to those metrics?

Asked around the testers.io slack team:

  • Feelings of metrics not making sense for individuals, only departments
  • Could measure the % of production deployments without a critical defect reported within 24 hours (or a week)? A rough sketch follows below.
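
As a back-of-the-envelope version of that last idea, here’s a Python sketch computing the metric from invented deployment records (in reality these would come from your deploy and defect trackers):

```python
from datetime import datetime, timedelta

# Invented records: when each deploy went out, plus report times of any
# critical defects raised against it.
deployments = [
    {"deployed_at": datetime(2016, 11, 1, 9, 0), "critical_defects": []},
    {"deployed_at": datetime(2016, 11, 2, 9, 0),
     "critical_defects": [datetime(2016, 11, 2, 15, 30)]},
    {"deployed_at": datetime(2016, 11, 3, 9, 0), "critical_defects": []},
]

window = timedelta(hours=24)
clean = sum(
    1 for d in deployments
    if not any(t - d["deployed_at"] <= window for t in d["critical_defects"])
)
print(f"{clean}/{len(deployments)} deployments "
      f"({clean / len(deployments):.0%}) were critical-defect-free for 24 hours")
```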

Ep 64: This Modern Testing

This week I’m talking to Andy Tinkham about his testing philosophy: Modern Testing.

You may be familiar with this already: Andy has been on both Test Talks and The Testing Show talking about it. I highly recommend both those episodes if you’ve not already listened to them.

That’s where I first heard about Andy’s philosophy, which has 4 pillars at its core:

  • Context First
  • Testers are not Robots
  • Uses Multiple Lenses for Test Design
  • Providing Useful and Timely Information

This philosophy also has a challenge: Write your testing manifesto. Define what testing is or isn’t to you. Publish it if you want. Get feedback, get challenged, find your own path. I’ve published mine below.

The angle I wanted to tackle was my thoughts on the philosophy as a new tester. For new testers, defining what you do can be intimidating: I feel like I’m on the cusp of understanding what testing is to some extent, but it also feels almost ineffable. The pillars here are a great starting point – the details can differ depending on context, but the names are good guidance.

It also provides a framework for learning the more ‘fuzzy’ parts of testing: looking for context, thinking outside of test cases, learning mnemonics and how to apply them, methods of efficient communication, etc.

All in all, it’s a great philosophy, and a great conversation. I hope you enjoy it!

Gem’s testing manifesto challenge response:

This was hard, but interesting: I have absorbed so much about testing in such a short space of time that, while I had loads of ideas about what testing was, I hadn’t truly considered what I thought testing was and how I practised it. I think it’s really invaluable to do that, even if you never share it with anyone.

My attempt, in bulleted form:

  • Context is important
  • Providing value is the most important thing
  • The above two will be almost constantly changing
  • Avoid absolutes (except this one)
  • Communication is key
  • Everyone is human
  • Vulnerability can be powerful
  • Ask the stupid questions
  • Knowledge is our greatest tool
  • Balance learning and applying that learning
  • Breathe
  • No, I don’t break things

Ep 63: Testbash bonus!

Or Testbash Manchester!

That’s right, the Testbash review episode you didn’t know you needed. I’m here with Matt Bretten to discuss the whats, whens, whos, and hows of what can only be described as one* of the best weekends of the year. This is as close to ‘live’ as we get here at LTATB towers, so enjoy!

Testbash had a one-day traditional conference, then one day of open space where the schedule was made by the attendees. I had a spot pre-selected thanks to Richard Bradshaw, where I spoke about helping testers in a bad situation. See my mindmap here. Matt led a great discussion on Test in DevOps, which had a lot of food for thought.

We discuss all aspects of the conference, from the pre-meetup to the open space, plus 99-second talks, and a diversion on bread nomenclature.

*There are many testbashes. Who are we to rate them against each other?