Ep 92: Everybody’s a critic

This week it’s just me, talking about critiquing my own testing! I’ve been doing this a lot in my new job – figuring out the priorities of the team, where my focus needs to be, and where this intersects with my weaknesses and biases. It’s been really interesting, and I’ve been trying to be conscious about it.

And here’s the mindmap I used for this episode:
Critiquing your own testing

And a text version:

  • Critiquing your own testing
    • Plan your testing
      • Tell team what you want to test
      • Tell the team what you think you need
      • Get feedback
    • Put on different hats
      • Customers
      • Back-end users
      • Marketing
      • Support
      • Security
      • Accessibility
    • Document what you have tested
      • This will help illuminate gaps
    • Look at the big picture
      • Where does this feature fit?
        • in the sprint?
        • In the workflow?
    • Review with as many people as feasible
      • Design
      • UX
      • PO
      • Devs
      • Customers
      • Business users
    • Use heuristics
      • Elisabeth Hendrickson’s list
    • Take notice of bugs that others find
      • Own your weaknesses
        • Checklists
        • Learning

Ep 91: Stop, collaborate and listen

This week I have two guests! Maaret and Franzi come to talk to me about the call for collaboration for the European Testing Conference!

The call for collaboration is a fascinating way to build a conference, where all participants get a 15 minute call with two members of the team, where they can tell their story, get real time feedback, and improve on their idea. Maaret and Franzi talk about what they learned from the process, their most memorable people, and what they wish to see at testing conferences!

The European Testing Conference is at the Amsterdam Arena, Amsterdam, Netherlands on Feb 19th-20th, 2018

Franzi is involved in Software Crafters, and Maaret blogs at A Seasoned Tester’s Crystal Ball

Ep 90: New job weirdness

This week we talk about new job weirdness! This was recorded about a month ago, so some of our points may no longer be true of our workplace, but it’s still useful info!

Here is a list of things we mention/thought about mentioning:

  • In jokes/rituals/other
  • Trying to move into existing groups/teams
  • Figuring out who does what/who to ask for what
  • Office feel – casual, professional/reserved
  • Language and words
  • New starter guides/documentation
  • Finding lunch buddies
  • History of testers at the company
  • What have I been hired for? Why do people want testers or me specifically? What is my mission?
  • Learning the history of the product
  • What doesn’t get said – processes, ways of working, business as usual
    • E.g. release processes
    • E.g. job roles (what is a “tester” here? What are “product owners”, “agile coaches”, “business analysts”, etc)
  • Management & metrics (“you should email so and so when we do this”, “you need to fill in these Jira fields”)
  • Managing your own natural instincts and biases
  • Just because things appear terrible to you, doesn’t mean they are (“omg you release without testing? Omg you’re writing user stories about database tables? You don’t walk through the board during stand-ups?”)
  • You don’t know the past yet – this might be a really good place compared to the even worse position they’ve improved from
  • Ideas or experience you’ve had in the past don’t always apply in every context
  • I always fall back to “what is the problem?”, rather than worry about how or what people are doing, even if I believe it will lead along a bad path. I try to tell myself I will learn from everyone and my opinions can always be changed.
  • But if there is an obvious problem and I think I know how to fix it or how to diagnose it, I try to be diplomatic and try to avoid being too direct.
  • You have to trust people at first and work with some assumptions
  • The nice thing is that the more places I work and the more teams I work with, the more common aspects I see. It’s just that they sometimes get buried in the specifics of process or problems, and you have to try to see through that.

Ep 88: There’s a hack for that

This week I talk to Jahmel (Jay) Harris. Jahmel is a Penetration tester/Security consultant at Digital Interruption. He also runs Manchester Grey Hats.

  • Things to consider before starting security testing
    • App permissions?
      • Information users need to give the app
      • Push notifications?
        • Fine usually, but be aware if anything sensitive is sent – shoulder surfing
  • Wearables
    • New ways of interacting with devices
    • They are becoming more secure but issues at the start
    • With Android we found lots of ways to recover the data
    • Bluetooth LE and other radio protocols can be insecure.
  • Testing considerations iOS vs Android
    • Root vs non root
    • Jail break vs non jailbreak
  • Common vulnerabilities
    • WebViews
    • Sensitive data over HTTP
    • JavaScript vulnerabilities – it used to be possible to get a full shell in an app via an advert in a WebView, or over coffee shop or hotel wifi
    • How secure are WebView frameworks such as Cordova?
  • Vulnerable IPC (Inter-process communication)
    • Things like SQL injection or file traversal
    • Lack of protection/permissions
  • Logging
  • Auth
    • Fintech (financial tech) app – an attacker could have stolen all the money, because they didn’t think about the auth on their web services
  • Binary Checks
    • Is it worth checking for root detection, doing SSL pinning, etc.? It took someone over a year to bypass these controls on one of our clients’ apps. Only then could they start looking for vulns.
    • Obfuscation – is it worthwhile? When I did the research into Android Wear, it took me weeks just to reverse engineer (RE) it.
    • These checks stack – easy to bypass one, but hard to bypass all. Think about the risk of the app. Does it need that protection?
  • Tooling
    • Drozer
    • Needle
    • Frida
    • decompilers
  • Automation
    • Tooling isn’t quite there. There needs to be a big push by both devs and infosec. InfoSec can’t write good code but devs aren’t always aware of the latest threats.
    • Security shouldn’t be dev->pen test. Security needs to be considered at every stage. In requirements gathering etc
      • https://www.digitalinterruption.com/secure-mobile-development – reduce the cost of pen testing
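The SQL injection point under IPC is worth a concrete sketch. This is my own hypothetical illustration, not from the episode, using Python’s sqlite3 to stand in for an app’s local storage: the classic difference between string-built SQL, which is injectable, and a parameterised query.

```python
import sqlite3

# In-memory database standing in for an app's local data store
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (owner TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('alice', 'secret'), ('bob', 'hi')")

def get_notes_vulnerable(owner):
    # String concatenation: attacker-controlled input becomes SQL
    query = "SELECT body FROM notes WHERE owner = '" + owner + "'"
    return [row[0] for row in conn.execute(query)]

def get_notes_safe(owner):
    # Parameterised query: input is bound as data, never parsed as SQL
    return [row[0] for row in conn.execute(
        "SELECT body FROM notes WHERE owner = ?", (owner,))]

payload = "bob' OR '1'='1"            # classic injection payload
print(get_notes_vulnerable(payload))  # leaks every row, including alice's
print(get_notes_safe(payload))        # matches no owner at all
```

If the vulnerable query is reachable over an exported IPC endpoint, any other app on the device can pass that payload in – which is exactly the kind of thing tools like Drozer probe for.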

Ep 87: Players gonna play play play play play

Podcast news: Friend of the show, Neil Studd has a new podcast! It’s called Testers Island Discs and there’s an intro episode out tomorrow: https://twitter.com/TestersIsland?lang=en

Now, on with the show.

So this is all Richard Bradshaw’s fault.

He came and did a talk at the BBC back in…August, I think? And there was a slide in it about the phrase ‘I’ll have a play’ and how that isn’t what testers do, and it undersells our skills. He said (and I’m paraphrasing here as it was months ago) that we could write a charter for that session and having a play becomes exploratory testing.

I kind of disagreed when I first heard that, because to me, a charter for Exploratory Testing is specific. There’s an aim, or something specific being tested: a certain area, or a certain user journey, or even a user journey as a specific kind of user (a bad actor for security testing, or a visually impaired user, or similar).

When I ‘have a play about’, I am doing just that – there’s no real charter. There are a couple of reasons I ‘play about’:

I’m getting familiar with a product
I’m not confident that I’ve tested everything but I’m not sure what I’m missing

The first one is pretty easy – I’m just seeing what I can see without getting too much help – what makes sense going in cold, what could be improved, are there any quick wins, things like that. I could definitely wrap that up in a charter and a time block and call that exploratory testing.

But the second one is much more nebulous. I don’t know what I’m looking for, other than a sense of being more comfortable with my testing. It usually happens when I’ve tested a story and not found any issues, but I’m not happy. I’ll go make a cup of tea, head back to my desk, and start looking around a bit wider. Sometimes I think up a scenario for the original story that I’d missed previously, sometimes I find completely unrelated issues, and sometimes I get a sense that I understand the system a little more. It’s a lot harder to wrap that in a charter when I don’t know why I’m not happy. Sometimes I can’t formulate the words for the bits I’m not happy with.

Does this make any sense? Do other people have this?

But at the same time, I’m not just clicking about, am I?

That’s what it feels like, but it’s not.

There’s context there – pre-existing domain knowledge, or client knowledge, or team knowledge. There’s knowledge of what apps or websites or programs are meant to do or how they’re meant to look. It’s knowing which browsers are relevant, and which are awkward to work with. I may not be able to put words to why I’m not happy, but if I let myself wander, a little defocused, and aimless, I might find something.

It’s playing around with all the knowledge of being a tester in the background.

And that was a weird thing to realise.

I had split my work mentally into structured testing that was in a testing mindset, and just playing about, which was freeform, nothing intense, more of a safety net, or a get-to-know-the-product thing. I think it’s because it’s fun as well – it’s fun to explore a system and check out all its nooks and crannies, and I think I’d separated that out from structured, goal-driven testing?

But it’s not really separate. Playing about, clicking about, when you’re a tester, is testing. It might not be structured even as much as exploratory testing is, but it’s still testing. And you could probably wrap it up in a mission, maybe break it down into charters, and put a time limit on it. Then you’re doing exploratory testing. It’s like the difference between science and dicking around is taking notes. The difference between playing about and exploratory testing is writing a charter.

The next time I have a play about, I’m going to write a charter or a mission (even if it’s just ‘get more comfortable’), and take notes. This will also hopefully improve my notetaking skills, which is a bonus as my notes have gone to shit recently.

So, I hate to say it, but I guess I agree with Richard now. I really enjoy the phrase ‘playing about’ or ‘noodling about’ because it’s fun, and I think testing can be fun, but I also see how much it undersells. I might start switching it out for ‘I’ll have an investigate’ or ‘I’ll take a look’, as that still suggests more structure and skill.

Ep 85: Sound Effects and Overdramatics

This week is the Manchester Testbash crew!

Matt, Claire, and I talk about our experiences of putting our first successful conference submissions together and how we’re preparing. We’re planning to do a part two post-Testbash to reflect on the days themselves.

You can find Claire on twitter and she writes for Ministry of Testing.

First, what we’re doing at Testbash:

Matt is giving a workshop(!) on APIs for beginners
Claire is giving a talk about her personal experience with imposter syndrome
I’m giving a talk on mental health and anxiety in testing

Tickets are still available!

Step one: Submission

Where can I start to gain confidence and experience before submitting?

  • Write blogs but don’t publish them
  • Take something you’ve read/watched/heard and create a small talk to a couple of people at work that you feel comfortable presenting to
  • Lightning talks/90 second talks
  • Attend meetups and just talk to other people – you will have something to share and you will be surprised at how different your experience can be.
  • Build up to bigger talks – present a talk to more of your company, or a local meetup (plenty of meetups that encourage new speakers).
  • If you get the opportunity – go to a peer conference
  • Outside encouragement – people will cheerlead you if you want to get into public speaking

What to submit

  • Technical talks
  • Overcoming a challenge / solving a problem
  • Personal experience report
  • Workshop

Where to submit

  • Consider whether you will have to pay to speak
  • MoT Open CFP
  • Other conferences which have a deadline for submissions
  • Smaller events like Leeds Testing Atelier
  • At home or abroad?
  • Will your employer allow you to attend or will you need to book time off?
  • Do you know anyone who has spoken at an event you are interested in?
  • What was their experience like?
  • Can you submit the same talk to multiple conferences?

How to put a submission together

  • What do I want to say
  • Why do I want to say it
  • What will other people get out of listening to me

Step two: oh god, what have I done (putting the talk/workshop together)

  • Scripted talk or more free-form?
    • Notes
  • Avoid slides that you just read out
  • Using keywords or images as prompts

Step three: Practice AKA the countdown

  • Talk through it yourself – friends / family etc
  • What reads well on paper isn’t always easy to say. Better to compromise your language so that it flows naturally than to trip over pronunciations.
  • Practice at smaller events / internal company audience
  • Resources if you are new to speaking (e.g. James Whittaker videos: one, two)

Episode 82: You’ve Got All These Great Answers

To All These Great Questions

WE’RE BACK and with a bumper episode of Let’s Talk About Tests, Baby.

Matt and I have both gone for interviews, so we decided to do an episode about it. We talk about applying for jobs, sussing out the language of job adverts, CVs and cover letters, and the dreaded interview. Matt also has experience on the other side of the table, so he brings that insight to the episode as well.

Mindmap
Interview MindMap

Things we mention:

BugFinders
uTest
http://www.testingeducation.org/BBST/
http://www.satisfice.com/info_rst.shtml
http://www.istqb.org/
http://www.askamanager.org/

Episode 81: It’s the small things that count

This week I talk to Andrew Morton about unit tests!

We cover the basics of unit tests: what they are and what they do. We also cover unit tests as documentation, pitfalls to avoid, and some tools you might see in the wild.

References and resources we mention:

https://github.com/testingchef/bad-units – A repo of bad unit tests
https://www.youtube.com/watch?v=q8I8-hVS5JI – Andrew’s Whiteboard Testing video on unit tests as documentation
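To give a flavour of the ‘unit tests as documentation’ idea from the episode, here’s my own small sketch (not from the show, and `apply_discount` is a made-up function): well-named tests with a clear arrange/act/assert shape tell a reader how the code is meant to behave without opening the implementation.

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, rejecting nonsense percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    # Each test name documents one behaviour of apply_discount

    def test_ten_percent_off_reduces_price(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_percent_over_100_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest`. Read top to bottom, the test names alone describe the contract – which is the documentation effect Andrew talks about.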