Ep 90: New job weirdness

This week we talk about new job weirdness! This was recorded about a month ago, so some of our points may no longer be true of our workplace, but it's still useful info!

Here is a list of things we mention/thought about mentioning:

  • In-jokes/rituals/other
  • Trying to move into existing groups/teams
  • Figuring out who does what/who to ask for what
  • Office feel – casual, professional/reserved
  • Language and words
  • New starter guides/documentation
  • Finding lunch buddies
  • History of testers at the company
  • What have I been hired for? Why do people want testers or me specifically? What is my mission?
  • Learning the history of the product
  • What doesn’t get said – processes, ways of working, business as usual
    • E.g. release processes
    • E.g. job roles (what is a “tester” here? What are “product owners”, “agile coaches”, “business analysts”, etc)
  • Management & metrics (“you should email so and so when we do this”, “you need to fill in these Jira fields”)
  • Managing your own natural instincts and biases
  • Just because things appear terrible to you, doesn’t mean they are (“omg you release without testing? Omg you’re writing user stories about database tables? You don’t walk through the board during stand-ups?”)
  • You don’t know the past yet – this might be a really good place compared to the even worse position they improved from
  • Some ideas or experiences that you have had in the past don’t always apply in every context
  • I always fall back to “what is the problem?”, rather than worry about how or what people are doing, even if I believe it will lead along a bad path. I try to tell myself I will learn from everyone and my opinions can always be changed.
  • But if there is an obvious problem and I think I know how to fix it or how to diagnose it, I try to be diplomatic and try to avoid being too direct.
  • You have to trust people at first and work with some assumptions
  • The nice thing is that the more places I work and teams I work with, the more common aspects I see. It’s just that they get buried in the specifics of process or problems sometimes, and you have to try to see through that.

Ep 89: Tubular bells and whistles

This week I talk to Abby Bangser about pipelines!

This episode is based a bit on a workshop that Abby and previous guest Lisa Crispin will be giving at ETC next year.

Things we cover:

  1. Brief definition of and difference between:
    1. Continuous Integration
      1. The practice of merging all developer working copies to a shared mainline several times a day
        I think it was Jez Humble who likes to ask a set of questions when discussing continuous integration. He has everyone who is doing CI raise their hands, then asks an array of questions: anyone who has branches that live for more than a week, lower your hands; anyone who does not merge back to master at least daily, lower your hands; anyone who does not run automated tests on every check-in, lower your hands. When I saw him do this about a year ago, he said it was a much better turnout (maybe 50%?) than when he first started doing it years ago.
    2. Continuous delivery
      1. The practice of treating every commit as if it could be pulled into a release to production at any time.
        The challenges here are that if any commit could go to production, you need to be able to have WIP safely committed. This usually starts the debate of branching vs toggling, and toggling usually gets a big boost when looking at continuous delivery, for two reasons: 1) sending fixes through a branching scheme requires those issues to be fixed on trunk as well as on the release branches, with the attendant risk of regression; and 2) the ability to turn off new features through flexible toggles is one way to handle release risk.
    3. Continuous Deployment
      1. The practice of pushing any proven check-in to production without manual intervention. The biggest point to make here is that we are not just pushing anything to production: we are pushing only fully tested and fully proven changes. This can include automated security, performance, and accessibility testing, as well as automated documentation and auditing functions.
        While zero-downtime deployment strategies can be employed at any time, when choosing to follow continuous deployment they become a necessity.
  2. Key differences
    1. They kind of build on each other. There is some great work by Maaret Pyhäjärvi back in 2014 about how regular (though maybe not continuous) deployment can be possible with some manual testing included (http://visible-quality.blogspot.co.uk/2014/04/continuous-releases-are-way-forward.html). In any case, it is important to remember the reason for each of the practices: CI is to get fast feedback and high collaboration, delivery is to think sustainably and provide a faster MTTR (mean time to recovery), and deployment is to remove technical limitations on the business goal of releasing value to the end user on specific/quick timelines.
  3. What do we mean by pipeline?
    1. Pipelines: Dictionary style: a direct channel for information / a process or channel of supply
    2. Tech style: progressively giving feedback to the team and visibility into the flow of changes to everyone involved in delivering the new feature/s.
    3. So really, the hidden value is when you treat them more like sifters. When trying to sort through stones, you can use a set of sifters of different sizes until what is left is the stones that pass properly through each sifter or “test”.
  4. And where are they hiding in your project work?
    1. Yes we of course have the major one…Idea to production, but realistically isn’t it split?
      1. Idea to ready for dev
      2. Development to prod
  5. Why identify them?
    1. Before you can optimise, you have to know where bottlenecks are (great example in the Phoenix Project). Use long standing value stream mapping techniques to identify stages, actors, queue times, rework etc to identify which parts of the pipeline have snags and then look to zoom in and figure out how to improve them.
  6. How are they useful for teams, and testers in particular? My thoughts here are about testers who may previously not have been involved in the release of software at all, may be looking at growing into DevOps/CD/CI, and are not sure why they should be looking at this
    1. Absolutely! Pipelines are about reducing risk. If when you get the application into “QA” environments you could already be sure that it is properly configured, passing basic regression tests, able to connect to external dependencies, etc, you could look to focus on the more interesting risks.
    2. Why go through the (sometimes painful) exercise of deploying an application to a test environment if you could have found out that it was not passing basic validations ahead of time?
    3. Look at current feedback loops and try to evaluate if your sifters are out of order. Or maybe the feedback loops test completely different things (rather than increasing in risk/complexity/cost) and you can find ways to parallelise. An example could be doing static analysis reviews while also running unit tests, rather than waiting until static analysis is done to kick off unit tests.
    4. They can work like sifters making things more and more refined along the process
    5. Find pain points in the pipeline and then gather information about that pain point
    6. Even if you can’t fix the issue, gathering information is an important step to go towards fixing or improving the issue
    7. Talk to people! The pipeline is made of people, so find out what people do, what they want to do, what they need, and what they find difficult or a blocker. They will be happy to share!
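The sifter idea above can be sketched in code. This is a hypothetical illustration, not a real pipeline: the stage functions are stand-ins for real linters, test runners, and deploy scripts. Cheap checks run first (in parallel where they cover different risks, as in the static-analysis-plus-unit-tests example), and a change only reaches the expensive stages once it has passed through every earlier sifter.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stage functions: each returns True on success.
# In a real pipeline these would shell out to a linter, a test
# runner, a deploy script, etc.
def static_analysis(): return True
def unit_tests(): return True
def integration_tests(): return True
def deploy_to_staging(): return True

def run_pipeline():
    # Cheap, independent checks run in parallel – sifters don't
    # have to be strictly sequential if they test different risks.
    with ThreadPoolExecutor() as pool:
        fast_checks = [pool.submit(static_analysis), pool.submit(unit_tests)]
        if not all(f.result() for f in fast_checks):
            return "failed: fast checks"
    # Later stages are coarser, more expensive sifters; only
    # changes that passed everything before them get this far.
    for stage in (integration_tests, deploy_to_staging):
        if not stage():
            return f"failed: {stage.__name__}"
    return "ok"

print(run_pipeline())  # → ok
```

Swapping any stage function to return False shows the fail-fast behaviour: the pipeline stops at the first sifter the change doesn't pass through.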

Other things we mention:
Kim Knup on testing in a CD environment.

Abby’s blog

Ep 88: There’s a hack for that

This week I talk to Jahmel (Jay) Harris. Jahmel is a Penetration tester/Security consultant at Digital Interruption. He also runs Manchester Grey Hats.

  • Things to consider before starting security testing
    • App permissions?
      • Information users need to give the app
      • Push notifications?
        • Fine usually, but be aware if anything sensitive is sent – shoulder surfing
  • Wearables
    • New ways of interacting with devices
    • They are becoming more secure but issues at the start
    • With Android we found lots of ways to recover the data
    • Bluetooth LE and other radio protocols can be insecure.
  • Testing considerations iOS vs Android
    • Root vs non root
    • Jail break vs non jailbreak
  • Common vulnerabilities
    • WebViews
    • Sensitive data over HTTP
    • JavaScript vulnerabilities – it used to be possible to get a full shell in an app via an advert in a WebView, e.g. on coffee shop or hotel wifi
    • How secure are WebView frameworks such as Cordova?
  • Vulnerable IPC (Inter-process communication)
    • Things like SQL injection or file traversal
    • Lack of protection/permissions
  • Logging
  • Auth
    • Fintech (financial tech) app – you could steal all the money; they didn’t think about the auth on their web services
  • Binary Checks
    • Is it worth checking for root detection/doing SSL pinning etc? It took someone over a year to bypass these controls on one of our clients’ apps. Only then could they start looking for vulns.
    • Obfuscation? Worthwhile? When I did the research into Android Wear, it took me weeks just to reverse engineer it.
  • They stack. Easy to bypass one but hard to bypass all. Think about the risk of the app. Does it need that protection?
  • Tooling
    • Drozer
    • Needle
    • Frida
    • decompilers
  • Automation
    • Tooling isn’t quite there. There needs to be a big push by both devs and infosec: infosec folks can’t always write good code, and devs aren’t always aware of the latest threats.
    • Security shouldn’t be dev -> pen test. Security needs to be considered at every stage, in requirements gathering etc.
      https://www.digitalinterruption.com/secure-mobile-development (https://goo.gl/P1WYcV) – reduce the cost of pen testing
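The SQL injection mentioned under vulnerable IPC is the same class of bug wherever it appears; on Android it typically shows up when a ContentProvider splices caller input straight into a query. Here's a minimal, hypothetical illustration using Python's sqlite3 (not Android code) of why parameterised queries close the hole:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# UNSAFE: attacker-controlled input spliced into the query string.
evil = "' OR '1'='1"
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + evil + "'"
).fetchall()
print(unsafe)  # → [('admin',)] – the bogus name still matches everything

# SAFE: a parameterised query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (evil,)
).fetchall()
print(safe)  # → []
```

The same principle applies to the file traversal issue in the list: validate or canonicalise caller-supplied paths rather than concatenating them into file operations.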

Ep 87: Players gonna play play play play play

Podcast news: Friend of the show, Neil Studd has a new podcast! It’s called Testers Island Discs and there’s an intro episode out tomorrow: https://twitter.com/TestersIsland?lang=en

Now, on with the show.

So this is all Richard Bradshaw’s fault.

He came and did a talk at the BBC back in…August, I think? And there was a slide in it about the phrase ‘I’ll have a play’ and how that isn’t what testers do, and how it undersells our skills. He said (and I’m paraphrasing here as it was months ago) that we could write a charter for that session, and having a play becomes exploratory testing.

I kind of disagreed when I first heard that, because to me, a charter for Exploratory Testing is specific. There’s an aim, or something specific being tested: a certain area, or a certain user journey, or even a user journey as a user (a bad actor as security testing, or a visually impaired user or similar).

When I ‘have a play about’, I am doing just that – there’s no real charter. There are a couple of reasons I ‘play about’:

  • I’m getting familiar with a product
  • I’m not confident that I’ve tested everything but I’m not sure what I’m missing

The first one is pretty easy – I’m just seeing what I can see without getting too much help – what makes sense going in cold, what could be improved, are there any quick wins, things like that. I could definitely wrap that up in a charter and a time block and call that exploratory testing.

But the second one is much more nebulous. I don’t know what I’m looking for, other than a sense of being more comfortable with my testing. It usually happens when I’ve tested a story and not found any issues, but I’m not happy. I’ll go make a cup of tea, head back to my desk, and start looking around a bit wider. Sometimes I think up a scenario for the original story that I’d missed previously, sometimes I find completely unrelated issues, sometimes I get a sense that I understand the system a little more. It’s a lot harder to wrap that in a charter when I don’t know why I’m not happy. Sometimes I can’t formulate the words for the bits I’m not happy with.

Does this make any sense? Do other people have this?

But at the same time, I’m not just clicking about, am I?

That’s what it feels like, but it’s not.

There’s context there – pre-existing domain knowledge, or client knowledge, or team knowledge. There’s knowledge of what apps or websites or programs are meant to do or how they’re meant to look. It’s knowing which browsers are relevant, and which are awkward to work with. I may not be able to put words to why I’m not happy, but if I let myself wander, a little defocused, and aimless, I might find something.

It’s playing around with all the knowledge of being a tester in the background.

And that was a weird thing to realise.

I had split my work mentally into structured testing, which was in a testing mindset, and just playing about, which was freeform, nothing intense, more of a safety net or a get-to-know-the-product thing. I think it’s because it’s fun as well – it’s fun to explore a system and check out all its nooks and crannies, and I think I’d separated that out from structured, goal-driven testing?

But it’s not really separate. Playing about, clicking about, when you’re a tester, is testing. It might not be even as structured as exploratory testing, but it’s still testing. And you could probably wrap it up in a mission, maybe break it down into charters, and put a time limit on it. Then you’re doing exploratory testing. It’s like how the difference between science and dicking around is taking notes. The difference between playing about and exploratory testing is writing a charter.

The next time I have a play about, I’m going to write a charter or a mission (even if it’s just ‘get more comfortable’), and take notes. This will also hopefully improve my notetaking skills, which is a bonus as my notes have gone to shit recently.

So, I hate to say it, but I guess I agree with Richard now. I really enjoy the phrase ‘playing about’ or ‘noodling about’ because it’s fun, and I think testing can be fun, but I also see that it means a lot. I might start switching it out for ‘I’ll have an investigate’ or ‘I’ll take a look’, as those still suggest more structure and skill.

Ep 86: Don’t be afraid to catch feels

I’ve been thinking a lot recently about human interaction with computers, the way humans connect to and feel about the tech in their lives, and how we evoke emotions using technology (both purposely and incidentally).

I’ve started work at the BBC, for the BBC Taster website, which serves experiences like a journey of a family from Syria to the UK, or an interactive graphic novel that has you making choices that will change the way World War 2 plays out (highly recommend this one, it’s still available, and I’m not saying I fucked it, but the Americans nuked the Germans in my attempt, so see if you can do better than me). These are obviously intended to make you feel things, and that’s important to look at and consider, but I’m also intrigued by the mundane, day-to-day aspects of human emotion in tech.

I recently backed an app called Aloe – it’s based on an online service that offers a check-in and prompts you to do some self care. Seemingly obvious things like taking a walk, drinking water, or a reminder to take your meds.

The app will allow you to set your own self care goals, things that may not be obvious, or work for everyone, but you know work for you – for me it would be to go to spin class once a week, as exhausting myself at least once a week is great for my mental health.

One thing the online version prompts you to do is to pick a plant that represents how you feel that day – things like a succulent, or a cactus, or a sunflower. And that’s great, sometimes you don’t want to acknowledge your feelings enough to put a word to it or can’t boil it down to words, but looking at a series of images and choosing one that speaks to you is pretty awesome.

Another thing I’m interested in is the esoteric: how tech is replacing or augmenting aspects of spirituality in certain spaces. Leigh Alexander has spoken about this – about how humans compulsively sweep their thumbs over the screen as if their phone were a modern-day religious tablet or token bringing luck or solace in anxious times (also, come talk to me about emoji as sigils in modern spellcasting, or predictive text as tarot). Humans cling to superstition, to things that reduce anxiety (regardless of how healthy that may be).

Binky, the entirely fake app that provides swiping and refreshing without any actual connection, started as a satire of people’s need to be connected, to refresh, but people have found it really useful for curing their need to swipe. A placebo, if you will, that you can wean yourself off, as you’re literally not missing anything.

I’m not entirely sure where I’m going with this. As we put more and more of our lives into tech, we put our feelings there as well, and that’s something we should be thinking about.

You might not be working on an evocative project, but your users and your team will still feel something about it. Maybe the best thing you want from your app is for your users to feel that small satisfaction of something going smoothly – a tick off a to-do list. But if that means users have more time everywhere else, that will elicit a good feeling. Maybe you want people to engage in something that isn’t fun or interesting, but necessary (if I could have an app that tells me which bin I need to put out each week, I’d be happy with that. I should set up calendar notifications, but my council keeps changing my bin collections. Yes, I do sound like a Daily Mail reader, what of it?).

I guess, my challenge to you is: think about what you’re working on right now. I bet you know a fair amount about the purpose and use cases for it. But how will it make people feel? Excited? Anxious? The small and seemingly inconsequential satisfaction of something going smoothly or being completed easily?

Think about the current state of your work: Is it making (or going to make) users feel the way you want it to?

Next: what do you feel about it? Happy? Angry? Why? What can you do to change or amplify or even just maintain these feelings? What feelings are users going to have coming into your app or project, and what should you do about those?

I advocate stepping back from work a lot, as for the most part the bugs and the work don’t ‘belong’ to us; they belong to the product owners, the business. But sometimes thinking about how it makes you feel can open up new avenues of investigation, or new insights into your emotional state, and that’s something that is useful to take note of. We get so used to the “Monday: boo, friYAY” cycle that sometimes I think we forget that we spend a lot of time at work, and maybe we should consider whether we’re actively unhappy more often than not.

Basically: FEELS. Think about them <3

Ep 85: Sound Effects and Overdramatics

This week is the Manchester Testbash crew!

Matt, Claire, and I talk about our experiences of putting our first successful conference submissions together and how we’re preparing. We’re planning on doing a part two post-Testbash to reflect on the days themselves.

You can find Claire on twitter and she writes for Ministry of Testing.

First, what we’re doing at Testbash:

Matt is giving a workshop(!) on APIs for beginners
Claire is giving a talk about her personal experience with imposter syndrome
I’m giving a talk on mental health and anxiety in testing

Tickets are still available!

Step one: Submission

Where can I start to gain confidence and experience before submitting?

  • Write blogs but don’t publish them
  • Take something you’ve read/watched/heard and create a small talk to a couple of people at work that you feel comfortable presenting to
  • Lightning talks/90 second talks
  • Attend meetups and just talk to other people – you will have something to share and you will be surprised at how different your experience can be.
  • Build up to bigger talks – present a talk to more of your company, or a local meetup (plenty of meetups that encourage new speakers).
  • If you get the opportunity – go to a peer conference
  • Outside encouragement – people will cheerlead you if you want to get into public speaking

What to submit

  • Technical talks
  • Overcoming a challenge / solving a problem
  • Personal experience report
  • Workshop

Where to submit

  • Consider whether you will have to pay to speak
  • MoT Open CFP
  • Other conferences which have a deadline for submissions
  • Smaller events like Leeds Testing Atelier
  • At home or abroad?
  • Will your employer allow you to attend or will you need to book time off?
  • Do you know anyone who has spoken at an event you are interested in?
  • What was their experience like?
  • Can you submit the same talk to multiple conferences?

How to put a submission together

  • What do I want to say
  • Why do I want to say it
  • What will other people get out of listening to me

Step two: oh god, what have I done (putting the talk/workshop together)

  • Scripted talk or more free-form?
    • Notes
  • Avoid slides that you just read out
  • Using keywords or images as prompts

Step three: Practice AKA the countdown

  • Talk through it yourself – friends / family etc
  • What reads well on paper isn’t always very easy to say. Better to compromise your language so that it flows more naturally than tripping over pronunciations.
  • Practice at smaller events / internal company audience
  • Resources if you are new to speaking (e.g. James Whittaker videos: one, two)

Episode 82: You’ve Got All These Great Answers

To All These Great Questions

WE’RE BACK and with a bumper episode of Let’s Talk About Tests, Baby.

Matt and I have both gone for interviews, so we decided to do an episode about it. We talk about applying to jobs, sussing out the language of job adverts, CVs and cover letters, and the dreaded interview. Matt also has experience on the other side of the table, so he brings that insight to the episode as well.

Mindmap
Interview MindMap

Things we mention:

BugFinders
uTest
http://www.testingeducation.org/BBST/
http://www.satisfice.com/info_rst.shtml
http://www.istqb.org/
http://www.askamanager.org/

Episode 81: It’s the small things that count

This week I talk to Andrew Morton about unit tests!

We cover the basics of unit tests: what they are, and what they do. We also cover unit tests as documentation, pitfalls to avoid, and some tools you might see in the wild.
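As a small illustration of “unit tests as documentation”: well-named tests state the function’s contract, so a reader can learn the intended behaviour without reading the implementation. The `slugify` function and its tests below are invented for this sketch, not taken from the episode.

```python
# Hypothetical function under test.
def slugify(title):
    """Turn a post title into a URL slug."""
    return "-".join(title.lower().split())

# Each test name states one fact about the function – together
# they read like documentation of its behaviour.
def test_slugify_lowercases_the_title():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_repeated_spaces():
    assert slugify("a   b") == "a-b"

if __name__ == "__main__":
    test_slugify_lowercases_the_title()
    test_slugify_collapses_repeated_spaces()
    print("all tests pass")
```

In a real project you’d run these with a test runner like pytest rather than a `__main__` block; the documentation value comes from the descriptive names and the one-assertion-per-fact style.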

References and resources we mention:

https://github.com/testingchef/bad-units – A repo of bad unit tests
https://www.youtube.com/watch?v=q8I8-hVS5JI – Andrew’s Whiteboard Testing video on unit tests as documentation