Ep 94: Sound Effects and Overdramatics: The retrospective

This week the Testbash MCR crew is back! Claire, Matt, and I discuss post Testbash feels: how we felt we did, what we’d do differently next time, and our plans for the future.

We also talk about Software Testing Clinic Manchester! Claire is running it with Richard Bradshaw, and Matt and I will be mentors! If you’re in the area, come join us! The first meeting is Monday 8th January.

ALSO: Come ask me questions for the 100th episode! You can ask me via email, twitter, slack, in person, or at my curiouscat account.

Claire and I wrote about our prep for the club: https://club.ministryoftesting.com/t/preparing-to-give-a-talk-as-a-new-speaker/11476/5

Matt notes

Awesome things

  • The tasks I designed to generate discussion paid off
  • Good balance of timing
  • Self-hosting the application and removing most of the tech issues with getting set up was a good move
  • Application was destroyed near the end!

Lessons learnt

  • I was originally going to make a text-based adventure game with an API interface, as I really wanted to make something fun like Richard Bradshaw’s Lego Automation. I went with a much simpler “game” that better fit typical API behaviour, and I think this paid off.
  • Lights! When to prepare and when to just ask
  • Having backups – practicing backup procedure
  • Wording can always be tweaked
  • 30 people/large room might not be so good for this particular workshop
  • Felt discussions were harder to generate on the spot with 30 people

Would I do it again?

  • Hell yes, I can definitely repeat this workshop.
  • The work paid off but it was a lot of work, I enjoyed designing it and it was very satisfying to see the ideas I’d come up with actually work the way I’d hoped.
  • Plenty of requests for more advanced versions.
  • I just wanted to have a go at a technical workshop, as there were topics I’d like to talk about that don’t fit a presentation style, and I prefer mentoring.

Would I recommend it? Tips?

  • If you want to share knowledge about more technical subjects, absolutely!
  • Even for softer skills, workshops can be far more engaging and memorable.
  • Mostly the same tips and advice as for talks/presentations, but the big differences are that it’s longer and more interactive!
  • Consider the balance of teaching versus mentoring
  • Be ready to spend a lot longer preparing it and practicing it!

Claire Notes

What was good?

  • I felt like I was well prepared. I’d done a bunch of practice at home as well as doing a dry run of the talk at another event.
  • Once I got going I felt much less nervous than I thought I would; finding familiar faces in the audience really helped. I felt like I was looking round the room and not down at the floor or just at one person.
  • I made a couple of jokes and people laughed! Which I think helped me feel more confident.
  • I could see some of the audience nodding when I was talking. Made me feel like what I was saying resonated with at least some people
  • People liked some of the slides – the Venn diagram I “stole” (borrowed)
  • I skipped the talk just before mine, which I felt a bit bad about, but having a bit of quiet time really helped
  • Loads of people chatted to me afterward about their own similar feelings
  • My colleagues who were there didn’t think I was a lunatic

What could have been better?

  • I’m not the strongest at slides. I think I need to practice this and get better at it
  • I tripped over my words a few times
  • Because I was on quite late in the day, I had got myself pretty worked up by the time I went on.
  • I felt I didn’t spend as long on some of the slides as I should have. Possibly rushed a bit, even though I ended up finishing in time for questions

Would I do it again?

  • Definitely!! I can see how people get the speaking bug
  • Thinking of ideas for new talks is tricky!!

Gem notes

Awesome

  • Felt prepared
  • Had people in the audience that were smiling and nodding and who I knew wanted me to succeed
  • Not the only Hamilton reference!
  • Great feedback
  • I didn’t trip over my own feet!

Lessons Learned

  • Prepare for questions
  • Escape afterwards for a bit – overwhelmed a bit by the adrenaline rush + emotional feedback

Would I do it again?

  • Absolutely!

Ep 93: 2017 in review

2017 has been a year!

24 episodes (inc. this episode and the rebroadcast)
12 interviews
13,544 downloads as of recording
28 Patreon posts, 14 of which were Patreon only
More tweets and slack messages than I care to think about
1 new microphone

Favourite moments: All of my interviews – finally got some names that I’ve been organising for a while. Having Maaret come back on was great!

Favourite personal moments: Giving my talk, getting a new job, which left me feeling simultaneously more and less secure as a tester, starting Inner Pod, which is a labour of love.

I got a semi-regular co-host in Matt who has been great in getting ideas together and putting mindmaps together and all that good stuff. It’s made it a lot easier to do shows when there’s fresh ideas coming in.

I was on Screen Testing, where I got weirdly aggressive about the concept of a metaphor, and you’ll be hearing me around a few other podcasts in 2018.

What lies ahead?
2018! What wonders will this year bring?

Loads more interviews! I’ve got a few that I want to do and am in various stages of setting up.
Inner Pod season 2 is coming out as well, that’s happening in the background (you’ll hear some familiar names on the show!)
More automation! More process stuff! Maybe a workshop or two~~~

I’ll be at Testbash Brighton and will have my portable new microphone so will almost definitely end up shoving that (consensually!) in people’s faces 😀

I am terrifyingly close to episode 100. If I don’t take a break or miss an episode, then episode 100 will be released on 29th March 2018. That’s ridiculous. It’s amazing. I have no idea what I’m going to do for that episode. I’d love to do an AMA (Ask Me Anything) if anyone wants to ask me questions. IN FACT, that episode will be the episode after Testbash Brighton, so I might try to get some people to say some things into a mic then for my landmark 100th episode. Yes, okay, so the plan! You ask me questions, I and anyone who wants in at Testbash Brighton can answer them, and I’ll put it all together for episode 100. Deal? Deal! I will remind you of this, and if I don’t get questions I’ll have to make them up and literally no one wants that.

This is how I end up volunteering for everything btw, I get carried away with things. To make the Ask part of the Ask Me Anything easier, I’ve set up a Curious Cat account: https://curiouscat.me/Gem_Hill. This allows you to ask questions anonymously, without an account anywhere. You can also submit questions through any other means: letstalkabouttests@gmail.com, @LetsTalkTests, here on the site, on slack, in person, anywhere!

I hope 2017 hasn’t been a complete trash fire, and I hope the holiday break grants you some respite. If it doesn’t, feel free to reach out to me at the above contact methods, or there is a twitter hashtag #joinin that is essentially a twitter chat for people who need some company, or kind words, or a reprieve from whatever holiday chaos is about.

I’ll return the first week of January with a post-Testbash Manchester chat with Claire Reckless and Matt Bretton!

Love you all <3

Ep 92: Everybody’s a critic

This week it’s just me, talking about critiquing my own testing! I’ve been doing this a lot in my new job – figuring out the priorities of the team and where my focus needs to be, and where this intersects with my weaknesses and biases. It’s been really interesting, and I’ve been trying to be really conscious about it.

And here’s the mindmap I used for this episode:
Critiquing your own testing

And a text version:

  • Critiquing your own testing
    • Plan your testing
      • Tell team what you want to test
      • Tell the team what you think you need
      • Get feedback
    • Put on different hats
      • Customers
      • Back-end users
      • Marketing
      • Support
      • Security
      • Accessibility
    • Document what you have tested
      • This will help illuminate gaps
    • Look at the big picture
      • Where does this feature fit?
        • in the sprint?
        • In the workflow?
    • Review with as many people as feasible
      • Design
      • UX
      • PO
      • Devs
      • Customers
      • Business users
    • Use heuristics
      • Elisabeth Hendrickson’s list
    • Take notice of bugs that others find
      • Own your weaknesses
        • Checklists
        • Learning

Ep 91: Stop, collaborate and listen

This week I have two guests! Maaret and Franzi come talk to me about the call for papers collaboration for the European Testing Conference!

The call for collaboration is a fascinating way to build a conference, where all participants get a 15 minute call with two members of the team, where they can tell their story, get real time feedback, and improve on their idea. Maaret and Franzi talk about what they learned from the process, their most memorable people, and what they wish to see at testing conferences!

The European Testing Conference is at the Amsterdam Arena, Amsterdam, Netherlands on Feb 19th-20th, 2018

Franzi is involved in Software Crafters, and Maaret blogs at A Seasoned Tester’s Crystal Ball

Ep 90: New job weirdness

This week we talk about new job weirdness! This was recorded about a month ago, so some of our points may no longer be true of our workplaces, but still useful info!

Here is a list of things we mention/thought about mentioning:

  • In jokes/rituals/other
  • Trying to move into existing groups/teams
  • Figuring out who does what/who to ask for what
  • Office feel – casual, professional/reserved
  • Language and words
  • New starter guides/documentation
  • Finding lunch buddies
  • History of testers at the company
  • What have I been hired for? Why do people want testers or me specifically? What is my mission?
  • Learning the history of the product
  • What doesn’t get said – processes, ways of working, business as usual
    • E.g. release processes
    • E.g. job roles (what is a “tester” here? What are “product owners”, “agile coaches”, “business analysts”, etc)
  • Management & metrics (“you should email so and so when we do this”, “you need to fill in these Jira fields”)
  • Managing your own natural instincts and biases
  • Just because things appear terrible to you, doesn’t mean they are (“omg you release without testing? Omg you’re writing user stories about database tables? You don’t walk through the board during stand-ups?”)
  • You don’t know the past yet, this might be a really good place for them having improved from an even worse position
  • Ideas and experiences you have had in the past don’t always apply in every context
  • I always fall back to “what is the problem?”, rather than worry about how or what people are doing, even if I believe it will lead along a bad path. I try to tell myself I will learn from everyone and my opinions can always be changed.
  • But if there is an obvious problem and I think I know how to fix it or how to diagnose it, I try to be diplomatic and try to avoid being too direct.
  • You have to trust people at first and work with some assumptions
  • The nice thing is that the more places and teams I work with, the more common aspects I see. It’s just that they get buried in the specifics of process or problems sometimes, and you have to try and see through that.

Ep 89: Tubular bells and whistles

This week I talk to Abby Bangser about pipelines!

This episode is based a bit on a workshop that Abby and previous guest Lisa Crispin will be giving at ETC next year.

Things we cover:

  1. Brief definition of and difference between:
    1. Continuous Integration
      1. The practice of merging all developer working copies to a shared mainline several times a day
        I think it was Jez Humble who likes to ask a set of questions when discussing continuous integration. He has everyone who is doing CI raise their hands. Then he asks an array of questions like: anyone who has branches that live more than a week lower their hands. Then anyone who does not merge back to master at least daily lower their hands. Then anyone who does not run automated tests on every check-in lower their hands. And when I saw him do this about a year ago he said it was a much better turnout (maybe 50%?) than when he had first started that years ago.
    2. Continuous delivery
      1. The practice of treating every commit as if it could be pulled into a release to production at any time.
        The challenges here are that if any commit could go to production, you need to be able to have WIP safely committed. This usually begins the debate of branching vs toggling. Additionally, toggling usually gets a big boost when looking at continuous delivery, for two reasons: 1) sending fixes through a branching scheme requires those issues to be fixed on trunk as well as the release branches, causing risk of regression; and 2) the ability to turn off new features through flexible toggles is one way to handle release risk.
    3. Continuous Deployment
      1. The practice of pushing any proven check-in to production without manual intervention. The biggest point that should be made here is that we are not just pushing anything to production. We are pushing only fully tested and fully proven changes. This can include automated security, performance, and accessibility testing. It can also include automated documentation and auditing functions as well.
        While zero downtime deployment strategies can be employed at any time, when choosing to follow continuous deployment this is a definite necessity.
  2. Key differences
    1. They kind of build on each other. There is some great work by Maaret P a while ago in 2014 about how regular (though maybe not continuous) deployment can be possible with some manual testing included (http://visible-quality.blogspot.co.uk/2014/04/continuous-releases-are-way-forward.html). In any case, it is important to remember the reason for each of the practices. CI is to get fast feedback and high collaboration, Delivery is to think sustainably and provide a faster MTTR. And deployment is to remove technical limitations to business goals of releasing value to the end user with specific / quick timelines.
  3. What do we mean by pipeline?
    1. Pipelines: Dictionary style: a direct channel for information / a process or channel of supply
    2. Tech style: progressively giving feedback to the team and visibility into the flow of changes to everyone involved in delivering the new feature/s.
    3. So really the hidden value is when you treat them more like sifters. When trying to sort through stones, you can use a set of sifters with different sizes until what is left is the stones which pass properly through each sifter or “test”.
  4. And where are they hiding in your project work?
    1. Yes, of course we have the major one… idea to production, but realistically isn’t it split?
      1. Idea to ready for dev
      2. Development to prod
  5. Why identify them?
    1. Before you can optimise, you have to know where bottlenecks are (great example in the Phoenix Project). Use long standing value stream mapping techniques to identify stages, actors, queue times, rework etc to identify which parts of the pipeline have snags and then look to zoom in and figure out how to improve them.
  6. How are they useful for teams, and testers in particular? My thoughts here are testers who may have previously not been involved in the release of software at all, who may be looking at growing into DevOps/CD/CI and not be sure why they should be looking at this
    1. Absolutely! Pipelines are about reducing risk. If when you get the application into “QA” environments you could already be sure that it is properly configured, passing basic regression tests, able to connect to external dependencies, etc, you could look to focus on the more interesting risks.
    2. Why go through the (sometimes painful) exercise of deploying an application to a test environment if you could have found out that it was not passing basic validations ahead of time?
    3. Look at current feedback loops and try to evaluate if your sifters are out of order. Or maybe the feedback loops test completely different things (rather than increasing in risk/complexity/cost issues) and you can find ways to parallelise. An example could be doing static analysis reviews while also running unit tests rather than waiting to kick off unit tests until after static analysis is done.
    4. They can work like sifters making things more and more refined along the process
    5. Find pain points in the pipeline and then gather information about that pain point
    6. Even if you can’t fix the issue, gathering information is an important step to go towards fixing or improving the issue
    7. Talk to people! The pipeline is made of people, so find out what people do, what they want to do, what they need, and what they find difficult or a blocker. They will be happy to share!
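The branching-vs-toggling point under continuous delivery can be sketched with a minimal feature toggle – a toy Python illustration, not any particular flag library; the flag name and checkout logic are invented for the example:

```python
# Minimal feature-toggle sketch: unfinished work ships "dark" behind a
# flag, so trunk stays releasable at all times. Real systems load flags
# from config or a flag service per environment; this dict stands in.

FLAGS = {"new_checkout": False}  # hypothetical flag, off by default

def checkout(cart_total):
    if FLAGS["new_checkout"]:
        return round(cart_total * 1.2, 2)  # new (WIP) path: adds tax
    return cart_total                      # current, proven path

print(checkout(10.0))            # 10.0 – the new path stays dark
FLAGS["new_checkout"] = True     # flip the toggle: no branch, no redeploy
print(checkout(10.0))            # 12.0
```

The point is that the risky new code is merged to trunk but switched off, so releasing a fix never requires cherry-picking across release branches.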

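The “sifter” framing above can be sketched as ordered stages that fail fast, so later (more expensive) stages only see changes that already passed the earlier ones – a toy Python illustration; the stage names and checks are invented:

```python
# Toy pipeline-as-sifters: cheap, coarse checks run first and stop the
# flow at the first failure. Each stage here is a stand-in for a real
# pipeline step (static analysis, unit tests, deploy + smoke test).

def static_analysis(change):
    return "TODO" not in change          # cheap: seconds

def unit_tests(change):
    return change.strip() != ""          # fast: minutes

def deploy_and_smoke_test(change):
    return len(change) < 100             # slow: needs an environment

PIPELINE = [static_analysis, unit_tests, deploy_and_smoke_test]

def run_pipeline(change):
    for stage in PIPELINE:
        if not stage(change):
            return f"failed at {stage.__name__}"   # fail fast
    return "passed all sifters"

print(run_pipeline("fix login bug"))     # passed all sifters
print(run_pipeline("TODO finish this"))  # failed at static_analysis
```

Ordering the sifters by cost is exactly the feedback-loop tuning point 6.3 makes: if two stages are independent, you can also run them in parallel instead of in sequence.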
Other things we mention:
Kim Knup on testing in a CD environment.

Abby’s blog

Ep 88: There’s a hack for that

This week I talk to Jahmel (Jay) Harris. Jahmel is a Penetration tester/Security consultant at Digital Interruption. He also runs Manchester Grey Hats.

  • Things to consider before starting security testing
    • App permissions?
      • Information users need to give the app
      • Push notifications?
        • Fine usually, but be aware if anything sensitive is sent – shoulder surfing
  • Wearables
    • New ways of interacting with devices
    • They are becoming more secure but issues at the start
    • With Android we found lots of ways to recover the data
    • Bluetooth LE and other radio protocols can be insecure.
  • Testing considerations iOS vs Android
    • Root vs non root
    • Jail break vs non jailbreak
  • Common vulnerabilities
    • WebViews
    • Sensitive data over HTTP
    • JavaScript vulnerabilities – you used to be able to get a full shell in an app via an advert in a WebView, e.g. on coffee shop or hotel wifi
    • How secure are WebView frameworks such as Cordova?
  • Vulnerable IPC (Inter-process communication)
    • Things like SQL injection or file traversal
    • Lack of protection/permissions
  • Logging
  • Auth
    • Fin tech (financial tech) app – could steal all money. They didn’t think about the auth on web services
  • Binary Checks
    • Is it worth checking for root detection/doing SSL pinning etc.? It took someone over a year to bypass these controls on one of our clients’ apps. Then they need to look for vulns.
    • Obfuscation? Worthwhile? When I did the research into Android Wear, it took me weeks just to reverse engineer (RE) it.
  • They stack. Easy to bypass one but hard to bypass all. Think about the risk of the app. Does it need that protection?
  • Tooling
    • Drozer
    • Needle
    • Frida
    • decompilers
  • Automation
    • Tooling isn’t quite there. There needs to be a big push by both devs and infosec. InfoSec can’t write good code but devs aren’t always aware of the latest threats.
    • Security shouldn’t be dev->pen test. Security needs to be considered at every stage. In requirements gathering etc
      https://www.digitalinterruption.com/secure-mobile-development (https://goo.gl/P1WYcV) – reduce the cost of pen testing
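The SQL injection point under IPC can be illustrated with a minimal sketch – plain Python and sqlite3 standing in for an app’s data layer (on Android this would typically be a content provider); the table and attacker input are invented:

```python
import sqlite3

# Toy data layer: one table, two users' rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (owner TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('alice', 'secret')")
conn.execute("INSERT INTO notes VALUES ('bob', 'hello')")

attacker_input = "bob' OR '1'='1"  # untrusted value arriving over IPC

# Vulnerable: string concatenation lets the input rewrite the query,
# so the WHERE clause becomes always-true and every row leaks.
unsafe = conn.execute(
    "SELECT body FROM notes WHERE owner = '" + attacker_input + "'"
).fetchall()
# unsafe now contains alice's secret as well as bob's row

# Safer: a parameterised query treats the input as one literal value.
safe = conn.execute(
    "SELECT body FROM notes WHERE owner = ?", (attacker_input,)
).fetchall()
# safe is empty – no owner is literally named "bob' OR '1'='1"
```

The same shape applies to any query built from IPC extras or intent data: test it with quote characters and boolean tails, and check the code binds parameters rather than concatenating.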

Ep 87: Players gonna play play play play play

Podcast news: Friend of the show, Neil Studd has a new podcast! It’s called Testers Island Discs and there’s an intro episode out tomorrow: https://twitter.com/TestersIsland?lang=en

Now, on with the show.

So this is all Richard Bradshaw’s fault.

He came and did a talk at the BBC back in…August, I think? And there was a slide in it about the phrase ‘I’ll have a play’ and how that isn’t what testers do, and it undersells our skills. He said (and I’m paraphrasing here as it was months ago) that we could write a charter for that session, and having a play becomes exploratory testing.

I kind of disagreed when I first heard that, because to me, a charter for Exploratory Testing is specific. There’s an aim, or something specific being tested: a certain area, or a certain user journey, or even a user journey as a particular user (a bad actor for security testing, or a visually impaired user, or similar).

When I ‘have a play about’, I am doing just that – there’s no real charter. There are a couple of reasons I ‘play about’:

I’m getting familiar with a product
I’m not confident that I’ve tested everything but I’m not sure what I’m missing

The first one is pretty easy – I’m just seeing what I can see without getting too much help – what makes sense going in cold, what could be improved, are there any quick wins, things like that. I could definitely wrap that up in a charter and a time block and call that exploratory testing.

But the second one is much more nebulous. I don’t know what I’m looking for, other than a sense of being more comfortable with my testing. It usually happens that I’ve tested a story and not found any issues, but I’m not happy. I’ll go make a cup of tea, head back to my desk, and start looking around a bit wider. Sometimes I think up a scenario for the original story that I’d missed previously, sometimes I find completely unrelated issues, sometimes I get a sense that I understand the system a little more. It’s a lot harder to wrap that in a charter when I don’t know why I’m not happy. I can’t always formulate the words for the bits I’m not happy with.

Does this make any sense? Do other people have this?

But at the same time, I’m not just clicking about, am I?

That’s what it feels like, but it’s not.

There’s context there – pre-existing domain knowledge, or client knowledge, or team knowledge. There’s knowledge of what apps or websites or programs are meant to do or how they’re meant to look. It’s knowing which browsers are relevant, and which are awkward to work with. I may not be able to put words to why I’m not happy, but if I let myself wander, a little defocused, and aimless, I might find something.

It’s playing around with all the knowledge of being a tester in the background.

And that was a weird thing to realise.

I had split my work mentally into structured testing that was in a testing mindset, and just playing about, which was freeform, nothing intense, more of a safety net or a get-to-know-the-product thing. I think it’s because it’s fun as well – it’s fun to explore a system and check out all its nooks and crannies, and I think I’d separated that out from structured, goal-driven testing?

But it’s not really separate. Playing about, clicking about, when you’re a tester, is testing. It might not be structured even as much as exploratory testing is, but it’s still testing. And you could probably wrap it up in a mission, maybe break it down into charters, and put a time limit on it. Then you’re doing exploratory testing. It’s like the difference between science and dicking around is taking notes. The difference between playing about and exploratory testing is writing a charter.

The next time I have a play about, I’m going to write a charter or a mission (even if it’s just ‘get more comfortable’), and take notes. This will also hopefully improve my notetaking skills, which is a bonus as my notes have gone to shit recently.

So, I hate to say it, but I guess I agree with Richard now. I really enjoy the phrase ‘playing about’ or ‘noodling about’ because it’s fun, and I think testing can be fun, but I also see that it means a lot. I might start switching it out for ‘I’ll have an investigate’ or ‘I’ll take a look’, as that still suggests more structure and skill.

Ep 86: Don’t be afraid to catch feels

I’ve been thinking a lot recently about human interaction with computers, the way humans connect to and feel about the tech in their lives, and how we evoke emotions using technology (both purposely and incidentally).

I’ve started work at the BBC, for the BBC Taster website, which serves experiences like a journey of a family from Syria to the UK, or an interactive graphic novel that has you making choices that will change the way World War 2 plays out (highly recommend this one, it’s still available, and I’m not saying I fucked it, but the Americans nuked the Germans in my attempt, so see if you can do better than me). These are obviously intended to make you feel things, and that’s important to look at and consider, but I’m also intrigued by the mundane, day-to-day aspects of human emotion in tech.

I recently backed an app called Aloe – it’s based on an online service that offers a check in and prompts you to do some self care. Seemingly obvious things like, taking a walk, or drinking water or a reminder to take your meds.

The app will allow you to set your own self care goals, things that may not be obvious, or work for everyone, but you know work for you – for me it would be to go to spin class once a week, as exhausting myself at least once a week is great for my mental health.

One thing the online version prompts you to do is to pick a plant that represents how you feel that day – things like a succulent, or a cactus, or a sunflower. And that’s great, sometimes you don’t want to acknowledge your feelings enough to put a word to it or can’t boil it down to words, but looking at a series of images and choosing one that speaks to you is pretty awesome.

Another thing I’m interested in is the esoteric: how tech is replacing or augmenting aspects of spirituality in certain spaces. Leigh Alexander has spoken about this, about how humans compulsively sweep their thumbs over the screen as if their phone is a modern day religious tablet or token bringing luck or solace in anxious times (also come talk to me about emoji as sigils in modern spell casting or predictive text as tarot). Humans cling to superstition, to things that reduce anxiety (regardless of how healthy that may be).

Binky, the entirely fake app that provides swiping and refreshing without any actual connection, started as a satire of people’s need to be connected, to refresh, but people have found it really useful for curing their need to swipe. A placebo, if you will, that you can wean yourself off as you’re literally not missing anything.

I’m not entirely sure where I’m going with this. As we put more and more of our lives into tech, we put our feelings there as well, and that’s something we should be thinking about.

You might not be working on an evocative project, but your users and your team will still feel something about your project. Maybe the best thing you want from your app is for your users to feel that small satisfaction of something going smoothly. A check off a to-do list. But if that means that users have more time everywhere else, that will elicit a good feeling. Maybe you want people to engage in something that isn’t fun or interesting, but necessary (if I could have an app that tells me which bin I need to put out which week, I’d be happy with that. I should set up calendar notifications, but my council keep changing my bin collections. Yes, I do sound like a Daily Mail reader, what of it?).

I guess, my challenge to you is: think about what you’re working on right now. I bet you know a fair amount about the purpose and use cases for it. But how will it make people feel? Excited? Anxious? The small and seemingly inconsequential satisfaction of something going smoothly or being completed easily?

Think about the current state of your work: Is it making (or going to make) users feel the way you want it to?

Next: what do you feel about it? Happy? Angry? Why? What can you do to change or amplify or even just maintain these feelings? What feelings are users going to have coming into your app or project, and what should you do about those?

I advocate stepping back from work a lot, as for the most part the bugs and work don’t ‘belong’ to us; they belong to the product owners, the business. But sometimes, thinking about how it makes you feel can open up new avenues of investigation. Or open up new insights into your emotional state, and that’s something that is useful to take note of. We get so used to the Monday: boo, friYAY cycle that sometimes I think we forget that we spend a lot of time at work, and maybe we should consider if we’re actively unhappy more often than not.

Basically: FEELS. Think about them <3

Ep 85: Sound Effects and Overdramatics

This week is the Manchester Testbash crew!

Matt, Claire, and I talk about our experiences of putting our first successful conference submissions together and how we’re preparing. We’re planning on doing a part two post-Testbash to reflect on the days themselves.

You can find Claire on twitter and she writes for Ministry of Testing.

First, what we’re doing at Testbash:

Matt is giving a workshop(!) on APIs for beginners
Claire is giving a talk about her personal experience with imposter syndrome
I’m giving a talk on mental health and anxiety in testing

Tickets are still available!

Step one: Submission

Where can I start to gain confidence and experience before submitting?

  • Write blogs but don’t publish them
  • Take something you’ve read/watched/heard and create a small talk to a couple of people at work that you feel comfortable presenting to
  • Lightning talks/90 second talks
  • Attend meetups and just talk to other people – you will have something to share and you will be surprised at how different your experience can be.
  • Build up to bigger talks – present a talk to more of your company, or a local meetup (plenty of meetups that encourage new speakers).
  • If you get the opportunity – go to a peer conference
  • Outside encouragement – people will cheerlead you if you want to get into public speaking

What to submit

  • Technical talks
  • Overcoming a challenge / solving a problem
  • Personal experience report
  • Workshop

Where to submit

  • Consider whether you will have to pay to speak
  • MoT Open CFP
  • Other conferences which have a deadline for submissions
  • Smaller events like Leeds Testing Atelier
  • At home or abroad ?
  • Will your employer allow you to attend or will you need to book time off ?
  • Do you know anyone who has spoken at an event you are interested in ?
  • What was their experience like?
  • Can you submit the same talk to multiple conferences ?

How to put a submission together

  • What do I want to say
  • Why do I want to say it
  • What will other people get out of listening to me

Step two: oh god, what have I done (putting the talk/workshop together)

  • Scripted talk or more free-form ?
    • Notes
  • Avoid slides that you just read out
  • Using keywords or images as prompts

Step three: Practice AKA the countdown

  • Talk through it yourself, then with friends/family etc.
  • What reads well on paper isn’t always easy to say. Better to compromise your language so that it flows naturally than to trip over pronunciations.
  • Practice at smaller events / internal company audience
  • Resources if you are new to speaking (e.g. James Whittaker videos: one, two)