Ep 36: Life Is A Lemon and I Want My Money Back

https://www.youtube.com/watch?v=BF1wVv8OnfE

The final bit of debt I want to talk about is testing debt. (See also part one and part two.)

I’m going to split this up into two sections: ‘bad’ testing, and delayed testing.

So ‘bad’ testing, for me, is testing where I’ve not had the time, space, or ability/skill to fully test a feature. This could be caused by many things:
- No time to cross-browser test
  - Especially if I need to test specific browsers for the stakeholders – versions of IE, for example.
- No time to cross-device test
- No time to do exploratory testing – only testing to the AC
  - Sometimes this can be done while testing the AC, but if I don’t get time to fully explore, and just pass it on the basics, then it’s not really tested properly. The balance between exploring and signing off is skewed too much towards quick testing.
- The insidious lack of care that I mentioned a couple of weeks ago
  - If the devs don’t care, then I won’t feel motivated to point out issues to them.
- Not having the context I need to feel fully connected to a project, so I can’t fully tell if the feature meets the needs of the client, or fits in with their branding etc.
- Not having the ability/skill to test fully: this could be missing knowledge of the system, or of things like accessibility testing, performance testing etc.

Delayed testing is when the story is shipped without testing, when testing is moved to a separate task, or when it’s simply pushed back to the next sprint. It can also happen when you’ve got all the integration testing to do as well as the stories in a sprint, in the same amount of time.

Sprint one: 30 stories
Sprint two: 30 stories (each with integration testing)
Sprint three: see above.

Most of the time this is doable, especially with automated regression checks and with testing happening alongside the development process, working closely with developers rather than in a silo. However, it does add up, and it’s something to take into account.

One way of tackling ‘bad’ testing debt is getting a feel for what adds value to a project – what the business wants to get from the feature or project as a whole, and what the stakeholders see as important. You can then focus testing around those areas. This has a dual effect: it brings more value from testing, because you’re testing what’s highest risk or most important, so you cover the most bases efficiently; and even if you are pressed for time, you still feel like you’ve contributed something to the project.

Testing debt is the one I’m most familiar with; I’m more confident in my skills, but still learning, and there are huge gaps in my knowledge, especially around automation. I find it really hard to test a project I have no context for, and even if I don’t miss anything, I still feel like I’ve only tested the feature or issue shallowly. But I’m learning, and learning how to stay out of debt, as I become a tester proper.

Footnotes

http://thetesteye.com/blog/2010/11/turning-the-tide-of-bad-testing/

Ep 35: Put Your Money Where Your Mouth Is

So, part two of my debt series is about conceptual debt, or product design debt, or UX debt. This is a subset of technical debt, but specifically related to the design of the product, or the UX. See part one here

Most iterative development cycles form a minimum viable product, which is then improved upon with each release. We experiment with design and features, and then either add or change based on feedback. The issue, of course, is that if the feature is successful, design and UX refactoring rarely happens – we move on to the next feature.

In my opinion, the prime example of this is the crowded homepage, especially for online stores. Navigation gets more complex as more lines, sales, and categories are added, but, if it’s an established site, changing the navigation may be detrimental to the traffic that goes through to various parts of the site.

Offers, signups, and social media blocks are all added to the homepage, and services like Optimizely let users easily do A/B testing by changing various parts of the site. (A/B testing is where you make a change to a site but only show it to a certain percentage of users – targeted by device, by browser, or just at random – then compare the results against the unchanged version.)
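As a minimal sketch of how the bucketing side of A/B testing can work (this is an illustration, not how Optimizely actually implements it – the function and experiment names here are invented), you can deterministically hash a user ID so each user always sees the same variant:

```python
import hashlib

def in_experiment(user_id: str, experiment: str, percentage: float) -> bool:
    """Deterministically decide if a user sees the experimental variant.

    Hashing user_id together with the experiment name means a given user
    always gets the same answer, and different experiments split
    users independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000          # a stable bucket from 0 to 9999
    return bucket < percentage * 100          # e.g. 10.0% -> buckets 0..999

# Hypothetical usage: show variant B of the homepage to 10% of users
variant = "B" if in_experiment("user-42", "homepage-cta", 10.0) else "A"
```

Because assignment is derived from the ID rather than stored, the site doesn’t need to remember who saw what – a returning user simply hashes into the same bucket again.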

I separate technical debt from UX debt as UX debt is sometimes harder to quantify – it doesn’t necessarily affect performance or functionality like technical debt might – or might not be as visible (slightly clumsy user journey vs. a small bug in a feature). That doesn’t mean it’s not as important, it’s just different, and requires a different approach.

Fixing UX debt may also be a large change that is too scary or time-consuming for the stakeholders to stomach, and so it stays as it is, or only gets fixed slightly. Refactoring code to reduce the technical debt I mentioned last week does not (or should not) affect the functionality.

A blog post I’ll link uses Las Vegas as an example of product design debt – so much has been built close to existing things to grab more attention or add more features that it becomes a confusing and confused-looking mess. And fixing that is not going to happen.

So, how do you avoid, or minimise this type of debt?

Firstly, by not going into it in the first place, or by going into it with a plan. This is easier said than done, obviously; careful planning and future-proofing takes time, but it also saves time.

This really changes depending on whether you’re in-house, at a SaaS company, or at an agency. I’ve worked in more than one of these, but I have more experience in agency work, and it’s a different kettle of fish.

As a team, you need to interact with the stakeholders to get their overall vision, their brand, what they want from their product and what their users want. From there you can build up ideas of the product, design features and the UX, and start building.

Once you start building you need to refactor as you go, and this includes the UX if needed. If you add a new feature that needs to go in the menu and on the homepage, you and the stakeholders might need to consider that feature in the wider scale of the project.

UX is a little harder on an agency project, because you might have two layers of separation from the users – there’s the team building the product, then the stakeholders, then the users. Sometimes the stakeholders are users, but often there are users who never interact with the build team, so a lot of debt might be unintentional and inadvertent, which is why regular refactoring is needed.

Again, there’s no way to avoid debt entirely: shit happens, clients change their minds, and features turn out to need different user paths on reflection. You need to build time to refactor this into your sprints, just as you would build in time to refactor code.

Footnotes

https://medium.com/@nicolaerusan/conceptual-debt-is-worse-than-technical-debt-5b65a910fd46#.j98euowli
http://andrewchen.co/product-design-debt-versus-technical-debt/
https://medium.com/@vijayssundaram/user-experience-debt-c9bd265d521b#.zhdjekfy6

Ep 34: Bills bills bills

Can you pay my bills, can you pay my telephone bills?

Technical debt is essentially the consequences of creating a system. Shortcuts are taken in the course of a sprint, and, if there isn’t time to fix them, then this is the technical debt of that sprint. It can cover anything from hardcoded values to missing tests, missing documentation, or a lack of experience on the team leading to the system not being developed efficiently.

I’m going to talk about three types of technical debt over the following weeks: technical debt in the ‘traditional’ sense, conceptual or UX debt, and finally, testing debt.

Technical debt can be prudent (‘we’ll have to ship and deal with issues as they arise’) or reckless (‘we don’t need tests’); deliberate (a compromise based on educated knowledge of how the system will work) or inadvertent (shit happens). While the cause may be possible to evaluate and fix, the result – the actual debt – is what matters for the work that’s in front of you right now.

As an aside, it’s not always something that can be fixed – I got my developer partner to look over this for technical correctness (the best kind of correctness!) and he pointed out that debt can’t always be fixed: times change, and things that seemed good at the time turn out not to be. Building in flexibility that turns out not to be needed complicates the code; so does not building for flexibility (for simplicity/speed) that turns out to be needed later.

Developers often have to fix debt, and are best placed to know about said debt, but what, as part of the Test/QA team, can I do about it?

Good communication between developers and testers always helps, and if you, as a tester, can learn about any technical debt present in the system, it may help with testing. For example, if the devs know they’ve taken a shortcut, then some bugs won’t come as too much of a surprise, or future development will be harder to do – and that’s good information to have.

Depending on the project setup, I, as a tester, may be used as a client/PO proxy. The inclusion of deliberate technical debt may rest on a decision I make, so that’s something I need to take into account. For example, a developer might give me a couple of options: ‘we can do x, but it will take longer; I can take a shortcut, but we’ll have to deal with it down the line; or we can do y, which is less ideal, but doable in the time and with no debt’. That’s the information needed to make an educated call on where the priority lies.

Testing after refactoring. Refactoring is when a developer ‘tidies up’ the code, for want of a better phrase. When developing iteratively, as in sprints, code is added on top of code. When the project is feature complete, there may be time for the developers to sit down and restructure the code – not changing the functionality (in theory), but making the code better: tidier, more efficient, nicer. If a developer refactors to reduce debt, they may introduce different issues, and it means the developers aren’t adding new features or fixing bugs, so you need to weigh up the benefits here. Your automated checks may catch some issues, but visual testing and some exploratory testing may need to take place depending on what code has been altered and how risky it is, so being aware of refactoring and what’s been done is essential.
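One way those automated checks can guard a refactor is a characterisation check: pin down the current behaviour before the restructuring starts, so any accidental change gets flagged. A minimal sketch (the function `format_price` and its behaviour are invented for illustration, not from any real project):

```python
# Hypothetical code under refactor: a price formatter whose behaviour
# we want to keep identical while the internals get tidied up.
def format_price(pence: int) -> str:
    """Format a price in pence as a pounds string, e.g. 1099 -> '£10.99'."""
    return f"£{pence // 100}.{pence % 100:02d}"

def test_format_price_unchanged():
    # These expected values capture today's behaviour. If a refactor
    # changes any of them, the check fails and flags the regression
    # before it reaches manual testing.
    assert format_price(0) == "£0.00"
    assert format_price(99) == "£0.99"
    assert format_price(1099) == "£10.99"

test_format_price_unchanged()
```

Checks like this don’t replace the visual and exploratory testing mentioned above, but they narrow it: if the pinned behaviour still passes, the riskier manual effort can focus on the areas the refactor actually touched.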

Documentation of tests and/or any prudent/deliberate debt taken on. This means there is a base for future development to work from.
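This documentation doesn’t need to be heavyweight. As a sketch of what a deliberate-debt entry in a team log might look like (all field names and values here are invented, not any standard format):

```yaml
# Hypothetical debt-log entry; fields and values are illustrative only.
- id: DEBT-014
  type: deliberate/prudent        # position on the debt quadrant
  taken_on: 2016-05-12
  sprint: 23
  description: >
    Checkout totals are hardcoded to GBP; multi-currency support
    was deferred to hit the release date.
  risk: wrong totals if a non-GBP product is ever added
  suggested_fix: move currency onto the product model
```

Even a list this small gives testers a map of where the shortcuts live, and gives the next sprint something concrete to prioritise.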

Technical debt is also a nightmare when you’re on tight deadlines. In the Agilefall/ScrumBut clusterfuck, where testers get all the stories in one batch three days before sprint end, you’re testing by the skin of your teeth (or testing is pushed to the next sprint). Any bugs and fixes are then handled on an even more compressed timeline, leading to more debt.

There is a post I’ll link that has a quadrant diagram of debt, with inadvertent/deliberate and reckless/prudent as the axes, and it discusses prudent, inadvertent debt, which is what happens on good teams. The idea is that, after a product is shipped successfully, the team has learned so much that they’re not happy with the code they shipped, as they now know better ways they could’ve handled it.

And it highlights that it’s impossible to avoid debt; even the best teams have technical debt come the end of a project. And shit happens – a late client request, unexpected resource loss, etc. So you need to reduce and mitigate debt, and make sure the risks are as well understood as possible before going forward, to ensure you’re not enmeshed in debt forever.

Footnotes
http://www.logigear.com/magazine/agile/technical-debt-a-nightmare-for-testers/
https://conference.eurostarsoftwaretesting.com/2013/agile-tester-vs-technical-debt/
http://martinfowler.com/bliki/TechnicalDebtQuadrant.html