Ep 48 – (Tron) Legacy

Gotta have a Wanz reference in there, though he may hate that it’s a reference to the new Tron…

We inherit a few legacy systems in our line of work. We have to get to grips with them across the board – PMs, QA, devs. We all need to know what the system does, how, and most importantly, why.

How do we do that?

Firstly, we do an audit of a site we’re inheriting, before it comes over to us. This helps the developers and system admins get to grips with any quirks of the code or hosting needs. We can also do a security audit, make sure modules are up to date, and, if it’s Magento, make sure we know of any module licenses that need sorting out, etc.
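To make one slice of that audit concrete, here’s a minimal Python sketch (standard library only) that checks an inherited site for a handful of common security headers. The header list and the example.com URL are assumptions for illustration, not a complete audit checklist.

```python
import urllib.request

# Headers we'd hope to see on any site we inherit; this list is an
# assumption for illustration, not an exhaustive audit checklist.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def check_security_headers(url):
    """Report which common security headers the site sends."""
    with urllib.request.urlopen(url) as response:
        for name in EXPECTED_HEADERS:
            status = "present" if response.headers.get(name) else "MISSING"
            print(f"{name}: {status}")

# Hypothetical URL standing in for the site being audited.
check_security_headers("https://example.com/")
```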

Then we start on the actual functionality:

At this point we’ve had interaction with the client, so we’ve got a sense of what the site is for, who it’s for, and what the business area is. We can get an idea of what the business needs are and how the site meets them (or doesn’t). But sometimes the client doesn’t have the in-depth knowledge of the site, functionality, or code that’s needed to support the site.

Documentation is always useful, even if all you can see is a user guide or other internal documentation, because that gives you insight into which bits of the system are used most often and what features or information stakeholders need.

If documentation isn’t present, or is out of date, then the code itself is another form of documentation. You can talk to the devs about what they’ve found, or even sit with them while they figure it out.

Finally, there’s the option of galumphing around and seeing what you can find. Paired with either of the previous techniques, it’s a good way to get to grips with the system and start to test it; even without anything to test against, you can test your assumptions.

If you need to do it by itself, if that’s how you have to find out about a system, then it’s not going to be as comprehensive, but it’s still useful. While you may not have any requirements, you can still give your test session a basic structure, so you can time-box and manage it properly.

So a basic structure may be something like this (there’s a code sketch of the loop after the list):

- Figure out inputs (valid, invalid, multiple, etc.). Even if you know nothing about what you’re testing, a UI will give you cues about what types of input are accepted, and from there you can make guesses about valid/invalid inputs.
- Figure out outputs (for all the inputs above). These can take the form of reports, data, messages: all sorts of outputs based on what you put in.
- Look at dependencies and interactions. From the above, can you see the flow of information? Can you see what happens if something fails along the way? Is it possible for the system to fail in that way?
- Hypothesize about what you’ve learned: the input/output connections you’ve found, and any other pertinent information.
- Repeat the above until you’ve narrowed down the expected results.
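As a rough illustration of that explore/hypothesize/repeat loop, here’s a minimal Python sketch. The search_products function is a hypothetical stand-in for whatever entry point the legacy system actually exposes; the probes are the kinds of valid, invalid, and extreme inputs you’d try.

```python
# A rough harness for an exploratory session: throw a mix of inputs at
# one part of the system, record everything that comes back, and use
# the results to form hypotheses about the rules.

def search_products(query):
    # Hypothetical stand-in for the real legacy entry point.
    if not isinstance(query, str):
        raise TypeError("query must be a string")
    return [p for p in ("widget", "gadget") if query.lower() in p]

# Valid, empty, shouty, oversized, and wrong-type inputs.
probes = ["widget", "", "WIDGET", "a" * 10_000, None, 42]

for probe in probes:
    label = repr(probe)[:20]  # keep the log readable for huge inputs
    try:
        print(f"input={label} -> output={search_products(probe)!r}")
    except Exception as exc:
        print(f"input={label} -> raised {type(exc).__name__}: {exc}")
```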

Take notes. This can be the start of your documentation if needed. You can write up your findings and then talk to the team (and I include stakeholders in the team here), see where the gaps in your knowledge are.

From there you may be able to start writing regression tests, if there are no tests present or not enough of them. All new functionality should have tests, and if there are obvious tests that can be added, add them when you can. Each sprint should have a section for updating tests as needed.
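Findings from a session can be pinned down as regression tests straight away. This is a hedged pytest sketch: calculate_discount and the expected values are hypothetical stand-ins for whatever behaviour you actually observed.

```python
import pytest

def calculate_discount(order_total, customer_tier):
    # Stand-in for the real legacy function; in practice you'd import
    # the code you're actually pinning down.
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.10 if customer_tier == "gold" else 0

# Each test pins one behaviour observed during exploration, so any
# future change that breaks it gets caught.
def test_standard_customer_gets_no_discount():
    assert calculate_discount(100, "standard") == 0

def test_gold_customer_gets_ten_percent():
    assert calculate_discount(100, "gold") == 10

def test_negative_total_is_rejected():
    with pytest.raises(ValueError):
        calculate_discount(-1, "standard")
```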

Worst case, the code isn’t up to the standards of your agency/developers, or it could be under-maintained. You may or may not be able to refactor it to your standards or liking; either way, you can add tests ASAP to build the quality in and start to improve as needed.
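One way to add tests to code you don’t fully understand yet is a characterization (or “golden master”) test: record what the system does today, then get alerted whenever that changes. A minimal sketch, assuming Python and a hypothetical legacy_report function:

```python
import json
from pathlib import Path

def legacy_report(month):
    # Hypothetical stand-in for under-maintained code whose exact
    # rules we don't fully understand yet.
    return {"month": month, "total": sum(range(month)) * 3}

BASELINE = Path("report_baseline.json")

def check_against_baseline():
    current = [legacy_report(m) for m in range(1, 13)]
    if not BASELINE.exists():
        # First run: record today's behaviour as the baseline.
        BASELINE.write_text(json.dumps(current))
        print("Baseline recorded.")
        return
    recorded = json.loads(BASELINE.read_text())
    assert current == recorded, "Behaviour changed from the recorded baseline"
    print("Behaviour unchanged.")

check_against_baseline()
```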

Legacy systems may be awkward, and I’ve focused on the most awkward cases here, but they can be interesting, and you can learn a lot from picking up an old system and seeing where you can run with it.