38. Test Coverage & Legacy Code

Today we’ll be talking about test coverage and working with a legacy code base, going from bad to good and knowing when to do it. Test coverage in various organizations is often used to ensure that the code you are working on is great and works as expected. However, some places may not know when to introduce test coverage. It can be confusing too. Because, just like any percentage, if you have 100% coverage, does that mean that you have good coverage? Have you covered all the branches, have you covered all the statements? And what do all these metrics mean? Stay tuned as we jump into when to do it and some of the major benefits of having code coverage, like sleep and keeping away those zombie bugs. All this and more inside today’s episode.

Key Points From This Episode:

  • Legacy Code: Understanding what it is and how to define it.
  • Knowing when is a good time to introduce coverage to a legacy code base.
  • The importance of discussion when it comes to adding tests.
  • Benefits to adding test coverage; getting sleep, and avoiding zombie bugs.
  • Various ways to add coverage; when should they be used?
  • Why sometimes scrapping existing implementation may be the better option.
  • When it’s better to add coverage to existing code instead.
  • The side effects of adding coverage; changing velocity.
  • Reframing the necessity of coverage to communicate it clearly to your team.
  • And much more!

Transcript for Episode 38. Test Coverage & Legacy Code

[0:00:01.9] MN: Hello and welcome to The Rabbit Hole, the definitive developer’s podcast in Fantabulous Chelsea, Manhattan. I’m your host, Michael Nunez. I have my co-host today –

[0:00:10.1] DA: Dave Anderson.

[0:00:11.0] MN: Our producer:

[0:00:11.8] WJ: William Jefferies.

[0:00:12.8] MN: Our regular guest:

[0:00:14.2] EG: Emmanuel Genard.

[0:00:15.3] MN: Today we’ll be talking about test coverage; going from bad to good and knowing when to do it. I’m sure test coverage in various organizations is often used to ensure that the code that we’re working on is awesome and works well as expected. But some places may not know when to introduce test coverage.

[0:00:37.3] DA: Yeah. It can be confusing too, because just like percentage itself, if you have 100% coverage, does that mean that you have good coverage? Like, have you covered all the branches, have you covered all the statements? What do all these metrics mean? You know, it’s confusing, Bobby.
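For readers following along, a quick sketch of the branch versus statement distinction Dave is pointing at. The function below is hypothetical; with a single test, every statement runs, yet one branch is never taken:

```javascript
// Hypothetical example: one test can execute every statement
// while still missing a branch.
function shippingCost(order) {
  let cost = 5;
  if (order.express) {
    cost += 10; // this statement only runs for express orders
  }
  return cost;
}

// A single call with express: true executes every statement above,
// so statement coverage reports 100%...
console.log(shippingCost({ express: true })); // 15

// ...but the implicit "else" path (express: false) was never taken,
// so branch coverage stays below 100% until we also exercise:
console.log(shippingCost({ express: false })); // 5
```

This is why a coverage tool that only reports statements can look healthier than the test suite actually is.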

[0:00:52.1] MN: Yeah. I mean, we can just jump right into the topic of knowing when to do it. Let’s talk about legacy code bases. Has anyone ever worked on a legacy code base that needed some coverage in it?

[0:01:08.8] EG: What would you define as Legacy code base? Like two, three, four, five years old?

[0:01:13.8] WJ: To me, if it’s not used and you wrote it, then it’s legacy to you.

[0:01:19.4] DA: Also if you wrote it six months ago, then it’s probably legacy code too.

[0:01:25.1] EG: Good, good. All right, all right.

[0:01:24.8] WJ: I’ve also heard some people say that to them, legacy code has to be untested. If it has tests, it’s not legacy.

[0:01:33.3] EG: That’s interesting.

[0:01:34.0] MN: To me, I think legacy code is the code you’re trying to replace by writing something else, but it still exists because it has to. Like once your next project happens, then you want to roll that one out. But you still need to maintain this legacy system.

[0:01:50.6] EG: So if I wasn’t trying to replace it, it’s not legacy?

[0:01:53.8] MN: Because there’s nothing in front of it that’s the future.

[0:01:58.2] EG: It is ever present, until some young code comes in to take its spot.

[0:02:02.9] MN: Yeah, young blood comes through and calls it legacy and just throws it off.

[0:02:07.8] WJ: The one I like best is the one where it belonged to someone else, because this is their legacy that they left for you.

[0:02:15.5] MN: That’s good, because it could be good legacy, or it could be bad legacy.

[0:02:19.3] EG: Yeah, semantics.

[0:02:19.9] DA: Right, where you no longer have the context. Like it’s kind of no longer supported by the original vendor or people who wrote it.

[0:02:29.3] MN: Have you guys ever worked with a legacy code that needed test, or that needed coverage before?

[0:02:35.4] EG: Yes, I have.

[0:02:37.3] MN: Oftentimes it can be painful, and sometimes it isn’t, but you’ve got to introduce it. When have you found good times to introduce coverage to a legacy code base?

[0:02:47.8] EG: If I had to make a change to that file, I’ve got to introduce tests, man. Because I don’t want to break anything. I’m hoping to understand what the file is, what this piece of code does and how it behaves. I can read it as much as I want to, but I don’t really understand it until I have tests that describe its behavior.

[0:03:08.9] DA: Yeah. If there is no test file, no spec file for it, then the hardest part can often be just writing that first file: making the new file and getting used to doing that. Because I think a lot of people feel intimidated about creating a new spec file for something that doesn’t exist or that they don’t fully understand. So if you get one test in there, then the next time you have to come back and write some code, you can write some better tests.

[0:03:37.6] MN: Right. As Emmanuel mentioned, if in the event you’re making changes to the code base then you want to introduce coverage to ensure that whatever you’re adding is exactly what you expect it to be.

[0:03:53.0] DA: Yeah. Although, I’ve definitely worked in applications where there’s just zero coverage and the political capital is not there to write the tests. If you don’t have the tests, then you need to make up for that with process. You need to do the QA and move very slowly and be nervous.

[0:04:11.8] EG: What do you mean by political capital? Is it the product person, or is it the other devs that you’re trying to say, that you would be trying to persuade or make the case for tests?

[0:04:22.0] DA: I think it would be like more on the business side, where people are happy.

[0:04:26.5] EG: So the other stakeholders.

[0:04:28.9] DA: They’re happy with what they have and they don’t see why they would need to invest, like the non-political capital of paying people to sort of refactor.

[0:04:37.9] WJ: At least, it’s usually the product or business folks who are reluctant and have a hard time seeing the value. Although I suppose, theoretically, it could be anybody who is opposed to writing tests that you would have to spend political capital to convince to let you write them.

[0:04:54.3] DA: Right, yeah.

[0:04:55.1] EG: I’m interested in this. What if you just then tell them? Say, “Oh, I’m going to add this new functionality,” and you just wrote some test for it?

[0:05:01.3] WJ: To me, tests are an implementation detail. If you’re a responsible engineer, then you should be writing tests as you implement features and you don’t need permission from anybody to do that.

[0:05:11.3] MN: I agree. I think that tests, as well as QA and any other regression checks, should also be factored into the velocity. But going back to spending the political capital, as Dave mentioned, it may be possible that the tests don’t bring any additional value to the code that already exists, right?

Then you could spend time writing test coverage for something that has effectively been tested without tests, just by being in production the entire time. It’s possible to kind of get away with skipping that. I wouldn’t suggest it, but if time is of the essence, there are drawbacks that have to be weighed, and there should be a discussion on when to write those tests.

[0:05:51.3] DA: Right, yeah.

[0:05:55.3] EG: I see what you mean, yeah.

[0:05:57.7] DA: I think a challenge too is that if you’re the lone gunman developer who’s going to write the tests, business and other devs be damned, then if someone breaks that test, who’s going to be the one who has to maintain it? So you do have to get buy-in from everybody.

[0:06:15.0] EG: I mean, couldn’t you just get buy-in from the other devs mainly? Because they’re the ones who would think it’s important. For instance, if there are no tests, there is probably no CI process that runs the tests, right?

[0:06:30.5] MN: Right.

[0:06:31.8] EG: So if CI keeps building, maybe you just don’t run your tests on CI, you run them on your local machine, you get other devs to say, “Hey, I’m going to add some tests to this. What do you think about it? Do you think this will help you?”

Basically if you’re the only person feeling the pain of no tests it’s probably going to be hard. But there’s a good chance you’re not, I feel like, right?

[0:06:53.2] MN: I mean, there are benefits of coverage, right? Whether it’s legacy or it’s not. I have a couple of examples of mine, but does anyone want to introduce what are some of the benefits to test coverage?

[0:07:02.9] DA: Yeah, I mean, an obvious one is that you have an assurance that the thing is working as you expect without actually having to click on it every single time that you open it up. If you’re starting from zero, you can get that with a feature test that tests the good thing that you want to happen.

I know that we talked a lot about unit tests in past episodes and how valuable they are, often more valuable than the feature tests or end-to-end tests. But if you’re coming from zero, then having a feature test could be a good way to start.

[0:07:34.6] WJ: Yeah, I think that code coverage is only valuable if you actually use it, which is why continuous integration is so valuable, or continuous delivery; things that will stop you from shipping code that breaks your tests.

But I would also add, assuming you’re doing that, assuming you’re actually using the code coverage, another thing that’s really valuable about it is that you are less likely to end up with those late night failures, where something gets paged.

[0:07:59.6] MN: Yeah, PagerDuty. It’s not a thing I like to subscribe to. I don’t know anyone who likes to subscribe to PagerDuty. Unless you get paid extra, I guess.

[0:08:08.2] WJ: Unsubscribe.

[0:08:08.9] MN: Yeah, no. Subscribe to the podcast. Unsubscribe from PagerDuty.

[0:08:12.7] DA: What if you get a free pager? That would be hot, you know.

[0:08:17.7] MN: What year is this?

[0:08:18.3] WJ: Hot? I guess you go to the bar and it’s like, “Oh hey, check out that guy’s pager.”

[0:08:24.2] DA: Exactly.

[0:08:24.7] MN: Yeah, those were the worst. That was not cool at all. But I do enjoy sleep, and I think that code coverage definitely helps you sleep by giving you confidence that the code you have out there is awesome.

[0:08:37.5] DA: Yeah, and I guess if you have been woken up in the middle of the night by something, then that’s a good candidate for a place to start putting that coverage if it’s missing: making new tests and making sure you’re assured that that case is not going to happen again.

[0:08:53.3] WJ: Also not having to fix the same bug twice. That’s one thing that really bothers me is when I have to go and fix that same bug. It is back.

[0:09:01.1] MN: Again?

[0:09:02.4] DA: Oh yeah.

[0:09:03.4] WJ: It is horrible. With code coverage, you stop it from coming back. No more zombie bugs.

[0:09:07.6] MN: Yeah, those are the worst. Can’t stand those. Yeah, just like whether it’s sleep, or it’s just like confidence in your code base and having to get rid of zombie bugs that keep coming back I think is a definite reason why I personally love having code coverage.

Trying to hit that 100% mark is the goal. It’s often improbable, right? If someone tells you, “The code base we have is at 100%,” it’s like, “Really? You really got it to 100%?” You have to question it, because it’s so rare.

[0:09:45.0] EG: The code base I am working on is at 100%.

[0:09:46.1] MN: It’s at 100%? Really?

[0:09:48.8] WJ: That makes me a little bit nervous actually. That makes me wonder if they are more concerned about hitting the 100% number than they are about making sure that the most important execution paths are covered. Because it could be that that line is being exercised in one test, but actually it needs to handle three different data types. All three of them are equally important and you’re only testing one.

[0:10:14.0] DA: They’re all noble.

[0:10:15.2] WJ: You’ve instead spent the time that you could’ve spent making sure that the key paths were correct, catching all of these really minor code paths that no one really cares about.
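William’s point can be made concrete. In the hypothetical helper below, a single test gives 100% line coverage, while two equally important input types go unasserted:

```javascript
// Hypothetical: one line, three meaningfully different inputs.
function combine(a, b) {
  return a + b; // fully "covered" by any single test...
}

// Line coverage is satisfied by this one case:
console.log(combine(1, 2)); // 3

// ...yet the equally important string and mixed cases were never
// asserted on, and they behave very differently:
console.log(combine('1', '2')); // "12"
console.log(combine(1, '2'));   // "12"
```

A coverage report would call this function done after the first call; only a human deciding which inputs matter catches the rest.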

[0:10:28.6] EG: Yeah. I think that’s a really valid point. The quality of your tests is not something you can tell from code coverage. I’ve been sort of questioning that myself, in terms of: do my tests really exercise the problem I’m trying to solve, including the regular cases and the edge cases? Code coverage doesn’t tell you that.

I mean, when using all these metrics, code coverage being one of them, how do you know when to stop? That’s the question I have. How do you know, “Okay, I have covered enough”?

[0:11:04.8] WJ: I think if you are TDD’ing then as soon as the feature is done and you’re finished refactoring. You’ve written all the tests that were necessary to create the feature.

[0:11:14.6] MN: Right. I think William brought up the point of when the bug gets introduced. I mean, sure you may have missed a particular test that exposes that bug, but you’d never really know until it surfaces, which may happen.

[0:11:31.3] WJ: Then you write the test then.

[0:11:31.7] MN: You write the test for that. Then for whatever new thing that comes up, you make sure that you write tests that cover everything. Test coverage is awesome. You should definitely do it. Okay, so suppose you have some coverage; you’re on a code base now. You’re at a client, you’re at a job, and there is 50% code coverage. Let’s just say that, 50% flat.

You need to introduce tests for the new features that you’re building, but you also need to go back and add some coverage to any other file that you touch. Let’s try to be realistic and not like, “Okay, your job, William, is to go into the code base and write tests for all the things. Go.” That’s highly unlikely. But if you’re adding new features and you’re touching this thing that needs coverage, you might as well just start adding coverage.

[0:12:22.9] MN: Be a good Boy Scout. Leave it better than you found it.

[0:12:24.9] MN: There you go. There you go. Awesome.

[0:12:26.6] DA: That does happen though. Like, you may have one person that is tasked on a project to write coverage for the legacy code that’s missing tests.

[0:12:36.4] MN: Right. I mean, I find that to be – you’re not writing new stuff. So it’s like sad.

[0:12:43.9] DA: I agree. I feel like it does isolate the context, and if there’s just one person or two people writing tests for all the things, then you’re not getting the benefits of the good Boy Scout aspect. You may be writing tests that you don’t actually need, just because you want to get that artificial 100% branch or statement or path coverage, or whatever you’re going for.

[0:13:07.4] MN: I myself have run into two different ways of going into a code base and increasing coverage. One would be: if you’re on a file that needs coverage, you scrap everything and then you unit test the entire thing to make sure that everything works well.

[0:13:23.5] DA: Just TDD’ing it from the beginning.

[0:13:24.7] MN: Yeah, TDD’ing from the beginning, right? Or you just write coverage that covers the current implementation. There are pros and cons to both those things again.

[0:13:35.2] WJ: Another route you could do is write the tests with the code in place, and then delete the code and rewrite it.

[0:13:42.3] MN: Right. I think we’ve spoken about this in the TDD episode. But one of the perks of ensuring that you delete all of it and then actually write the code again is that the design of the code ends up more thoughtful.

[0:14:03.6] DA: Right. It’s often frustrating to test with legacy code that didn’t have a test to begin with, because you have to mock the imports, or do some global messiness on classes that you might not want to do.

[0:14:19.7] MN: Right. One of the cons that I’ve seen of actually scrapping everything is potentially missing out on a business requirement that wasn’t really visible in the code. I think if you delete everything and then rewrite everything, you may miss something. I’m not sure exactly what or where, but that’s the one thing that racks my brain.

There may be a business rule that you delete because you wrote tests that you thought would cover everything, but you may miss something in the long run. Does anyone have any personal experience with that?

[0:14:59.6] WJ: Yeah, you absolutely need to test it in the browser and make sure that it still works.

[0:15:05.3] DA: I guess that’s what version control is for. You can have some comfort in deleting code, because it’s not really deleted. It never really goes anywhere. It’s still there.

[0:15:17.0] MN: When in doubt, revert it out.

[0:15:20.2] DA: Yeah, that’s nice actually. Like when you’re doing a refactor and you delete a bunch of stuff and then you get to a point as like, “I just need to abort.” Like, “I’ve flown too close to the sun. My hubris is too great. I’ll be less ambitious next time.”

[0:15:34.7] MN: Right. But we agree that scrapping everything and writing the unit tests, or as William mentioned, writing the unit tests and then rewriting the entire implementation, is probably the better option of the two, rather than just writing tests around the existing implementation.

[0:15:50.6] WJ: I’m assuming you’re optimizing for high-quality design and comfort that your tests actually cover the implementation.

[0:15:58.3] DA: Yeah. You often end up with better code, because you have dependency injection over like global things and all that good stuff.

[0:16:08.1] MN: Yeah. So imagine your whole team now adding coverage that you weren’t adding before. One of the side effects of that, as we mentioned before, would probably be velocity, right? Because you’ll probably slow down on velocity, because your team has now made the unanimous decision that you will all test new features and take the time, be a good Boy Scout, be a good Girl Scout, fix those previous files and make sure that they have tests.

But velocity will take a hit, and I think that’s something that the entire team should know. I was going to say organization, but yeah, if your team is doing that, the organization needs to know: “Hey, we’re doing this for the benefit of sleep and the other good things that we mentioned before, to ensure that everything works.”

[0:16:58.0] EG: I mean, I think one of the things I would stress to any other stakeholder is, “Remember that time when production went down at 2 AM? Or remember that time when this bug really messed things up for our users and we had to scramble like crazy to fix it? We would like to have less of those times.”

So I think framing it at the point of, “Hey, this is really for you and your job and for the people using our app.” It’s going to be a negotiation, I’m sure, but if it’s really something that’s better for the application you’re working on and the people using it, I would stress that more than anything else, because it’s for them.

[0:17:37.0] WJ: That’s the perfect time to do it: right after there is a huge production outage. Once you’re done cleaning up after it, that’s the time. Go to the stakeholders and say, “Hey, let’s make sure that never happens again. Let’s get some automated tests.”

[0:17:49.8] MN: Yeah. Either experienced pain, like 2 AM calls, or money. Like, “Oh, that bug made us lose X amount of dollars.” People would be like, “Oh, if we had just paid upfront by spending an hour or two extra on code coverage, that would have been very helpful.” But yeah, that was a great example, Emmanuel.

[0:18:10.9] WJ: You’ll never completely eliminate the possibility of an outage, but you can reduce it dramatically.

[0:18:15.3] DA: It is good at like galvanizing people to a cause. If everyone is feeling the pain, then it’s easier to convince them to get onboard and join the push for happier code and test coverage.

[0:18:30.9] WJ: It gives you a dollar value.

[0:18:33.8] DA: Yeah, like assigning a dollar value to every minute that you’re out, especially if you’re looking at e-commerce or what have you. Anything that has a stream of money, you can put a dollar value on every minute that the site is out. That’s really powerful.

[0:18:50.9] WJ: Yeah, especially when they’re comparing it to features and saying, “Well, this feature is going to make us money and these refactors that you do don’t make us any money.”

[0:18:58.9] DA: Yeah, it’s true.

[0:19:00.5] MN: We want to make sure that everything works well, and then you make money in the long run.

[0:19:04.9] WJ: It doesn’t make us money, but it stopped us from losing this much money that we just lost.

[0:19:08.1] MN: There you go.

[0:19:09.3] DA: Right. Also the incalculable, like brand equity and trust that you lose from your clients and – Yeah.

[0:19:17.2] EG: Your site being down.

[0:19:19.9] WJ: And turnover from frustrated developers.

[0:19:23.6] MN: That is true. So, we just had a lengthy conversation on test coverage and why you should introduce it to your organization, if you haven’t already and why you should keep it up. As we mentioned before, it’s very good for your health, for your sleep, for your confidence in your code base and for the work that you do.

[0:19:44.7] EG: Don’t worry about 100%.

[0:19:46.7] MN: Yeah, just get it up. If it’s at 100%, make sure that the quality of the tests is there, which, I’m glad to say, we have a couple of episodes that talk about: testing and the quality of those tests.

[0:19:59.3] DA: Part of the testing series of the Rabbit Hole.

[0:20:03.2] MN: Exactly. Do we have any teach and learns today?

[0:20:07.2] DA: Yeah. I’ve been learning a lot lately. When we were talking earlier about planning this episode, we were talking about object destructuring and a blog post that you had, and I read that. I learned that you can unpack deeply nested objects in JavaScript, which is pretty awesome.

I really like object destructuring as it is. It makes for very clean syntax, especially when you’re unpacking props for React or what have you. But doing deeply nested stuff, that’s sweet. I like that.
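For listeners who want to see it, deeply nested destructuring looks roughly like this (the object shape here is invented for illustration):

```javascript
// Unpacking a deeply nested object in one statement.
const response = {
  user: {
    name: 'Ada',
    address: {
      city: 'London',
      geo: { lat: 51.5, lng: -0.1 },
    },
  },
};

// Pull `name`, `city`, `lat`, and `lng` out of three levels of
// nesting; `lng = 0` supplies a default if the key were missing.
const {
  user: {
    name,
    address: {
      city,
      geo: { lat, lng = 0 },
    },
  },
} = response;

console.log(name, city, lat, lng); // Ada London 51.5 -0.1
```

Note that a missing intermediate object (say, no `address` key) would still throw; defaults only guard the leaves you give them to.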

[0:20:36.2] MN: I ended up writing that blog post, because I knew I would forget how to do that in JavaScript. But then I was asking one of our colleagues to look it over and to check on my English. The person was like, “Hey, I actually didn’t know you can do that, and you can fix a couple English here.” I was like, “Oh, you didn’t know? Okay, well then I have to share with everyone now.”

[0:20:56.9] DA: Yeah, thank you for sharing. That and the other thing, like Object.assign and the spread operator. I got it now, but when I first started using it, I was like, “Oh, what’s going on with my brain?”

[0:21:08.5] MN: Yeah, it’s pretty crazy. Yeah, I think I need to write a blog post for the spread operator and Object.assign, because I often forget those things too.
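As a quick refresher on the two merge styles mentioned here, both Object.assign and object spread do a shallow, left-to-right merge where later sources win:

```javascript
const defaults = { theme: 'light', fontSize: 14 };
const prefs = { theme: 'dark' };

// Object.assign copies own enumerable properties from each source,
// left to right, into the target ({}), so later sources win.
const merged1 = Object.assign({}, defaults, prefs);

// Object spread performs the same shallow merge with terser syntax.
const merged2 = { ...defaults, ...prefs };

console.log(merged1); // { theme: 'dark', fontSize: 14 }
console.log(merged2); // { theme: 'dark', fontSize: 14 }
```

Both copies are shallow: nested objects are shared by reference, not cloned.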

[0:21:16.0] DA: Oh yeah, do it. The other thing that I have been learning a lot more about is server-side rendering and what considerations are there. What do lifecycle events actually mean in React? I think the general consensus that I have seen is, if you’re going to be doing some data fetching, or just anything in general: componentWillMount and render are the two things that fire on the server side, and it seems like componentWillMount is just not great. People don’t really like it. So choose componentDidMount instead, and then be happier.

[0:21:51.8] MN: Yeah, because I think componentDidMount happens after render, which means that everything is already in place and then you can load the props up successfully. Is that along the right lines?

[0:22:03.0] DA: Yeah, exactly. Yeah. I was having a bunch of problems and then I refactored and moved everything in componentDidMount and it was easy. So there you go.
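To illustrate the ordering being described, here is a plain-JavaScript simulation of the class-component mount sequence. No React is imported; the lifecycle methods are invoked by hand purely to show why componentDidMount runs after the first render:

```javascript
// Simulated mount sequence mirroring React's class lifecycle order:
// componentWillMount -> render -> componentDidMount.
const calls = [];

const component = {
  componentWillMount() {
    // Fires before render, and also runs during server-side
    // rendering; an async fetch started here won't have resolved
    // by the time render is called.
    calls.push('componentWillMount');
  },
  render() {
    calls.push('render');
    return '<div>loading...</div>';
  },
  componentDidMount() {
    // Fires after the first render, on the client only, so the DOM
    // exists and it's safe to fetch data and trigger a re-render.
    calls.push('componentDidMount');
  },
};

// Simulate a client-side mount:
component.componentWillMount();
component.render();
component.componentDidMount();

console.log(calls); // ['componentWillMount', 'render', 'componentDidMount']
```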

[0:22:13.0] MN: Awesome. Yeah, I guess that about wraps up the episode. I’d like to thank my co-host. Dave, thank you.

[0:22:20.3] DA: Thanks, man.

[0:22:21.2] MN: Our producer, William thank you.

[0:22:23.4] WJ: Absolutely.

[0:22:24.3] MN: And our regular guest, Emmanuel. Always good to have you.

[0:22:27.0] EG: Always a pleasure.

[0:22:28.1] MN: Feel free to hit us up at twitter.com/radiofreerabbit. This is The Rabbit Hole. We’ll see you next time.

Links and Resources:

The Rabbit Hole on Twitter

Test Coverage

Destructuring Objects in JavaScript