I was always taught that maximizing code coverage with unit tests is good. I also hear developers at big companies such as Microsoft say that they write more lines of testing code than executable code.
Now, is it really that great? Doesn't it sometimes feel like a complete waste of time whose only effect is to make maintenance more difficult?
For example, let's say I have a method DisplayBooks() which populates a list of books from a database. The product requirements say that if there are more than one hundred books in the store, only one hundred must be displayed.
So, with TDD, I should start by writing a unit test, BooksLimit(), which saves two hundred books in the database, calls DisplayBooks(), and does an Assert.AreEqual(100, DisplayedBooks.Count). Then I would run the test and watch it fail, and only then fix DisplayBooks() by setting the limit of results to 100.

Well, isn't it much easier to go directly to the third step and never write the BooksLimit() unit test at all? And isn't it more Agile, when the requirement changes from a limit of 100 books to 200, to change a single character, instead of changing the test, running it to check that it fails, changing the code, and running the tests again to check that they pass?
Note: let's assume that the code is fully documented. Otherwise, some may say, and they would be right, that writing full unit tests helps in understanding code which lacks documentation. In fact, a BooksLimit() unit test shows very clearly that there is a maximum number of books to display, and that this maximum is 100. Stepping into non-unit-tested code would be much more difficult, since such a limit may be implemented through for (int bookIndex = 0; bookIndex < 100; ... or foreach ... if (count >= 100) break;.
Well, isn't it much easier to go directly to the third step, and never write the BooksLimit() unit test at all?
Yes... If you don't spend any time writing tests, you'll spend less time writing tests. Your project might take longer overall, because you'll spend a lot of time debugging, but maybe that's easier to explain to your manager? If that's the case... get a new job! Testing is crucial to improving your confidence in your software.
Unittesting gives the most value when you have a lot of code. It's easy to debug a simple homework assignment using a few classes without unittesting. Once you get out in the world, and you're working in codebases of millions of lines - you're gonna need it. You simply can't single-step your debugger through everything. You simply can't understand everything. You need to know that the classes you're depending on work. You need to know when someone says "I'm just gonna make this change to the behavior... because I need it", but has forgotten that there are two hundred other uses that depend on that behavior. Unittesting helps prevent that.
With regard to making maintenance harder: NO WAY! I can't capitalize that enough.
If you're the only person that ever worked on your project, then yes, you might think that. But that's crazy talk! Try to get up to speed on a 30k line project without unittests. Try to add features that require significant changes to code without unittests. There's no confidence that you're not breaking implicit assumptions made by the other engineers. For a maintainer (or new developer on an existing project) unittests are key. I've leaned on unittests for documentation, for behavior, for assumptions, for telling me when I've broken something (that I thought was unrelated). Sometimes a poorly written API has poorly written tests and can be a nightmare to change, because the tests suck up all your time. Eventually you're going to want to refactor this code and fix that, but your users will thank you for that too - your API will be far easier to use because of it.
A note on coverage:
To me, it's not about 100% test coverage. 100% coverage doesn't find all the bugs; consider a function with two if statements:
// Will return a number less than or equal to 3
int Bar(bool cond1, bool cond2) {
  int b = 0;
  if (cond1) {
    b++;
  } else {
    b += 2;
  }
  if (cond2) {
    b += 2;
  } else {
    b++;
  }
  return b;
}
Now suppose I write a test that checks:
EXPECT_EQ(3, Bar(true, true));
EXPECT_EQ(3, Bar(false, false));
That's 100% coverage. That's also a function that doesn't meet the contract: Bar(false, true) fails, because it returns 4. So "complete coverage" is not the end goal.
Honestly, I would skip tests for BooksLimit(). It returns a constant, so it probably isn't worth the time to write them (and it should be tested when writing DisplayBooks()). I might be sad when someone decides to (incorrectly) calculate that limit from the shelf size, and it no longer satisfies our requirements. I've been burned by "not worth testing" before. Last year I wrote some code that I said to my coworker: "This class is mostly data, it doesn't need to be tested". It had a method. It had a bug. It went to production. It paged us in the middle of the night. I felt stupid. So I wrote the tests. And then I pondered long and hard about what code constitutes "not worth testing". There isn't much.
So, yes, you can skip some tests. 100% test coverage is great, but it doesn't magically mean your software is perfect. It all comes down to confidence in the face of change.
If I put class A, class B, and class C together, and I find something that doesn't work, do I want to spend time debugging all three? No. I want to know that A and B already met their contracts (via unittests) and that my new code in class C is probably broken. So I unittest it. How do I even know it's broken, if I don't unittest? By clicking some buttons and trying the new code? That's good, but not sufficient. Once your program scales up, it'll be impossible to rerun all your manual tests to check that everything works right. That's why people who unittest usually automate running their tests too. Tell me "Pass" or "Fail", don't tell me "the output is ...".
OK, gonna go write some more tests...