Unit testing accessors (getters and setters)

takeshin · Feb 14, 2011 · Viewed 10.5k times

Given the following methods:

public function setFoo($foo) {
    $this->_foo = $foo;
    return $this;
}

public function getFoo() {
    return $this->_foo;
}

Assuming they may become more complex in the future:

  • How would you write unit tests for those methods?
  • Just one test method?
  • Should I skip those tests?
  • What about code coverage?
  • How about @covers annotation?
  • Maybe some universal test method to implement in the abstract test case?

(I use Netbeans 7)

This seems like a waste of time, but I wouldn't mind if the IDE generated those test methods automatically.

To quote from a comment on Sebastian Bergmann's blog:

(it's like testing getters and setters -- fail!). In any case, if they were to fail, wouldn't the methods that depend on them fail?

So, what about the code coverage?

Answer

DanielaWaranie · Jun 30, 2013

If you do TDD, you should write tests for getters and setters, too. Do not write a single line of code without a test for it, even if your code is very simple.

It is a kind of religious war whether to test a getter and setter in tandem, or to isolate each one by accessing protected class members through your unit test framework's capabilities. As a black-box tester, I prefer to tie my unit test code to the public API instead of tying it to concrete implementation details. I expect change. I want to encourage developers to refactor existing code, and class internals should not affect "external code" (unit tests in this case). I don't want to break unit tests when internals change; I want them to break when the public API or the behavior changes.

Admittedly, a failing tandem test does not pinpoint the one and only source of the problem: I have to look at the getter AND the setter to figure out what caused it. But most of the time your getter is very simple (fewer than 5 lines of code, e.g. a return and an optional null check with an exception), so checking it first is no big deal and not time-consuming. And checking the happy path of the setter is usually only a little more complex (even if you have some validation checks).
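The tandem approach can be sketched as a single test that goes through the public API only. This is a minimal sketch: the `Foo` class wrapping the accessors from the question and the throw-based checks (standing in for PHPUnit's `assertSame()`) are assumptions for illustration.

```php
<?php

// Hypothetical class wrapping the accessors from the question.
class Foo
{
    private $_foo;

    public function setFoo($foo)
    {
        $this->_foo = $foo;
        return $this;
    }

    public function getFoo()
    {
        return $this->_foo;
    }
}

// Tandem test: exercise setter and getter together through the
// public API, without touching the protected/private member.
// In a PHPUnit test method this would use $this->assertSame().
function testFooAccessorsInTandem(): void
{
    $foo = new Foo();

    // The fluent interface should return the object itself.
    if ($foo->setFoo('bar') !== $foo) {
        throw new RuntimeException('setFoo() should return $this');
    }

    // The getter should return exactly what the setter stored.
    if ($foo->getFoo() !== 'bar') {
        throw new RuntimeException('getFoo() should return the stored value');
    }
}
```

Because the test never reads `$_foo` directly, it keeps passing when the internals change (renamed property, lazy initialization) as long as the public behavior is stable.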

Try to isolate your test cases: write a test for a SUT (subject under test) that validates its correctness without relying on other methods (apart from the getter/setter tandem above). The more you isolate the test, the more precisely your tests pinpoint the problem.

Depending on your test strategy, you may want to cover the happy path only (pragmatic programmer), or the sad paths too. I prefer to cover all execution paths. When I think I have discovered all execution paths, I check code coverage to identify dead code (not to verify that every execution path is covered -- 100% code coverage is a misleading indicator).
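Once a setter grows validation (as the question anticipates), covering both the happy and the sad path might look like this. The `Temperature` class and its numeric-only rule are invented for illustration:

```php
<?php

// Hypothetical setter that grew a validation check, as anticipated
// in the question ("they may be changed to be more complex").
class Temperature
{
    private $_celsius;

    public function setCelsius($celsius)
    {
        if (!is_numeric($celsius)) {
            throw new InvalidArgumentException('Celsius must be numeric');
        }
        $this->_celsius = $celsius;
        return $this;
    }

    public function getCelsius()
    {
        return $this->_celsius;
    }
}

// Happy path: valid input is stored and returned unchanged.
function testSetCelsiusHappyPath(): void
{
    $t = new Temperature();
    $t->setCelsius(21.5);
    if ($t->getCelsius() !== 21.5) {
        throw new RuntimeException('happy path failed');
    }
}

// Sad path: invalid input must raise the documented exception.
// In PHPUnit this would use $this->expectException().
function testSetCelsiusSadPath(): void
{
    $t = new Temperature();
    try {
        $t->setCelsius('warm');
    } catch (InvalidArgumentException $e) {
        return; // expected
    }
    throw new RuntimeException('sad path failed: no exception thrown');
}
```

With both paths tested, the coverage report can then be used as the answer suggests: to hunt for dead code rather than to chase a coverage percentage.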

It is best practice for black-box testers to run PHPUnit in strict mode and to use @covers annotations to hide collateral coverage.
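A sketch of what that looks like: with @covers, coverage is only recorded for the methods listed in the annotation, so code executed incidentally elsewhere ("collateral coverage") is hidden from the report. The tiny `TestCase` stand-in below exists only so the sketch runs without PHPUnit installed; in a real suite you would extend `PHPUnit\Framework\TestCase` instead.

```php
<?php

// Stand-in so this sketch is self-contained; a real test class
// extends PHPUnit\Framework\TestCase instead.
abstract class TestCase
{
    protected function assertSame($expected, $actual): void
    {
        if ($expected !== $actual) {
            throw new RuntimeException('assertSame failed');
        }
    }
}

// Class under test, wrapping the accessors from the question.
class Foo
{
    private $_foo;

    public function setFoo($foo)
    {
        $this->_foo = $foo;
        return $this;
    }

    public function getFoo()
    {
        return $this->_foo;
    }
}

class FooTest extends TestCase
{
    /**
     * Only lines inside Foo::setFoo() and Foo::getFoo() are counted
     * as covered by this test; anything else executed along the way
     * is ignored, keeping collateral coverage out of the report.
     *
     * @covers Foo::setFoo
     * @covers Foo::getFoo
     */
    public function testFooAccessors(): void
    {
        $foo = new Foo();
        $this->assertSame($foo, $foo->setFoo('bar'));
        $this->assertSame('bar', $foo->getFoo());
    }
}
```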

When you write unit tests, your tests for class A should execute independently of class B. So your unit tests for class A should not call (or cover) methods of class B.

If you want to identify obsolete getters/setters and other "dead" methods (which are not used by production code), use static code analysis for that. The metric you are interested in is called afferent coupling at method level (MethodCa). Unfortunately this metric (Ca) is not available at method level in PHP Depend (see: http://pdepend.org/documentation/software-metrics/index.html and http://pdepend.org/documentation/software-metrics/afferent-coupling.html). If you really need it, feel free to contribute it to PHP Depend. An option to exclude calls from the same class would be helpful to get a result without "collateral" calls.

If you identify a "dead" method, try to figure out whether it is meant to be used in the near future (e.g. as the counterpart of another method that has a @deprecated annotation); otherwise remove it. If it is used in the same class only, make it private/protected. Do not apply this rule to library code.

Plan B: If you have acceptance tests (integration tests, regression tests, etc.), you can run those tests without running the unit tests at the same time and without PHPUnit's strict mode. This can produce a code coverage result very similar to one obtained by analysing your production code. But in most cases your non-unit tests are not as strong as your unit tests, so it depends on your discipline whether this plan B is "equal enough" to production use to yield a meaningful result.

Further reading:

  • Book: Pragmatic Programmer
  • Book: Clean Code