How would you measure code "quality" across a large project

askheaves · Aug 30, 2009 · Viewed 10k times

I'm working on a fairly large project, a few years in the making, at a pretty large company, and I'm taking on the task of driving toward better overall code quality.

I was wondering what kind of metrics you would use to measure quality and complexity in this context. I'm not looking for absolute measures, but a series of items which could be improved over time. Given that this is a bit of a macro-operation across hundreds of projects (I've seen some questions asked about much smaller projects), I'm looking for something more automatable and holistic.

So far, I have a list that looks like this:

  • Code coverage percentage during full-functional tests
  • Recurrence of BVT failures
  • Dependency graph/score, based on some tool like nDepend
  • Number of build warnings
  • Number of FxCop/StyleCop warnings found/suppressed
  • Number of "catch" statements
  • Number of manual deployment steps
  • Number of projects
  • Percentage of code/projects that is "dead", i.e. not referenced anywhere
  • Number of WTF's during code reviews
  • Total lines of code, maybe broken down by tier
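Several of the items above can be gathered by a simple scanner run over the tree in a nightly build. As a minimal sketch (the function name, the regexes, and the sample C# snippet are my own illustration; real counts would come from a parser or from tools like NDepend), here is how a few per-file counts could be approximated textually:

```python
import re

def simple_metrics(source: str) -> dict:
    """Rough textual counts of a few list items for one C# source file.

    These are approximations, not parser-accurate numbers; they are meant
    to be tracked as trends over time rather than absolute measures.
    """
    lines = source.splitlines()
    # Total lines of code: non-blank lines that aren't pure // comments
    code_lines = [l for l in lines
                  if l.strip() and not l.strip().startswith("//")]
    return {
        "loc": len(code_lines),
        # Number of "catch" statements
        "catches": len(re.findall(r"\bcatch\b", source)),
        # FxCop/StyleCop suppressions
        "suppressions": len(re.findall(r"\[SuppressMessage", source)),
    }

sample = """\
using System;
class Demo {
    [SuppressMessage("Microsoft.Design", "CA1031")]
    void Run() {
        try { Console.WriteLine("hi"); }
        catch (Exception) { } // swallowed
    }
}
"""
print(simple_metrics(sample))  # {'loc': 8, 'catches': 1, 'suppressions': 1}
```

Run per project and logged to a database, counts like these give the trend lines the question asks for, without requiring absolute thresholds.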

Answer

Diomidis Spinellis · Aug 30, 2009

You should organize your work around the six major software quality characteristics defined by ISO/IEC 9126: functionality, reliability, usability, efficiency, maintainability, and portability. I've put a diagram online that describes these characteristics. Then, for each characteristic, decide on the most important metrics you want, and are able, to track. For example, some metrics, like those of Chidamber and Kemerer, are suitable for object-oriented software, while others, like cyclomatic complexity, are more general-purpose.
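To make the general-purpose case concrete, cyclomatic complexity can be approximated for C-family code by counting decision points. The sketch below uses the common token-counting heuristic (1 + the number of branch keywords and short-circuit operators), not the full control-flow-graph definition, and the sample function body is my own example:

```python
import re

# Decision points in C-family code: branch keywords, short-circuit
# operators, and the ternary "?". Token counting is a heuristic that
# usually agrees with the graph-based definition for structured code.
DECISION = re.compile(r"\b(if|for|while|case|catch)\b|&&|\|\||\?")

def cyclomatic_complexity(function_body: str) -> int:
    """1 + number of decision points found in the function body."""
    return 1 + len(DECISION.findall(function_body))

body = """
if (x > 0 && y > 0) {
    for (int i = 0; i < n; i++) {
        if (a[i] == x) return i;
    }
}
"""
print(cyclomatic_complexity(body))  # 5: two ifs, one &&, one for, plus 1
```

Tracking the distribution of this number per method (say, flagging anything above 10) fits the "series of items which could be improved over time" the question asks for.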