Is there a standard way of measuring Defect Density? Most sites online state that it should be measured as:
number of defects discovered / the code size
My questions are motivated by our ultimate goals:

(a) to compare our defect density against industry standards;
(b) to identify modules that are fragile and buggier, and therefore deserve more attention;
(c) to use a consistent metric to draw a trend line demonstrating improvement in a module's quality over time.
Defect Density is the number of confirmed defects detected in a piece of software or a module during a defined period of development or operation, divided by the size of that software or module. Here "defects" means defects that are confirmed and agreed upon, not merely reported.
Defect Density = Defects / Unit size
The question that may arise here is: what does this unit size actually mean? Size is typically counted either in Lines of Code (LOC/KLOC) or in Function Points. As a good coder, you should be confident that there is no duplication in your code that could bloat its size.
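As a quick illustration (not part of the original answer), here is a minimal Python sketch of the calculation. The function names (`count_loc`, `defect_density`) and the choice to count non-blank physical lines as LOC are assumptions made for the example, not a standard:

```python
def count_loc(path):
    """Count non-blank physical lines in a source file (a simple LOC proxy)."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

def defect_density(defects, loc):
    """Return confirmed defects per KLOC (thousand lines of code)."""
    if loc <= 0:
        raise ValueError("LOC must be positive")
    return defects / (loc / 1000.0)
```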
Example: Suppose 10 bugs are found in 1 KLOC (1,000 lines of code). The defect density is therefore 10 defects/KLOC.
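To make the arithmetic concrete, a self-contained snippet reproducing this example might look like the following (variable names are illustrative only):

```python
defects = 10                      # confirmed defects found in the module
loc = 1000                        # module size: 1 KLOC
dd = defects / (loc / 1000.0)     # defects per KLOC
print(dd)                         # prints 10.0
```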