In "Two Mistakes and Error-Free Software: A Confession," Robert Glass asks:
how often do missing logic and combinatoric considerations actually happen in the software real world?
and answers with his own observation:
35% of the errors involved omission of required logic
40% could have been found only if the right combination of segments had been executed.
In other words, in this case, the Test Coverage Analyzer could have helped find only 25% of the errors!
In another study of his:
60% were cases of omitted logic. Another 23% were cases of failure to reset data.
the most persistent and therefore most troublesome kinds of errors are the same ones the Test Coverage Analyzer couldn’t help us with!
This is some of the best support I've found for questioning managers or others who blindly push for higher code coverage through more testing, and especially automated testing.
Always ask: Where are our customers finding bugs?
In the already covered code or the uncovered code?
With good defect tracking and source control systems, fix check-ins can be linked to defects, so you can see how many fixes land in areas not covered by tests versus areas that are covered.
The few times I've seen people do this analysis (even if only informally), they find (not surprisingly to me) that most of the customer-reported defects are in already covered code! So why would a test professional think that spending their time covering more code will help their customers encounter fewer defects?
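That analysis can be done quite simply once fixes are linked to coverage data. Here is a minimal sketch in Python; the coverage report and fix records are hypothetical stand-ins for what your coverage tool and defect tracker would actually provide:

```python
# Classify bug fixes by whether the fixed lines were already covered by tests.
# The data below is illustrative, not from any real project.

# Hypothetical coverage report: file -> set of line numbers executed by tests
coverage = {
    "billing.py": {1, 2, 3, 4, 10, 11},
    "parser.py": {5, 6, 7},
}

# Hypothetical defect fixes: each links a defect to the lines its fix changed
fixes = [
    {"defect": "BUG-101", "file": "billing.py", "lines": {3, 4}},
    {"defect": "BUG-102", "file": "billing.py", "lines": {20, 21}},
    {"defect": "BUG-103", "file": "parser.py", "lines": {6}},
]

def classify(fixes, coverage):
    """Count fixes that touch covered code vs. uncovered code."""
    in_covered = in_uncovered = 0
    for fix in fixes:
        covered_lines = coverage.get(fix["file"], set())
        if fix["lines"] & covered_lines:
            in_covered += 1
        else:
            in_uncovered += 1
    return in_covered, in_uncovered

covered, uncovered = classify(fixes, coverage)
print(f"fixes in covered code: {covered}, in uncovered code: {uncovered}")
```

If most fixes fall in the "covered" bucket, that is exactly the pattern described above: more coverage of the same kind would not have prevented those defects.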
Testers don't spend enough time choosing the most appropriate method and technique for their problem and situation. Too often, untrained testers who know only one technique or tool (or perhaps a handful) use what they know, whether or not it is applicable.
My current focus is on state coverage, as opposed to code coverage. States can capture the combinatorics, and, combined with negative testing, can give indications of missing logic.
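To illustrate the idea, here is a minimal sketch using a toy state machine (the workflow states and events are hypothetical examples, not from the original text). Enumerating every state/event combination, including the "negative" pairs where no transition is defined, surfaces exactly the kind of omitted logic that code coverage alone cannot:

```python
from itertools import product

# Toy document-workflow state machine (hypothetical example).
STATES = ["draft", "review", "published"]
EVENTS = ["submit", "approve", "reject", "archive"]

TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
    # ("published", "archive") was never defined -- an omission
    # that this enumeration will surface.
}

def step(state, event):
    """Return the next state, or None if no transition is defined."""
    return TRANSITIONS.get((state, event))

# State coverage: exercise every (state, event) combination, including
# negative cases, and list the pairs the logic does not handle.
unhandled = [(s, e) for s, e in product(STATES, EVENTS)
             if step(s, e) is None]

for pair in unhandled:
    print("no transition defined for", pair)
```

Each unhandled pair forces an explicit decision: is it an intentionally rejected input, or missing logic? A code coverage tool would report 100% coverage of `step` after a single valid transition, while saying nothing about the nine combinations it never handles.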