At the February 3 SeaSpin meeting, Eric Rimby presented a discussion-provoking thought experiment around what he termed “Test Points” (analogous to Story Points). I didn’t quite have time to snapshot his slide, but I believe he ultimately defined them roughly as:
“The number of functional test cases adequate to achieve complete coverage of boundary values and logical branches strongly correlates with the effort to develop.”
He counts the functional test cases specified during backlog grooming for a user story as that story’s “test points.”
While I’ve always had issues with counting test cases (e.g. “small” versus “large” tests, and Response to How many test cases by James Christie), he at least restricted the context in which the counted test cases are created. He presumed a team trained in a particular test methodology for deriving boundary values and logical branches (I suggested Kaner’s Domain Testing Workbook), and that team members compare notes over time. Another audience member afterwards noted that Pex (or similar tools) could probably auto-generate many of these cases. Just as a scrum team’s story-point estimates should become more uniform over time, Eric expects the number of functional test cases estimated by various team members for a story to converge similarly.
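To make the counting concrete, here is a minimal sketch of classic three-value boundary analysis. This is my own hypothetical illustration (the story, dollar figures, and function names are invented, not from Eric’s talk): given a story’s numeric boundaries, enumerate the candidate test inputs around each boundary and count them as “test points.”

```python
def boundary_value_cases(boundaries):
    """Return candidate test inputs around each boundary: the boundary
    value itself plus its immediate neighbors (3-value analysis)."""
    cases = set()
    for b in boundaries:
        cases.update({b - 1, b, b + 1})
    return sorted(cases)

# Hypothetical story: "orders from $50 to $500 get a discount".
cases = boundary_value_cases([50, 500])
print(cases)       # [49, 50, 51, 499, 500, 501]
print(len(cases))  # 6 "test points" for this story
```

A real grooming session would add cases for logical branches (discount applied vs. not) and invalid inputs, so the count here is a floor, not the full estimate.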
While I disagree with many of the suppositions he made during his talk, I agree that the number of functional test cases estimated for a story might be a useful thing to track. Whether it correlates with anything remains to be measured. Regardless, just getting teams to do better upfront Acceptance Test Driven Development (ATDD) as part of story definition can only help.
Abstract from Meetup.com, How Agile Teams Can Use Test Points:
Test points are similar to story points. They can be used to estimate story size, track sprint progress, and normalize velocity across teams, among other things. Test points have some advantages that story points do not. They could be used instead of, or alongside, story points.