PNSQC & QSIC 07 notes

I was in Beijing for three weeks and negligent in updating my blog.
I presented at PNSQC.  My talk on testing web services was well received; the paper should appear soon in the proceedings.  I also stepped in for a speaker who withdrew, giving a talk on Model Based Testing of Protocol Specifications, which was less well received.  The slides should be on the PNSQC site shortly.
I attended the SPIN meeting, which featured an excellent talk by Niels Malotaux on estimation, with a simple but great exercise.
At the lunch panel, Dick Hamlet told us to be on the lookout for two phrases: "Adaptive Random Testing" (ART) and "Partial Oracles".  I pursued them a bit at QSIC07 and don't think they are ready for prime time.  ART has mostly been researched only for numerical domains, and the major open question (which also perplexed the WHET 4 Workshop on boundary testing) is how to measure the distance or closeness of inputs in a non-numerical domain.
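To make the distance question concrete: in a numerical domain, ART is straightforward. A minimal sketch of the fixed-size-candidate-set variant from the ART literature (the function name and parameters here are my own, not from any talk):

```python
import random

def fscs_art(n_tests, candidates_per_round=10, lo=0.0, hi=1.0):
    """Fixed-size-candidate-set ART over a 1-D numerical input domain.

    Each round, generate a handful of random candidates and execute the
    one farthest from every previously executed test, spreading tests
    evenly across the domain.
    """
    executed = [random.uniform(lo, hi)]  # the first test is purely random
    while len(executed) < n_tests:
        candidates = [random.uniform(lo, hi)
                      for _ in range(candidates_per_round)]
        # A candidate's score is its distance to the NEAREST executed test;
        # pick the candidate that maximizes that distance.
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
    return executed
```

The whole scheme hinges on `abs(c - e)` being meaningful; for strings, trees, or protocol messages there is no such obviously right metric, which is exactly the open question.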
Partial oracles are related to metamorphic testing, which is perhaps more problematic than I realized: since there are so many possible partial oracles, how do you choose which ones to use?
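For readers unfamiliar with the idea, a toy metamorphic check (my own illustration, not an example from the conference): even without knowing the exact value of sin(x), the relation sin(x) = sin(π − x) must hold, so it can serve as a partial oracle.

```python
import math
import random

def metamorphic_check_sin(trials=100, tol=1e-9):
    """Partial oracle for a sine implementation: we never compare
    against an expected value, only check that the metamorphic
    relation sin(x) == sin(pi - x) holds for random inputs."""
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        if abs(math.sin(x) - math.sin(math.pi - x)) > tol:
            return False
    return True
```

The selection problem in the post is visible even here: sin also satisfies sin(−x) = −sin(x), sin(x + 2π) = sin(x), and many more relations, and nothing in the technique says which subset gives the best defect detection.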
I found QSIC07 far more useful to me than I expected.  I particularly liked Gordon Fraser's talk (Improving Model-Checkers for Software Testing) on generating tests from models.  He introduced several concepts I would like to follow up on.  Mainly, he took ideas I've seen applied to implementations and applied them to models.  One example, as given in Jean Hartman's talk at PNSQC, is prioritizing tests.  Code coverage is the traditional way to do this for implementations; we can do the same for tests of models: which tests cover the most states, transitions, etc.?
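The same greedy scheme used for code-coverage prioritization carries over directly. A minimal sketch, assuming each test has been mapped to the set of model transitions it exercises (the data and names are hypothetical):

```python
def prioritize(tests):
    """Greedy prioritization by model-transition coverage.

    `tests` maps a test name to the set of model transitions it
    exercises.  Repeatedly pick the test that covers the most
    transitions not yet covered by earlier picks.
    """
    remaining = dict(tests)
    covered, order = set(), []
    while remaining:
        # The next test is the one adding the most new transitions.
        name = max(remaining, key=lambda t: len(remaining[t] - covered))
        covered |= remaining.pop(name)
        order.append(name)
    return order
```

Exactly the same function works for state coverage by passing sets of states instead of sets of transitions.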
Even more exciting was the thought of determining the most powerful tests.  Kaner defines test B as more powerful than test A if B can detect all the defects A detects, plus others besides.  Using the old implementation-level concept of mutation testing, we can mutate our models and see which tests detect the mutants.  Mutation of implementations suffers from the equivalent mutant problem: a mutation may simply produce another legal implementation of the same behavior, expressed differently, and determining whether two programs have the same behavior is undecidable.  With model mutation, however, Gordon claims equivalence is decidable, making it an even more powerful technique.
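A minimal sketch of model mutation over a finite-state model (my own toy encoding, not Gordon's framework): mutate one transition and see which tests distinguish the mutant from the original. Because the state space is finite, two models can in principle be compared exhaustively, which is the intuition behind the decidability claim.

```python
def run(model, test):
    """Drive a finite-state model, given as a dict mapping
    (state, event) -> next_state, through a sequence of events.
    Undefined events leave the state unchanged.  Returns the
    visited state sequence, which serves as the observable trace."""
    state = "start"
    trace = [state]
    for event in test:
        state = model.get((state, event), state)
        trace.append(state)
    return trace

def kills(original, mutant, tests):
    """A test 'kills' the mutant if it produces a different trace
    on the mutant than on the original model."""
    return [t for t in tests if run(original, t) != run(mutant, t)]

# Original model: start --a--> s1 --b--> end
original = {("start", "a"): "s1", ("s1", "b"): "end"}
# Mutant: redirect one transition (s1 --b--> start instead of end)
mutant = dict(original)
mutant[("s1", "b")] = "start"
```

Here a test like `("a", "b")` kills the mutant because the traces diverge at the second step, while `("a",)` does not; the most powerful tests, in Kaner's sense, are the ones that kill the most mutants.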

About testmuse

Software Test Architect in distributed computing.

One Response to PNSQC & QSIC 07 notes

  1. Gordon says:

    The equivalent mutant problem is decidable for models only when one restricts consideration to finite domains. This is often done for model checking or for generating tests from abstract models. If this restriction does not apply, the equivalent mutant problem remains undecidable; e.g., for software model checking the state space of the model will be the same as the state space of the source code.
