I find the titles of some papers quite obtuse, making it hard to guess their real content. I mostly learned about two techniques that have been explored in research for several years but that I was unaware of.
I’m still following up on the concept of Metamorphic Testing, which seems promising and practical. For example:
Metamorphic testing extends the idea of an inverse function as oracle (e.g. testing SquareRoot results merely by squaring them) into a partial model based on repeated test results. A good example presented involves testing a shortest path algorithm, where a few metamorphic relations (MRs) are:
- Reverse: The shortest path between B and A should be the reverse of the shortest path between A and B.
- Prefix: For any vertex V on the shortest path between vertices A and B, the shortest path between A and V must be the same sequence as the start of the path between A and B.
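Those two MRs can be sketched in Python. The graph, the BFS-based `shortest_path` helper, and the vertex names are my own illustrative assumptions, not from the paper; the prefix check as written additionally assumes deterministic tie-breaking among equally short paths.

```python
from collections import deque

def shortest_path(graph, a, b):
    """BFS shortest path on an unweighted graph; returns the list of vertices from a to b."""
    prev = {a: None}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

p = shortest_path(graph, "A", "E")

# Reverse MR: the shortest path from E to A is the reverse of the path from A to E.
assert shortest_path(graph, "E", "A") == p[::-1]

# Prefix MR: for each vertex V on the path, the shortest path from A to V
# equals the corresponding prefix of the path from A to E.
for i, v in enumerate(p):
    assert shortest_path(graph, "A", v) == p[:i + 1]
```

The point is that no oracle knows the "true" shortest path; the relations between repeated runs do the checking.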
Table 5 from the paper An Empirical Comparison between Direct and Indirect Test Result Checking Approaches at the conference shows MRs for the Boyer-Moore algorithm, which returns the index of the first occurrence of a specified pattern within a given text. For example, if string X exists in string Y, then the reverse of X exists in the reverse of Y.
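That reversal relation is easy to demonstrate. Here is a minimal sketch using Python's built-in `str.find` as a stand-in for a Boyer-Moore implementation (it has the same contract: index of first occurrence, -1 if absent); the sample strings are my own.

```python
def contains(text, pattern):
    """Stand-in for Boyer-Moore search: index of first occurrence, -1 if absent."""
    return text.find(pattern)

text, pattern = "metamorphic testing", "morph"

# Source test case: the pattern occurs in the text.
assert contains(text, pattern) != -1

# MR: if the pattern occurs in the text, then the reversed pattern
# must occur in the reversed text.
assert contains(text[::-1], pattern[::-1]) != -1
```

Note the MR only constrains occurrence, not the returned index, so it works even when the pattern appears multiple times.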
I was also introduced to Adaptive Random Testing (ART). It is interesting how much research effort there has been on this.
"The goal of ART is to select test cases more evenly spread (within the input domain) and more far apart from one another than randomly generated test cases. The basis for this strategy is that Chan et al. found out that failure-causing inputs tend to be clustered within the input domain." [from Adaptive Random Testing through Iterative Partitioning Revisited]
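The most common realization of this idea is Fixed-Size-Candidate-Set ART: each round, draw a handful of random candidates and execute the one farthest from all previously executed tests. A minimal sketch over a 1-D numeric input domain (the domain bounds, candidate count, and function name here are my own assumptions):

```python
import random

def fscs_art(n_tests, k=10, lo=0.0, hi=1.0, seed=0):
    """Fixed-Size-Candidate-Set ART over a 1-D input domain [lo, hi].

    Each round, draw k random candidates and keep the one whose
    nearest already-executed test case is farthest away, so the
    selected inputs spread out more evenly than pure random inputs.
    """
    rng = random.Random(seed)
    executed = [rng.uniform(lo, hi)]  # the first test case is purely random
    while len(executed) < n_tests:
        candidates = [rng.uniform(lo, hi) for _ in range(k)]
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
    return executed

tests = fscs_art(20)
```

The even spread is exactly what exploits the observation that failure-causing inputs cluster: a spread-out test set tends to hit a contiguous failure region sooner.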
But after reading An Empirical Analysis and Comparison of Random Testing Techniques by Johannes Mayer and Christoph Schneckenburger, I think ART isn’t very applicable to my world yet, as the analyses mostly appear very theoretical. I found some of the background papers quite interesting.
Two small disappointments: Wolfram Schulte described his group’s current work (some of which I have already used) rather than the Challenge Problems in Software Testing, and at least two of the authors had graduate students present their papers instead of presenting themselves. Brief observations about some of the other papers:
- On-line Anomaly Detection of Deployed Software: A Statistical Machine Learning Approach is far too low level, but the concept is interesting.
- What I liked most in Discriminative Pattern Mining in Software Fault Detection was its summary of comparisons with other techniques.
I also found another university professor specializing in software testing, Peter J. Clarke, who teaches a software testing course besides those currently listed at http://www.testingeducation.org/general/othertestingcourses.html:
CEN 5076 Software Testing (3).
Tools and techniques to validate software process artifacts: model validation, software metrics, implementation-based testing, specification-based testing, integration and systems testing.