VALID 2010 (2nd Int’l Conf. on Advances in System Testing and Validation Lifecycle)

Woefully behind in my blogging

I attended and presented at VALID 2010 (The Second International Conference on Advances in System Testing and Validation Lifecycle) which is part of the umbrella SoftNet 2010 conference that also included ICSEA (The Fifth International Conference on Software Engineering Advances).  [Aug. 23-26, Nice, France]

This is not a top-tier conference with big names. Most of the presenters were new professors or finishing PhD students looking for publications.

However, they do have a review process and rejection rate.

After the conference, Wolfgang pointed me towards links where I ended up finding, among other things, this statement:

“The IEEE will not publish the IARIA fake conferences’ proceedings again. IARIA will not use any longer the Logo of IEEE.”


VALID is basically an academic conference with some people from industry research groups. VALID had about 30 attendees, and most sessions drew around 20 people. At one coffee break, a researcher on protocols in a parallel track (The Fifth International Conference on Systems and Networks Communications) complained that his group only discussed simulations of protocols, not their testing. I explained that he should look at other tracks like VALID and ICSEA Testing.

The keynote, by David Bernstein, covered the basics of the convergence of cloud computing and mobile, with examples of data centers from Microsoft, Google, Apple, and Facebook, and energy as their greatest expense. He stated that in 2012, driven by video, information (in bytes) on the Internet will double every 11 hours. In 2015 there will be 100 exabytes of video and file sharing and 400 exabytes of video calling recorded. The average person, who today stores 128 GB, will by 2020 collect 130 TB.

Precise QoS Metrics Monitoring in Self-Aware Networks, presented by Maurizio D'Arienzo, describes a new Cognitive Packet Network (CPN), a Layer-3 alternative to IP that remains compatible via encapsulation within IP networks. “A passive measurement technique is used to gather samples of the real traffic and compute several QoS metrics of each network link to build a real time map of load conditions with high precision.” As long as a reply traverses (in reverse) the same route as the request, they can gather time on all the links without clock synchronization.
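The clock-free trick is worth spelling out. Here is a minimal sketch of the idea (my own illustration, not the paper's implementation): each node timestamps the request when it forwards it and the reply when it comes back, and because both stamps come from the same local clock, the difference is meaningful with no synchronization at all.

```python
# Sketch of per-segment round-trip timing with purely local clocks.
# Each node records, on its OWN clock, when it forwarded the request and
# when it later saw the reply; subtracting the two needs no clock sync.

def link_round_trips(forward_departures, reverse_arrivals):
    """forward_departures[i]: local time node i sent the request onward.
    reverse_arrivals[i]:      local time node i saw the reply come back.
    Returns the round-trip delay of the subpath beyond each node i."""
    return [ra - fd for fd, ra in zip(forward_departures, reverse_arrivals)]

# Three nodes on the path, each with an arbitrary clock offset (1000.0,
# 52000.3, 7.1) -- the offsets cancel out because each difference uses
# readings from a single node's clock.
fwd = [1000.0, 52000.3, 7.1]
rev = [1000.9, 52000.9, 7.5]
segs = link_round_trips(fwd, rev)          # subpath RTT seen at each node
# Round trip across the single link between node i and node i+1:
per_link = [segs[i] - segs[i + 1] for i in range(len(segs) - 1)]
```

Subtracting the subpath round trips of adjacent nodes isolates each individual link, which is how a per-link load map can be built from one request/reply pair.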


A Hybrid Approach for Model-Based Random Testing was presented by Stefan Mohacsi and Johannes Wallner from Siemens Austria (old colleagues of Jean Hartman). While they used it with on-the-fly testing to overcome state-explosion issues, it also suggests an interesting way to extend Spec Explorer's point-and-shoot mechanism with on-the-fly testing.

“If the generation algorithm detects that the efficiency of the random search falls below a certain threshold, it switches to a more sophisticated strategy that searches for a solution in a target oriented way. Once the blocking point has been resolved, the algorithm returns to the random strategy.”
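The strategy-switching idea can be sketched in a few lines. This is my own toy version under assumed details (coverage ratio over a sliding window as the "efficiency" measure, breadth-first search as the targeted strategy), not the authors' algorithm:

```python
import random
from collections import deque

def hybrid_explore(graph, start, steps=200, window=10, threshold=0.2, seed=0):
    """Random walk over `graph` (dict: state -> list of successor states).
    When the fraction of newly covered states in the last `window` moves
    drops below `threshold`, switch to a targeted BFS toward the nearest
    uncovered state, then return to the random strategy."""
    rng = random.Random(seed)
    covered = {start}
    state = start
    recent = deque(maxlen=window)          # 1 = move reached a new state
    for _ in range(steps):
        if len(recent) == window and sum(recent) / window < threshold:
            # Random search is stuck: resolve the blocking point via BFS.
            for s in bfs_to_uncovered(graph, state, covered):
                covered.add(s)
                state = s
            recent.clear()                 # back to the random strategy
            continue
        nxt = rng.choice(graph[state])
        recent.append(0 if nxt in covered else 1)
        covered.add(nxt)
        state = nxt
    return covered

def bfs_to_uncovered(graph, start, covered):
    """Shortest path from `start` to any state not yet covered."""
    parents, queue = {start: None}, deque([start])
    while queue:
        s = queue.popleft()
        if s not in covered:               # found an uncovered target
            path = []
            while s is not None:
                path.append(s)
                s = parents[s]
            return path[::-1][1:]          # path excluding the start state
        for n in graph[s]:
            if n not in parents:
                parents[n] = s
                queue.append(n)
    return []                              # everything reachable is covered
```

On a long chain of states, for example, a pure random walk revisits old states endlessly; the targeted phase pulls the walk out of the rut, which is the point of the hybrid.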



The SQALE Analysis Model: An Analysis Model Compliant with the Representation Condition for Assessing the Quality of Software Source Code, presented by Jean-Louis Letouzey, explained how bad most software metrics are, especially averages. I liked their example: if you ask your car mechanic about your tire pressure, he doesn't say the average pressure is OK (when some tires are high and some low); he says two tires are low. A car mechanic gives a detailed list of the work still to be done to get the car to a quality level. The mechanic might say you have 30% brake pad or 40% tire tread left, indicating how close you are to problems and how long before you need to address the issue. They, DNV IT Global Services, are selling their SQALE model.

But the concepts around developing metrics are interesting.
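The tire-pressure point is easy to demonstrate. This toy example is mine, not from the SQALE paper: the average of a set of measurements can sit comfortably inside the acceptable range while every individual measurement is out of spec.

```python
# Toy illustration of why averaged quality metrics mislead: report the
# violations, not the mean.

def average_report(pressures, low=30, high=36):
    """The 'bad metric': judge quality by the average pressure (psi)."""
    mean = sum(pressures) / len(pressures)
    verdict = "OK" if low <= mean <= high else "BAD"
    return f"average pressure {mean:.1f} psi: {verdict}"

def mechanic_report(pressures, low=30, high=36):
    """The 'good metric': list exactly which tires are out of range."""
    bad = [i for i, p in enumerate(pressures) if not low <= p <= high]
    return f"{len(bad)} tire(s) out of range: {bad}"

tires = [40, 40, 24, 26]        # two over-inflated, two nearly flat
print(average_report(tires))    # mean is 32.5 psi -- looks fine
print(mechanic_report(tires))   # all four tires are actually out of spec
```

Every tire is out of range, yet the average (32.5 psi) passes; a metric that aggregates away the distribution cannot tell you where the remediation work is.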


Interacting Entities Modelling Methodology for Robust Systems Design presented by Eric Verhulst indicated they found modeling nirvana:

“OpenCookbook, an environment for systems engineering features a coherent and unified system engineering methodology based on the interacting entities paradigm.” [Now being sold through Altreonic.]

Most of the slides came from:

During the talk, the panel, and a break, he emphasized how they created a 10 KB real-time operating system (OpenComRTOS) that was more reliable, efficient, and robust than their previous large real-time kernel grown over time. “For the formal modeling we used the TLA/TLC modelling language and checker of Leslie Lamport.”

As a side note, see seL4: Formal Verification of an OS Kernel:

Table 1: Code and proof statistics. Haskell/C coding: 2.2 person-years. Proof: 6–9 person-years.



Using Hardware Performance Counters for Fault Localization, presented by Cemal Yilmaz, is an interesting take on detecting anomalous behavior via anomalous patterns of execution as recorded by built-in processor diagnostics. Still early in its development, it might enable in-the-field quality assurance approaches.
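To make the idea concrete, here is a hedged sketch (my own, not the paper's algorithm, and with made-up counter data): build a baseline of per-function hardware-counter readings from passing runs, then flag functions in a suspect run whose readings deviate from that baseline by a large z-score.

```python
import statistics

def suspicious_functions(baseline, run, z_max=3.0):
    """baseline: {func: [counter readings from known-good runs]}
    run:        {func: counter reading from the run under test}
    Flags functions whose reading is a statistical outlier vs. baseline."""
    flagged = []
    for func, samples in baseline.items():
        mu = statistics.mean(samples)
        sigma = statistics.pstdev(samples) or 1e-9   # avoid divide-by-zero
        if abs(run[func] - mu) / sigma > z_max:
            flagged.append(func)
    return flagged

# Hypothetical readings, e.g. branch mispredictions per call.
baseline = {
    "parse":  [100, 104, 98, 102],
    "encode": [50, 49, 51, 50],
}
failing_run = {"parse": 101, "encode": 95}
print(suspicious_functions(baseline, failing_run))   # → ['encode']
```

The appeal for in-the-field use is that the counters are collected by the processor anyway, so the monitoring cost is close to zero compared with software instrumentation.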


It is way too early to tell whether Testing Web-Services Using Test Sheets, which “combines the expressive power of approaches such as xUnit and TTCN-3 with the readability of tabular approaches such as FIT,” is useful.


About testmuse

Software Test Architect in distributed computing.
This entry was posted in software testing.
