PNSQC Trip Report



 

I found this year’s Pacific Northwest Software Quality Conference (PNSQC) good, though a few time slots had no talks of interest to me. As always, they had both software-test and software-quality oriented talks. They also focused a lot on "soft skills". For example, one question asked in apparent ignorance by an audience member during the second keynote, from Linda Rising, was:

  "Why do I, as a Quality Assurance person, need to learn about ‘influence’?”

I think most experienced test and QA folk understand the need for influence in their jobs. Part of my own inability to influence was my communication style, which the Dale Carnegie course I took last winter really helped with.

 

As PNSQC usually makes its proceedings publicly available, I try to record here what is not necessarily in the proceedings.

 

Five Uncomfortable Truths About Making Useful Systems by Tim Lister was entertaining. Some excerpts:

·        No real agreement on design, coding, or testing best practices

·        QA is a thankless job, like being an 8th-grade teacher: nasty 12/13-year-olds and not much money

·        #4: Getting agreement on requirements is not neat …
Like looking at new cars: you don’t know your requirements, but you look around at the new ones.

·        "Functionally late" — on time, with less function.

·        An estimate should be dispassionate; it is not the goal.

His ultimate statement, which resonated throughout the conference, was:

Don’t adopt anything. Adapt it for a perfect, effective, efficient fit.

With follow-ons:

·        You don’t adopt a dog.  You adapt to the dog and the dog adapts to you.

·        Quantitative Software Management (QSM): a great tool, but not great unless you collect YOUR data

·        DOORS: the Audrey II (of Little Shop of Horrors) of requirements repositories.
You can use it for all sorts of things, but why?

Like Audrey II, DOORS demands a sacrifice: no testers 🙂

And one audience question, from Jon Bach: What are the antidotes to these truths?

Tim: Acknowledge the lies:

    We can’t estimate.

An impossible goal means people won’t work hard (Parkinson’s Law)

Projects that smell of success are productive; you don’t have to crack the whip.

Public estimates, not private ones.

 

I attended Niels Malotaux’s talk "Optimizing the Contribution of Testing to Project Success" and his contribution to the evening panel on "Why Contemplate the Evolutionary Approach for Project Success?", but not his tutorial the previous day. Niels believes Evo is the silver bullet! If you follow Evo you will succeed. He provides several case studies and testimonials to back him up. I didn’t actually find his talk that focused on the contributions of testing; rather, there were several good ideas about quality attitude, and about how "organizing testing the Evo way means entangling the testing process more intimately with the development process."

 

Excerpts from his talk:

·        Evo is about very short Deming Plan-Do-Check-Act cycles

·        Planning, Reqts, Risk Mgmt -> Result Mgmt

·        Weekly evaluations: Most work takes less than a week.

·        Defect: A problem caused to any of the stakeholders while relying on our results.

·        Is being late a defect?
     Yes, it causes problems for stakeholders

I found his definition of defect strikingly different from most others, and potentially more useful, as he showed.

·        Niels: "As long as we keep putting bugs in the center of the testing focus, there will be bugs."
"Bug" and "debug" are dirty words.
Why the emphasis on finding defects if we don’t want them?

·        Testers should learn better how to prove the absence of defects
While the developers should learn better how to avoid defects.
From "Fixation to Fix" to "Attention to Prevention"

·        40-50% of the bugs "evaporate" just by attitude.
Must feel the failure to learn.

While I agree attitude has a big effect, I’m not sure it is 50%. The "feel the failure" point reminded me of a hilarious spoof video clip made by Microsoft’s UK support center where every time a customer called in and complained, the developer got an electric shock!

Goal is zero defect delivery.

Devs ask testers to help them where they are not perfect.

Evo has no debugging phase.

 

Finding defects is NOT the goal.   Project success is the goal.

Tester’s customer is "the developers"

Don’t let a manager who knows less than you prescribe your process.

Testers check work in progress even before it is finished.

 

Don’t wait until everything is injected (i.e., don’t wait for all the defects to be in before you start testing).

 

Devs are so busy doing that they forget to organize the reviews/inspections.

-> Testers solve the review and inspection organizing problem.

 

Testers use their own timeline, synced with the dev timeline. They are done at the same moment as the devs.

 

Tracing is a "zero activity".  Estimation is a TimeBox.

Niels also had several comments about metrics which I do NOT agree with (perhaps the subject of another blog entry):

Decrease the numbers by design. Don’t count defects per KLOC.

        Or incoming defects per month.

Don’t count; do something.

Cost to find and fix a defect:

The fewer the defects, the higher the cost per defect.

-> A bad metric.
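To see why, with illustrative numbers of my own (not Niels’s): if a test effort costs $50,000 and finds 500 defects, that is $100 per defect; if quality improves and the same effort finds only 50, it is $1,000 per defect. The team got better, but the metric looks worse.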

 

At the SPIN meeting in the evening, there was a great intro by K. Iberle comparing software lifecycle models. Some more observations from Niels:

There are places in the world that just deliver on time without problems.

Evo is a method of methods

Interestingly, one of the Agile panel members related a project they had worked on which was pure waterfall, successful, and the right approach for that project. Evo puts heavy responsibility on the team for estimation, but this can be learned in just three one-week cycles (shades of the Personal Software Process, PSP).

My takeaway about the difference between Evo and Extreme Programming (XP) is that XP mostly gives tips and techniques for code development, while Evo addresses the broader context of software development.

 

I liked hearing the real experiences and advice given by Kathryn Kwinn in "Tuning the Root Cause Analysis Process". How to incent people to follow up on their RCA is truly key.

Applied RCA to the RCA process itself, to tune the process.

Found that many recommendations were never fully implemented.

Push for full implementation of the wisdom their people have already shared.

 

I loved one of her examples:

Once, an RCA came in with a suggestion: an enhancement to an automated code-scanning tool was requested, and the tool then found the error all over the place!

C++ code:   a == b;    (evaluates to 0 or 1, and the result is thrown away)

What was meant:   a = b;

They planted a few instances of a == b in code inspections and in checklists, but people did NOT see the construct!
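To make the construct concrete, here is a minimal C++ sketch of my own (a hypothetical illustration, not code from the talk); compiled with warnings enabled (e.g. g++ -Wall), the compiler itself flags the stray comparison, much as their enhanced scanning tool did:

    #include <iostream>

    int main() {
        int a = 1;
        int b = 2;

        a == b;   // BUG: compares a and b; the 0-or-1 result is silently discarded
                  // (g++ -Wall warns: "statement has no effect")
        a = b;    // what was meant: assign b to a

        std::cout << a << std::endl;   // prints 2
        return 0;
    }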

I love this example on several levels. First, the idea of comparing inspections with static analysis was brilliant. Second, the idea of enhancing static analysis tools is what I preach and much of what Microsoft practices. On the Indigo team, John Lambert created a static analysis tool for verifying the consistency of test cases, and Zhisheng Huang extended the FxCop rules engine used for managed code to create QaCop. (A topic for another blog entry.)

Once again, attitude is critical. RCA changes the mindset from "mistakes happen" to "what can I do to prevent mistakes?"

 

I only listened to the first half of Brian Marick’s "The Gap Between Business & Code".  Comments I heard from others included:

  • one of the new ideas to come out recently
  • I think I learned something, but not sure what yet.  I need to think about it.

I’ve always liked Brian’s ability to stir people to think. My absurdly short takeaway was:

     Requirements bad (or not really plausible)

     Examples good (or something like them and extended)

Given Brian’s work with Agile and FIT, the focus on examples is not surprising to me. To boldly declare requirements untenable is interesting.

 

I skipped the second half of Brian’s talk to attend Ann Marie Neufelder’s "A Toolkit for Predicting & Managing Software Defects Before Code is Written". I wasn’t familiar with her work (her 1993 book Ensuring Software Reliability) or her company (SoftRel). It seems she has a tool akin to the COCOMO model, but focused on reliability instead of cost and time. She uses a wide variety of variables (over 150, according to her website). One debate during the talk concerned the influence of using 3rd-party vendors, and what to do about projects that don’t use any.

 

I started Greg Bell’s "Cultural Competency — First Steps to Understanding" but found it too soft for me. As per his web site, we were still "Defining Cultural Competency".

After skipping out of "Defining Cultural Competency", I went to an "Open Space" topic on Software Metrics. I was the only person, besides the proponent, Chris Holl, to show up, so we chatted. He shared several interesting aspects of his work and showed me examples. The ability to network at PNSQC is a prime reason for going.

 

I found Linda Rising’s "The Art, the Science, and the Magic of Influence" interesting, but way too slow. I think the Dale Carnegie course, based on 100-year-old wisdom, already emphasizes a lot of these psychology-lab-tested results. She did mention the book Blink (Malcolm Gladwell), about how we make decisions.

 

I loved John Balza’s "Using Metrics to Change Behavior": a great experience paper from HP, which really seems to understand metrics. The focus on avoiding adverse behaviors was a key thing, and great to see. He also used several of my favorite Dilbert cartoons to great effect, for example the one where management offers $10 per defect and Wally plans to "write me a mini-van".

They used Goal-Question-Metric to good advantage (a generic illustration of the pattern follows below) and gave several good examples of good vs. bad metrics. He also stated that they:

Backed out features when quality wasn’t there.

This is what the Longhorn quality gates are also forcing at Microsoft.
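For those unfamiliar with Goal-Question-Metric, here is a generic illustration of the pattern (my example, not one from the talk). Goal: ship with fewer escaped defects. Question: are we finding defects before release? Metric: the percentage of all defects found internally rather than by customers.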

 

I missed most of the next session as I had to eat lunch before my lunchtime panel on "How Should We Keep Our Toolbox Current?". Harry Robinson and I livened things up a bit by dishing out Microsoft (VS Team System) and Google T-shirts. We got a lot of audience questions in a hurry by giving a T-shirt for each question asked. I found several questions a bit too Open Source- and Agile-oriented, as opposed to more general. Open Source and Agile are not necessarily the answer.

 

The first half of Robert Sabourin’s talk, "Deciding What Not to Test", was entertaining and informative. He uses a classic "I Love Lucy" video clip about wrapping chocolates to frame testing questions! I skipped the second half to attend Jon Bach’s "Open-Book Testing: A Method to Teach, Guide, & Evaluate Testers", of which Jon himself said, "I’m not sure what I have here but it seems interesting". He also stated it was related to (according to Cem Kaner), but not the same as, Active Learning.

 

Chris Holl’s "Lessons Learned in Implementing a Company Metrics Program" was interesting, repeating a lot of what he had discussed individually the previous day in Open Spaces. He gave a quote whose reference he couldn’t quite recall, and guessed:

Capers Jones?: "You should be shot if you are still using defect density."

He also made a great statement:

            Managers can be held accountable for metrics, but not individuals.

 

Finally, I attended the last part of Esther Derby’s "The Value-Added Manager: Five Pragmatic Practices", which was good information for managers. However, it is a soft skill, and not unique to the quality or testing professions; many large corporations, like Microsoft, that offer manager training provide this same information. I do subscribe to Esther’s fine free newsletter.

When people have to fill in the blanks, they apply their worst fears.

People shouldn’t have to try and read their manager.

There was some great discussion about how to appreciate and reward people. She mentioned the following book:

        Punished by Rewards, by Alfie Kohn

 

I felt that several of the presentations from consultants were thinly veiled pitches to buy their product, whether Evo, SoftRel, Cultural Competency, etc. They came across as "here is an intro (insufficient to really do anything with)" and, implicitly, "hire me (if you really want to do something with this)". I preferred the people pitching ideas and practices without expecting me to hire them.

 

I also didn’t like PNSQC scheduling multiple vendors across tracks at the same time; in some time slots, half the tracks (2 of 4) were vendors.

 

So are the above final paragraphs idle bitching? NO. I was approached for, and am now running for, a seat on the board of PNSQC.org to make the next PNSQC better than ever. Comments and suggestions welcome.

 

Keith.

 
