Putting Lean Principles to Work for Your Agile Teams – Sam McAfee’s talk

Interesting talk, Putting Lean Principles to Work for Your Agile Teams, by Sam McAfee at the Bay Area Agile Leadership Network (BayALN.org).

While, as one commenter put it, there was nothing new, it was interesting to me to see it all strung together across a one-year transformation.
Basically, starting with a team that already followed many Agile practices, he described changes using Lean that transformed, or even eliminated the need for, some of those practices.

Initial agile practices: Co-located teams with daily stand-ups. Pair programming 95% of the time among the 16-17 engineers. Two-week sprints with shipping at the end. Test-Driven Development (TDD) and Continuous Integration (CI). Engineers estimate in story points.

With all this, they still had stories stuck or blocked for long periods of time.
So apply the Theory of Constraints (book: The Goal by Eli Goldratt, or the more recent IT-oriented version: The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win by Gene Kim, Kevin Behr, and George Spafford).
They also needed more visible progress, so they used a Kanban board.
Kanban — use buffers to throttle upstream demand and reduce cycle time. Reduce Work In Progress (WIP) across the columns: Ready, Doing, Done.
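The lever behind WIP limits is Little's Law: average cycle time = average WIP / average throughput. A minimal sketch with made-up numbers:

```python
# Little's Law: average cycle time = average WIP / average throughput.
# Hypothetical numbers, just to show why capping WIP shortens cycle time
# when throughput stays roughly constant.

throughput = 5.0  # stories finished per week (assumed steady)

for wip in (20, 10, 5):
    cycle_time = wip / throughput  # weeks an average story is in flight
    print(f"WIP={wip:2d} -> average cycle time = {cycle_time:.1f} weeks")
```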
The bottleneck was sometimes deployment; they had a manual deployment process.
So use Continuous Deployment (CD) — automate deployment and reduce the cycle time of delivering value to customers to as close to zero as possible.

dev -> continuous integration -> test in cloud -> deploy & monitor (system health, …)

Note: the Theory of Constraints assumes a single, stable bottleneck.

With knowledge work, the bottlenecks bounce around.
Kanban more systematically lays down constraints.

Typically the delay is in a handoff between roles, or in back-and-forth between roles.

==> use tighter feedback loops to reduce stuck stories.

CD allows a change to the fixed-length sprint structure: sprint planning meetings continue, but fixed sprints become superfluous because of continuous deployment.

Not all released features moved the Key Performance Indicators (KPIs) the way they wanted. So use Innovation Accounting from the book The Lean Startup.
Use the build-measure-learn loop to reduce the amount of uncertainty in new product development or process innovation.
Build the smallest change you can ship that could move a KPI, and run experiments.
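As a sketch of the "measure" step, here is a standard two-proportion z-test comparing a baseline conversion rate against an experiment. The numbers, and the choice of this particular test, are my illustration rather than anything from the talk:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via erf (stdlib only).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical experiment: did the smallest shippable change move the KPI?
p_a, p_b, z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"baseline {p_a:.1%} vs experiment {p_b:.1%}: z={z:.2f}, p={p:.3f}")
```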

Small, lightweight experiments could be costly using full TDD and pair-programmed development. Short-lived prototyping code may not need TDD. Also consider pairing an engineer with a designer — designer & engineer pairs to create experiments. Use experiments to validate business needs.

A change from Agile to Lean was dropping story points in favor of Statistical Process Control (SPC) à la Shewhart and Deming.
Assume most stories flow normally, but analyze the outliers that fall outside the control limits: what makes those work items special? Estimation time is focused on the risky (outlier) areas.
The team used an electronic Kanban board (not Sam's first choice) that collected data, which was fortuitously usable for an SPC control chart.
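A minimal sketch of what such a control chart does, using an XmR-style individuals chart over story cycle times. The data are invented; Shewhart's natural process limits use the average moving range rather than the raw standard deviation:

```python
# XmR (individuals) control chart over story cycle times, Shewhart-style.
# Stories outside the natural process limits are the outliers worth
# analyzing: what made them special? All data hypothetical.

cycle_times = [3, 4, 2, 5, 3, 4, 21, 3, 5, 4, 2, 6]  # days per story

mean = sum(cycle_times) / len(cycle_times)
moving_ranges = [abs(b - a) for a, b in zip(cycle_times, cycle_times[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

ucl = mean + 2.66 * avg_mr            # upper natural process limit
lcl = max(0.0, mean - 2.66 * avg_mr)  # lower limit, floored at zero

print(f"mean={mean:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
for i, ct in enumerate(cycle_times):
    if not lcl <= ct <= ucl:
        print(f"story {i}: {ct} days is outside the limits -- investigate")
```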

Not all was rosy. The CEO would make urgent requests to reprioritize, which slowed the original work down. How to make the trade-off? Measure the Cost of Delay.
Compare the cost of delay for what you are doing now vs. what the CEO wants now.

Cost of Delay: quantifying the impact of delay on total life-cycle profits for each of your projects. Delay typically shifts when you start recognizing revenue, without shifting the end of life.
How to get the numbers? A spreadsheet of conversion rates, traffic, etc., from Finance.
This assumes you know the cost of production: engineering time spent and the cost of payroll.
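A back-of-the-envelope sketch of the comparison; every figure is hypothetical, and in practice each cost-of-delay rate would come from that Finance spreadsheet:

```python
# Cost of Delay comparison: which piece of work loses more life-cycle
# profit per week of delay? Delay shifts the revenue start date but not
# the end-of-life date, so delayed weeks are profit lost outright.
# All figures hypothetical.

work_items = {
    "planned feature": 10_000,    # $ life-cycle profit lost per week of delay
    "CEO urgent request": 6_000,
}

delay_weeks = 4  # how long doing the other item first pushes this one out

for name, cod_per_week in work_items.items():
    print(f"{name}: a {delay_weeks}-week delay costs "
          f"${cod_per_week * delay_weeks:,}")
# Here the planned feature's cost of delay is higher, so the urgent
# interruption is the wrong trade.
```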

Quantify risk using data — not intuition — to model, and validate, risk factors.
Books by Hubbard: How to Measure Anything and The Failure of Risk Management.
Quantify the risk along the funnel: traffic -> converted user -> paid user retention.
“All other risk” (without data) is just hand waving.
Use Monte Carlo simulations to find which parts are most sensitive to risk.
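A minimal Monte Carlo sketch over that funnel. The distributions and parameters are invented for illustration; Hubbard would have you calibrate each against real data:

```python
import random

# Monte Carlo over the funnel: traffic -> converted user -> paid retention.
# Every distribution and parameter below is illustrative; calibrate each
# against historical data rather than intuition.

random.seed(42)
TRIALS = 100_000

results = []
for _ in range(TRIALS):
    traffic = random.lognormvariate(9.2, 0.4)  # weekly visitors, ~10k median
    conversion = random.betavariate(2, 98)     # sign-up rate, ~2%
    retention = random.betavariate(8, 2)       # stay-paid rate, ~80%
    results.append(traffic * conversion * retention)

results.sort()
print(f"median paying users: {results[TRIALS // 2]:,.0f}")
print(f"90% interval: {results[int(0.05 * TRIALS)]:,.0f} "
      f"to {results[int(0.95 * TRIALS)]:,.0f}")
# Widening one input's spread at a time and re-running shows which stage
# the outcome is most sensitive to.
```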

Sam’s summary:

Continuous Deployment
Optional pair programming (E/E or D/E pairs)
Optional TDD & Continuous Integration
Use experiments to validate business needs
Use historical data to provide estimates and assess risks.

Change daily stand-ups from “what I did, am doing, and am blocked on” to talking about the flow of work.

Moving KPIs in the right direction.
How to make many small bets.

But don’t believe what I wrote!   Watch the video with the graphics for more detailed descriptions.
Video of Sam McAfee’s Putting Lean Principles to Work for Your Agile Teams talk.
https://www.dropbox.com/s/ci53jk8tzn9jkxh/SamMcAfee-BayALN-25Feb2014.mp4
To visit Bay Area Agile Leadership Network, go here:     http://www.meetup.com/BayALN/


Book review of Domain Testing Workbook – Equivalence class analysis and Boundary-value testing explored in depth

The Domain Testing Workbook is the most extensive and exhaustive work you will ever find on a specific testing technique (or related techniques, if you count equivalence class analysis and boundary testing separately, as the book does). What I like best is the combination of academic background and roots with practical experience and industrial practice. All the concepts are presented in a simple and approachable manner, with pointers to more details for those desiring more. While the book appears daunting in size, that is only because of the extensive examples and exercises. The core of the book is very approachable and less than 100 pages. To gain mastery, working through the exercises is most useful, but you can do that over time.
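To ground the terminology for readers who have not met it: a generic sketch of equivalence-class partitioning and boundary-value selection for an integer range (my illustration, not an exercise from the book):

```python
def boundary_values(lo, hi):
    """Classic picks for a field documented to accept integers in [lo, hi].

    One equivalence class per key (below range, in range, above range),
    with the boundary values and their nearest neighbors as representatives.
    """
    return {
        "invalid_low": [lo - 1],
        "valid": [lo, lo + 1, (lo + hi) // 2, hi - 1, hi],
        "invalid_high": [hi + 1],
    }

# e.g. a quantity field documented to accept 1..100:
print(boundary_values(1, 100))
# {'invalid_low': [0], 'valid': [1, 2, 50, 99, 100], 'invalid_high': [101]}
```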

Many practical aspects of and considerations for testing are covered that are usually skipped over in broad testing surveys or short articles. For example, many books talk about different approaches such as risk-based, scenario-based, or pair-wise testing. Books may also cover the issue of combining values for a test, but The Domain Testing Workbook walks you through the details and implications of what each approach entails when applied to combining values for a domain test. Further, it provides extensive guidance on when (in which context) the advice is most applicable (or not). For example:

If you’re doing system testing after the programmers have done extensive unit testing of their variables, it will be unnecessary and wasteful to do thorough testing of secondary dimensions.

The book incorporates many viewpoints, sometimes strong opinions, and pithy statements such as:

Boundaries are funny things. When people say “No one would need a value that big,” what they really mean is “I can’t imagine why anyone would need a value that big.” The world is often less constrained than the limits of our imagination.

The book is exacting and consistent in its terminology, but the reader needs to be careful to keep the concepts clear and distinct.  For example:

Well-designed domain tests are powerful and efficient but aren’t necessarily representative. Boundary values are suitable for domain testing even if those values would be rare in use.

The best representative of the class is the one that makes the most powerful test.

So the best representative, the most powerful, is not necessarily the most representative of typical values. The book focuses on boundary values and bug hunting, so typical values are unlikely to be used even though they are part of the domain. You need to use more than the one well-developed technique of this book, as the authors themselves state. For example:

well-designed scenario tests are usually representative but they’re often not powerful.  To test a program well, you’ll use several different techniques

You will be a better tester if you read this book.  You will be a much better tester if you actually work through the exercises of the book.

I was a pre-publication reviewer and I also posted this on Amazon as a review of the book.
http://www.amazon.com/review/R32QDJC6CFIXPU?ref_=pe_620760_65501210


Seek first to understand, then to be understood & storytelling while drawing

Pacific NW Software Quality Conference (PNSQC 2013 #31) & Workshop On Performance and Reliability (WOPR 21).

It was interesting that there were two quotes or observations, from both meetings, that provided added emphasis for me. I need to learn to incorporate these into my work better.

At PNSQC, Doug Reynolds on leadership, and at WOPR21, Dan Downing on consulting (Habit 5 from The 7 Habits of Highly Effective People):

Seek first to understand, then to be understood

The observations around visuals and stories were also interesting:

At PNSQC, Moss Drake on nine low-tech tips for project management:

“Telling stories while drawing helps people who learn in different styles. Drawing is a process and a result. Drawing effects a change in your body.”

Book: Visual Meetings: How Graphics, Sticky Notes and Idea Mapping Can Transform Group Productivity by David Sibbet

In Drake’s PNSQC paper, section 3.6, “Draw Out Ideas”:

“Kevin Cheng, author of See What I Mean (Cheng, See What I Mean: How to Use Comics to Communicate Ideas, 2012), promotes the idea of drawing comics for business reasons. Creating storyboards and scenarios as comics not only engages people in the process, but also saves time and effort.”

and WOPR21 Dan Downing’s comment about the importance of:

“Whiteboard drawing — even on WebEx or GoToMeeting”

and a reference at WOPR21 by Jane Fraser to the book:

Tell to Win: Connect, Persuade, and Triumph with the Hidden Power of Story


Workshop On Performance & Reliability (WOPR21) – Technical Debt

I’ve attended three prior WOPRs (4, 5, 14) and this was the best one for me.
The content owner, Mais Tawfik Ashkar, chose the theme Technical Debt which turned out to work quite well.

The first day was the most positive, with Dan Downing explaining the challenges of constructing an end-to-end architectural diagram for a widely distributed organization.
(The finished diagram is a prerequisite for the original goal: performance testing end to end.)

Felipe Kuhn then provided the most amazing description of Agile teams focusing on quality while building an entire product in two months. His definition of Technical Debt was inspiring:

Anything that slows us (the team) down (or isn’t speeding us up) is Technical Debt.

And:

Technical Debts are not stories, as they don’t deliver user value.

He provided numerous examples of the team investing in automation to build a quality product quicker.

Matt Geis described an amazing evolution of test automation for partner testing: from long lists of setup instructions, to a single (4 GB) VM, to moving into the Amazon Web Services cloud. He claims they increased technical debt by automating more and more of the process using more and more tools, including mock SSL servers (the SSL library recompiled with a special config) and a Jira plugin to auto-configure test suites for new partners.

Jude McQuaid described his new destructive testing group at Salesforce.com, the WOPR hosts. Sounds like fun to me! I also appreciate Salesforce’s transparency about its data centers, openly publishing their status: http://trust.salesforce.com

There were many other Experience Reports (ER) including my own, all of which I learned from.

Beyond the ERs, we had a daily group brainstorming session on a topic. The final day was “test debt” and a special subset (initiated by Mais) called “test decay”. The group came up with a draft working definition:

Test decay occurs when testing becomes less effective over time.

After the fact, I researched this thought a little. Contrast it with software decay. Some of what we discussed is covered by Lessons Learned in Software Testing, Lesson 117, “Automated regression tests die”, which goes on to describe how “Regression tests decay for several reasons”. Even earlier, in a book I find too infrequently read, Marick’s The Craft of Software Testing discusses (page 225) how “to avoid test suite decay”.

Attendees (as best I know; probably incomplete and misspelled):

Mais Tawfik Ashkar
Goranka Bjedov
Dan Downing
Jane Fraser
Matt Geis
Andy Hohenner
Paul Holland
Dave Holt
Pam Holt
Felipe Kuhn
Reena Mathew
Jude McQuaid
John Meza
Eric Proegler
Keith Stobie
Tom
Mahesh
Ashok


Social Testing similar to what we call Social coding / Open source

In reply to a post by Peter Kartashov on the LinkedIn Software Engineers in Test group:

Is there a need of Social Testing similar to what we call Social coding / Open source ?
I’m wondering whether the idea of social/open testing is viable or not. We have github, sourceforge, etc to support open source coding but seems software testing is not on that level yet, though some web places like http://bugpub.com trying to spin off on the idea.

I stated:

Several open source projects have considerable test suites. Many open source projects have people funded by companies to work on them. I think testers would be very welcome to help in open source projects to provide more and better testing. I think Firefox/Mozilla with Bugzilla is a good example.
You can also see that open source has provided many test tools, which is another area where testers can participate.

Like Alexander Pushkarev, I think community testing would ultimately end up being open-source-oriented, since testing proprietary software for free doesn’t make as much sense, but it could work: proprietary companies could fund open test suites that cover several products including their own. One example might be interoperability tests between products.

Further, I see efforts like weekend testing and competitions as advancing social testing.
http://weekendtesting.com “A platform for software testers to collaborate, test various kinds of software, foster hope, gain peer recognition, and be of value to the community.”
CAST has had competitions in the evening, for example http://www.associationforsoftwaretesting.org/conference/cast-2012/schedule/


Software Testing as Insurance

Getting managers to understand trade-offs in testing is critical. I loved my manager’s explanation to her test leads. Mary Beam presented the analogy of insurance.

You can buy insurance for almost anything. When an organization is buying testing, it is like buying insurance. Personally, anybody could spend their entire salary on insurance. Besides standard home, life, disability, and auto, you can get earthquake, flood, war, and other perils insurance, including umbrella insurance. Maybe you also need dog bite insurance, ID theft insurance, computer insurance, kidnap-ransom-extortion insurance, alien abduction insurance, etc.
Almost nobody spends all their money on insurance, and neither should a company spend all its money on testing. Software testers talk about risk, and so does insurance. Insurance has a long history with lots of data in most cases, e.g. actuarial life tables for life insurance. Unfortunately, software testing has little such calculated data, but still we must try.

So Mary implored her managers to think about which risks were truly worth the expense (of testing).

Along the same lines, I was re-reading Marick’s Testing for Programmers and still find his “Tests Are Economic Entities” a good introduction to automation value (overestimated) and cost (underestimated). Every tester and test lead needs to be aware of this. Should I manually explore this test (one shot), manually script it, automatically explore it, or automate a script? Many tests aren’t worth running more than once. Numerous other authors and articles have discussed this.
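A break-even sketch of that trade-off, with invented numbers:

```python
# Break-even point for automating a test, in the spirit of Marick's
# "Tests Are Economic Entities". All numbers hypothetical.

manual_cost_per_run = 15   # minutes to run the test by hand
automation_cost = 240      # minutes to script and debug the automation
upkeep_per_run = 2         # minutes of maintenance amortized per run

saving_per_run = manual_cost_per_run - upkeep_per_run
breakeven_runs = automation_cost / saving_per_run
print(f"automation pays off after ~{breakeven_runs:.0f} runs")
# ~18 runs here; a test you expect to run once or twice is cheaper by hand,
# and the automation's value drops further if the feature changes often.
```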

After writing this, I searched and found a few other blog entries related to this which may be of interest:


Testing Luminaries – 3 years by Software Test Professionals

Glad to see the winners of the past 3 years.   Interesting to see the runners-up as well.

2010 Gerald Weinberg (other nominees: James Bach, Cem Kaner)
2010 Software Test Luminary Award Recipient Announcement

2011 Ross Collard (other nominees: Cem Kaner, Dr. Adam Kolawa)
Announcing 2011 Software Test Luminary: Ross Collard

2012 Dr. Cem Kaner (other nominees: James Bach, Paul Fratellone)
Dr. Cem Kaner – Software Test Luminary Award Winner 2012

Learn more about the award: http://www.softwaretestpro.com/item/5141
