I attended the PNSQC Birds of a Feather session “Why do we need synthetics” by Matt Griscom. It was advertised as: “The big trend in software quality is towards analytics, and its corollary, synthetics. The question is: why and how much do we need synthetics, and how does it replace the need for more traditional automation?”
I spoke with Matt briefly to understand what he meant by synthetics, because I thought it was a rare, relatively unused term.
I learned a lot from other participants at the session. First, New Relic is trying to stake out the term! http://newrelic.com/synthetics (They describe test monitors as a Selenium-driven platform that sends data using turnkey test scripts.)
Second, I attended a great talk, which I highly recommend, by a former colleague from Bing:
Automated Synthetic Exploratory Monitoring of Dynamic Web Sites Using Selenium by Marcelo De Barros, Microsoft.
So synthetics are mainstream. But what are synthetics? And what are they not? I had a hard time parsing synthetics as a corollary of analytics. I still do.
For me, synthetics are tests that run in production (or production-like environments) and use production monitoring for verification.
Analytics, in this context, are just a method of monitoring. To me, synthetics are artificial (synthetic) data introduced into the system.
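To make that definition concrete, here is a minimal sketch of the pattern as I understand it (all names and the counter store are hypothetical, standing in for a real monitoring backend): inject a transaction tagged as synthetic so analytics can filter it out of business metrics, then verify it through the same monitoring counters that watch real production traffic.

```python
# Hypothetical sketch: a synthetic "order" flows through the real
# production code path, tagged so it can be excluded from analytics,
# and is verified via monitoring rather than a test-only assertion.

MONITORING_COUNTERS = {"orders_processed": 0, "synthetic_orders_processed": 0}

def process_order(order):
    # Production code path: real and synthetic orders both go through it.
    MONITORING_COUNTERS["orders_processed"] += 1
    if order.get("synthetic"):
        # The tag lets analytics exclude synthetic traffic from business metrics.
        MONITORING_COUNTERS["synthetic_orders_processed"] += 1

def run_synthetic_probe():
    # Snapshot the monitor, inject synthetic data, then check the monitor moved.
    before = MONITORING_COUNTERS["synthetic_orders_processed"]
    process_order({"item": "test-widget", "synthetic": True})
    return MONITORING_COUNTERS["synthetic_orders_processed"] == before + 1

print(run_synthetic_probe())  # True when the probe made it through the system
```

The point of the sketch is the verification step: success is read off the monitoring data, not off a separate test harness, which is what distinguishes this from traditional automation.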
I thought synthetics were almost always automated, and that A/B testing would be a type of testing where synthetics wouldn’t apply. I was proven wrong on both counts by a single example: using Amazon’s Mechanical Turk to pay people to choose whether they like A or B! This is manual testing, and it is synthetic, as it is not being done by the real user base.
Maybe the problem with “synthetics” is the same problem I have with “automation”. Even “test automation” isn’t very specific, and means many things. I’m not sure if “synthetics” is supposed to mean synthetic data (Wikipedia since 2009), synthetic monitoring (Wikipedia since 2006 – the description also uses “synthetic testing”), or something else.