Is “Quality” a Trade-Off?

One tool taught in Agile methodology is the use of “Project Success Sliders,” devised by Rob Thomsett in his book Radical Project Management. Agile training has adopted the approach, and there is now a nice tool to use when considering project success sliders (see Mike Cohn’s blog). The theory is that the sliders are a way to convey expectations to the team. By default there are six sliders, each reflecting some dimension by which project success can be judged—for example, delivery of all planned features, quality, or meeting an agreed-upon schedule. Each slider goes from 1 to 5, with the midpoint being 3.
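To make the mechanics concrete, here is a minimal sketch of how such sliders might be represented. The slider names (including Budget) and the fixed-total rule are my assumptions about how a tool like this could work; the only stated facts are six sliders on a 1-to-5 scale with a midpoint of 3.

```python
# Illustrative sketch only; slider names and the fixed-total rule
# are assumptions, not the actual tool's behavior.

DEFAULT_SLIDERS = {
    "Features delivered": 3,
    "Quality": 3,
    "Schedule": 3,
    "Budget": 3,
    "Stakeholder satisfaction": 3,
    "Team satisfaction": 3,
}

def move_slider(sliders: dict, name: str, value: int) -> dict:
    """Return new slider settings, keeping each value in 1..5 and the
    overall total no higher than the default, so raising one dimension
    forces stakeholders to lower another."""
    if not 1 <= value <= 5:
        raise ValueError("sliders run from 1 to 5")
    updated = dict(sliders)
    updated[name] = value
    if sum(updated.values()) > sum(DEFAULT_SLIDERS.values()):
        raise ValueError("to raise one slider, lower another first")
    return updated
```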

My first reaction to this is “What? Trading off Quality?” What does it mean to have a Quality “slider”?

In my years of software management, I’ve always believed there are “3 fuzzy peas,” not four: the 3 P’s that drive a software project are Plan, People, and Product (see my blog on Fuzzy Peas). Mapped to the sliders above, Plan = Time, People = Budget, and Product = Scope.

Two of the other default sliders deal with satisfaction. Stakeholder Satisfaction, I believe, comes from delivering a quality product on time with the features stakeholders deem valuable. Team Satisfaction comes from working in an environment where your work is valued by the customers. I don’t think those are really project “sliders,” although I imagine someone could argue there’s some usefulness in having the project team discuss them.

On the other hand, “Quality” isn’t an item to be traded off.

I was happy to see that in Mike Cohn’s blog, the quality slider sits in the middle, at “3,” in every example. That makes sense: there is a middle ground of ‘reasonable’ quality. You typically don’t need to run a battery of tests that takes every possible path through the system, no matter how obscure. I’ve agreed to let a release go out when there was a potential bug in a place a user could only reach if the moon was full, they stood on their head, and they typed ‘MeMeMeMeMe’ 5000 times into the input box. Well, maybe a little easier to reach than that, but some boundary case far beyond any reasonable business scenario. Or a low-level issue (a typo, or something only an admin user would see and could easily work around). I suppose some could argue those are still ‘bugs’ in the code. We list those bugs as 3-Low/3-Non-Critical and move them to the “Future” bucket, a someday clean-up pile if we ever run out of work. On the other hand, if no one will ever encounter them, they’re probably not worth touching the code to fix. Every code change is a potential new bug, as they say.

But that isn’t what I think of when I see a “Quality” slider on a screen. I worry that people may believe they can actually move it off the centerline “3” position into a lower-quality setting as an up-front project “trade-off.” This isn’t a tool used at the end of a project to figure out how to get an overdue release out the door; it’s an up-front planning tool. “Let’s add these 3 additional features and we’ll deliver with a few more bugs.” Yikes.

And I fear (I know) it is common in the software industry to do just that. At Azerity and other places I’ve worked, we had a zero-bug policy for every release: every bug identified and classified as a potential user issue had to be resolved and tested before the product could ship. Of course, we also had an ‘exceptions’ clause (in business, one has to be sensible). If one or a handful of issues were found at the very end of the test cycle and were deemed unlikely to cause users real problems, or affected only, say, one admin user who could be counseled on how to avoid them until a fix was released, we had a sign-off process for those. But, believe it or not, that sign-off wasn’t needed in every release, or even most releases. With an appropriate Plan (schedule), sufficient People for the Plan, and a reasonable Product size in new features (the 3 P’s), you shouldn’t ever consider quality as something to be traded off.
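As a sketch of what such a release gate might look like in code, here is one possible version; the bug fields and the sign-off flag are my illustration, not the actual tooling we used.

```python
from dataclasses import dataclass

@dataclass
class Bug:
    bug_id: str
    severity: str             # e.g. "1-Critical" ... "3-Low/Non-Critical"
    user_facing: bool         # classified as a potential user issue?
    signed_off: bool = False  # exceptions-clause sign-off granted

def can_release(open_bugs: list) -> bool:
    """Zero-bug gate: every open bug classified as a potential user
    issue must be resolved, unless it carries an explicit
    exceptions-clause sign-off."""
    blockers = [b for b in open_bugs
                if b.user_facing and not b.signed_off]
    return len(blockers) == 0
```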

What does it mean to “slide” the quality slider to the left (lower quality), from a ‘3’ to a ‘2’ or even a ‘1’? If you ship software with known bugs that clients will find, it impacts tech support, causes escalations, creates client dissatisfaction, and costs more to fix later.

What’s the cost of bugs in delivered code? A well-known rule of thumb in software is that the cost of a bug grows roughly ten-fold with each development phase it passes through without being caught. If fixing a bug the developer finds in the current release takes an hour (say $100), the same bug found by QA costs about $1,000 (QA test time, a round trip back to development, potentially other code impacted, code re-checked in, re-tested: at least a day total). Found by the client, it can easily end up at $10,000 or more (tech support time, troubleshooting, reproducing it on in-house systems, pulling developers off their current work, checking out the old branch on their dev systems to fix it, merging code if other changes have landed since). That is also the argument for good code reviews, unit tests, and skilled developers.
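As a back-of-the-envelope illustration of that ten-fold rule (the dollar figures come straight from the paragraph above; the phase names are just for the example):

```python
# Rough cost of a bug by the phase in which it is finally caught,
# using the $100 base figure and the ten-fold-per-phase rule of thumb.
BASE_COST = 100  # developer finds and fixes it immediately

PHASES = ["by the developer", "by QA", "by the client"]

for escaped, phase in enumerate(PHASES):
    print(f"caught {phase}: ~${BASE_COST * 10 ** escaped:,}")

# caught by the developer: ~$100
# caught by QA: ~$1,000
# caught by the client: ~$10,000
```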

I can see an argument for moving the Quality slider to the right for FDA-regulated or satellite software (you often can’t fix ‘bugs’ in space). But that is more of a company-wide process issue (how the program is run, what types of tests are required for delivery), not an Agile project-by-project team “trade-off.” So I still don’t see a reason or purpose for a “Quality” slider.

What bothers me about having “Quality” as a slider is that I’ve been in companies where quality actually “is” treated as one. I’ve been in meetings where engineering VPs say, “Well, you can’t deliver code without bugs.” People I’ve worked with retort, “Of course you can; we’ve always done it.” But too often the software industry seems to believe that software inherently has to have bugs. In my experience, that isn’t the case. There needs to be a customer outcry against shoddy software deliveries.

I think having a Quality slider is wrong. It’s true that in real life quality does get compromised: requirements keep morphing until there’s no time to finish the job, engineering over-designs software beyond the requirements and then can’t deliver, or management doesn’t supply the people needed to meet the schedule yet the schedule doesn’t move. In those instances, as the release deadline fast approaches, the decision is often to “ship as is; we’ll fix the bugs in the first maintenance release.” That’s always the wrong decision.

The right approach is to manage the P’s (Plan, People, and Product) throughout the project. If the schedule (Plan) is the most important part, and at some point the people you have are all you will have (beyond a certain point, bringing on new bodies doesn’t help because of ramp-up time), smart managers start cutting functionality quickly. I was always quite draconian with my red pen when a release looked like it was in trouble: reduce the load and save the schedule, or negotiate for more schedule. But that happens at the END of the project cycle, not during the planning phase.

Up front, where the project sliders are used, the test plan needs to be appropriate for the amount of code being implemented/modified.

Quality isn’t a “Trade-Off”.
