It’s March already. As days, months, and years pass by, often we just move ahead, one step after another, and don’t lift our heads up to see if we’re going in the right direction or what progress we’ve made. Periodically we need to stop, step back, and assess our progress and how we’re doing. True in life, true in software companies.
Sometimes in a software company, every organization is hard at work but something is amiss. In one software company, the technical support team felt that customers' needs weren't being addressed, yet all of the product organizations were working hard, producing new releases with client-requested enhancements and regularly issuing standard bug-fix maintenance releases. All of the orgs felt busy and overworked but believed the product and its quality were on track. By using metrics, they were able to assess the real status.
They evaluated metrics comparing the number of customer calls currently being reported as product bugs or other product issues against the numbers one year and two years prior. The metrics also included the turnaround time to get each issue resolved.
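As a rough illustration of that kind of year-over-year comparison, here is a minimal sketch assuming a hypothetical list of ticket records with an opened date, a resolved date, and a category field (the field names and data are purely illustrative, not taken from this company's system):

```python
from datetime import date
from statistics import mean

# Hypothetical ticket records: (opened, resolved, category)
tickets = [
    (date(2011, 5, 2),  date(2011, 5, 9),  "bug"),
    (date(2012, 3, 14), date(2012, 6, 1),  "bug"),
    (date(2013, 1, 20), date(2013, 1, 27), "bug"),
    (date(2013, 2, 3),  date(2013, 5, 15), "bug"),
]

def yearly_bug_metrics(tickets, year):
    """Count bug reports opened in a given year and their average days to resolution."""
    bugs = [(opened, resolved) for (opened, resolved, cat) in tickets
            if cat == "bug" and opened.year == year]
    count = len(bugs)
    avg_days = mean((resolved - opened).days for (opened, resolved) in bugs) if bugs else 0
    return count, avg_days

for year in (2011, 2012, 2013):
    count, avg_days = yearly_bug_metrics(tickets, year)
    print(f"{year}: {count} bug reports, avg {avg_days:.1f} days to resolve")
```

Even a simple report like this, run against real ticket data, makes a multi-year trend in volume and turnaround time hard to ignore.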
What was clear from the metrics was that the number of bug reports had been steadily increasing as new clients bought and installed the software and existing clients steadily upgraded to newer releases. In parallel, several new projects were underway, stretching the bandwidth of the product marketing, development, and QA orgs. So instead of quickly fixing all newly reported issues as they came in, which had been the process in prior years, fixes were being pushed out to maintenance releases two, three, or more months in the future instead of the next planned release, in order to reduce the workload on developers and QA. As a result, more clients were finding related product issues and more issues were being escalated. To appease the clients who complained the loudest and wouldn't wait for future releases, those clients were sent one-off class files, tested only by the support organization instead of QA. If multiple clients needed the change in different releases, the developers zipped up sets of fixes. Confusion ensued about which client had which file, and instead of easing the load, this degraded process was actually increasing the amount of work due to more calls and more one-off fixes. As a result, overall product quality suffered, causing more client frustration. Compared with prior years, when bugs were immediately categorized and important issues quickly fixed, there were now too many fire drills and much confusion.
Metrics in this case uncovered both the negative quality trend and the underlying cause. But there is a right way and a wrong way to use metrics. A company can recognize metrics used the wrong way when employee behavior is affected in non-useful ways. For example, one company measured its Technical Support response time and rewarded the techs for keeping 90 percent of first customer contacts within a four-hour turnaround. The TS metrics looked great, but in reality, when the techs received an automated call from a client, they would place their return call during the lunch hour or just after the company closed. That raised the odds that they could simply leave a voice message, thereby "responding" within 4 hours without having to spend time discussing or resolving the problem, which could tie them up and make them miss another client's 4-hour window. As a result, clients weren't talking to a human for a day, two days, or even a week, and were playing "telephone tag" and getting frustrated.
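The gap between the reported number and reality comes down to which event the metric counts. A small sketch of the difference, assuming hypothetical call records that log both when any response was made (including voicemail) and when the client was actually reached live (field names and data are illustrative assumptions, not from the company described):

```python
from datetime import datetime, timedelta

# Hypothetical call records: (received, first_response, first_live_contact)
calls = [
    (datetime(2013, 3, 1, 9, 0),  datetime(2013, 3, 1, 12, 5), datetime(2013, 3, 4, 10, 0)),
    (datetime(2013, 3, 1, 14, 0), datetime(2013, 3, 1, 17, 2), datetime(2013, 3, 2, 9, 30)),
]

WINDOW = timedelta(hours=4)

def pct_within_window(calls, field_index):
    """Share of calls where the chosen event happened within the 4-hour window."""
    hits = sum(1 for call in calls if call[field_index] - call[0] <= WINDOW)
    return 100.0 * hits / len(calls)

print(f"Responded (any contact, incl. voicemail): {pct_within_window(calls, 1):.0f}%")
print(f"Client actually reached live:             {pct_within_window(calls, 2):.0f}%")
```

Measured against "any response logged," the team looks perfect; measured against "client actually reached," the same data tells the story the frustrated clients were telling.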
In another company, a percentage of each developer's merit plan was based on a low bug count. But often, issues reported by users as "bugs" were in reality items that were never spec'd or were spec'd incorrectly. A lot of conflict resulted, and arguments arose between the development org and support ("It is a bug." "No, it isn't a bug."). Team members became opponents, which created organizational silos and mistrust. Once the underlying issue was recognized, the process was changed and a new Tracker category was created, separate from "bug" or "enhancement," to denote a design flaw or spec bug. This allowed the Technical Support team to make the case that the issue was a bug in the client's eyes and get the problem resolved in a maintenance release rather than waiting for the yearly enhancement releases.
At the same time, it correctly removed the "blame" from the development organization, since the issue wasn't caused by a coding or process error the way a real bug would be, and the correct metric was once again being used to measure developer performance. The finger-pointing and arguments ceased, silo walls came down, and the product organizations coalesced into a supportive, cohesive team.
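One way to picture the fix: once "spec bug" is its own category in the tracking data, the developer-facing bug count can exclude issues that were never coding errors. A hypothetical sketch (the category names and data layout are mine, not this company's or Tracker's):

```python
from collections import Counter

# Hypothetical issue records: (assigned_developer, category)
issues = [
    ("alice", "bug"),
    ("alice", "spec_bug"),     # design/spec flaw, not a coding error
    ("bob",   "bug"),
    ("bob",   "enhancement"),
]

def coding_bug_counts(issues):
    """Per-developer count of true coding bugs; spec bugs and enhancements are excluded."""
    return Counter(dev for dev, cat in issues if cat == "bug")

print(coding_bug_counts(issues))  # Counter({'alice': 1, 'bob': 1})
```

The same records still drive the support-facing view (the client's "bug" gets fixed in a maintenance release), while the performance metric only counts what developers actually control.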
It’s easy to maintain the status quo – to march along without noticing the slow and gradual deterioration of quality and effective processes. But by stepping back periodically and reviewing key metrics, teams can make sure they are working effectively and efficiently.
PS: Make sure you have measurable metrics. Use Tracker to track Calls, Bugs, Enhancement requests, and more, so the metrics are at your fingertips when you need them.