Who Needs Documentation?

I posted a question to an Agile LinkedIn discussion group, “Where do you go to see what the code really does besides asking a developer?”, to start a discussion about the role of specifications in the new Agile methodology. The discussion threads have been interesting and lively (LinkedIn, group “Agile”). So I wanted to answer the question posed numerous times in those threads: “Who would use that documentation and why/how?”

First, I’m not talking about old, antiquated waterfall design specs that are soon out-of-date and used by no one. I’m talking about living, breathing hierarchical information (in a central tool accessible by everyone) that represents the repository of product knowledge: protected, validated, for use by the entire organization. Documents that everyone in the organization takes ownership of and uses. A cool set of “shiny docs”.

I can answer later how it could be done (it seems it’s not done at most companies, and recent surveys say 83% of companies at best use Word docs to describe their requirements) – but for argument’s sake, assume there is a repository of on-line specifications (“shiny docs”).

Who uses our current on-line specs and why do they need them?
a) The developers – for the new guy, not as a replacement for training and not a static user manual, but rather a handy reference area to delve into for more detailed, complex functional understanding, even some design details. A wiki for the product, helpful even for the most experienced developers.

b) Technical Support (TS) (the Call Center) wants to be able to quickly identify whether a customer request is a bug, a training issue, or a candidate for a new feature request. They do their initial investigation using the specifications and replicate the issue in-house if they believe it’s a bug (which helps identify whether it’s a customer configuration problem or base code). For non-bugs, they use the specs to understand how the product was designed versus what the customer is requesting, so they can give better feedback to the PMs and better represent the customer’s business case for the change. Having a knowledgeable TS team that can get accurate information about all features in a very large enterprise application streamlines getting real bugs and issues into the product development releases. It’s much quicker if everyone knows and agrees a fix is needed than if something just sits on the backlog pile until the next PM release review cycle. Customer satisfaction is improved with clear feedback about whether a ‘fix’ is feasible and how quickly. Some product changes can’t be immediate. Customers get that. Clear communication either way improves customer satisfaction.

c) The Product Management Team: If you are a Product Manager for a huge enterprise application and need to add new features and functions requested by customers to an older module with tons of business functionality, you probably don’t know everything about what the product does or even all of the history about what it should do and why customers wanted it that way. The online specifications help immensely.

If such documentation could be produced automatically as the result of the development process – wouldn’t it be useful? Why not aim for that?

We don’t need separate tools

(Posted to Duck Pond from Software 2020)
Aren’t new enhancement requests, stories, bugs, and requirements all just “Tasks”? And instead of all the different tools today — Agile Project Backlog Management Tools, Bug Tracker Tools, Requirements Management Tools — shouldn’t those be just different views to manage these (or subsets of these) “Tasks”? I think so.

Story Boards, Requirements Specs, Bug Tracking Tools are all just different views of “Tasks”.

Given that perspective, a modern robust Tracker would manage all “Tasks” (whether they are the new Agile Stories, bugs, client issues, etc.), consolidated for team tracking and for estimate-to-complete reports that consider the team’s vacation calendar and all of their work combined.

Then modern Agile tools could help plan new releases by taking into account information about the developers’ other work (hence their percent availability) and provide a better estimate of completion for the current sprints and releases. Useful tools would then help manage the current release (in Agile, that’s Story Boards and organized hierarchical views – grouping tasks into epics, stories, and their children) and facilitate the team’s management of these Tasks as they move from concept to delivery.

But keep them in the bigger database of “Tasks” so managers can manage the team’s full effort, not just the current release.
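As a sketch of this idea in Python (all the names, fields, and functions here are hypothetical illustrations, not the actual Software 2020 schema): one shared Task record type, with the different tools reduced to filtered views over a single repository, plus an availability-adjusted estimate to complete.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class TaskKind(Enum):
    STORY = "story"
    BUG = "bug"
    ENHANCEMENT = "enhancement"
    CLIENT_ISSUE = "client issue"

@dataclass
class Task:
    id: int
    kind: TaskKind
    title: str
    estimate_days: float = 0.0
    release: Optional[str] = None      # None = still on the backlog
    parent_id: Optional[int] = None    # epics -> stories -> children

# Each "tool" is just a filtered view over the one shared repository.
def backlog_view(tasks: List[Task]) -> List[Task]:
    return [t for t in tasks if t.release is None]

def bug_tracker_view(tasks: List[Task]) -> List[Task]:
    return [t for t in tasks if t.kind is TaskKind.BUG]

def release_board_view(tasks: List[Task], release: str) -> List[Task]:
    return [t for t in tasks if t.release == release]

def estimated_days(tasks: List[Task], release: str,
                   availability: float = 1.0) -> float:
    """Estimate to complete a release, scaled by the team's percent
    availability (vacations, other work pulled from the same repository)."""
    work = sum(t.estimate_days for t in release_board_view(tasks, release))
    return work / availability
```

So a team at 50% availability facing 5 days of estimated work would report a 10-day estimate to complete – because their bugs, client issues, and stories all live in the same database, the estimate reflects their full load, not just the current release.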

That’s what Software 2020 does.

Can ‘Agile’ and ‘Architecture’ Co-Exist?

This was a question raised on a recent Agile LinkedIn discussion forum, prompted by a computer.org article: “Software architecture is getting a bad rap with many agile proponents due to such aspects as big design up front, massive documentation, and the smell of waterfall. It’s pictured as a nonagile practice, something we don’t want…”

I pondered this and thought that the issue arises because some architects feel they need to design a “big, robust architecture” up-front in order to have a platform that can support changing needs release after release. Whereas in reality, too often what they are building ends up overly complex, heavy, hard to develop on and definitely not Agile. And too often their “elegant” and “brainy” complex architecture causes major issues down the road.

Instead, if the architect thinks “clean”, “simple”, and “functional” and only creates the underlying framework as needed, even very complex projects can be accomplished in an agile way. I guess that would be called an Agile Framework 🙂

It’s not an easy problem – you need a very smart architect who has a clean, lean philosophy and can guide the development team and help with refactoring or restructuring when needed. If the architecture is clean and simple, typically there are fewer issues encountered over time and the product requires fewer people to develop and maintain it. At least that’s been my repeated experience in multiple companies and projects – even long before “Agile” came about.

I’ve been lucky to find such architects over the years. The latest one is Freeman Michaels, who was the Architect at Azerity and is my partner on Software 2020.

Where do you go to see what the code “really” does besides asking a developer?

What I worry about is: where are the final documents that describe what the software does? That seems to get lost in Agile. And it is sooooo important for long-term maintenance, for Technical Support/the Call Center, and, years later, for Product Owner updates.

Today documentation is “Not Agile”, and I question that. Before Agile, if you had a real requirements management tool (we used DOORs), there were requirements; developers would look at the DOORs specs (linked to our internal task tracker for any new feature); during development they’d give feedback if DOORs needed to be updated (it was fast, real-time, agile); QA would test against the tracker and DOORs; and we then knew the DOORs specs accurately reflected what the code did. What a timesaver for Tech Support (the Call Center), PMs, and others to know what’s in the code without having to go back to the developer. Without specs you seem to lose that.

Not the waterfall design docs Agile eschews that are in Word and get out of date before the code is delivered – but real information somewhere like in DOORs that says what the product is doing. Information the developers worked from and that QA says is how the code works.

So while I’m totally on-board with the Agile concepts for the software development itself, what is the answer for the rest of the company to know what’s in the code? Are we back to having the software developers go look at the code and tell them? Stories in tools like Rally are for the current release, and Agile doesn’t say how to update them or what happens for the next release. Agile “Stories” are incremental. After two or three releases, where does someone go to see what the code does – other than asking developers (which is a waste of their time)?

That’s one of the big problems we’re hoping to solve or facilitate with Software 2020.

Why Software 2020

In the ’90s we found the perfect software process and the right toolset. Our process was practical and the results were amazing. Leveraging managers with years of software management expertise together with very talented, practical software architects, the Azerity product demonstrated that software can be done right. Azerity competed with the big gorillas (SAP, Siebel, Oracle) and won every time. And it didn’t cost the clients millions to install and millions more to upgrade. High quality, intuitive. Some said it was nearly perfect software.

What did we learn?

  • One. How to build a practical architecture. One that delivers what clients want – intuitive, useful, usable. How to avoid the pitfalls that often occur when engineers aim for software elegance but end up with an architecture that is overly complex, bulky, and slow.
  • Two. How to develop a software plan that includes all aspects including deployment and upgrades. We think software companies should charge for value-added services but not for avoidable deployment and upgrade costs.
  • Three. How to use the right processes and tools to make sure the software meets spec, is a high-quality release (no bugs – that’s right, none), and is completed on time. The right software development tool to improve the process and software quality was SD Tracker.
Our process was practical

  • Adopted what worked from the government SEI/CMM and 2167A
  • But eliminated unnecessary process steps
  • Eschewed organizational silos and waterfall ‘walls’
  • Encouraged developer creativity and feedback (communications)
  • Fostered accountability and ownership (“buy-in”)

Our management tools were SD Tracker and DOORs

  • SD Tracker for bug tracking, enhancement requests/backlog, internal projects and our Call Center
  • DOORs for Requirements Management – our Specs


  • Amazing productivity, good quality, happy support team, satisfied customers

Others who saw our process said we were “Agile”

  • I said “Yes, we’re very agile” (little “a”)
  • Later I was trained in “Agile”. I thought “hmm” – some good things, but some bad. It could be great with more focus on the bigger picture.

Duck Pond Software was created to provide support to software companies on architecture, product, and process. As the 2000s have progressed and more and more teams have moved to Agile, I’m finding that Agile is helping teams move to better software development methodologies, and that the ’90s processes can be updated to support Agile teams.

But I’m also finding the current Agile tools are lacking.  That’s why we’ve started software2020.org and are developing Software 2020. A demo will be available soon. It combines the best of the Azerity processes and tools plus Agile. There’s more to come . . .

Is “Quality” a Trade-Off?

One tool taught in Agile methodology is the use of “Project Success Sliders”, initially devised by Rob Thomsett in his book Radical Project Management. Agile training takes this approach, and there is now a nice tool to use when considering project success sliders (see Mike Cohn’s blog). The theory is that sliders are a way to convey expectations to the team. By default there are six sliders, each of which reflects some dimension by which project success can be determined – for example, delivery of all planned features, quality, or meeting an agreed-upon schedule. Each slider goes from 1 to 5, with the midpoint being 3.

My first reaction to this is “What? Trading off Quality?” What does it mean to have a Quality “slider”?

In my years of software management, I’ve always believed there are “3 fuzzy peas”, not four – the 3 P’s that drive a software project: Plan, People, and Product (see my blog on Fuzzy Peas). Related to the sliders above: Plan = Time, People = Budget, and Product = Scope.

The first slider above, Stakeholder Satisfaction, I believe comes from delivering a quality product on time with the features they deem valuable. Team Satisfaction comes from working in an environment where your work is valued by the customers. I don’t think those really are project “sliders” although I imagine someone could argue for some usefulness in having them discussed by the project team.

On the other hand, “Quality” isn’t an item to be traded off.

I was happy that in Mike Cohn’s blog, in every example the quality slider is always in the middle, at “3”. That makes sense – there is a middle ground on ‘reasonable’ quality. You typically don’t need to run a battery of tests that takes every possible path through the system regardless of how obscure it is. I’ve agreed to let a release go out when there was a potential bug in a place a user could only get to if the moon was full, they stood on their head, and they typed ‘MeMeMeMeMe’ 5000 times in the input box. Well, maybe a little easier to get to than that, but some boundary test that goes far beyond any reasonable business case. Or a low-level issue (typos, or an issue only an admin user would see and could easily work around). I suppose some could argue those are still ‘bugs’ in the code. Those bugs we list as 3-Low/3-Non-Critical and move to the “Future” bucket – a someday clean-up pile, if we ever run out of work. On the other hand, if no one will ever encounter them, they’re probably not worth messing with the code to fix. Every code change is a potential new bug, they say.

But that isn’t what I think when I see a “Quality Slider” on a screen. I think that people may believe they can actually move it off the centerline “3” position into a lower-quality setting as an up-front project “trade-off”. This isn’t a tool used at the end to help figure out how to get an overdue release out the door; it’s an up-front tool. “Let’s add these 3 additional features and we’ll deliver it with a few more bugs.” Yikes.

And I fear (I know) it is common in the software industry to do just that. At Azerity and other places I’ve worked, we had a zero-bug policy for shipping a release. Every bug that was identified and classified as a potential user issue had to be resolved and tested in order to release the product. We of course also had an ‘exceptions’ clause (in business, one has to be sensible): if there were one or a handful of issues found at the very end of the test cycle that were not deemed likely to cause users real problems, or that might only affect, say, one admin user who could be counseled on how to avoid it until a fix was released, we had a sign-off process for those. But, believe it or not, that sign-off wasn’t needed in every release or even most releases. With an appropriate “Plan”, schedule (sufficient “People” for the Plan), and reasonable new-feature or “Product” size (the 3 P’s), you shouldn’t ever consider quality as something to be traded off.

What does it mean to “slide” the quality slider to the left (lower quality), from a ‘3’ to a ‘2’ or even a ‘1’? If you ship software with known bugs that clients will find, it impacts tech support, causes escalation issues and client dissatisfaction, and costs more to fix later.

What’s the cost of bugs in delivered code? It’s a well-known software rule of thumb that if fixing a bug found by the developer in the current release takes an hour (say $100), then if it’s instead found by QA it’s $1,000 (QA test time, the issue goes back to development, other code is potentially impacted, code is re-checked-in and re-tested – at least a day total). But if it’s found by the client, it could easily end up at $10,000 or more (Tech Support time, trouble-shooting, reproducing it on in-house systems, getting developers on it – diverting them from their work – pulling the old branch to their dev systems to fix, and merging code if other changes have occurred since). A ten-fold increase in cost for each development phase the bug passes through without being caught. (That is also the argument for good code reviews, unit tests, and skilled developers.)
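The tenfold escalation can be sketched in a few lines of Python (the dollar figures are this post’s rough illustrative numbers, not measured data):

```python
# Phases a bug can pass through before it is caught, in order.
PHASES = ["development", "qa", "customer"]

def fix_cost(phase_found: str, base_cost: int = 100) -> int:
    """Cost to fix a bug first caught in `phase_found`, assuming the
    rough rule of thumb of a 10x increase per phase survived uncaught."""
    return base_cost * 10 ** PHASES.index(phase_found)
```

Under this rule, a $100 developer fix becomes $1,000 if QA has to catch it and $10,000 if the customer does.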

I can see an argument for saying the Quality slider moves right for FDA or satellite software (you often can’t fix ‘bugs’ in space). But that is more of a company-wide process issue (how the program is run, what types of tests are required for delivery), not an Agile project-by-project team “trade-off”. So I still don’t see a reason or purpose for a “Quality” slider.

What bothers me about having “Quality” as a slider is that I’ve been in companies where quality actually “is” considered a slider. I’ve been in meetings where engineering VPs actually say, “Well, you can’t deliver code without bugs.” People I’ve worked with retort, “Of course you can – we’ve always done it.” But too often it seems to be a common software industry belief that software inherently has to have bugs. In my experience, that isn’t the case. There needs to be a customer outcry against shoddy software deliveries.

I think having a Quality slider is wrong, even though it’s true that in real life quality does get compromised: requirements keep morphing until there’s no time to finish the job, engineering over-designs software beyond the requirements and then can’t deliver, or management doesn’t come up with the people needed to meet the schedule but the schedule doesn’t move. In those instances, often, as the deadline for releasing the product approaches, the decision is to “Ship as is – we’ll fix the bugs in the first maintenance release.” That’s always the wrong decision.

The right approach is to continue, throughout the project, to manage the P’s – Plan, People, and Product. If the schedule (Plan) is the most important part and at some point the people you have are all you will have (eventually bringing on new bodies doesn’t help, due to ramp-up time), smart managers will start cutting out functionality – quickly. I was always quite draconian with my red pen if a release looked like it was in trouble. Reduce the load and save the schedule. Or negotiate for more schedule. But that’s at the END of the project cycle, not during the planning phase.

Up front, where the project sliders are used, the test plan needs to be appropriate for the amount of code being implemented/modified.

Quality isn’t a “Trade-Off”.

Theme Screening

Last week in Agile training we learned about theme screening, theme scoring and relative weighting – tools to help the team decide how important new features were.

It reminded me of the decision matrix my husband and I built when we were debating whether to move to Charlotte, North Carolina. We were living in Silicon Valley, where we had moved after college and lived for several years. Our best friends had moved out from the University of Utah ahead of us, and we joined them. We (my husband and I) both had jobs with Ford Aerospace, and the company was starting a division in Charlotte to apply a technology built at Ford Motor for finding flaws in windshields to the textile industry – to find flaws in cloth.

Mike had been offered the job of Finance Manager, working for the new division’s General Manager. I was given the opportunity to work part time (as I had been doing since taking maternity leave) as a software developer. There were only a handful of new employees. A start-up opportunity.

So we crafted our decision matrix.

  • Friends
  • Area (The south – humid and unknown; California – wonderful)
  • Work opportunity (would mean an immediate promotion for Mike)
  • Family (ours were all in Utah – NC was much further away)
  • Monetary (better salaries were being offered, particularly considering cost-of-living)
  • Etc.

We weighted the categories (assigned relative values), evaluated them, and added up the results. The result: Don’t move to Charlotte. We sighed and looked at each other. Mike said, “But what do you want to do?” I said, “I want to go and give it a try.” He said, “Me, too!”
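A decision matrix like ours can be sketched in a few lines of Python (the weights and 1–10 scores below are made-up stand-ins, not our actual numbers):

```python
# Hypothetical importance weights for each criterion.
weights = {"friends": 5, "area": 4, "work": 3, "family": 4, "money": 2}

def weighted_score(scores: dict, weights: dict) -> int:
    """Sum of weight * score across all criteria."""
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical 1-10 scores for each option.
stay = {"friends": 9, "area": 9, "work": 4, "family": 6, "money": 5}
move = {"friends": 3, "area": 4, "work": 9, "family": 3, "money": 8}
```

With these made-up numbers, staying scores 127 against 86 for moving – the matrix says stay, just as ours did.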

And off we went. It was a great decision. We met great new friends. We saw an area of the country we never would have seen otherwise. We learned about the South. And our youngest daughter was born there.

All the tools for making decisions help – but in the end, also consider your “gut” reaction. In the end, how much do you want it?

If you REALLY want that feature that the decision matrix says is too big or has other constraints, rather than removing it from the list, a better alternative is to find a cheaper way to deliver the function that’s more streamlined. Maybe your development team’s approach is more “elegant” or “robust” than this feature requires. Maybe all the client wants is a simple button and the team was suggesting an entirely new feature.

An important part of the decision criteria is weighing the technical options. If you want the feature badly enough, there may be a way to get it that is more streamlined and still fits into the plan.

Rapid Development and the Easter Bunny

(Image: a monotreme – Easter Bunny?) We received Easter candy from our daughter, Kristin, who is living in Australia while getting a master’s degree in Wildlife Conservation. She pointed out on her Easter card that Australia must have been the origin of the Easter Bunny. After all, “Australia is the only place where you find egg-laying mammals (monotremes)” (e.g., the platypus).

Interesting since I’ve been thinking a lot lately about the tortoise and the hare. You know the old adage – where the hare is fast and quick to the finish line but the tortoise wins the race. Initially one has to think about the “Agile” (i.e., hare-like, fast and hoppy) software development methodologies vs. the old school “Waterfall” methods. Yes – the waterfall methods were somewhat like a tortoise but they were worse, more like a tortoise that had to stop every few steps and wait for the gate to open so he could continue on to the next phase of the journey.

The problem I have with “Agile” methodologies is the somewhat “gleeful” rejection of any and all documentation. “We are going to be a small team, so we’ll just have discussions and come up with the right answers as a team.” Yeah, right. What happens when you need to update your software? Is this “real” software that has users and a next release? What happens then? How does anyone know what the software actually does if it was designed and developed by committee and no document artifacts remain that accurately reflect the software “as built”?

“As built” documents are key to (1) accountability (for marketing, developers, and QA), (2) an understanding of what the software is supposed to do, so that knowledgeable changes can be made, and (3) a resource so that Technical Support/Customer Services can answer the call about “Is the software supposed to be doing this?”

I don’t think that kind of “Agile” is “Practical”. See our Requirements Management blogs (Category “Documentation”) for better ideas about how to really be quick on your feet, stay focused, and develop high-quality, low-cost software!