Monday, August 3, 2009

Definition of Done and the Quest for the Potentially Shippable Product

One of the main contributions from Scrum to the software development community is the awareness it has created about obtaining a tangible, deliverable product at the end of each sprint. This tangible product, the “potentially shippable product” in Scrum terminology, is what governs all efforts from the team, the Scrum Master and the Product Owner. In theory, the potentially shippable product should be ready for shipping at the end of each sprint, provided that you can find a client willing to buy a product that works with limited functionality.
Theory contradicts reality on many occasions, because the team hasn’t implemented enough functionality or hasn’t tested it thoroughly enough for the result to be considered a valuable product that can be delivered to clients. Furthermore, confusion arises within the team when there isn’t a clear and unified Definition of Done (DoD). For instance, for developers “done” may mean that a user story has been implemented but not tested. The DoD needs to be understood by everybody in the team; moreover, it should be extended to different levels to cover all the expectations and requirements of the team and other stakeholders. Not having a clear DoD eventually reduces the team’s chances of attaining a potentially shippable product; what is even worse, without a clear DoD the team will be working as if trying to hit a moving target during the sprint.
This article will present and analyze different perceptions of the DoD in order to suggest a unified version. A word of caution, though: the DoD is particular to each project, so what we suggest here needs to be evolved before it can be applied to a specific project.
Different Types of DoD
Let’s start by saying that we have only one team, but with different people playing different roles in it. As a consequence of their work, engineers tend to have their own perception of how and when something can be considered done. Let’s take a look at the following definitions of done:
  • For development, some code is considered done when it has been written, compiled, debugged, unit tested, checked into a common repository, integrated in a build, verified by a build verification tool and documented using an automatic documentation generator.
  • For automation, test cases are considered done when the corresponding automated testing scripts have been coded, unit tested and run against a build (a minimal sketch of such a script appears right after this list).
  • For quality engineering, done means that the user stories received on a specific build have been tested by running test cycles, encountered bugs have been reported using a bug tracking tool, the fixes provided have been validated and the associated trackers have been closed. Accordingly, test cycles include both manual and automated test case execution.
  • For info dev, done means that all the help files in the product, along with the printed manuals, are well written in English or whatever the target language is.
  • For project managers, done means that the sprint has ended, the user stories have been completed (not necessarily accepted by QE) and the scheduled hours have been burned. Moreover, for managers done usually means that there is something, anything at all, that can be presented to the stakeholders in a demo.
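To make the automation definition above more concrete, here is a minimal sketch, not taken from any real project, of what an automated test script for a hypothetical user story (“orders over $100 get a 10% discount”) might look like in Python. The story, the function name and the threshold are all invented for illustration; in a real project the test would import the pricing code from the build under test instead of defining a stand-in.

import unittest


def apply_discount(order_total):
    """Stand-in for the production pricing code; a real script would import
    this from the build under test rather than defining it here."""
    if order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total


class DiscountStoryAcceptanceTest(unittest.TestCase):
    """Automated checks mirroring the hypothetical story's acceptance criteria."""

    def test_orders_over_threshold_get_ten_percent_off(self):
        self.assertEqual(apply_discount(200.00), 180.00)

    def test_orders_at_or_below_threshold_pay_full_price(self):
        self.assertEqual(apply_discount(100.00), 100.00)
        self.assertEqual(apply_discount(50.00), 50.00)


if __name__ == "__main__":
    # Running this script against every new build is what allows automation
    # to call the story "done" in the sense described above.
    unittest.main()

Once such a script passes against a given build, automation can call the story done; as argued below, however, that alone does not make the story shippable.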
Now let’s analyze why those definitions, taken separately, won’t help build the potentially shippable product:
  • The development DoD is not enough because code was produced but not tested, at least not from the user’s perspective. Crucial tests such as functionality, usability, performance, scalability and stress are still pending. Without passing all those tests, at least to a certain level, the product is simply not good enough to be shipped. A big temptation is to consider that in the early stages of the product the code is not required to pass all tests, or that there’s no need to test it at all. This of course violates the very conception of a potentially shippable product.
  • The automation DoD is not good enough either, because it only covers code written to test other code; no bugs have been discovered, validated, retested or closed. It’s very true that automated test cases save many man-hours of manual testing, but automation should only be used as a tool that helps Quality Engineers in their work, not as a milestone that, once reached, qualifies the product for release.
  • The info dev DoD also falls short because it only considers manuals and help files, not the product itself or its quality. It’s not uncommon to see technically excellent products with a high degree of associated quality but with poor documentation. The opposite scenario is also common and even more problematic.
  • The Project Manager’s DoD is focused on metrics and high-level information. The biggest risk for Project Managers is to look at the wrong metrics and try to push a product out the door when it’s not complete or stable enough. Avoiding biased perceptions based on information collected from only one group of engineers in the team is key to telling whether the product is ready to go into production.
  • The Quality Engineering DoD is, in my opinion, the definition that contributes the most to the goal of having a potentially shippable product. The reason is simple: QE should be the last checkpoint for telling whether the implemented user stories are good enough to be considered part of a potentially shippable product. One consideration is that the documentation provided by info dev should also be reviewed for technical accuracy. Another is that automated test cases should also be checked for effectiveness and real contribution to bug detection and validation.
It is evident that we need a DoD that works for the whole team, helping it to stay focused in the right direction while serving as a true enabler of the potentially shippable product concept. But before we go into that, let’s explore the different levels of accomplishment.
Levels of accomplishment
By accomplishment we mean the milestones that the team reaches during the project’s execution; there are several levels of accomplishment, as described below:
  • Task level means that a task, whatever it was, has been completed. Tasks are usually performed by individuals in the team and still need to be cross-checked or validated. Tasks are purposely treated as the smallest chunk of work, representing what has been done by somebody in the team. Tasks have an identifiable output in the form of code written, test cases written, environments set up, test cases automated, documents prepared, etc.
  • User story level means that all tasks in a user story have been completed; when this occurs we also have a fairly good chance of saying that the user story has been accepted. Although bugs might appear during testing, once they are fixed the user story can be marked as accepted and will go directly into the potentially shippable product.
  • Sprint level means that the time-boxed hours have been burned; more importantly, the planned user stories should have been accepted. One very common mistake is to consider a sprint successful because the team was able to complete a certain amount of work by the sprint end date; this is true only to a certain degree. The problem arises when you compare the work that has been done with what can and should be included as part of the potentially shippable product.
  • Release level is achieved when the product finally meets all required criteria (technical, legal, marketing, financial, etc.) to be put out the door and made available to clients.
  • Product level means that the team has been able to put several releases out the door; by releases we don’t mean hot fixes and service packs. Hot fixes are symptomatic of poor quality, as they usually indicate that urgent fixes are required by clients who are reporting serious problems with the product. By the same token, releasing many service packs may be a good indicator of quality problems in the product, the processes and/or the team.

Unifying the DoD with the level of accomplishment
The ultimate DoD should be one that really helps the team envision the true goal behind the sprint: building a potentially shippable product. In that sense, the Quality Engineering DoD seems to be the one that gets us closest to that goal, for the following reasons:
  1. It deals with user stories, quality and completeness.
  2. It also covers automation and info dev DoD; this assertion works under the assumption that QEs are the ones who execute automated testing suites and review written product documentation for technical accuracy.
  3. User stories accepted by QE are probably the best indicators for managers because they reflect the degree of accomplishment in a sprint and are directly correlated to functionality that can be included right into the potentially shippable product.
Consequently, this QE DoD, applied at the task and user story levels, will have a positive impact on the sprint accomplishment level, which in turn will favorably impact the release and product accomplishment levels.
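As a rough illustration, this unified DoD could even be written down as an explicit per-story checklist that the team walks through before counting a story toward the sprint. The sketch below is hypothetical: the criteria names are invented and, as cautioned at the beginning of the article, each project would need to adapt them.

from dataclasses import dataclass, fields


@dataclass
class StoryDoD:
    """Hypothetical per-story checklist encoding the unified, QE-centred DoD."""
    code_checked_in: bool = False              # development: written, unit tested, in the repository
    build_verification_passed: bool = False    # integrated and verified in a build
    automated_tests_passing: bool = False      # automation: scripts coded and run against the build
    manual_test_cycle_passed: bool = False     # QE: functional test cycle executed
    reported_bugs_closed: bool = False         # fixes validated, trackers closed
    docs_reviewed_for_accuracy: bool = False   # info dev output reviewed by QE


def is_done(story: StoryDoD) -> bool:
    """A story counts toward the potentially shippable product only when
    every criterion in the checklist holds."""
    return all(getattr(story, f.name) for f in fields(story))


if __name__ == "__main__":
    story = StoryDoD(code_checked_in=True,
                     build_verification_passed=True,
                     automated_tests_passing=True,
                     manual_test_cycle_passed=True,
                     reported_bugs_closed=True)
    # The documentation review is still pending, so the story is not done yet.
    print(is_done(story))  # prints: False

Whatever form the checklist takes, the point is that every role’s criteria are evaluated on the same story before it is declared done.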

4 comments:

  1. This article has just been published by the Agile Alliance; see this link: http://www.agilealliance.org/articles_by_category?id=17

  2. Most folks I've talked with these days would say that a story or task is not DONE till it passes acceptance tests. What you call 'done' they call 'ready to test', and the QA element of the team (you DO have a tester embedded on your team, right?), who earlier helped define the acceptance tests so dev knows what 'done' looks like, can then execute those tests for that story, and it can be declared done (and story points counted) or it goes back to dev because something got missed.

    If you're not doing this, and testing as you go, then IMHO you're not doing Scrum, because you don't KNOW whether your stuff is actually potentially shippable or not. If you're handing everything off 'to test' at the end of the sprint, as a bundle, then I'd argue that you're just basically practicing a modified form of waterfall, perhaps with better development practices (e.g. XP stuff) and very short cycles, but that 'hand off' as a bundle, and not story by story, is a dead giveaway that you've still got some waterfall in your process.

    I'd also argue that if you come out of the sprint with a bunch of stuff you are handing off to test, you have NOT produced 'potentially shippable code', because you don't KNOW that you can actually ship it.

    Potentially shippable means that if the various stakeholders wanted to deploy the code to users, RIGHT NOW, they could do so without any questions or hesitation. It means the only reason to NOT ship might be one of a strategic nature, outside the control of the dev team.

    Instead, what you are producing is code that is potentially of shippable quality. You don't KNOW it's actually of shippable quality, because nobody has done end-to-end testing to make sure all the stories work and all acceptance tests pass. So no, you ain't done.


    Read about "Agile Testing" http://www.amazon.com/Agile-Testing-Practical-Guide-Testers/dp/0321534468 or agile acceptance testing and specification by example http://www.acceptancetesting.info/the-book/

  3. Chuck,

    I agree with you; something is not done until it passes all the manual and automated tests that were designed for that feature.

    Further, not having "done" user stories violates the principle of having a "walking skeleton" at the end of each Sprint.

    Unfortunately, many people still believe that done means implemented but not tested, or tested with bugs.

    Somebody asked me an interesting question the other day: should we allocate time in the Sprint for bug fixing? If so, how much? I of course have an answer to the first question; for the second, I'm still thinking about it.

    I'll read the material that you recommended.

    Regards,

    Juan

  4. Hi. Greetings. This post is really good and the blog is really interesting. It gives good details.
    Scrum Detail
