One of the key questions in every engineering project is how to measure progress. Traditionally, progress is measured by considering how much work has been done in how much time. This simple approach works very well for intensive production systems like factories, where the degree of innovation is constrained by detailed design documents. More to the point, manufacturing tangible and simple things like screws is a repetitive task that can always be done at the same velocity. Consequently, once velocity is known, it is easy to extrapolate and predict how long it would take to produce a given number of screws. Even very complex but tangible engineering masterpieces like submarines follow a process that has been well studied and described down to the last detail in thousands of pages of design documents.

Building commercial software, however, can't be ruled by the same principles. The key differences are poorly defined requirements and changing development conditions. For years, methodologists and gurus tried to provide a methodology that would reduce complexity to a level that allows predictive planning; the results were null, and Agile emerged as an alternative that tries not to predict but to adapt and react to change. Scrum practitioners, however, face a new challenge: how to measure sprint progress. Even if you use Scrum, a project is a project, and stakeholders will always ask the typical question: are we still on schedule? This article explores some indicators that can help answer it: burn down charts, the user story life cycle, estimated vs. actual hours, and team velocity.
Burn down chart
The most basic chart for measuring status is the burn down chart; however, it can be very imprecise for the following reasons:
- Burned hours that are not related to productive tasks, i.e. hours allocated for Spike activities, reading, researching, attending meetings, helping teammates, etc.
- Hours are not the same for development, quality engineering or automation. You can't expect that producing code takes the same effort as testing it or automating its testing. Further, a burn down chart can look good while hiding exactly which sub-team is burning more hours.
- Burning hours per se won't get you close to completing user stories; a user story can't be considered complete just because its planned hours were consumed or even exceeded. Completion should be judged against the user story's acceptance criteria instead.
- Burned hours need to be baselined against the initial estimation; a growing gap between the two could be a symptom of user stories slipping their dates.
Some suggestions for improvements are presented below:
- Create a culture in which your team logs in your tracking tool (Xplanner, VersionOne, an Excel spreadsheet or something else) only the hours that are related to productive work in coding, testing or automation.
- Even though general success or failure is everybody's responsibility, you need to distinguish the different actors that you have: developers, architects, quality engineers, automation guys, tech writers, etc. Not differentiating them will give you the false impression that everyone can burn hours at the same rate and thus create an illusory burn down chart. Separate burn down charts per discipline could be an alternative here.
- Burned hours as a metric need to be cross-referenced with user story completion; again, burning hours alone is not a good indicator.
- Poor initial estimation, bad reestimations and continuous addition of work are enemies of schedule. It’s almost impossible to have every task in all user stories estimated at the beginning of the sprint, and too much estimation constrains agility. Mitigation for this is to monitor the burn down chart progress against the initial estimation and when a significant gap is perceived, work should be moved back to the release backlog or at least reprioritized within the sprint.
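As a rough sketch of the monitoring suggested above, the snippet below builds a per-sub-team burn down baselined against the initial estimation and flags days where the actual line trails the ideal line significantly. All team names, hours and the 15% threshold are invented for illustration, not taken from any real tracker.

```python
# Hypothetical sprint log for a burn down per sub-team, baselined
# against the initial estimation; all names and numbers are invented.
sprint_days = 10
initial_estimate = {"dev": 120, "qe": 60}  # hours per sub-team

# (day, role, productive_hours) -- spikes, meetings etc. are excluded
burned = [
    (1, "dev", 6), (1, "qe", 5),
    (2, "dev", 6), (2, "qe", 7),
    (3, "dev", 4), (3, "qe", 6),
]

def remaining_by_day(role):
    """Remaining estimated hours for one sub-team after each day."""
    remaining, line = initial_estimate[role], {}
    for day in range(1, sprint_days + 1):
        remaining -= sum(h for d, r, h in burned if d == day and r == role)
        line[day] = remaining
    return line

def ideal_by_day(role):
    """Straight-line ideal burn down used as the baseline."""
    per_day = initial_estimate[role] / sprint_days
    return {day: initial_estimate[role] - per_day * day
            for day in range(1, sprint_days + 1)}

# Flag any logged day where the actual line trails the ideal by more
# than 15% of the initial estimate -- a cue to move work back to the
# release backlog or reprioritize within the sprint.
for role in initial_estimate:
    actual, ideal = remaining_by_day(role), ideal_by_day(role)
    for day in range(1, 4):  # days logged so far
        gap = actual[day] - ideal[day]
        if gap > 0.15 * initial_estimate[role]:
            print(f"day {day}: {role} is {gap:.0f}h behind the ideal line")
```

With these sample numbers, the developers fall 20 hours behind by day 3 while quality engineering tracks the ideal line, which is exactly the kind of gap a single aggregated chart would hide.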
User stories life cycle
There are a couple of good reasons for having a life cycle for sprint user stories:
- You need a visual indicator that can quickly show you the progress in a specific user story
- You need visibility of who (development or quality engineering) currently has the user story on his/her plate. This in itself is a good indicator of a user story's progress.

A simple life cycle with five states covers both needs:
- Draft means that a user story has been created but no time estimations have been added yet; in other words, no tasks have been created for the user story.
- Planned means that the user story already has tasks with time estimations. Besides time estimates, a user story is marked as planned when development and quality engineering have agreed on the acceptance criteria that will be applied to decide whether the user story is accepted or not.
- Implemented means that development has completed the implementation of the user story; all code has been unit tested, checked in to the repository, and has passed a build verification test. Quality engineers are free to pick up the code and start testing it.
- Verified means that the user story failed to pass all test cases executed by quality engineers; consequently, it hasn't met the acceptance criteria and bugs were reported. At this point, the user story returns to development's court and is worked on there until all bugs are fixed. Once this happens, development marks the user story as Implemented again and the user story is retested.
- Accepted means that all test cases passed and no bugs were found, or the reported bugs were successfully verified as fixed.
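The five states above form a small state machine, which a team could encode directly in its tooling so that invalid transitions are rejected. The sketch below is illustrative only; the class and story names are made up and not tied to any particular tracking tool.

```python
# The article's five-state life cycle as a transition table.
ALLOWED = {
    "Draft":       {"Planned"},               # tasks and acceptance criteria added
    "Planned":     {"Implemented"},           # dev done, unit tested, checked in
    "Implemented": {"Verified", "Accepted"},  # QE found bugs, or all tests passed
    "Verified":    {"Implemented"},           # bugs fixed, back to retest
    "Accepted":    set(),                     # terminal state
}

class UserStory:
    def __init__(self, title):
        self.title = title
        self.state = "Draft"

    def move_to(self, new_state):
        """Advance the story, rejecting transitions the life cycle forbids."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} is not allowed")
        self.state = new_state

# A typical round trip: QE finds bugs once, dev fixes them, QE accepts.
story = UserStory("Export report as CSV")
for step in ("Planned", "Implemented", "Verified", "Implemented", "Accepted"):
    story.move_to(step)
```

Encoding the transitions makes the discipline the article asks for cheap to enforce: a story simply cannot jump from Draft to Accepted without passing through planning, implementation and testing.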
Using this user story status field is not complicated, but like many things in Agile, it requires discipline. The benefits for a team that fully adopts and applies this concept are great; in my experience, this indicator complements the burn down chart perfectly.
Estimated hours vs. actual hours
Much has been written about how to make good estimations; tons of books and dissertations have been prepared on the subject, but the truth is that there's no magic formula that yields accurate estimations in all situations. The fact of the matter is that maybe you don't need accurate estimations, at least not for Scrum sprints. Some arguments in favor of this rationale are:
- The planning stage for a sprint in Scrum has, by definition, to be very short and very light; this contradicts the goal of having solid estimates that come from deep analysis.
- Scrum is an adaptive business framework; it doesn’t have much to do with predicting things to a great level of detail and precision.
- Input for numeric estimation models varies too much from sprint to sprint; sometimes this variation is so uncontrollable that the results are considerably offset in one direction or another.
- Estimating tasks for developers is quite different from doing the same for quality engineers, automation guys or tech writers. The very nature of each individual's work contributes to increasing or decreasing accuracy. Empirical evidence collected in my projects indicates that quality engineers' estimations tend to be more accurate than developers'; the reason seems to lie in the more repetitive tasks that quality engineers usually perform.
Consequently, I wouldn't recommend that you make your team invest in becoming master estimators; use that time instead to mentor them in being reflective and learning from past sprints.
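One lightweight way to be reflective about estimates is to compare estimated against actual hours per role at the end of each sprint. The snippet below sketches this with invented task data; the roles and numbers are illustrative, chosen to mirror the observation above that quality engineers' estimates tend to be closer to reality than developers'.

```python
# Illustrative sprint data: (role, estimated_hours, actual_hours) per task.
tasks = [
    ("dev", 8, 13), ("dev", 5, 9), ("dev", 3, 4),
    ("qe",  6, 7),  ("qe",  4, 4), ("qe",  5, 6),
]

def accuracy_ratio(role):
    """Total actual / total estimated per role; 1.0 means perfect estimates."""
    est = sum(e for r, e, a in tasks if r == role)
    act = sum(a for r, e, a in tasks if r == role)
    return act / est

for role in ("dev", "qe"):
    print(f"{role}: actual/estimated = {accuracy_ratio(role):.2f}")
```

Reviewing these ratios in the retrospective, sprint over sprint, turns estimation into a learning exercise rather than a prediction contest.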
Team velocity
Team velocity is an interesting indicator, but it has a great disadvantage: it's based on the number of hours that the team burns and not on the degree of success (accepted user stories) that it achieves in a sprint.
Moreover, team velocity has to be carefully analyzed because it can correspond to several things, not just developers producing code. For instance:
- Velocity will tend to increase when quality engineers have enough user stories to test; testing will eventually result in bugs being reported, and bugs will need to be addressed by development, slowing down the work on other user stories. Even though velocity will increase, overall productivity will decrease or, at best, stay the same.
- Which team's velocity are we talking about? Developers' velocity can't be compared to that of automation guys executing automated test suites. Furthermore, quality engineers doing manual testing are no match for automated testing scripts. If we try to compare velocity among these teams, we'll see no correlation because of the implicit differences in their work.
- Developers writing code work at a different velocity than when they are unit testing or fixing bugs. Again, velocity can be a misleading indicator because one might tend to think that the team has boosted productivity when, in reality, it has shifted its tasks.
- Team velocity shouldn't be applied without considering individual velocity. Individual velocity can be a very good indicator for detecting blockages or poorly defined requirements; in those cases, individual velocity will tend to zero. Team velocity sometimes masks these symptoms because it doesn't offer enough granularity.
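The last point is easy to demonstrate: an aggregate velocity figure can look healthy while one team member is effectively stalled. The sketch below uses invented names, hours and a hypothetical threshold to show how breaking velocity down per person surfaces the symptom.

```python
# Burned hours per person per day; names and figures are invented.
hours_per_day = {
    "alice": [8, 7, 8],  # steady
    "bob":   [9, 8, 9],  # steady
    "carol": [1, 0, 1],  # near zero: likely blocked or unclear requirements
}

# Aggregate team velocity looks healthy and hides carol's stall.
team_velocity = [sum(day) for day in zip(*hours_per_day.values())]
print("team velocity per day:", team_velocity)

BLOCKED_THRESHOLD = 2  # avg hours/day below this suggests a blockage
for person, hours in hours_per_day.items():
    avg = sum(hours) / len(hours)
    if avg < BLOCKED_THRESHOLD:
        print(f"{person} averages {avg:.1f}h/day -- investigate for blockers")
```

The team line reads 18, 15, 18 hours per day, a perfectly plausible trend, yet the per-person breakdown immediately flags the individual who needs help.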