Thursday, November 26, 2009

Sprint Planning Insights

We typically start Sprint planning by moving user stories from the Product Backlog to the Sprint Backlog. However, we need to take into account that:
  1. User stories can be split; in fact, they should be split so that each fits into a three-to-five-day time frame
  2. User stories in the Sprint Backlog must include all necessary work: development, infodev, and QE (Quality Engineering) tasks all need to be considered
  3. User stories should be delivered on a weekly basis; we should avoid at all costs delivering all user stories at the end of the Sprint. On the contrary, our target should be to deliver small increments of implemented and tested functionality each week, see the bar chart below


A collateral benefit of having weekly deliverables is that they help distribute work evenly throughout the Sprint (the sustainable pace concept), avoiding end-of-Sprint peaks that usually leave user stories implemented but not tested.



I think the key to making this happen is good planning: good in the sense of team involvement, not in the sense of describing tasks down to the last detail. It's very common for planning to be an individual effort focused on estimating hours as accurately as experience and guesswork permit.

The Scrum approach is quite different. First, planning is a team activity where everybody has a stake. Second, it's divided into two parts: one for understanding what has to be done for each user story, and another for providing time estimates. Both sessions combined should take no more than eight hours; it's recommended that the team invest four hours at most in each part.

The first part is also a good opportunity for developers and QEs to communicate their understanding of what needs to be done; feedback from the whole team is what generally increases everyone's understanding of what will be built and how it will be tested.

For the part where time estimates are provided, I've been working on an approach that might help. There's no magic recipe that can be extrapolated to all teams, but its essence can certainly be adapted to a team's particular needs.

Let's start by recognizing that estimates will always be inaccurate, no matter how much effort and time you invest. Further, estimating hours alone is by far the most inaccurate way to estimate task duration.

An alternative is to estimate size and complexity. My preferred technique for this is a vote by a show of hands. Once size and complexity are defined, next comes priority: again, the team needs to decide what to tackle first. This goes hand in hand with Sprint Backlog prioritization.

Practical experience shows that having a target date for implementation helps QE prepare to receive implemented user stories, so it's a good idea to estimate that date as well.

Hours come next, and again the team votes to estimate them for each task. This vote takes the previously estimated parameters into consideration and hopefully leads to more educated guessing. Again, accuracy is not what we're looking for.

Lastly, team members can start volunteering for the work they want to do during the Sprint. Work balancing should aim to balance complexity and size more than just hours. Here is a spreadsheet that summarizes this approach:
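Since the spreadsheet itself isn't reproduced in this post, here is a minimal sketch of the kind of data it tracks and the balancing idea described above. All task names, field names, and numbers are hypothetical:

```python
# Hypothetical sketch of the estimation spreadsheet: each task carries
# the team-voted parameters (size, complexity, hours) plus a volunteer.
tasks = [
    {"task": "Implement login",  "size": 3, "complexity": 5, "hours": 16, "owner": "Ana"},
    {"task": "Test login",       "size": 2, "complexity": 3, "hours": 8,  "owner": "Ben"},
    {"task": "Write user guide", "size": 1, "complexity": 2, "hours": 6,  "owner": "Ana"},
]

def workload_by_owner(tasks):
    """Summarize load per person by size + complexity points, not just hours."""
    load = {}
    for t in tasks:
        entry = load.setdefault(t["owner"], {"points": 0, "hours": 0})
        entry["points"] += t["size"] + t["complexity"]
        entry["hours"] += t["hours"]
    return load

print(workload_by_owner(tasks))
# Ana carries 11 points over 22 hours, Ben 5 points over 8 hours; the team
# would rebalance by points rather than by hours alone.
```

The point of the sketch is the last comment: the spreadsheet lets the team see imbalance in size and complexity, which hours alone would hide.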

Wednesday, November 25, 2009

And Who Will Guard the Guardians?

Here's another interesting question I heard the other day: does the code produced by the automation guys have to be code reviewed and unit tested?

Again, some analysis before answering. Automation code runs against a product and automatically executes test cases that would otherwise have to be run manually. There's an investment in building automated test suites, but it greatly pays off when automation can be run repeatedly, at a fraction of the time and cost it would take to do the same work manually.

Automation can be used as part of a Build Verification Test that quickly smoke-tests builds to check whether anything has broken. Nevertheless, automation can also be extended to test product functionality, and here's where the problem starts: what if automation is not catching errors?

Many times errors are not caught because the original test case was not well designed or is outdated, but on many more occasions automation fails because it has its own bugs. In past projects I heard several times that errors that were easily reproduced manually still passed the automated tests. This was of course a huge alarm sign pointing to the conclusion that the code produced for the automated suites had been poorly tested, if tested at all.

Thus, automation can be undermined if good development practices are not applied to all the code the automation team produces. After all, automation guys are still developers producing code that needs to be tested by a third party. Going back to the question, here is my two-fold answer: first, automation guys should unit test and code review the code they produce; second, Quality Engineers should test automated test cases and suites the same way they test any other software.
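To make the first half of that answer concrete, here is a small, hypothetical example: a trivial helper that an automated suite might use to check page titles, together with the unit tests it deserves. The helper and its behavior are invented for illustration:

```python
import re

# Hypothetical automation helper: decide whether a page title matches an
# expected pattern. Automation code like this deserves unit tests of its
# own, exactly like product code does.
def title_matches(actual_title, expected_pattern):
    """Return True if the page title matches the expected regex."""
    return re.search(expected_pattern, actual_title) is not None

# Unit tests for the automation helper (pytest-style plain asserts).
def test_exact_match():
    assert title_matches("Login - MyApp", r"Login")

def test_no_match_is_reported():
    # A buggy matcher that always returned True would hide product
    # errors; precisely the failure mode described above.
    assert not title_matches("Error 500", r"Login")

test_exact_match()
test_no_match_is_reported()
```

The second test is the important one: it proves the helper can actually fail, which is exactly what the "errors passed the automated test" anecdote above was missing.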

Tuesday, November 24, 2009

And Who Appoints The Product Owner?

This is a very interesting question that I think deserves some analysis before answering it. Let's start by saying that the Product Owner is usually appointed by the organization, not by the team.

Even though Scrum literature does say that the Scrum Team is self-governed and can appoint its own Scrum Master, it doesn't say much about who appoints the Product Owner and who could decide to remove him if necessary. My recommended rule of thumb would be this: the Product Owner, originally appointed by his organization, could be removed if the team agrees and presents a business case that justifies the decision. Upper management endorsement would be required, though.

In my experience, Product Owners have a great deal of talent and experience in dealing with clients; they're part of the sales force, or more precisely, the advance team with clients. They have a great ability to convince people, both clients and in-house, and are in essence business-driven individuals.

So, as you can guess, these types of individuals are not abundant. Besides good selling skills, Product Owners also require deep technical knowledge.

I guess the question has some triggers: why would a team be thinking of changing its Product Owner? Is it because of a lack of communication? Is it because the Scrum Master is not fulfilling his role? Before even thinking of changing somebody, my advice would be to drill down to the root cause of the problem.

Monday, November 23, 2009

The Scrum Dream Team

I was thinking about some of the characteristics (I guess the word qualifications doesn't apply in this context) of a great Scrum team, and I came up with a list:
  1. It has to be self-directed, meaning that it has to be capable of getting into trouble and creating its own ways to straighten things out
  2. It has to be self-governed, meaning that it shouldn't wait for managers to tell it what to do
  3. It works based on agreement more than on imposition
  4. It respects everybody's personality
  5. It really cares about everybody's well-being
  6. It reaches consensus because its members don't have an "I win, you lose" mentality
  7. It creates a friendly environment. A note here: I come from a Latin culture where friendly really means friendly, like people eating, jogging, going out, making jokes, and drinking together. It's not uncommon to see teammates doing things together on weekends and after office hours; this creates close bonds that survive years after a project ends
  8. It has team members ready to make concessions and compromise rather than go into a quarrel
  9. It has a good buffer (a.k.a. Scrum Master) against external interference
  10. It has a track record of continuous improvement
  11. It is willing to learn new things

Friday, November 20, 2009

Scrum Masters and Referees

Watching a good football game (soccer in American sports jargon) the other day, I realized how much fun you can have seeing a couple of excellent teams play. I chanted a lot, and the team I was supporting won (of course I had no power to influence the score, but there was a lot of fun in screaming, jumping, and singing).

After the match I started thinking about some analogies between football teams and Scrum teams. Both are self-organized teams that work together in spite of their individual stars. Both have roles, or rather specialties, like the goalkeeper or the insider. But in the end anyone can score and change the course of the match in a split second.

But more importantly, there is this one character that nobody likes very much: the referee. And here I found some interesting analogies with the Scrum Master:
  1. The less you see of the referee, the better the match is being played (unless of course you have an incompetent referee afraid of doing his job)
  2. The fewer interventions a referee is forced to make, the better the teams are governing themselves
  3. Referees solve complicated situations that can easily escalate into crises
  4. They use a mix of gentle and strong hands to keep peace on the playing field
  5. They don't appear very often, but when they have to, they're not afraid to show some muscle to enforce their decisions
  6. They are not the stars of the show; the players are

In the end you might ask: why does a match need a referee at all? To rule the game or to facilitate it? See the similarities with the Scrum Master role.

Friday, November 13, 2009

Metrics and Process Improvement

It's not uncommon to hear that managers are interested in improving their teams' productivity; metrics-wise, this means that the indicators they're looking at should start to show bigger and better numbers.
Metrics are commonly applied to measure processes, and processes in turn reflect internal team organization, policies, and practices. Following this rationale, managers should start by improving processes if they want better metrics. But what if processes are something you can't measure, and consequently can't improve?
Process Definition
According to Wikipedia a process in engineering is "engineering which is collaborative and concerned with completing a project as a whole; or, in general, a set of transformations of input elements into products with specific properties, characterized by transformation parameters".
Processes in Manufacturing
This definition stresses the point that a process implies a set of transformations. Further, a process in manufacturing has some interesting characteristics, to name a few:

  1. It's standardized, meaning that it has a predefined set of steps that have to be followed by workers. Conversely, deviation from the standard produces defects

  2. It's repetitive. This is key in factories and on shop floors, where journeymen perform the same tasks again and again. Perfection comes from repetition, and seniority is based on the number of times a journeyman has executed a task. Repetitive tasks are also great for passing knowledge from journeymen to apprentices, and of course great for foremen supervising work on the shop floor. However, repetition cuts down creativity and innovation.

  3. It can be extrapolated: the X process in the Y factory can be documented in recipe format and then sold for use in the Z factory. This rationale of course doesn't consider that the same process will be implemented by different teams in the X and Z factories. Again, the focus is on the process, disregarding the human factor.
Process in Services
Processes in service industries like fast-food franchises or banks have humans following rules and procedures to provide services to other humans; however, some characteristics of this interaction can be similar to what happens in manufacturing:
  1. It's standardized: there are standards for food preparation, quality, and service time. Franchises are especially prone to setting standards, but even old mom-and-pop restaurants have standards their employees have to meet.
  2. It's repetitive: unless you decide to go to a very expensive restaurant, you'll almost always get the same food (good or bad, no matter) coming out of the same process the restaurant employees followed to prepare it.
  3. It can be extrapolated: otherwise there wouldn't be fast-food franchises offering the same product with the same quality (again, good or bad) in different parts of the world.
The Processes in the Software Industry Dilemma
By this time you may have realized that manufacturing and service processes can't be directly translated to the software industry. Some reasons are:
  1. The software development industry requires a high degree of innovation and creativity. This brings a lot of variability to processes; for instance, a test case can be executed by two different quality engineers with different results: one might have executed the test case in two hours and found no bugs, whereas the other might have executed it in half the time and reported high-severity bugs. Is the first quality engineer less effective than the second? Of course not; many factors like expertise, product familiarity, or technology knowledge make the difference. More importantly, the human factor plays a more crucial role here.
  2. Work is essentially not repetitive. Maybe some parts of testing are, but development certainly is not. Programming languages have a very small and well-defined set of structures like loops, branches, and comments. However, these are building blocks that you can use in countless ways to achieve the same results. Of course there are good practices and rules to follow, but human creativity is certainly more valuable than repetition, at least in this line of business.
  3. Recipes don't work anymore. You can't extrapolate processes from one development team to another because there are too many people-related variables you can't control. For instance, you can't expect the same degree of motivation and commitment in two different teams; again, soft factors like these are what prevent process extrapolation and, consequently, standardization.
So it seems that old project management techniques based on process standardization are not valid for this industry, eh? Even more importantly, if you can't standardize processes, how could you improve metrics? I guess the question should be: is it really worth trying to find processes and metrics applicable to the software industry? My short answer would be no; my long answer would be that we could look for a different type of metrics, originating from agile processes.
Scrum Coming to Rescue
What can Scrum bring to help solve the dilemma? For starters, it doesn't prescribe the process/metrics measurement approach that, as I've described, simply isn't applicable in this industry.
Secondly, Scrum brings a shift in focus from control to empowerment, from contracts to collaboration, and from documentation to code. This highly pragmatic paradigm requires many things to work, things like self-discipline and upper-management support, but undoubtedly empowering teams is the cornerstone.
Just imagine having teams that can create their own processes based on their needs; that can discard or modify processes that are no longer applicable; and that can achieve impressive results with no formal process in place. For some this might sound like a project manager's worst nightmare, but for Scrum purists it could be heaven.
Self-directed teams are essentially highly committed and innovative (they have to be, otherwise they wouldn't have chosen the Scrum way). They work at a sustainable pace and create on the fly whatever processes they need. Of course these processes come from consensus, which in turn generates internal commitment and motivation. Over time this produces hyper-productive teams with competitive individuals looking for challenging stuff to work on in the next Sprint.

Wednesday, November 11, 2009

What a Burndown Chart is Not

A burndown chart is not:
  • a fortune teller's crystal ball that you can learn to use to predict how things will occur in the future
  • a time machine that will allow you to travel back and forth in time to fix things or foresee pitfalls
  • a single chart that concentrates all the information necessary to see project and team status at first glance
  • a chart that is magically drawn by somebody every day before you come to the office
  • an accurate, never-misleading indicator
  • a complicated tool that takes too much effort to understand and maintain
  • a converging-point tool that the team will use to agree on something
  • a bargaining chip that managers can use for negotiating with the team
  • the ultimate indicator for measuring a team's productivity
Logically, the next question would be: what is a burndown chart, truly? I'll save the answer to that question for another post; stay tuned.
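Whatever a burndown chart is or isn't, the data behind one is simple. Here is a minimal sketch: each day the team re-estimates the work remaining and compares it against an ideal straight line down to zero. All numbers below are invented for illustration:

```python
# Hypothetical data behind a burndown chart: the team re-estimates
# remaining work each day and compares it to an ideal line.
sprint_days = 10
total_estimate = 100  # hours estimated at Sprint planning

# Remaining hours as re-estimated by the team on each day (invented).
remaining = [100, 95, 92, 85, 70, 66, 50, 35, 20, 8]

# The ideal line simply interpolates from the total estimate down to zero.
ideal = [total_estimate * (1 - day / sprint_days) for day in range(sprint_days)]

for day, (actual, target) in enumerate(zip(remaining, ideal), start=1):
    status = "behind" if actual > target else "on track"
    print(f"day {day:2d}: remaining={actual:3d}h ideal={target:5.1f}h ({status})")
```

Note that the chart plots estimated work remaining, not effort spent; that distinction comes up again in the fallacies post below.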

Wednesday, November 4, 2009

Sprint Planning Meetings

One of the biggest confusions in the Agile community is about the need for Sprint planning. Some people tend to think that since Scrum, for instance, is all about adaptation, no planning is required.

In order to clarify this confusion, it's necessary to say that planning is not the same as detailed planning. The big difference is that in detailed planning we look for concrete data about what will be done during the Sprint, how, and by whom. On the contrary, in Agile planning we're just interested in understanding what needs to be done, but not exactly how.

Planning sessions are also a great opportunity to question the Product Owner who in turn should look for the answers with final user or customer.

During the second part of the planning stage, the team needs to estimate complexity for all user stories that will be moved into the Sprint Backlog. Estimating complexity is a fun exercise and an excellent opportunity for team members to make their understanding of the work public. One of the great benefits of this part of the planning meeting is that it fosters communication among team members; explaining what one intends to build during the Sprint is a perfect vehicle for understanding it better before even starting to work on it.

Finally, planning meetings should include QA work for testing and validating fixes. One common pitfall is to just plan for implementation but not for testing.

Tuesday, November 3, 2009

Filling QA's Plate


Many people tend to believe that having QA overbooked for the whole Sprint is a good thing; it's actually a Scrum smell. From a capacity standpoint it might make some sense, but from an Agile perspective this approach has several limitations, to name a few:
  • Overbooked personnel have no breathing room for creativity
  • No creativity implies repetitive work, and repetitive work is the enemy of reflection
  • No reflection cuts off continuous improvement
Further, Lean-wise, having QA working at full capacity all the time creates waste: work piles up waiting to be processed, and processed work is blocked from passing to the next server in the line because that server is also fully booked.

Lastly, having QA at full capacity will break the sustainable pace principle, because engineers will simply quit the project if we make them work at this killing pace during every Sprint. The ultimate goal should not be making full use of resources from a capacity perspective; on the contrary, nobler goals like teamwork and team empowerment should be pursued.

Monday, November 2, 2009

What Does QA Do During the Sprint?

This has been an interesting question and debate topic. Let's start by saying that in Scrum, teams are not divided into development and QA; everybody is part of the whole team and takes an active part in the Sprint. Further, QA's involvement extends through all Scrum meetings, artifacts, and roles. For instance, a Quality Engineer could be appointed Scrum Master or even Product Owner.

If we think of planning meetings, QA should participate in user story definition and in understanding what is required. Moreover, QA has to create the acceptance criteria for all tasks. Also during the planning meetings, QA's participation is crucial when estimating task complexity and duration. One important note here: Quality Engineers have to vote during planning poker sessions.

Remember that what we're estimating is complexity and time for tasks that have to be completed during the Sprint, and when I say completed I'm referring to the Definition of Done agreed upon by the team. One very common mistake is to include only development's estimates for tasks; as a result, tasks get implemented but not tested, or tested with bugs.

During the Sprint, QA has to participate in the Daily Scrum to report its status and blockages. One common Scrum smell is QA reporting that it has nothing to test. Typically QA has almost nothing to do during the first weeks of the Sprint, but in the last days it's suddenly overwhelmed. Again, this is a consequence of bad planning and a violation of the sustainable pace principle. Workload should be evenly distributed between development and QA throughout the Sprint, avoiding killing hours of work at the end for QA alone.

Also, at the end of the Sprint, QA participates in preparing and presenting the demo to clients. Further, during the Sprint Retrospective, QA has to reflect on its work and its interaction with development and make the necessary adjustments for the next Sprint. One word of caution though: don't wait for the Sprint Retrospective to adapt; adapt as soon as you feel the need.

Why Is Scrum So Popular?

I was thinking the other day about why Scrum is such a worldwide phenomenon in terms of popularity. I mean, why is it Scrum that everybody seems so eager to learn about?

One or two steps back: other frameworks and thinking tools like XP or Lean are tremendously good; their principles are so down to earth that you should absolutely follow them in whatever project you start. Further, there's certainly a lot of overlap among Agile, Lean, XP, Scrum, and even Kanban, but at the end of the day Scrum seems to be gaining more popularity than the other Agile flavors.

If you like books and read Beck, Jeffries, the Poppendiecks, Schwaber, and others, you'll find that they're all excellent, but somehow Scrum succeeds again in attracting more visibility.

So, ready to unravel the mystery? My theory, and please take it as just a personal belief without hard data behind it, is that what propelled Scrum to the top was the certification program. If you think about it, you'll see that there are around a hundred Certified Scrum Trainers around the world who pay a $7,000 annual fee to maintain their certification status and still make a profit out of it. This small army of CSTs is intensely active organizing courses around the globe, and as a result we currently have around 60,000 Certified Scrum Masters.

CSMs pay close to a grand in the US for a two-day course. Of course that's well-invested money, but certifications aren't just for show; they have to have a practical use. CSMs do their best to introduce Scrum in their teams and organizations, but soon enough they discover that almost nobody beyond themselves has heard of Scrum or is willing to adopt it. What do they do? Convince more people to take the CSM course and keep preaching the Scrum word in their organizations.

As a result we have a growing community that spreads very quickly; some people even call it "Pandemic Scrum". Of course there's a business model and people making a profit from it, but what the heck, Scrum is a great framework that is making IT people happier and more productive.

Thursday, October 22, 2009

The Scrum Dream Team

I was watching football (soccer in American sports terminology) the other day. I'm not a big fan these days, but I used to be. Anyway, watching the Brazilian National Team play is always a good investment of time.


While I watched the game some reflections came to my mind:
  1. Brazil has great football players; they're all sports stars who play for the best teams in Europe. They're rich, talented, and famous, everything in the same package. But they're humble enough to accept being coached.
  2. Anyone on the team, except the goalkeeper of course, can score a goal and win a match. It's not unusual to see the Brazilian defense score as well. Team success is more important than individual recognition.
  3. Why can defense players score? Simple: they change roles during the match and become part of the offense. Adaptation is key here.
  4. The Brazilian National Team plays differently against South American and European national teams. Brazilian players use technique, or play rough if needed. Essentially, they adapt their style to circumstances.
  5. Motivation comes from inside the team; Brazilian players believe they're the best and always look for new challenges.
Lastly, there are many national teams and thousands of excellent players around the world, so what makes this national team so successful? My answer would be that a combination of talent, humility, adaptation, inspection, and teamwork triggers the magic of the 'jogo bonito'.

Monday, October 19, 2009

The Incomplete Judo Definition of Done

One of my clear memories from youth is training Judo projections (throws) an uncountable number of times. In Judo you can win a match in several ways, but the most spectacular one is throwing your opponent to the floor over your hip. You get a full point, the match ends, you win a medal, the crowd chants your name, and everybody is happy but your opponent.

But what if, by misfortune, you were forced to use the same technique to protect your life in the street? Would throwing somebody to the floor be enough to neutralize your attacker? Hard to say, but statistics show it won't be; you need to immobilize your attacker to be really safe. So, following this rationale, Judo projections are only half done.



Extrapolating this to the Scrum world, Judo projections are like writing code: it looks really cool, people love to see it, and it attracts attention and respect, but it's still half-baked. Code that has been written but not tested is like a furious attacker on the ground: it can counter-attack and really hurt you unless you do something to actually control it.

In Judo (and more effectively in Ju-Jitsu) you use punches and locks to really immobilize your attacker. In software development you can use testing, both manual and automated, to make sure the code is good enough. Further, you need to reach consensus on the acceptance criteria and the Definition of Done. Why? Simple: it's a team activity; without consensus, effort would be wasted without favorably impacting the code's quality.

Wednesday, October 14, 2009

Some Scrum Fallacies

I've been looking into some Scrum concepts that we take for granted but that are wrong at a conceptual level. I compiled a list that you can read below.
  • The PO (Product Owner) prioritizes the team's activities. This is false. The PO's main responsibilities are to provide feedback, answer questions about items in the current Sprint, and work with the stakeholders. The Team is in charge of prioritizing its activities.
  • The Sprint has to last a calendar month. False again. The Sprint can last a month or even less; it can also be extended under special circumstances and with the PO's approval. However, the Sprint shouldn't be so long that it can't adjust to business events.
  • It is necessary to have a vision and a product backlog to start a Sprint. False. There are important things you'd like to have before starting a Sprint, things like a release plan, budget, executive sponsorship, technical leadership, team space and team rules, a backlog, etc. Nonetheless, those things are not essential for starting a Sprint; remember, Scrum is about adaptation, not fully detailed planning.
  • The entire Sprint Backlog must be 100% defined in the Sprint Planning meeting. False. The team just needs to identify all the tasks necessary to complete the selected Product Backlog items that will be tackled during the Sprint.
  • The SM (Scrum Master) is responsible for removing team members who are not performing well or who disturb other team members' work. False. In Scrum we have self-organizing teams that can decide to remove team members by themselves; the SM may provide advice, but the final decision has to be the team's call.
  • Burndown charts show how much effort has gone into the Sprint. False. A burndown chart essentially shows the estimated work remaining for each day of the Sprint.
  • During the Sprint, identified tasks are added as soon as the PO approves their inclusion. False. Tasks need to be added by the team as soon as they are identified; there's no need to wait for approval. Agility and proactivity are crucial.
  • The team is responsible for selecting the PO. False. Even though we have self-organizing teams and the SM role can be rotated, the PO is appointed and can't be removed, at least not by the team.
  • The SM has to attend the Daily Scrum. False. The Daily Scrum is an inspect-and-adapt meeting for the team that does the actual work in the project, so there's no need for the SM or PO to attend unless the team invites them. The SM, however, should help with the logistics of the meeting: reserving a conference room, providing phones, sending invitations, etc.
  • The Sprint Backlog is the output of the Sprint Planning Meeting that sets the direction for the next Sprint. False. The target and direction for the team are provided by the Sprint Goal.
  • The PO has to ship each Sprint increment once it's free of defects. False. The PO has to ship increments whenever he feels it makes sense. Of course increments shouldn't have defects, but that is more an enabler than the ultimate motivator for shipping.
  • A Product Backlog item is considered completed when all its tasks are completed. False. A Product Backlog item is actually completed when it meets all negotiated acceptance criteria.
  • User stories are included in the Sprint Backlog. False. User stories are decomposed into tasks during the Sprint Planning Meeting, and those tasks go into the Sprint Backlog.
  • The SM has to act as a go-between for the PO and the team. False. This is the least effective communication technique the SM could use. More effective ones are: monitoring communication between the team and the PO, teaching the team to talk business language, and teaching the PO the technologies involved in the product.
  • The SM or the PO is in charge of updating the Sprint Backlog during the Sprint. False. The empowered team decides when to update the Sprint Backlog to reflect its decisions and task progress.
  • Teams make adjustments to their engineering practices only after consensus is reached in a Sprint Retrospective meeting. False. Teams are adaptive in nature and need to take the initiative and adapt their practices whenever needed.
  • The SM is an engineering position. False. The SM is a management position.
  • The product has to be shipped at the end of each Sprint. False. The product needs to be shipped when the PO decides to ship it, taking into account a release cycle that includes marketing, financial, legal, and other implications.

Monday, October 12, 2009

A Backlog for Improvements

During a training session I conducted last week, a very interesting question popped up: how do teams account for their improvements after Sprint Retrospectives?

What I use with my teams is a very simple technique: after the retrospective, I take pictures of all the problem insights and improvements we'll be tackling in the next Sprint. I then post those pictures on the team's SharePoint and project them at the beginning of the next retrospective meeting, in order to get a feel for what has been accomplished and what hasn't.

This works quite well, but we still miss the feeling of progress; more to the point, we're lacking the psychological effect of burning something (like hours, story points, user stories or any other indicator).

What I propose is to create a backlog of potential improvements. We could go as far as estimating effort and time and maybe creating associated tasks, but my initial feeling is that this would overcomplicate things. Instead, the attack plan should simply be to log the list of potential improvements from the Sprint Retrospective in this backlog, prioritize them, and then work on the top-priority areas for improvement. Not much data about how improvements have been accomplished should be logged, unless somebody on the team wants to document them as case studies.

I'll be trying to implement this idea and will write about it later with actual evidence in hand.

Friday, October 9, 2009

Jazz Band Conductor a.k.a. Agile Project Manager

I thought of an interesting analogy the other day: if you listen to a jazz band (and hopefully a good one) you'll realize that the conductor encourages improvisation. In fact, improvisation is a distinctive characteristic of jazz music.


The conductor is someone who is deeply familiar with the instruments and the music, but doesn't necessarily play with the band. What is important to him is that everybody plays good jazz and the audience is satisfied. Making every musician in the band happy is of course a priority; happy people are generally more willing to improvise.

There's of course a style to follow and a lot of training, all of which is required to be able to improvise in the first place. But the goal is always in sight: have fun and play good jazz. Again, the conductor doesn't tell the team what to do; he lets the team create and helps it reach the next level. This is certainly a lesson that we can translate to Agile Project Managers.


On the opposite side, this guy has everything figured out from beginning to end -or at least he likes to think so- and everybody in the chorus has to sing exactly what has been rehearsed endless times.

No deviations from the score are allowed, and improvisation is not especially rewarded. Do you see any similarities with traditional Project Management?

Wednesday, October 7, 2009

The Most Important Meeting in Scrum

I was thinking about which would be the most important meeting in Scrum. This question crossed my mind because I've heard that many teams in my organization still believe that Scrum is all about the Daily Scrum Meeting. Maybe the confusion came from the fact that both Scrum -the framework- and Scrum -the meeting- have the word "Scrum" in their names. Whatever the cause, it's pretty obvious that this is a serious mistake that needs to be corrected.

An interesting behavior that I've observed in those teams is that they believe they're doing Scrum just because they have a daily stand-up meeting. Nothing could be further from the true spirit of Scrum.

But going back to the initial question, in Scrum we have meetings like:
  • Planning Meeting
  • Daily Scrum Meeting
  • Sprint Retrospective Meeting (a.k.a. Post Mortem meeting)
  • Mid-Sprint Review Meeting
  • Release Retrospective Meeting
This list is not definitive, and surely enough more meetings will be added or suppressed by teams based on their specific needs. However, the meeting that in my humble opinion is the most important in Scrum is the Sprint Retrospective Meeting. Why? Because this is the meeting for revision and improvement, and Scrum is all about inspection & adaptation.

Tuesday, October 6, 2009

The Old McKnight Was Right

William McKnight at 3M created a new mentality: let the workers innovate. This revolutionary mentality made 3M a company capable of always riding the wave, introducing dozens of new products to the market and creating very diverse business units.

I read some very interesting quotes from McKnight:

"Hire good people, and leave them alone."
"If you put fences around people, you get sheep. Give people the room they need."
"Encourage, don't nitpick. Let people run with an idea."
"Give it a try—and quick!"

This is something that definitely goes in line with empowering the team and other core Scrum values. Again, I'd recommend that we listen to old McKnight.

Monday, October 5, 2009

Technical Debt a.k.a. Torpedo in the Water

What can be more dangerous than having an armed torpedo coming full speed ahead at your ship? Short answer: not knowing that the torpedo is coming, just feeling the blast, and having only a split second to realize that you, your ship and your crew are doomed.

You could mitigate this by having some good active and passive radars that at least tell you that a torpedo is in seeking mode and looking for your ship. Of course, knowing what is coming is many times better than being taken by surprise.



During the Sprint we have a very good indicator -you can call it an information radiator if you want- for knowing that something really bad is coming. This indicator is technical debt, which in its simplest definition is just unfinished work. But let me elaborate on this definition a little more; technical debt is:
  1. Scheduled work that has not even been started
  2. Scheduled work that has not been implemented
  3. Implemented work that has not been tested
  4. Tested work that failed to pass all tests; bugs were encountered
  5. Work with bugs for which fixes were provided but rejected
  6. Retesting work that is still in progress; consequently, it's still uncertain whether the fixes actually work
  7. Unplanned work that has been discovered during the Sprint but has not been completed -maybe not even started
  8. And the worst of all, undiscovered work that has not been considered but that will be necessary -an armed and locked torpedo
Technical debt, like a torpedo, is lethal unless you know that it exists and is coming towards you. If you detect a torpedo in the water, you can still save your ship -and your life of course- if you launch countermeasures. The sooner you act, the more chances you have to save your boat. Sounds similar to technical debt, doesn't it?
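
As a rough illustration, the eight categories above can be turned into a simple counter over a task list. This is only a sketch; the status names and tasks below are invented for the example:

```python
from collections import Counter

# Hypothetical task statuses, mapped to the technical-debt categories above.
# Any status other than "done" counts as debt of some kind.
DEBT_CATEGORIES = {
    "not_started": "scheduled, not started",
    "in_progress": "scheduled, not implemented",
    "implemented": "implemented, not tested",
    "failed_test": "tested, bugs found",
    "fix_rejected": "fix provided but rejected",
    "retesting": "retest in progress",
    "unplanned": "unplanned, discovered mid-Sprint",
}

def technical_debt(tasks):
    """Count non-done tasks per debt category; 'done' tasks carry no debt."""
    return Counter(
        DEBT_CATEGORIES[t["status"]] for t in tasks if t["status"] != "done"
    )

tasks = [
    {"name": "login page", "status": "done"},
    {"name": "password reset", "status": "implemented"},
    {"name": "audit log", "status": "not_started"},
    {"name": "search fix", "status": "failed_test"},
]
print(technical_debt(tasks))
```

The undiscovered-work category (the armed and locked torpedo) is, by definition, the one no counter can show you; the point of the exercise is making every other kind of debt visible.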

Thursday, October 1, 2009

Agile Flavors

Thinking tools: Tools that we use in our field of expertise to build software. In other domains they’re also called mindsets or philosophies. Some examples are:
  • Lean
  • Agile
  • Systems Thinking
  • Theory of Constraints
When thinking becomes acting you need a toolkit (some people like to call it framework). There are many toolkits/frameworks out there to put Agile/Lean at work:
  • Scrum
  • eXtreme Programming
  • Kanban
  • RUP
Some thoughts on the various disciplines:
  • Agile: A core set of values for executing projects
  • Lean: A core set of principles for running a company
  • Scrum: A set of agile management practices based on one principle: Inspect and Adapt
  • XP: A set of agile software engineering practices/processes

Wednesday, September 30, 2009

Building a Burndown Chart using Story Cards

This is something that I tried the other day in a training session. I divided the class into three groups of seven to eight participants, and each group created a template for its user story cards.

I especially like this template because it has all the necessary information in just one place; it has fields like:
  1. Developer's initials
  2. Quality Engineer (QE) initials
  3. User story name
  4. Developer's estimated time
  5. QE's estimated time
  6. Developer's actual time
  7. QE's actual time


By adding the developer's and QE's estimated times, one can get the total estimated time for a user story. If one does this for all user stories, one ends up with the total scheduled time for the Sprint.
This is the total allocated time, or the time that the team has to burn during the Sprint. In the chart below we had checkpoints every five minutes (because our simulated Sprint only lasted 15 minutes) to see how much had been worked and what was still remaining. This information is what allowed all three teams to draw burndown charts and use them as a tool for monitoring progress and forecasting the Sprint's degree of completion -only from a time perspective.
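To make the arithmetic concrete, here is a minimal sketch of that calculation. The card fields and the hour figures are invented for illustration; they just mirror the template described above:

```python
# Each story card carries the developer's and QE's estimated and actual hours,
# following the card template fields listed above.
cards = [
    {"story": "login",  "dev_est": 8, "qe_est": 4, "dev_act": 5, "qe_act": 2},
    {"story": "search", "dev_est": 6, "qe_est": 3, "dev_act": 6, "qe_act": 3},
    {"story": "export", "dev_est": 4, "qe_est": 2, "dev_act": 0, "qe_act": 0},
]

# Total scheduled time for the Sprint: dev + QE estimates summed over all cards.
total_allocated = sum(c["dev_est"] + c["qe_est"] for c in cards)

# At each checkpoint: hours burned so far, and what is still left to burn.
burned = sum(c["dev_act"] + c["qe_act"] for c in cards)
remaining = total_allocated - burned

print(total_allocated, burned, remaining)  # 27 allocated, 16 burned, 11 remaining
```

Plotting `remaining` at each checkpoint against the ideal straight line from `total_allocated` down to zero gives the burndown chart.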

About Story Points

Story points are a good estimation technique that you can use to estimate the complexity of user stories. Unfortunately, complexity is not directly correlated to effort and time, so you might end up having ultra-complex features that can be implemented by a very inspired developer in a very short time (yes, miracles can happen from time to time).

On the other hand, you can have low-complexity features that require many hours of repetitive work.

Another variable in the equation is the QE time that you'll have to commit to testing the feature. Again, there's no direct correlation between implementation and testing complexity. For instance, automated testing can save a lot of man-hours and make testing easier for a feature that is highly costly from a development perspective.

I guess the caveat here is not to use story points as what they are not: a metric for progress and effort.

The Myth of the Accusatory Taskboard

I've seen many teams in many projects that are afraid of putting their real status on a taskboard because they feel that if they tell the truth, their manager will punish them. Childish as this might sound, this situation is not uncommon and frequently causes taskboards to be an inaccurate representation of reality.

Let's take one step back: what is a taskboard in the first place? My answer would be this: a taskboard is a collaborative tool that the whole team uses to communicate ideas and reflect task status.

Conversely, what a taskboard is not is a reporting dashboard that upper managers scan to find problems and bottlenecks. Further, managers shouldn't try to use taskboards for micromanagement and problem diagnosis.

Two steps back: Scrum is all about empowering the team, not about managers using reporting tools to control things in a project.

Tuesday, September 29, 2009

A Foreman has no Place in Scrum

I remember reading that the Ford Model T's success came from the fact that the assembly line Henry Ford invented was able to produce 17 million cars in less than two decades.

I also remember reading that his Detroit assembly plant hired employees of several different nationalities, and around 50 different languages were spoken in the plant -something similar to the worldwide software industry today. What made it possible for all these people to work productively was Frederick Taylor's invention: scientific management.

Scientific management is based on the principle that engineers -or foremen- study a process, optimize it, and then teach workers how to follow it. Deviations from the process are not allowed, and creativity is kept to a minimum. Quality is controlled on the basis of how well workers stuck to the process and produced the expected results. Mechanization and mass production are the results of this management technique. Control and supervision are assigned to foremen, each responsible for a group of workers and a portion of the production line.

This worked pretty well for some decades until the Japanese Toyota Production System changed the auto industry forever. For starters, it brought the idea of suppressing foremen and empowering teams.

It still strikes me as odd when somebody asks me about the best way to manage developers. Scrum principles are not about better management or a better application of scientific management. On the contrary, Scrum is about flat structures -no foremen- and creative thinking.

Monday, September 28, 2009

Metrics are not Crystal Balls

Teams need visibility of their progress, but how can they get enough visibility about how they are doing in the Sprint?


There's no short answer but you can try this:
  1. Convince your team to believe in metrics.
  2. Choose the metrics that work best for your project -the simplest one is the number of completed user stories.
  3. Make everybody look at the same set of metrics.
  4. Put all metrics in just one place (a taskboard and burndown charts are probably all you'll need).
  5. Create the habit for all your team to provide initial estimates and then compare them to their actual work.
  6. Do not waste too many cycles trying to innovate in metrics; everything has already been discovered, it's just a matter of looking for what would work for your project.
  7. Metrics are not black and white; there are a lot of gray areas in them.
  8. Metrics are indicators, symptoms but not conclusions or explanations.

Friday, September 25, 2009

What Does the PO Do During the Sprint?

Somebody asked me an interesting question: what does a Product Owner do during the Sprint? But before answering this question, let’s go one step back and see what the PO does for the release backlog.
Let’s start by saying that the PO has changing roles and responsibilities during the project. For instance, during the very inception of the project, the PO would be responsible for gathering the initial set of requirements–provided by the client–and together with the team, create an initial set of user stories.
This initial set of user stories is prioritized and estimated–using Planning Poker, T-shirt sizes or any other technique–by the team and the PO. This prioritized list of user stories is what accounts for the initial backlog or release backlog. One caveat, estimations provided in this stage are related to complexity, not to time or effort.
Building the Sprint Backlog
The Sprint Backlog should be a subset of the Product Backlog that the PO and the team agree to work on during a time-boxed period: the Sprint.
During the planning stage of the Sprint, the PO, in collaboration with the Scrum Master, is responsible for guiding the team in providing time estimates for the committed user stories. Over-commitment is hard to avoid in the first sprints because the team isn't aware of its own velocity.
Based on the time estimates provided for each user story, the PO can calculate–adding up all the user stories times–the total number of hours that the team would be working on the Sprint. This number of hours is the total capacity allocated or hours to be burned during the Sprint. Making sure that these hours are burnt–and the associated user stories delivered–would be one of the key responsibilities of the team during the Sprint.
Running the Sprint

Scrum's recommendations are limited to saying that the PO has to be available to the team during the planning stage and also during the Sprint, in case the team has additional questions. The PO also has to prioritize and reprioritize the backlog as needed. But it would seem that doing all this wouldn't take much time, at least not enough to keep him fully busy during a three-week Sprint. This is one of the Scrum myths that I'll try to demystify.

So, let me bring some light to what the PO should be doing to stay entertained during the Sprint:
  1. The PO should be in contact with the client showing and discussing results from the past demo. Showing something to a client just one time is usually not enough to have an accurate perception of what he liked or not from the product.
  2. Clients are not always available for the PO, so chasing clients will consume cycles from the PO.
  3. If the PO is working on a product targeted at multiple clients, the chasing/demoing activity multiplies, consuming even more time.
  4. Demoing a product without the team's assistance implies that the PO has to set his own environments and learn the product well enough and not only from a user's perspective. This is not a people's activity; it is more technical and will consume many cycles from the PO.
  5. By using the product in his own environment, the PO will discover what I'd call "requirement issues". In fact, the PO will be doing "requirements testing", which is a validation of the product from a client's perspective. Passing this test would guarantee that the PO has something he can sell.
  6. Studying the market and monitoring competitors should also be in the PO’s agenda.
  7. Defining new requirements based on client's feedbacks and market reading.
  8. Adding new user stories to the project backlog on his own initiative and/or per the team's requests is pretty obvious; what is not so obvious is that the PO needs to ask the team to play some planning poker to estimate complexity for the recently added user stories, and that the backlog will need to be reprioritized.
  9. Grooming the backlog to accommodate new and changing requirements should be the tangible output from the PO's work during the Sprint.
  10. Building and monitoring a burndown chart is also crucial for POs. Actually, the burndown chart should be used by the PO to see if the allocated hours are being burned at an acceptable rate that can be used to predict that all scheduled work would be completed on time.
  11. Frequently enough, the burndown chart will show that the team is likely to miss the mark; in these situations, the PO has the responsibility to negotiate whatever can still be completed on time. Having a bunch of user stories in progress is not as good as having a few completed -the walking skeleton principle.
As you can see, the PO has enough -and maybe more than enough- to keep him busy during the Sprint. One note, though: Scrum doesn't prescribe that the PO's tasks should be planned and tracked on a separate backlog, but what if some PO decided to do so? I'd be happy to hear about the results of this experiment.

Thursday, September 24, 2009

The Real Kaizen Behind Scrum


My aikido sensei told me the other day "you need more kaizen, your aikido is not flowing smoothly". I thought to myself, "is my sensei referring to the same kaizen principle that we use in Scrum?".

So, after the class I sat with my sensei and asked, "what do you, oh sensei, understand by kaizen?" This is what he responded:

"Kaizen is about seeing yourself and recognizing that you still have much to improve and learn.

Kaizen is not about beating others; it's about learning, teaching others, and improving together.

Kaizen is about being humble and being ready to recognize that your learning path has just started and will never really end."

Much of this can be applied to Scrum, don't you think?

Applying Queuing Theory to Sprints

Queuing theory is one of those subjects that you learn in college but never completely understand, like calculus or algebra. The problem with it is how little practical use you make of it.

I realized some time ago that Quality Engineers (QEs) on my projects were not doing much during the first half of Sprints, and this had a recurrent pattern: they created test cases, prepared testing environments, updated existing ones, did some research, wrote training documents, etc. That sounds kind of OK, but the problem was that by the end of the iteration, they were all of a sudden overwhelmed by the number of user stories they had to test. Consequently, they had to work killer hours (including weekends) to get the work done.

I remember that some time ago, banks had a similar problem: tellers were not doing much during most of the day, but at peak hours they could hardly breathe because they had a bunch of impatient clients lined up in front of them. Banks played it smart and did something to improve their service systems and evenly distribute arrivals. Queuing-theory-wise, if you can't add more servers, go in the other direction and look for a stable arrival rate.

It's pretty obvious that we have a lesson to learn from banks and other service industries. So what can we do to help balance Quality Engineers' work? Some ideas below:
  1. Plan for micro-deliveries, small pieces of functionality should be passed from development to quality assurance on a daily basis, if possible.
  2. Don't pass code for testing that is not part of a build; builds are great for version control and integration testing.
  3. Automate as you implement; run Build Verification Tests on a daily basis.
  4. Write test cases as you progress with testing; don't wait to have tons of test cases written before starting to test.
  5. Sync developers' and QEs' efforts so everyone knows what to implement and test.
  6. Don't plan for large user stories; break them into smaller chunks of functionality -use the INVEST principle.
  7. The acceptance criteria are not fixed; they evolve as the functionality is implemented and tested -they certainly exist from the beginning but should be refined as we move through the sprint.
  8. Look for people who are sitting idle because they're waiting for something to test; waiting is waste and has to be eliminated.
The ultimate goal should be a stable flow of code from development to quality assurance and vice versa. A stable flow is what will allow you to make better use of your QEs, reducing queue lengths and decreasing throughput time for user stories.
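
A toy simulation illustrates the point (all numbers here are made up): the same total amount of testing work produces a much higher peak queue when it arrives in one end-of-Sprint burst than when it trickles in as daily micro-deliveries.

```python
def peak_queue(arrivals, capacity_per_day):
    """Simulate a QE queue: each day some units of work arrive, up to
    capacity_per_day units get tested, and the rest waits. Return the
    largest backlog ever observed."""
    backlog, peak = 0, 0
    for arriving in arrivals:
        backlog += arriving                        # work handed off to QE
        peak = max(peak, backlog)                  # worst pile-up so far
        backlog = max(0, backlog - capacity_per_day)  # testing done that day
    return peak

# 30 units of work over a 10-day Sprint, QE capacity of 3 units/day.
bursty = [0] * 8 + [15, 15]   # everything lands in the last two days
steady = [3] * 10             # micro-deliveries: 3 units every day

print(peak_queue(bursty, 3))  # 27: huge end-of-Sprint pile-up
print(peak_queue(steady, 3))  # 3: the queue never grows
```

Both schedules deliver the same 30 units, but only the steady one lets QEs work at a sustainable pace -exactly the stable arrival rate that the bank example suggests.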

Wednesday, September 23, 2009

The Rule of No Rules

Is Scrum all about anarchy? Well, yes and no.

Yes, because Scrum is not about imposing rules and creating enforcers of those rules. There's no hierarchy or authority figures; no one is entitled to a controlling role.

One common misunderstanding is believing that the Scrum Master is like a police officer who has to make team members follow the rules of the "true Scrum".

No, because Scrum has values and principles that one should understand and follow. Working agreements are reached by consensus within the team and applied at its discretion. The team manages itself and decides what to do and how to do it.

Scrum is all about negotiation, inspection and adaptation -not imposing something and making it work by force.

Telling the Tale Using a Task board

What is a taskboard?
  • A communication vehicle
  • A wall or a whiteboard where the team posts its kanban cards
  • A visual indicator -a.k.a. information radiator- that provides a quick status check
Who should build it?
  • The Product Owner should build the first version
  • The team can increment its content in each Sprint
Who should maintain it?
  • The Product Owner is primarily responsible
  • But this doesn't exclude the team from participating
  • The ultimate goal is that the taskboard accurately reflects task -kanban- progress
  • The rule of thumb is to always have up-to-date task completion status
Who should prioritize it?
  • In kanban the taskboard is not prioritized, but in Scrum it is, so the Product Owner is mainly responsible for this
Does it have to be online?
  • Yes, if you want to save the kanbans for later analysis
  • Yes, if your team is distributed
  • Yes, if you have way too many kanbans to put on a single wall (smell: problem indicator)
  • Yes, if you have found an electronic taskboard that is not complicated to use and fills your needs
  • Otherwise you should be using the old and simplified version: post-its and a whiteboard

Tuesday, September 22, 2009

Management by Sight

The world-famous Toyota Way, based on the Toyota Production System's success, has a very interesting concept that has always caught my attention: Management by Sight.


So what is this all about? Let me try to explain the need for it:
  • Managers, and why not engineers too, are busy people who don't have time to crunch numbers to see how they're progressing in the project
  • Everybody cares about measuring progress, but managers are just its biggest fans
  • So, we need something that people can look at and instantly -and why not magically- be told whether they're on track or falling behind schedule
Once we all realize that we need some light here, we're ready to look into what Toyota discovered years ago: a simple chart that shows progress.


But there are some conditions for the magic to work:
  1. It has to be visual (obviously!)
  2. It has to be public
  3. It has to be periodically updated
  4. It has to be easy to read
  5. It has to be easy to understand
  6. Its interpretation has to be clear
  7. Only a few indicators should be considered per graph, don't try to show everything in just one chart
  8. It has to be a communication vehicle for the team
  9. It has to have real data; no massaging of information should be allowed
  10. It's a tool for the team, not just for managers

The Mythical Scrum Master

Once upon a time there was an all-powerful individual full of charm and personality; his energy and intelligence were nothing of this world. His voice was like thunder, and he had this magic touch for solving everybody's problems: his name was The Scrum Master.

Unfortunately this is just a myth; no such thing exists, or at least cannot easily be found in just one individual. But let's go one step back: why did people create a myth around this character? There are some reasons I can think of:
  1. From a management perspective it's easier to concentrate all responsibilities -and powers- in just one individual
  2. From a team perspective, it's far easier to expect somebody to lead the team and solve problems
  3. Concentration of responsibilities in one individual is the worst enemy of team empowerment and the great ally of micromanagement
  4. Blaming only one individual in case of failure is less harmful to morale than blaming the whole team
So what can we do to kill the myth? Several things, I guess:
  1. People believe in myths because they try to avoid reality, so let's start there by recognizing that there's no Super Scrum Master
  2. As happened in old towns, myths will be erased via education -so let's start by educating managers and teams
  3. But educate people on what? Short answer: Scrum
  4. Micromanagement is good as long as the team is in charge of it: micromanagement of the team by the team is what we're looking for

Monday, September 21, 2009

When the Product Owner Has to Say No

Saying no to the team, to the client or to the Scrum Master is one of the PO's most important tasks, but let's think of some situations when this might happen:
  1. When the team wants to force the client to use a technology, platform, architecture or design that it believes is "technically suitable" but has no value for the client -I know what is best for you syndrome-
  2. When the client wants something that is clearly unfeasible with the given schedule and resources -the dreamer syndrome-
  3. When the Scrum Master representing the team wants to change the scope for the project -two captains one boat syndrome-
  4. When the team voted and decided what should be excluded from the product -technical decision without business support syndrome-
  5. When developers want to work on an overkill solution for a problem -super engineer syndrome-
  6. When the team wants to investigate during several sprints without guarantying practical results -researcher syndrome-
  7. When the team and Scrum Master want to skip demos -ostrich syndrome-
  8. When the team wants to exclude the PO from all meetings because they believe they already understand the product well enough -come back when it'll be ready syndrome-
  9. When the client wants to communicate directly with the team, bypassing the PO -serve yourself syndrome-
Syndromes Origin

Even though these syndromes are very common, their origins are still fuzzy, some might be:
  1. The PO is not doing a good job gathering requirements
  2. The PO is not doing a good job communicating requirements to the team
  3. The PO is not doing a good job tracking Sprint progress and backlog evolution
  4. The PO is not doing a good job interfacing with the Scrum Master
  5. The PO is not doing a good job buffering the team from external influences
  6. The PO is not doing a good job setting priorities
  7. The PO is not doing a good job using her charisma to lead/influence the team/client in the right direction
Whatever reason is forcing the PO to say no to the team, it seems to indicate a disruption in the PO's role and performance. After all, a NO is not required if everybody is clear about what should be done.

Many NOs accumulated over the course of a project are a symptom of a failure in the PO's performance. But in any case, it is far better that the PO say no and correct her course than let things flow in a direction that will take the project to failure.

Friday, September 18, 2009

Another of my Articles Published by the Agile Alliance

Another article written by me, Definition of Done and the Quest for the Potentially Shippable Product, has recently been published by the Agile Alliance: see this link http://www.agilealliance.org/articles_by_category?id=17

The Illusion of Control

Buddhists believe that control is just an illusion; people fond of controlling things suffer too much when they realize that they can't control everything.

The Buddhist recipe for happiness (and please don't take this literally) is not creating tight bonds to anything or anybody -just let things flow and interact without forcing control.

What I find interesting in this philosophy is that managers tend to devote much of their time and effort to crafting complex control devices, dashboards, and metrics, losing sight of the fact that in Scrum everything is about micromanagement -but not micromanagement of the team by the manager. Micromanagement of the team by the team itself is far more efficient.

If a team learns how to manage itself, the illusion of control won't be necessary anymore; managers will become coaches, and they will sleep better at night.

Is the Product Owner a Villain?

Short answer, yes. Long answer, still yes.

The PO doesn't have to be a villain, only to sometimes look like one in front of her team.

Why does such a nice gal like the PO have to look tough? Simple: if she doesn't, team members will end up imposing whatever they believe the user should need from a system rather than listening to the user's requirements.

By the same token, as the user's representative, the PO is sometimes in charge of saying "your work is not what I need, I can't sell that to clients, you need to redo it". The team will be close to killing the PO, but in the long run her firm hand rejecting subpar user stories or asking for work to be redone will be key to taking the ship to a good harbor.

Dirty work it is, but somebody has to do it. May the force guide your steps, POs.

Wednesday, September 16, 2009

Scrum Bolivia User Group in the Agile Alliance


Our Scrum Bolivia User Group is now officially registered in the Agile Alliance group listing, see http://www.agilealliance.org/show/1641

When Pigs Become Chickens

I received an interesting question: somebody asked me about some reasons why a pig might become a chicken. I can think of a few:
  1. The pig is a low performer and she is as productive as a chicken
  2. The pig's motivation is so low that she acts like a chicken
  3. The staff was reduced and we have fewer pigs, or some were reassigned to other projects but still play a chicken's role in ours
  4. Part time engineers that switch among projects and don't really contribute to any
  5. Upper managers who like to play a pig's role but run away when they have to get their hands dirty with complicated tasks

Thursday, September 10, 2009

Mechanics and the Myth of the Satisfied Client

It’s taken as a truth in software development that we try to satisfy clients, believing that they’re always right and they always know what they want. Further, we tend to think of them as almighty gods with unlimited decision powers. But let’s think twice: why should we be more committed to customer satisfaction than other service industries?
Let’s take this example from real life, my real life at least. I took my car to the shop because the front wheels were making some funny noise whenever I slammed on the brakes. The mechanic went below the car, took a look and found nothing. I insisted that the car was making noises when I drove it, so the mechanic said let’s take it for a test ride. I drove the car, slammed on the brakes and again nothing: no noises, no brake failure, nothing. The mechanic just smiled at me and charged me a not-so-cheap fee for his time.
As a customer, what was my problem? Did my car really have a problem? Was it a random, not easily reproducible bug? Honestly, I don’t know, but I’m sure I’m not crazy and the noise was there. So the point is, does the client have enough knowledge to diagnose problems? More to the point, what approach did the mechanic use to try to pin down the problem with my car? No big up-front requirements: the mechanic uses an iterative approach for diagnosing, fixing is incremental, and many times mechanics have to inspect and adapt their work as they go.
Lastly, mechanics usually don’t tell us how much they’ll charge for their services; they complete the work and then ask for payment. Most of the time your car is fixed and you pay without question. What a wonderful life mechanics have, don’t you think? Something we could copy?

Sunday, September 6, 2009

My article has been Published by the Agile Alliance

An article that I wrote last month, Key Metrics for Measuring Sprint Status, has been published by the Agile Alliance, see this link http://www.agilealliance.org/articles_by_category?id=10

The full article can be also found in this blog: http://juanbandaonscrum.blogspot.com/2009/08/key-metrics-for-measuring-sprint-status.html

Scrum Bolivia User Group on Twitter


The Scrum Alliance officially announced the creation of the Scrum Bolivia User Group on Twitter.


The Scrum Bolivia Yahoo! User Group has been Created

Finally Scrum Bolivia has an official User Group where Scrum practitioners can meet to share experiences and learn from each other. This is an open group, so no moderator approval is required. The link to the group is http://tech.groups.yahoo.com/group/scrumbolivia/



The Scrum Alliance also made public the creation of this group and included it in its listing of official groups; see this link http://www.scrumalliance.org/user_groups/74

Friday, September 4, 2009

Plan for the Sprint vs. Sprint Planning

A typical question that comes from teams is: how much planning should be done before starting a Sprint? Further, what should we plan for? These are fair questions without direct answers. Let’s start trying to solve this puzzle one piece at a time.
What is planning?
According to Wikipedia, planning is “…the psychological process of thinking about the activities required to create a desired goal on some scale”. So, if planning involves thinking, the next question should be: is this an individual or a collaborative activity? The answer seems quite intuitive: it’s an individual activity that is performed in a group, but not necessarily a team activity.
What is Sprint Planning?
It is the stage in which the team, and when I say the team I mean everybody who will actively contribute during the Sprint (no chickens, sorry), gets together in a single room for an X number of hours and tries to define what to do. Focus on what, not how. One tip: don’t let the Product Owner run off without answering all your questions; lock the door if necessary.
Plan for the Sprint
This is not the Sprint planning stage; this has to do with more tangible things like capacity planning, logistics and infrastructure. For instance, consider whether you will be taking vacation and what type of environments or licenses you are going to need in the Sprint. Some other questions to consider:
  • Should I use Agile/Scrum for the plan for the Sprint? Not necessarily, since this is support work necessary for the Sprint but outside the scope of the team.
  • Who should take care of this? Somebody in your Management and/or IT staff.
  • Is this really important for the Sprint? Absolutely, you can’t have a Sprint without knowing what resources you’d be committing.
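The capacity side of planning for the Sprint boils down to simple arithmetic. Here is a minimal sketch of one common way to compute it; the team members, days off and focus factor below are entirely hypothetical, not figures from any real project:

```python
# Minimal capacity-planning sketch for a two-week (10 working day) Sprint.
# All figures are illustrative assumptions.
team = {            # member -> days off (vacation, training, etc.) during the Sprint
    "dev_1": 0,
    "dev_2": 3,     # on vacation for three days
    "qe_1": 1,
    "id_1": 0,
}
sprint_days = 10
hours_per_day = 8
focus_factor = 0.6  # fraction of the day actually spent on Sprint work

# Capacity = sum over members of (available days) * (hours/day) * (focus factor)
capacity = sum(
    (sprint_days - days_off) * hours_per_day * focus_factor
    for days_off in team.values()
)
print(f"team capacity for the Sprint: {capacity:.0f} hours")
```

Knowing this number before Sprint planning keeps the team from committing to more user stories than its real availability allows.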

Question on Why we're Always Late

  • Why do we always have Quality Engineers and Info Devs trying to catch up with development? Is this because development is moving too fast and QE/ID too slow? Or is it because development moves too slowly, holding back QE and ID work? We shouldn’t be asking these questions in the first place if we consider everybody part of a single team (not dev, QE and ID separately) and we have a consolidated Definition of Done. The DoD is what can tell us whether a user story is really completed or half baked. The DoD should be common, and the whole team should aim to complete user stories under this definition.
  • One step back: why do we plan for more than we can achieve? Part of this is because we don’t plan thinking as a team; development plans first and then QE and ID add their estimates. Development estimates might fit within the timeboxed iteration, but QE and ID would most likely fall off the mark. The other part is that we don’t know the team’s velocity for certain.
  • Why don’t we know team velocity? We measure progress in hours but we plan in user stories; our backlog contains user stories that, when moved to iterations, are tracked using burned hours. What Scrum recommends is using burn-down charts for user stories, not for hours. Burning hours is not a good indicator of progress; on the contrary, it often misleads us when we try to evaluate iteration progress.
  • Velocity would be a better indicator if we could estimate the number of user stories that the team is able to produce in an iteration. However, story points or some other metric should be considered in order to have a correlated measure of user stories’ size and complexity.
  • Are carried-over user stories that bad? Without question, carrying over user stories breaks the Scrum principle of having a Potentially Shippable Product at the end of each Sprint. Moreover, from a production standpoint, carried-over user stories create a technical debt that will drain the team’s capacity in the future.
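The velocity and burn-down points above can be sketched in a few lines of Python. The sprint figures are hypothetical; the idea is that velocity comes from accepted story points per sprint, and the burn-down tracks remaining story points rather than burned hours:

```python
# Velocity measured in accepted story points per sprint (hypothetical data
# for the last four sprints), not in burned hours.
completed_points_per_sprint = [13, 18, 15, 16]

# Velocity = average accepted story points per sprint
velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)
print(f"velocity: {velocity:.1f} points/sprint")  # 15.5

# A story-based burn-down: remaining story points after each day of a sprint.
# The flat stretches are days where hours were burned but nothing was accepted;
# an hour-based burn-down would hide exactly that.
remaining = [31, 31, 28, 28, 28, 20, 20, 15, 8, 0]
for day, points in enumerate(remaining, start=1):
    print(f"day {day:2d}: {'#' * points} {points}")
```

Tracked this way, the chart only moves when a user story is actually accepted, which is what makes it an honest progress indicator.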

Monday, August 3, 2009

Definition of Done and the Quest for the Potentially Shippable Product

One of the main contributions from Scrum to the software development community is the awareness it has created about obtaining a tangible deliverable product at the end of each sprint. This tangible product, “potentially shippable product” in Scrum terminology, is what governs all efforts from the team, the Scrum Master and the Product Owner. In theory, the potentially shippable product should be ready for shipping at the end of each sprint, provided you can find a client willing to buy a product that works with limited functionality.
Theory contradicts reality on many occasions, because the team hasn’t implemented enough functionality or it has not been tested completely enough to be considered a valuable product that can be delivered to clients. Furthermore, confusion arises within the team when there isn’t a clear and unified Definition of Done (DoD). For instance, for development “done” may mean that a user story has been implemented but not tested. The DoD needs to be understood by everybody in the team; moreover, the DoD should be extended to different levels to cover all the expectations and requirements of the team and other stakeholders. Not having a clear DoD eventually hurts the team’s chances of attaining a potentially shippable product; what is even worse, without a clear DoD the team will be working as if trying to hit a moving target during the sprint.
This article will present and analyze different perceptions of the DoD in order to suggest a unified version. A word of caution though: the DoD is something particular to each project, so it needs to be evolved from what we suggest here in order to be applied to particular projects.
Different Types of DoD
Let’s start by saying that we have only one team, but different people playing different roles in it. As a consequence of their work, engineers tend to have their own perception of how and when something can be considered done. Let’s take a look at the following definitions of done:
  • For development, code is considered done when it has been written, compiled, debugged, unit tested, checked into a common repository, integrated into a build, verified by a build verification tool and documented using an automatic documentation generator.
  • For automation, test cases are considered done when the corresponding automated testing scripts have been coded, unit tested and run against a build.
  • For quality engineering, done means that the user stories received in a specific build have been tested by running testing cycles, encountered bugs have been reported using a bug tracking tool, fixes provided have been validated and the associated trackers have been closed. Test cycles here include both manual and automated test case execution.
  • For info dev, done means that all the help files in the product, along with the printed manuals, are written in perfect English or any other language.
  • For project managers, done means that the sprint has ended, the user stories have been completed (not necessarily accepted by QE) and the scheduled hours burned. Moreover, for managers done usually means that there’s something−anything−that can be presented to the stakeholders in a demo.
Now let’s continue analyzing why those definitions, if taken separately, won’t help to build the potentially shippable product:
  • Development DoD is not enough because code was produced but not tested, at least not tested from the user’s perspective. Crucial tests like functionality, usability, performance, scalability and stress are still pending. Without passing all those tests, at least to a certain level, the product is simply not good enough to be shipped. A big temptation is to consider that in early stages of the product the code is not required to pass all tests, or that there’s no need to test it at all. This of course violates the very conception of a potentially shippable product.
  • Automation DoD is not good enough either, because it only covers code that has been written to test code, no bugs discovered, validated, retested or closed. It’s very true that automated test cases save many man hours of manual testing, but it should only be used as a tool that helps Quality Engineers in their work and not as a milestone that if reached qualifies the product for release.
  • Info dev DoD is also falling short because it only considers manuals and help files, not the product itself or the quality it might have. It’s not uncommon to see technically excellent products with a high degree of associated quality but with poor documentation. The opposite scenario is also common and even more problematic.
  • Project Manager’s DoD is focused on metrics and high-level information. The biggest risk for Project Managers is to look at the wrong metrics and try to put a product out of the door when it’s not complete or stable enough. Avoiding biased perceptions that come from information collected from only one group of engineers in a team is the key to telling whether the product is ready to go into production.
  • Quality Engineering DoD is, in my opinion, the definition that contributes the most to the goal of having a potentially shippable product. The reason is simple: QE should be the last checkpoint for telling whether the implemented user stories are good enough to be considered part of a potentially shippable product. One consideration is that the documentation provided by info dev should also be reviewed for technical accuracy. Another is that automated test cases should also be checked for effectiveness and real contribution to bug detection and validation.
It is evident that we need a DoD that can work for the whole team helping it to remain focused in the right direction and serving at the same time as a true enabler for the potentially shippable product concept. But before we go into that, let’s explore the different levels of accomplishment.
Levels of accomplishment
By accomplishment we understand some milestones that the team will be reaching during the project’s execution, there are several levels of accomplishment as we describe below:
  • Task level means that a task, whatever it was, has been completed. Tasks are usually performed by individuals in the team and still need to be cross-checked or validated. Tasks are purposely treated as the smallest chunk of work: whatever has been done by somebody in the team. Tasks have an identifiable output in the form of code written, test cases written, environments set up, test cases automated, documents prepared, etc.
  • User story level means that all tasks in a user story have been completed, when this occurs we also have fairly good chances to say that the user story has also been accepted. Albeit bugs might appear during testing, once they are fixed the user story can be marked as accepted and will go directly into the potentially shippable product.
  • Sprint level means that the time boxed hours have been burnt; more importantly, the planned user stories should have been accepted. One very common mistake is to consider that a sprint was successful because the team has been able to complete certain amount of work by the sprint end date; this is true to a certain degree. The problem arises when you compare the work that has been done with what can and should be included as part of the potentially shippable product.
  • Release level is achieved when the product finally meets all required criteria −technical, legal, marketing, financial, etc.− to be put out of the door and make it available to clients.
  • Product level means that the team has been able to put several releases out of the door; by releases we don’t mean hot fixes and service packs. Hot fixes are symptomatic of poor quality, as they usually indicate that urgent fixes are required by clients reporting serious problems with the product. By the same token, releasing many service packs might be a good radiator for suspecting quality problems in a product, the processes and/or the team.

Unifying the DoD with the level of accomplishment
The ultimate DoD should be such that it really helps the team envision the true goal behind the sprint: building a potentially shippable product. In that sense the Quality Engineering DoD seems to be the one that gets us closest to that goal, for the following reasons:
  1. It deals with user stories, quality and completeness.
  2. It also covers automation and info dev DoD; this assertion works under the assumption that QEs are the ones who execute automated testing suites and review written product documentation for technical accuracy.
  3. User stories accepted by QE are probably the best indicators for managers because they reflect the degree of accomplishment on a sprint and are directly correlated to functionally that can be included right into the potentially shippable product.
Consequently, the QE DoD applied at the task and user story levels will have a positive impact on the sprint accomplishment level, which in turn will favorably impact the release and product accomplishment levels.
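One way to picture the unified DoD is as a conjunction of every role’s criteria, with QE acceptance as the last checkpoint. Here is a minimal sketch; the field names are entirely illustrative, not taken from any real tracking tool:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    coded_and_unit_tested: bool       # development DoD
    automated_tests_passing: bool     # automation DoD
    docs_reviewed_for_accuracy: bool  # info dev DoD, reviewed by QE
    accepted_by_qe: bool              # QE DoD: tested, bugs fixed and closed

def is_done(story: UserStory) -> bool:
    """A story enters the potentially shippable product only if ALL checks pass."""
    return all((
        story.coded_and_unit_tested,
        story.automated_tests_passing,
        story.docs_reviewed_for_accuracy,
        story.accepted_by_qe,
    ))

# "Implemented but not accepted by QE" is not done under the unified DoD:
story = UserStory(True, True, True, accepted_by_qe=False)
print(is_done(story))  # False
```

The point of the conjunction is that no single role’s partial definition can mark a story as done on its own.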