Many people argue against using the "Done" checklist because they consider it non-Agile and believe it increases the stress on the team.
How is it non-Agile?
The argument is that it goes against Agile values and principles. Agile methods are built on the premise that project work is unpredictable, and they favor empirical process control: short learn-and-adapt cycles. With "done" in Scrum, the product owner (PO) goes through the "Done" checklist and sets an expectation for the team to achieve certain things by the end of the iteration. If the team delivers everything on the checklist, the iteration is termed a success; if not, it can just as easily be termed a "failed" iteration.
But don't you think "enforcing" "done" on an iteration is like forcing people to assume that everything in a project is predictable?
So, can we abolish the "done" checklist and give the team a free hand to do whatever they want? How do we measure whether the team is really on the right track? How do we know the team is doing things that add value to the project? How do we measure the quality of an iteration?
Here is what Jeff Patton has to say about measuring the quality of an iteration. He recommends a grading system for its features:
- In a small group, brainstorm the major features of your product.
- Independently, for each feature, write your "grade" for its quality. Answer the following questions: Do you like the feature? Do you like using it? Is it a valuable part of the product? Let your answers guide you to grade the feature with an A, B, C, or D, or fail it with an F.
- When done, discuss your grades with those in your group. Agree on a grade that best represents the group's opinion of the quality of that feature.
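For concreteness, here is a minimal sketch of how individual letter grades could be tallied into a starting point for that group discussion. The numeric mapping and the averaging rule are my own assumptions, purely for illustration; Patton leaves the "agree on a grade" step to the group.

```python
# Purely illustrative: tally individual letter grades into a starting
# point for the group discussion. The numeric mapping is an assumption,
# not part of Patton's recommendation.

GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
POINTS_GRADE = {v: k for k, v in GRADE_POINTS.items()}

def group_grade(individual_grades: list[str]) -> str:
    """Average the individual grades and round to the nearest letter."""
    points = [GRADE_POINTS[g.upper()] for g in individual_grades]
    avg = round(sum(points) / len(points))
    return POINTS_GRADE[avg]

# Example: three people grade the "search" feature.
print(group_grade(["A", "B", "B"]))  # -> "B"
```

The computed letter is only a seed for discussion, not a replacement for it: the whole point of Patton's step is the conversation about why grades differ.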
After looking at the recommendations of "done" and "grading", I have been thinking of proposing a new model that combines the best of both worlds. I am going to call it the "done grading" system, as it sits midway between the two approaches.
Here are the steps I propose:
1. At the beginning of each iteration, the team sits with the PO and creates the "done" checklist. The checklist captures the PO's expectations for the iteration. I feel having a "done" checklist sets the right expectations and gives the team a clear goal to work towards; without it, the team would still be busy, but without a common goal.
2. At the end of each iteration, the PO still goes through the "done" checklist, but instead of calling the iteration a "yes" or a "no", he/she grades it based on how much of the checklist was completed.
For example, suppose the team has the following items on its "done" list (a grading sketch follows the list):
- Delivering features A, B, and C
- Unit tests for all features
- Regression test cases and testing
- Automating the tests
- Introducing TDD
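Here is a minimal sketch of what grading such a checklist could look like, assuming the PO marks each item as done, partial, or not done and the grade falls out of the completion ratio. The partial-credit score and the grade cut-offs are my own assumptions, chosen for illustration only.

```python
# A minimal sketch of the proposed "done grading" idea. The scoring of
# partial completion and the grade cut-offs below are assumptions chosen
# for illustration; a real team would tune these with its PO.

COMPLETION_SCORE = {"done": 1.0, "partial": 0.5, "not done": 0.0}

def grade_iteration(checklist: dict[str, str]) -> str:
    """Grade an iteration from the completion status of its 'done' items."""
    total = sum(COMPLETION_SCORE[status] for status in checklist.values())
    ratio = total / len(checklist)
    if ratio >= 0.9:
        return "A"
    if ratio >= 0.75:
        return "B"
    if ratio >= 0.6:
        return "C"
    if ratio >= 0.4:
        return "D"
    return "F"

# Example using the checklist above: features, unit tests, and regression
# testing landed, automation is half done, and TDD did not start.
iteration = {
    "delivering A, B, C features": "done",
    "unit tests for all features": "done",
    "regression test cases and testing": "done",
    "automating the tests": "partial",
    "introducing TDD": "not done",
}
print(grade_iteration(iteration))  # -> "C" (3.5 of 5 items complete)
```

A "C" here tells a far richer story than a binary "failed": most of the committed work landed, and the grade points at exactly where to improve next iteration.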
By applying this "done grading" system, the team is relieved of the "stress" at the end of each iteration, because the iteration is measured in the more collaborative, graded way described above rather than by the earlier binary "done"/"not done" verdict.