Monday, December 25, 2006

Can velocity be used to measure productivity?

I have heard many managers in software companies try to measure productivity based on the velocity of the team. I think this makes no sense.

Here are some of my thoughts:

1. Velocity is a measure of a team's capacity to deliver certain functionality within a specified interval. This definition makes it clear that velocity cannot be used as a stick during appraisals and performance evaluations of individual members.

2. Nor can you use velocity to compare productivity between teams, because each team is different. Two teams of 10 developers each cannot be compared: the teams will differ in years of experience, domain knowledge, maturity, support from product owners, communication skills, and so on.
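To make point 1 concrete, velocity is just the points the whole team completed per sprint - useful for the team's own forecasting, meaningless as an individual score. A minimal sketch, with invented sprint numbers:

```python
import math

def velocity(points_completed_per_sprint):
    """Average story points the whole team completed per sprint."""
    return sum(points_completed_per_sprint) / len(points_completed_per_sprint)

completed = [21, 18, 24, 20]            # hypothetical: points finished in the last 4 sprints
avg = velocity(completed)               # 20.75

# Useful for team-level forecasting only, e.g. for a 120-point backlog:
sprints_needed = math.ceil(120 / avg)   # roughly 6 sprints
```

Note that nothing in this number attributes work to any one person - which is exactly why it cannot serve as an appraisal metric.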

Toyota and moment of Zen

Here is a very good article for anyone who would like to improve on a daily basis. I could relate many of the practices in this article to scrum meetings and retrospectives. A must-read for entrepreneurs and managers who want to create a learning organization.

Thursday, December 21, 2006

Velocity and more

Here is a nice article from Mike Cohn about velocity:

I find that one of the most common mistakes teams make is to use the
term "velocity" to refer to both
--the number of points (or ideal days) finished in a sprint
--the amount of planned work completed in a sprint

It is far more useful to use velocity to refer to the amount of
points finished in a sprint. The amount of work planned in a sprint
will be relatively constant sprint to sprint. This is essentially the
Y (vertical) intercept of a sprint burndown chart. If you need a term
for that call it "capacity." The tough concept for some teams to get
is that capacity (# of hours of work planned into a sprint) isn't
clearly correlated to the # of hours worked in a sprint. To make it
simple, consider me a team of one doing a one-week sprint. I will
work 40 hours this week. If I'm a perfect estimator (impossible) I
can say "I'll answer email for 20 hours this week (I wish!) and spend
20 developing; let me pull in 20 hours of work during sprint
planning." However suppose I'm not perfect and that my backlog items
have some uncertainty. Over time I may find that when I see a pile of
work and say "that's 15 hours" that this is the amount that perfectly
fills up my 40 hour workweek. I may get 25 hours of time on task to
do what I called 15 or I may get the 15 done in 12. It won't matter.
What matters is that what I call 15 fills up my week. That's how to
plan sprints and to work with capacity.
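Cohn's distinction can be sketched in a few lines: capacity (hours planned into the sprint) stays roughly constant, while velocity (points actually finished) is the number to track. All figures below are invented for illustration:

```python
# Illustrative sprint history (all numbers invented):
sprints = [
    {"planned_hours": 40, "points_done": 15},   # planned_hours -> capacity
    {"planned_hours": 40, "points_done": 13},   # points_done   -> velocity
    {"planned_hours": 40, "points_done": 16},
]

# Capacity is the burndown's Y-intercept and stays roughly constant...
capacity = sprints[-1]["planned_hours"]                            # 40 hours

# ...while velocity is what you plan future sprints with.
velocity = sum(s["points_done"] for s in sprints) / len(sprints)   # ~14.7 points

# Next sprint: pull in what history says fits ("what I call 15 fills my week"),
# rather than converting points to hours.
```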

Pros and Cons of conducting retrospectives

Here is a nice article about retrospectives

Retrospectives
Written by willem
Tuesday, 14 February 2006

In a retrospective, all the stakeholders come together after an event or project to celebrate its successes and to learn from failures in a constructive manner, without finger-pointing. After a retrospective, participants have concrete actions for the next event or project, and these can seed broader organisational change.

Retrospective benefits:

  • Closure
  • Learning from our own mistakes
  • Learning from someone else's mistakes
  • Acknowledging aligned practices and accomplishments
  • Rewriting organizational stories
  • Learning a healthy ritual
  • Discovering strengths of the organization
  • Clarifying aligned roles for players
  • Allowing participants to connect emotionally
  • Challenging each other's assumptions
  • Providing an overview of what's actually happening in the organization
  • Building trust across boundaries
  • Clear results, that lead to realistic commitments

Retrospective risks:

  • Emotional upset
  • Unrealized expectations
  • Lack of follow up leading to frustration and distrust
  • Last minute sabotage
  • Repercussions and retaliations
  • Loss of control at the top
  • Increase in awareness of personal responsibilities and accountability
  • And of the chasm between responsibilities and accountability
  • Can be a life style changing event
  • Discovering that you waited too long to do a retrospective
  • Discovering some of your assumptions are invalid

Risks of not doing retrospectives:

  • Emotional upset piles up
  • Repeating mistakes over and over again
  • Carrying forward unhealthy relationships
  • Not using an opportunity for sustaining and reinforcing best practices
  • Lack of adaptivity
  • Less access to hidden wisdom
  • Not having your assumptions challenged
  • "Would have, could have, should have" thinking
  • The learning organization is not learning
  • Getting stuck in causality loops
  • Less access to hidden resources

Retrospectives can provide lessons on architecture, planning, communication, product information flow and possible early intervention points.

Tuesday, December 19, 2006

Tips on Iteration Planning

Stumbled upon these 17 tips for iteration planning. There are certainly more one could add, but these are definitely the ones to start with!

Monday, December 18, 2006

Dos and Don'ts during Planning Poker

Here are some tips that can help during planning poker estimation sessions:

1. Ensure that all developers reveal their estimation cards at the same time. If the cards are shown one after another, earlier numbers can influence later estimates.

2. Don't sit in a cramped space during this session. Use a large room, with people seated so that they can see each other.

3. People who have no knowledge of the use case or requirements can opt out of the session; they need not be forced to attend.

4. Have a spreadsheet open on a computer connected to a projector. This helps all the team members see the requirements clearly.

Please note that planning poker, even though it is more lightweight and accurate than other estimation techniques, can still lead to inaccuracies. These typically result from the team's limited experience with the technology or domain they are working in.
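The simultaneous-reveal rule from tip 1 can be sketched as follows; the deck values are the common Fibonacci-style cards, and the names and votes are made up:

```python
FIBONACCI_DECK = [1, 2, 3, 5, 8, 13, 20, 40]   # typical planning-poker card values

def reveal(estimates):
    """estimates: developer -> card value, chosen privately beforehand.
    Returns (consensus?, lowest, highest) after the simultaneous reveal."""
    low, high = min(estimates.values()), max(estimates.values())
    return low == high, low, high

# Hypothetical round: everyone shows their card at the same moment.
votes = {"asha": 5, "ravi": 5, "meera": 13}
agreed, low, high = reveal(votes)
if not agreed:
    # The low and high voters explain their reasoning; then the team re-votes.
    print(f"Discuss: estimates range from {low} to {high}")
```

Because all cards are collected before any are shown, no developer's number can anchor another's - which is the whole point of tip 1.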

Sunday, December 17, 2006

Pair Programming benefit in a short sentence

Pairing takes about 15% more effort than one individual working alone, but produces results more quickly and with 15% fewer defects [Cockburn & Williams]. Fixing defects costs more than initial programming [Boehm], so pair programming is a net win. -- Jim Shore
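The arithmetic behind the quote can be sketched as a back-of-the-envelope model. The defect counts and the per-defect fixing cost below are pure assumptions chosen for illustration - they are not figures from the cited studies:

```python
solo_effort = 100.0                  # arbitrary effort units, one programmer alone
pair_effort = solo_effort * 1.15     # pairing: ~15% more effort [Cockburn & Williams]

solo_defects = 20.0                  # assumed defect count for the solo version
pair_defects = solo_defects * 0.85   # ~15% fewer defects when pairing

cost_per_defect = 8.0                # assumed: a defect costs far more than initial coding

solo_total = solo_effort + solo_defects * cost_per_defect   # 100 + 160 = 260
pair_total = pair_effort + pair_defects * cost_per_defect   # 115 + 136 = 251

# With these assumptions pairing comes out ahead; the costlier a defect is
# relative to initial programming, the bigger the win.
```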

Monday, December 11, 2006

Relation between salespeople, process and project delivery

Is there any relation among salespeople, process and project delivery? Whether you agree or not, I strongly see one. Consider a scenario where a salesperson bids on a project for a software organization and wins it. The salesperson's job is now over, and the next step is for the delivery team to take responsibility and deliver the project to the customer on time.
As we are aware, project delivery is controlled by scope, time and cost. Any change to one or more of these parameters has an impact on the others. If the timeline is committed to the customer, as in a fixed-bid, fixed-time project, then we need to ensure that scope and cost are also in line. If the salesperson has committed a particular timeline to the customer based on assumed estimates, it will directly impact delivery, and the developers will go through terrible stress during implementation (assuming the salesperson over-committed).

Let us talk about the impact of these parameters on process, taking a project that follows an agile methodology. Estimation in agile projects is done using IEH (Ideal Engineering Hours). In the projects I have seen, IEH is anywhere from 6 to 6.5 hours per day.
What if the salesperson is not aware of IEH and sells the project with 8 hours per day in mind? Who will work the extra 1.5 hours per day committed by the salesperson? How do you compensate for this? I know that you know the answers to these questions!
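A quick back-of-the-envelope calculation shows the size of the gap; the team size and project length here are hypothetical:

```python
team_size = 5
working_days = 60                  # hypothetical 3-month commitment

sold_hours = team_size * working_days * 8.0    # salesperson's assumption: 2400 h
real_hours = team_size * working_days * 6.5    # available at 6.5 IEH/day: 1950 h

shortfall = sold_hours - real_hours                                # 450 h over-committed
overtime_per_dev_per_day = shortfall / (team_size * working_days)  # 1.5 h/day, every day
```

That 1.5 hours per developer per day is exactly the gap the question above is pointing at - someone absorbs it, usually as unpaid stress.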

Tuesday, November 28, 2006

Eye contact in Scrum meetings

I found this nice article by a blogger Abrachan:

When a team is transitioning from conventional predictive project management to the adaptive style based on the Scrum method, it is very often like releasing a parrot from captivity. Suddenly the parrot gets absolute freedom, yet it is not conditioned to enjoy its new-found freedom. It still thinks it is in captivity. It has to strengthen the muscles required to fly in a world of unlimited freedom and opportunities for excellence.

When the daily scrum meetings happen, most members - if they are new to the team - still follow the reporting attitude, and keep reporting what they are supposed to do, what they did, and the problems they faced to the Scrum master only, without any eye contact with the rest of the team. Even if the Scrum master does not want to be seen as a command-and-control freak, the rest of the team still sees him as one - a hangover from the predictive project management styles they were practising before. One very effective tactic (fix) to discourage this behavior is for the Scrum master to avoid eye contact with the person who is 'talking to you' in a scrum meeting, thus forcing him/her to look at the rest of the team members - contributing to the 'self-directed team' spirit. :-)

Monday, November 27, 2006

Agile Offshore Development tip 3: 3 Key tools in Offshore development

From my experience, I found that the 3 key tools that are contributing towards improved communication in offshore development using Agile methods are:

1. Wiki (any wiki should be fine; we use Confluence, by far the best wiki I have seen so far)

2. IMs (Skype, Yahoo, MSN)

3. IP phones (We use Cisco IP phones)

Thursday, November 23, 2006

Does agility improve the design of the product?

Like the waterfall model, agile methods are also grouped under the "process" umbrella. In fact, agile methods are much more than that. The waterfall model provides a set of steps and guidelines for how to do software development; it has no impact on what technologies you use or how you design or architect the software. Agile methods, in contrast, have a huge impact on how you design and architect software.

In agile methods, features are built incrementally across iterations, and it is expected that the components are built incrementally as well. This forces designers and architects to make conscious decisions to build modular, cohesive systems. Less experienced designers, on the other hand, can get entangled in their own web of coupled components and have to struggle with refactoring for a long time. They have no escape route! Big up-front design, however, is not the answer either.

Good quote by Guru from Google

Here is a nice quote by Ram Shriram on hiring:

Hire only A people, and they’ll hire other A people. If you hire the B person, they’ll hire C or D people. Someone asked a good question: How did Shriram decide who are so-called “A” people? Grooming is a part of it. “I try to find out who their mothers are,” he said. If they are raised well, they’re more likely to make good citizens, employees and entrepreneurs.

Saturday, November 18, 2006

History of Scrum

I was preparing a presentation for the newcomers in my company, and thought of doing some research on the history of Scrum. I found some interesting information:

1. Scrum is a variation of Sashimi

2. Sashimi originated with the Japanese and their experience with the waterfall model

3. Scrum was named as a project management style in auto and consumer product manufacturing companies by Takeuchi and Nonaka in "The new new product development game" (Harvard Business Review, Jan-Feb 1986)

4. First Documented in 1993 by Jeff Sutherland, John Scumniotales and Jeff McKenna

5. 1995: Ken Schwaber formalized the rules for Scrum.

How XP (Extreme Programming) got its name

I found this nice information on devx explaining how XP got its name:
XP got its name when its founders asked the question, "what would happen if we took each technique/practice and performed it to the extreme? How would that affect the software process?" An example of this is the practice of code reviews. If code reviews are good, then doing constant code reviews would be extreme; but would it be better? This led to practices such as pair programming and refactoring, which encourage the development of simple, effective designs, oriented in a way that optimizes business value.

Tuesday, November 14, 2006

80% of the requirements are clear - do you still need agile?

Here is a project where the product owner has given a huge set of requirements; let us call them X, Y and Z. He has given the development team a time limit: do whatever it takes, but I want to see X, Y and Z at the end of one year. The team, which has been following the traditional waterfall model, decides to move toward agile methods. Despite knowing that the requirements won't change much, is it really necessary to follow an agile method? Or should they just do whatever it takes to reach the goal?

Before putting down my thoughts on solutions, let me tell you that this kind of model is mostly seen in product development companies that already have an existing product and are looking to enhance it over time with new versions.

Coming back to solutions:
1. First and foremost, ensure that the estimates are in sync with reality. It should not be that X, Y and Z take 2 years to build while the product owner has given a deadline of 1 year. No matter what process you follow, such a product would be a failure.

2. Since the product owner is not available for clarifications and collaboration on a day-to-day basis, you need to identify a proxy who can make decisions on his behalf.

3. This sample case of product development can never follow all the values and principles of agile, but the team can certainly adopt some of the core agile practices:

1. Daily Scrum meetings
2. Scrum of Scrums
3. Product Backlog --> Created initially by the product owner
4. Iteration Backlog --> Proxy making decisions during each iteration
5. Daily builds
6. Continuous integration
7. Usage of information radiators
8. Test Driven Development
9. Feature teams
10. Cross Functional teams
11. Team can work on Self organization
12. They even can have a scrum master
13. Pair programming
14. Usage of dual monitors
15. Velocity based estimation and/or any other estimation techniques

But remember: don't blame agile if you don't succeed in this kind of project environment, as agile methods need customer collaboration. Without buy-in from customers and stakeholders, don't expect agile methods to succeed.

Wednesday, November 08, 2006

Agile Offshore Development Tip 2: Keep translation time in mind during estimation

If you are working on a distributed development project between countries that speak the same language (for example English, between India and the US), you won't face much of a problem. But if you are working between countries that don't share a language (say, India and Germany, or India and France), you need to keep the language barrier in mind during estimation.

For example, you might receive a requirement document in German. The customer in Germany assumes that you will translate it to English before writing the test cases or doing anything else. The problem is that the same customer also assumes you will start the work as soon as you receive the requirement document - which is not really the case.

This is how it works: say the customer hands off the requirement document on Monday, and it takes nearly 3 days to translate it to English (or the local language), assuming you have an in-house translator. Ensure that you keep this buffer in mind while doing the estimation. And if you have a poor translator, it will add even more chaos.

The testing team needs to be especially cautious in this situation. Sometimes the testing team might struggle for days to get the actual requirements translated into their local language before they can write the test cases. This can put undue pressure on the testing team around release days.

Couple of solutions:
1. Have as many face-to-face interactions and conversations with the customer as possible. Try to write the test cases based on the "drop-in meeting" principle: keep updating the test cases in small iterative cycles after every conversation with the customer.

2. Bring the culture of using an acceptance-testing framework like FIT into the team.
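For teams not yet ready to adopt FIT itself, the underlying idea - customer-editable, table-driven acceptance tests - can be sketched in plain code. The function under test and the table rows below are hypothetical:

```python
def shipping_cost(weight_kg, express):   # hypothetical system-under-test
    base = 5.0 + 2.0 * weight_kg
    return base * 2 if express else base

# Each row: inputs the customer specified, and the result they expect.
# The customer can edit this table without touching the wiring code.
acceptance_table = [
    # weight_kg, express, expected_cost
    (1.0, False, 7.0),
    (1.0, True, 14.0),
    (3.0, False, 11.0),
]

for weight, express, expected in acceptance_table:
    actual = shipping_cost(weight, express)
    status = "pass" if abs(actual - expected) < 1e-9 else f"FAIL (got {actual})"
    print(f"{weight}kg express={express}: {status}")
```

Because the expectations live in a table the customer can read, translation problems surface early - during the drop-in conversations - rather than at release time.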

Saturday, November 04, 2006

When would agile fail?

Following are some of the situations where agile methods would fail:

1. When the development team is interested in agile methods, but the product owner and his team know nothing about agile. For example, consider an offshore project where the development team is practicing the waterfall model. One day, one of the leads feels they should move toward agile, and the offshore team starts adopting some of the practices bit by bit. They try explaining the benefits of agility to the customer, but in vain - yet the team keeps practicing in bits and pieces without informing the PO. This leads to failure. Basically, one needs complete support from management, the PO and the other stakeholders for a successful implementation of agile methods.

2. When the development team is managed by "traditional" project managers. Consider a situation where the development team wants to practice agile methods but management is not aligned with that; the team will fall apart. I have heard of a story where a development team was practicing some agile practices. After a change in project management, the new project manager, who had a traditional/CMM/waterfall background, started demanding more work from the team and giving them deadlines. The team almost fell apart.

Ensure that the management is convinced about the benefits of agile methods.

3. Don't practice without a coach. Passion is critical for implementing agile practices, but one also needs to understand the values behind them and the dependencies among them. For example, practicing "just" refactoring without the TDD cycle can create more problems than benefits. In such situations, a coach can add a lot of value to the team.

4. Don't try to implement all agile practices at once. Take one practice at a time, and play with it for some time before taking on new ones.

A practice that works in one organization may not work in another.

Wednesday, November 01, 2006

Agile Offshore Development Tip 1: Estimate cautiously immediately after a release

I am planning to start a series of "tipping" sessions on agile offshore development. I work day in and day out with offshore development teams practicing agile methods, and I keep seeing challenges in implementing some of the practices.

I am taking this as an opportunity to provide some solutions (what I believe to be right) to these recurring problems.

Tip 1: If your team is new to agile methodology, ensure that the iteration planning for a "new" release is done cautiously. The reason: most of the time, estimation and planning are based on the velocity of previous iterations. But when you release a new version of the product to the customer, you can expect some high-priority defects. These "unplanned" defects need to be fixed in the upcoming iterations; they might have escaped the dragnet of TDD, functional testing, and so on. So consciously take this eventuality into consideration and plan for it, rather than just relying on the previous iterations' velocity.
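One way to plan that buffer is to discount the historical velocity by an assumed defect load. The 30% figure below is a placeholder to be tuned from your own release history:

```python
historical_velocity = 20     # points/iteration from pre-release iterations (invented)
expected_defect_load = 0.3   # assumption: ~30% of capacity goes to priority defects
                             # right after a release; tune from your own data

plannable = historical_velocity * (1 - expected_defect_load)
print(f"Plan {plannable:.0f} points of new work; hold the rest for defect fixes")
```

If the defects don't materialize, the team simply pulls extra items from the backlog - far less painful than the reverse.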

Sunday, October 29, 2006

TDD Injection in legacy code

In a typical agile team, a team practicing TDD writes the test first, plugs in the business logic, makes the test pass, and refactors the code (RED-GREEN-REFACTOR). This luxury may not be available to teams that are new to agile methodology and have a huge code base developed without TDD. This article explains some techniques that can help introduce TDD into such legacy development teams.
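The RED-GREEN-REFACTOR cycle in miniature (the function is a toy example):

```python
# RED: write the failing test first (initials() does not exist yet).
def test_initials():
    assert initials("Jeff Sutherland") == "JS"

# GREEN: write just enough code to make the test pass.
def initials(full_name):
    return "".join(word[0].upper() for word in full_name.split())

# REFACTOR: with the test green, rename and restructure safely,
# re-running test_initials() after every small change.
test_initials()   # no AssertionError: the cycle is green
```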

TDD Injection in legacy systems

1. The first step is to identify the code smells you would like to attack. You can do this by running any of the freely available code-smell/quality tools (Checkstyle, PMD, JCosmo, etc.).

Once you have the list ready, prioritize it for the developers to refactor. For example, "reducing lengthy methods" could be a higher priority than "replacing magic numbers with constants".

In my view, not all code smells can be tackled by junior developers (2-4 years of experience) alone. Some code smells require extensive object-oriented programming and design skills; for example, replacing inheritance with delegation needs careful thought. In such cases, the senior members of the team, or developers with good OOAD/OOP knowledge, need to be involved.

2. The next step is to take up each of these high-priority code smells within an iteration. Allocate time to fix them and, in parallel, write unit tests for the refactored parts of the code. Writing the unit tests at this point and automating them ensures that the refactoring has not broken the behavior of the system. I like to call this the "TDD injection" technique.

The point of the "TDD injection" technique is not just to introduce TDD in the middle of development, but specifically to support refactoring the legacy code.
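One way to apply this step is to pin down the current behavior with a characterization test before touching the smelly code; the legacy function below is invented for illustration:

```python
def legacy_discount(amount):   # imagine this buried, untested, in the legacy code base
    if amount > 100:
        return amount * 0.9
    return amount

# Step 1: characterization test - capture today's behavior, warts and all.
def test_characterize_discount():
    assert legacy_discount(50) == 50
    assert legacy_discount(200) == 180.0

test_characterize_discount()   # green: now it is safe to start refactoring

# Step 2: refactor (rename, extract, simplify) in small steps, re-running the
# test after each one; any failure means the refactoring changed behavior.
```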

3. Some teams allocate a "cleanup iteration" just before each milestone release. This is really not a good way to do agile development, but with poor project planning, or in a dysfunctional team, developers may rush to deliver code to the customer without TDD. In such situations, one can reserve a day or two, or maybe even an entire iteration (say, two weeks), for cleanup activities.

=== "Something is better than nothing".

Saturday, October 21, 2006

Technical Debt

Here is a definition of technical debt, taken from KenMar's article:

Technical Debt is simply deferred work not directly related to new functionality but necessary for the overall quality of the system. Some examples include:
1. Refactoring
2. Upgrading the database version
3. Upgrading any other software
4. Getting new plugins

Friday, October 20, 2006

Increasing the length of iteration for right reasons

We follow 2-week iterations, and a good rhythm has been established. Recently one of the team members came and asked me to increase the iteration length to 4 weeks. The reason he gave was that he was not getting enough time to test thoroughly. This is not the first such request I have received; other reasons I have heard include "I want to deliver more features for better visibility" and "I am not getting enough time to understand the requirements".

One thing we have to understand is that increasing the iteration length will not solve these problems; we would end up in a similar situation, only with bigger problems. We need to look at the root cause of the issues before increasing or decreasing the iteration length.

If a team is not getting enough time to understand the requirements, there is an inherently bigger issue with planning, estimation or communication.

Thursday, October 19, 2006

Key question in Scrum meetings: Impediments

I keep reviewing the scrum meetings of the teams around me. One thing I have noticed so far is that the team enthusiastically answers the first two questions. When it comes to the third question, on "impediments", they either say everything is okay or don't bring up their problems.
There is a lot of value in answering the "roadblocks or impediments" question. The idea is that if a team member says he has a roadblock, the rest of the team is supposed to jump in and help solve it, because as long as the roadblock persists, that team member is a bottleneck reducing the team's throughput.

Some of the common roadblocks I have seen with teams are:

1. They have not understood the requirements properly
2. They have not understood the framework or a particular technology
3. Computer problems
4. Network problems
5. An issue with one of their colleagues that is ultimately blocking their work.

Traffic Jam and Theory of constraints

While driving to work today, I ran into a huge traffic jam. Vehicles were moving slowly and intermittently. Curious to know what was happening, I peeped out of my car. A huge public transport bus, carrying tons of people with many hanging out of it, was unable to move beyond 20 km/h. Since the road is narrow, each time the bus stops at a bus stop to pick up people, everybody behind it has to stop and wait for it to move.

The first thought that struck me was from the theory of constraints. One of the principles of TOC is that the system's throughput is dependent on its bottleneck. In the traffic jam above, the public transport bus is the bottleneck, and the speed and fate of all the traffic on the road depend on how fast the bus moves.
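The same principle can be sketched as a one-line calculation: the road's end-to-end throughput is the minimum of its stages' rates. The rates below are invented:

```python
# Rates each road segment could sustain on its own (vehicles/hour, invented):
stages = {
    "open road upstream": 60,
    "bus-stop stretch": 20,      # the crowded bus: the bottleneck
    "open road downstream": 60,
}

throughput = min(stages.values())          # the whole road moves at the bus's pace
bottleneck = min(stages, key=stages.get)   # "bus-stop stretch"

# Improving any non-bottleneck segment changes nothing; only raising the
# bottleneck's rate raises the system's throughput.
```

The same applies to the scrum impediments above: the blocked team member sets the team's pace until the block is removed.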

Wednesday, October 18, 2006

Introduction to Scrum: video by Ken Schwaber

Ken Schwaber visits Google and gives an introduction to Scrum. Here is the one-hour video, well worth watching:

Wednesday, October 11, 2006

Pair programming to avoid quick naps?

Here is another article by Obie, a ThoughtWorker, who finds that pair programming has helped ward off his "pairee's" unwanted activities, such as instant messaging, blogging, etc. He thinks this has resulted in improved productivity (including his own).

Pair programming has been one of the most controversial topics since the day I first heard about XP, sometime during 2001-2002, and it remains so today. There are no numbers to prove that it works. Nowadays, though, I keep finding tweaked versions of pair programming. Some organizations use pairing for a short while to bring a junior developer up to speed on the technology, frameworks, etc. There are no rules for such pairing, but they still call it pair programming. To date, I have personally not seen companies practicing pair programming by the book; the only place I have seen it is in the pictures scattered around the web.

But I agree with Obie that pairing restricts the use of IMs, phone calls, blogging and newspaper reading during office hours. There is a caveat here, though: the pair may be productive - or unproductive, if the two have too good an understanding with each other.

Is this really a good way to use pair programming? I don't think so. If an organization uses pair programming to ward off unproductive use of office hours, it is making a big mistake; there is a fundamental flaw in how it manages its projects.

Bringing Large Scale Change

Here is a good quote from Kent Beck on bringing large-scale change to an organization:

When I am in a situation where I want to change my environment, I try to
remember that I can only change myself. However, as soon as I change myself
I also change the way I relate to those around me, and that changes my
environment. That's how large-scale change starts, with one person choosing
to relate to the world differently and having that change spread.

Thursday, October 05, 2006

Hesitation behind TDD

I have seen many people find it difficult even to start TDD. The key thing behind TDD is not testing but design. Many programmers have no clue about OOAD or OOP, so how can anybody expect them to do justice to TDD?

It is critical for developers to be aware of OOAD before jumping into TDD.

Polarizing people

Today I got a chance to chat with Tim Snyder in the Dallas office. He said something very powerful:

  • Don't be afraid to polarize people.
  • Be brave enough to stand up and say what you believe is true.
  • People spend most of their time trying to be politically correct and not offend anyone, and this leads to losing out on values.

Lean thinking, the next wave: are we really ready for it?

I have been traveling for the last couple of days attending a conference and meeting customers, agile gurus and would-be agile managers. From this travel, what I see is that lean thinking is catching people's attention. Everybody talks about muda ("waste" in Japanese), andon systems, etc. Are we really ready to bring lean into the organization? If you ask me, no, we are not. As per the statistics, barely 10% of software organizations are practicing agile. I think if you have not mastered the art of agility, moving directly from traditional development to lean practices would not only be difficult but might well fail. Ask me why? Lean thinking provides a set of principles and values; practices need to be derived from these fundamental principles and values. A team practicing the waterfall model has no clue how to derive those practices, but an agile team has some idea of the practices that can reduce waste - for example TDD, refactoring, etc. This leads me to conclude that teams practicing agile methods have a greater chance of implementing lean practices than traditional development teams have.

Monday, October 02, 2006

Decrypting "tools" in the agile manifesto

As I prepare presentations for the upcoming Valtech Days conference, I am more and more convinced that tools are an integral part of software development and should be used thoughtfully.

The first value in the agile manifesto says:
Individuals and interactions over processes and tools

As per the value above, the agile gurus valued "individuals and interactions" more than processes and tools. They are not saying don't use tools or processes, but use them cautiously: tools and processes should not drive the project, they should complement it. Another piece of evidence comes from the book The Goal: the highly efficient robots in the factory did not increase output. Similarly, having more tools in an organization or a project makes no difference to output by itself, but using the tools effectively will in turn increase throughput.

Taking the same evidence to processes (whether XP, Scrum, etc.): they should not drive the project. They should be used to help individuals deliver the product to the customers.

When programmers get excited about agile methods, they become obsessed and start looking at these processes as a magic wand to solve all their project-related problems. This is totally untrue; most of the time, problems are people problems rather than process problems!

Friday, September 29, 2006

Blogging from Hyderabad airport

I am quite amazed to see Indian airports becoming hi-fi day by day. On my way to Dallas, while waiting for my next flight, I could see many people browsing in the lounge. I wanted to give it a try, and upon switching on my laptop I was able to pick up at least two signals, one of which helped me publish this post!

During my journey from Bangalore to Hyderabad, I was able to read some 20 pages of Goldratt's book The Goal. The book takes you into a kind of trance; you forget your surroundings!

    Monday, September 25, 2006

    Importance of coaching in distributed agile development

    When an organization starts a new practice or introduces a new process into software development team, there would be some slack time before it really sees some gain. This is true especially when an organization decides to move from traditional development to Agile methodology. The practices like test driven development, refactoring are not so easy to learn without a coach in place.

    In a distributed agile development scenario, one needs to be careful about the gap about the knowledge of agile methods between onsite customer and offshore customer. More the gap, more resistance resulting in reduced productivity. Generally this type of productivity problem happens, if both the onsite customer and the offshore team are new to agile methods. If one of them has really mastered agile methods then the gap can be reduced fast and there would be a productivity gain.

    For example: suppose the onsite team coordinator "thinks" TDD is good and starts pushing the offshore team to practice it. The offshore team, being new to TDD, will not only resist but will also write code merely to make the coordinator happy. This tramples the values behind TDD, or any other agile practice for that matter.

    Friday, September 22, 2006

    Agility and maturity

    I keep hearing from people, time and again, that agile methods are for mature teams. My immediate reaction to such conversations is that agile makes a team more mature. I have been applying agile methods for the last two years, and every team member (including myself) who came from a CMM (or, synonymously, traditional waterfall) background has changed a lot over time. When I say change, it is not only the ability to deliver quality software but a mindset to learn and adapt. The reason is the key practice I would call the "heart" of any agile method: the iteration. The rhythm and responsibility created within an iteration make the team more responsible and mature.

    I am becoming more and more confident, with great stories to prove it, that one can apply agile methods to any project and any "type" of people, and the result is always going to be something positive for the mindset of the team.

    Valtech Days Conference Dallas Oct 2-3

    I have prepared a couple of presentations to share at the Valtech Days conference, to be held on Oct 2-3. I will be presenting a paper on "Monitoring distributed agile projects - tools comparison". I will be leaving India on Sept 29th and plan to be back by mid-October.

    I have done a good deal of research in the last few weeks comparing the various Agile PM tools available in the market (both open source and commercial). I was amazed to see that there are nearly 20 tools available, in various stages of development.

    Valtech Cockpit is far ahead of the other tools when it comes to third-party integrations.

    Thursday, September 21, 2006

    What is Agile

    I know you will be surprised by the title !!! But if you read the article by David Anderson, you will realize that most of us are forgetting the goals behind bringing agility to software development.

    Just to summarize the point from the above article,
    A process is agile if it

    * enables companies to easily respond to change
    * delivers working code to market faster (than previously or with other methods)
    * delivers high quality working code
    * improves productivity
    * improves customer satisfaction
    * and provides an environment for a well motivated team with high job satisfaction

    Here is a good comparison between defined process and agile from Mishkin Berteig

    Agile Axioms: We are Creators, Reality is Perceived, Change is Natural
    Defined Process Axioms: Humans are Resources, Reality can be Legislated, Change is Bad

    Agile Disciplines: Empower the Team, Amplify Learning, Eliminate Waste
    Defined Process Disciplines: Enforce the Process, Avoid Experiments, Eliminate Variance

    Sunday, September 17, 2006

    Looks like I am able to blog again

    For the last couple of days, the entire blog thingy has been acting weird. All my posts seemed to fall into some kind of black hole, unable to see the outside world. Hopefully it will work now.

    Thursday, September 14, 2006

    Benefit of tools in Software development

    Today we were having a great debate on the features and flaws of several tools. One thing I noticed during the discussion was that tools are being developed to control people and process. I feel there is something terribly wrong with this concept. Even though I have been using several tools for many years, I had never noticed such a huge influence of tools on software development.

    Many a time, a management decision to use a particular set of tools in the organization forces the development team to constrain themselves to fit the tool. This constraint reduces creativity, productivity and enthusiasm in teams.

    Saturday, September 02, 2006

    Difference between Test First programming and TDD

    I have seen people using the words "Test First Programming" and "Test Driven Development" interchangeably, which is wrong. There is a subtle difference between the two.

    In test-first programming, you just write the test (mostly using the xUnit family) and then write your code. There are not many rules associated with it.

    Test-driven development is a special case of test-first programming with certain rules defined. For example: you write the test first, ensure that it fails, then write the code to make the test pass, and finally refactor. This cycle is repeated.
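    The cycle maps directly onto any xUnit-family framework. Here is a toy pass through it sketched in Python's unittest, with the steps marked as comments (word_count is a made-up example function, not from any particular project):

```python
import unittest

# Red: this test is written first, against a word_count() that does
# not exist yet, so the first run fails.
class WordCountTest(unittest.TestCase):
    def test_counts_words(self):
        self.assertEqual(word_count("red green refactor"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Green: write just enough code to make the tests pass.
def word_count(text):
    return len(text.split())

# Refactor: tidy the code while the tests stay green, then repeat
# the cycle with the next small behavior.
if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
    unittest.TextTestRunner().run(suite)
```

    The point of the rules is the rhythm: failing test, minimal code, clean-up, repeat.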

    Thursday, August 31, 2006

    Fake Email Server for TDD

    I found this nice tool called Dumbster, which is a fake SMTP server. It can be used for unit and system testing of applications that send email messages.
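    Dumbster itself is a Java library, but the idea is easy to sketch in any language: hand the code under test a fake transport that records messages instead of sending them. A minimal Python sketch follows; the names (FakeSmtpServer, notify_user) are hypothetical and not part of Dumbster's API:

```python
# Illustrative fake mail server, in the spirit of Dumbster.
class FakeSmtpServer:
    """Stands in for a real SMTP connection during tests."""
    def __init__(self):
        self.sent = []  # every "delivered" message lands here

    def sendmail(self, from_addr, to_addrs, body):
        self.sent.append((from_addr, to_addrs, body))

def notify_user(smtp, user_email):
    # Code under test: it sends mail through whatever transport it
    # is handed, e.g. a real smtplib.SMTP in production, the fake here.
    smtp.sendmail("noreply@example.com", [user_email], "Welcome!")

fake = FakeSmtpServer()
notify_user(fake, "alice@example.com")
assert fake.sent[0][1] == ["alice@example.com"]  # the mail was captured
```

    The test can then assert on the captured messages without any real mail server running.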

    Tuesday, August 29, 2006

    Agile 2.0 ?

    Recently a buzz about Agile 2.0 has been going around the agile community, apparently sponsored by Microsoft. Here are my thoughts about Agile 2.0:

    1. From the above article, it looks like MSF is concentrating on agile practices. This would be the right recipe for failure. The Agile manifesto never talked about practices, only about principles and values, which are global and constant. Practices need to be tailored to individual project environments based on those principles and values. A practice that works in one environment may not work in another.

    2. The name Agile 2.0 itself is a misnomer. In continuation of point 1 above, there was no such thing as Agile 1.0 before !! We had just agile principles and values.

    Saturday, August 26, 2006

    Law of diminishing marginal returns

    I found this nice article on Wikipedia about the law of diminishing returns.

    Quoting from wikipedia:

    The "law" of diminishing marginal returns says that after a possible initial increase in marginal returns, the MPP(Marginal Physical Product) of an input will fall as the total amount of the input rises (holding all other inputs constant)

    Here are some of the examples from software development environment where we can see the above law in practice:

    1. Adding new team members to a project "seems" to increase productivity at first, but the gain shrinks over time.
    2. Developers get excited about implementing a new technology, and after a while lose interest in it. For example: implementing web services.
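    As a toy illustration of the first example (an invented model, not real data), suppose each developer adds raw capacity but every pair of developers costs a little communication overhead; the marginal product of one more person then falls steadily:

```python
# Toy model (invented numbers, for illustration only): each developer
# adds raw capacity, but every pair of developers adds a small
# communication cost, so the marginal product of one more person falls.

def team_output(n, per_dev=10.0, link_cost=0.4):
    links = n * (n - 1) / 2  # pairwise communication paths
    return n * per_dev - link_cost * links

def marginal_product(n):
    # Extra output contributed by the n-th team member.
    return team_output(n) - team_output(n - 1)

for n in range(1, 8):
    print(f"dev #{n}: marginal product = {marginal_product(n):.1f}")
# The marginal product drops by link_cost with every added developer,
# exactly the "holding all other inputs constant" pattern above.
```

    Push the team size far enough in this model and the marginal product even turns negative, which is the extreme end of the law.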

    Monday, August 21, 2006

    Software development and mass production mentality

    In the last few decades the mass-manufacturing industries went on a spree to create "intelligent" machines. The intention was to solve complex industrial problems quickly and to reduce manufacturing defects. The only thing the "isolated" worker on the assembly line would do was pick up parts from one line and feed them to the next. There was no need to "talk" or "communicate" with a fellow worker. Can you see any similarity between the above example and "traditional" software development teams? Traditional software development is plagued by the same mass-production mentality. The requirements team feeds a set of documents to the analysts, who in turn analyze and feed the design team. The design team feeds high-level and low-level design docs to the development team, and so on. Each team has its own specialty and is happy doing the routine work !

    In a lean enterprise, employees talk to and take help from each other while solving complex problems. They make use of big visible "andon" systems to constantly monitor production status, defects and inventory information. This leads to better communication, which in turn leads to creative solutions.

    Sunday, August 13, 2006

    I have become Scrum Master HEY !!!

    Craig conducted the "Scrum Master Certification" course at the Taj Gateway on Residency Road. We were around 40 people, mostly from Valtech India. It was fun, and I also learnt a lot. Since I have been working with Craig for the last 6 months, I already knew many of the concepts and much more. I was really moved when Craig gifted me the Critical Chain book. It looks like he is asking me to start looking towards the next wave coming up in process improvement !!

    Finally I can call myself a "SCRUM MASTER" HEY....HEY.. I am just excited about this.

    Agile journal article

    My article on Benefits Of Tool Integration In Distributed Agile Development was published, and so far it has received good feedback from friends and well-wishers !

    Monday, August 07, 2006

    Technology experts rescuing process team

    For the last couple of months I have been working closely with Craig Larman, and I have had the chance to learn not only about agile methods but also a lot about OOA and OOP. I am also a big fan of people like Martin Fowler, Ron Jeffries and Jeff Sutherland, who have contributed a lot to agile methods. If you look carefully at their careers, they all come from a technology background rather than a pure process background. This leads me to believe that a person with both technology and process background can contribute more to the software development world than someone purely in one of them. To quote a real-world example: I was chatting with an SQA group, and they still think TDD is about testing rather than design !! And today I saw Craig reviewing a team member's code and smoking the code smells out; I don't think that could be handled by a "pure" QA person.

    Wednesday, July 26, 2006

    Motivation behind scrum meetings

    A Scrum meeting is not just "standing up", "meeting" and asking 3 questions. There is a definite motivation behind it. While reading an article, I found one of the motivations to be:

    When one team member shares an obstacle in the Scrum meeting, the entire team’s resources come together to bear on that problem. Because the team is working together toward a shared goal, every team member must cooperate to reach that goal. The entire team immediately owns any one individual’s problems. [Rising and Janoff, 2000]

    This is also in accordance with lean thinking, the Toyota Way !!

    Monday, July 17, 2006

    Agile Pitfall

    Even though agile methods are supposed to reduce project risk and improve the quality of deliverables, one thing I feel would complement agile is "measuring quality at each step". The agile principles and values never mention any particular "measurement", and the agile methods, which have tons of practices, have not given much importance to "measurable results".


    I am sure you use the word "quality" hundreds of times a day if you are in the software industry, even though there are various definitions of quality from various schools of thought. For example:
    Quality is conformance to requirements.
    and the other one here:
    Quality is the capacity to remain robust and conformant, while adapting to new requirements.

    I strongly agree with the second definition. The reason being, I can write my program to conform to requirements, but what if the design is poor? What if the code is crappy?

    Wednesday, July 12, 2006

    Toyota Way and the Stand up meeting

    I remember somebody telling me that in a Toyota manufacturing plant, if somebody on the assembly line discovers a fault in the product, they immediately raise an alarm. This signals everybody to stop what they are doing and approach the person who raised the alarm. Everyone on that shop floor then discusses and helps that person resolve the fault.

    I believe the same principle applies in our daily stand-up meeting. Most of the time the daily stand-up is considered a "status meeting". In fact it is not, especially if you look at the third question: "Are any obstacles stopping the developer from proceeding further?" I believe the reason behind this question is to make the other team members aware of the issue so that all of them can jump in together to help this person out. This is in line with the Toyota manufacturing analogy above.

    Until we understand the principles behind the practices, the practices remain mere practices !!

    Discussion with Craig Larman

    Nowadays I am working closely with Craig to get a deeper understanding of agile and Scrum. Here is a summary of the key things I understood:

    1. There is no single agile "method". Agile provides principles and values; XP, Scrum, etc. are particular agile methods.

    2. Most of the time people who go through scrum master certification end up becoming "Project managers having Scrum master certification"

    3. Scrum Masters are obstacle removers in a project rather than managers controlling it. I can clearly see in my surroundings how difficult this particular goal is to achieve, because most people come from a "Project Management" background !!

    Becoming a Scrum Master is not about "practicing some things"; it needs a total change of mindset, in the way we lead our life at work and the way we think.

    I have started working on implementing a "Virtual Lava Lamp", and it looks like a lot of configuration changes need to be made while integrating CruiseControl with the lava lamp.

    Good quote from Ron Jeffries

    The greatest mistake we make is living in constant fear that we will make one.
    -- John Maxwell

    Sunday, July 02, 2006

    Why some teams hesitate to maintain charts

    I was reading Jim Shore's blog and came across an article where he recommends giving it some thought if your team hesitates to update the "Big Visible Charts". I am not going to rephrase what he already said; here are his words:

    The first question to ask is, "Did the team really agree to this chart?" An informative workspace is for the team's benefit, so if team members aren't keeping a chart up to date, they may not think it's beneficial. It's possible that the team is passively-aggressively ignoring the chart rather than telling you that they don't want it.

    If people won't take responsibility, perhaps you're being too controlling.

    I find that when no one updates the charts, it's because I'm being too controlling about them. Dialing back the amount of involvement I have with the charts is often enough to get the team to step up to the plate. Sometimes that means putting up with not-quite perfect charts or sloppy handwriting. I like pretty charts, so that's hard for me to do... but it pays off.

    Saturday, July 01, 2006

    Manual Testing Vs Automated Testing

    I have seen posts on the web arguing ONLY in favor of automated testing. My view is that we need a balance between automated and manual testing. A couple of points:

    1. In order to have tests automated and working with CruiseControl, a certain maturity and experience is needed. I have tried implementing TDD with my team, and it took them a really long time to grasp it.

    2. Even when somebody tries to automate testing, to date I have not seen anybody implement tests for all the features. What I have observed is that developers start writing unit tests, and at some point, either to meet deadlines or for some other reason, they skip writing tests and get straight into coding.

    I always think it is a good idea to automate as many tests as possible while also continuing manual testing.

    Saturday, June 24, 2006

    EJDC 2006 Conference

    I have been planning to post this for some time.
    On the 25th I attended the EJDC conference as a speaker and shared my thoughts on agile best practices and metrics. The conference was supported by Kushal, a non-profit organization helping poor children in India.

    My presentation was mostly about the best agile practices around, and also some of the most popular agile metrics. I had my presentation reviewed by Craig Larman on Thursday and got very good suggestions on the usage of some of the "terminologies".

    Overall I feel the presentation went very well. The first half of the conference was devoted to technology-related topics (mostly SOA).

    Sunday, June 18, 2006

    Humane Interface Vs Minimal Interface

    I am very new to Ruby (in fact, I am still trying to understand what it is), and today I stumbled upon an article by Martin Fowler which was an eye-opener for me. I believe this could be a good start for me to see why there is so much hype about Ruby !! I have suddenly started seeing coding in Java/J2EE as x86 programming !!

    Wednesday, June 07, 2006

    Traditional "Done" Vs Agile "Done"

    I found this interesting post on the web, where the author, williamcaputo, describes how a traditional team versus an agile team "thinks" about "done".

    Traditionally 'done' means "We won't have to work on that anymore - and if we do, somebody must've screwed up." For Agilists 'done' means "We have accounted for what is known. If the situations changes, we expect to revisit this work."

    Thus the two approaches have different goals: One is to finish, the other is to keep pace with change.

    Monday, May 22, 2006

    How idiotic it is to track % completion of a project

    In recent days I have been working closely with Craig Larman to bring out a new domain model based on Scrum for Valtech Cockpit. During the discussion, Craig talked about how stakeholders ask for the % completion of a project, and how our PMs give them a percentage !! It all looks idiotic if you read the rest of this post.

    The core reason for introducing agile methods in software is to manage change, adapt accordingly and stay open to further change. Statistically, change requests increase as project size increases. If we are getting more changes, that obviously adds to the existing estimated effort, and completion becomes a moving target. How can anybody estimate what % of the project has been completed in this scenario? If I tell my stakeholder that we have finished 50% of the project, it is as good as not telling the truth !! (Unless I add the clause "as of today".)

    So, instead of using "% complete" as the measurement, one can use the "estimated effort" remaining to complete the project.
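    To make the point concrete, here is a toy Python sketch (all numbers invented) of how a naive percentage against the original scope misleads once change requests keep arriving, while remaining effort stays meaningful "as of today":

```python
# Toy sketch (all numbers invented): a naive "% complete" against the
# original scope misleads once change requests keep adding work,
# while remaining effort is always a meaningful figure.

original_scope = 100  # story points estimated at project start
done = 0
remaining = original_scope

sprints = [  # (points finished, points added by change requests)
    (20, 10),
    (20, 15),
    (20, 5),
]

for finished, added in sprints:
    done += finished
    remaining = remaining - finished + added
    naive_pct = 100 * done / original_scope  # ignores the new scope
    print(f"done={done}, remaining={remaining}, naive={naive_pct:.0f}%")

# The naive figure claims 60% "complete", yet 70 points still remain:
# more work than the 60 points finished so far.
```

    Reporting "70 points of estimated effort remaining" is honest; reporting "60% complete" is not.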

    Saturday, May 20, 2006

    What we miss when we talk about TDD

    Here is a classic definition of TDD from an agile discussion group:

    Classic TDD (and just TDD for that matter) is: write a little example of a (sub)set of behavior, write a little code so that the example works, refactor, repeat.

    What we miss all the time while defining TDD is the "Refactor" part !!!

    Thursday, May 18, 2006

    Valtech's global development can lead to world class software development

    I was having a chat with Thierry Cavrel over coffee, and we discussed a host of subjects. I wanted to share my perspective on how collaborative development can deliver quality software at Valtech. Each of Valtech's units distributed across the world has unique strengths: Valtech Germany is famous for its technology presence, France for its project management and delivery capability, India for its expertise in agile, the US for its training and consulting expertise, and so on and so forth. Each of these centers has key people who are making a difference. If these people are used collaboratively, Valtech can take on any critical project and deliver it successfully.

    So, Valtech is a classic example of a Global Software Development Village !!!

    Tuesday, May 16, 2006


    Here is another interesting quote I read online (author: William Bogaerts):

    A timeline is going to be difficult, even if you know _all_ the details
    beforehand. You know, there is an "anti-determinism" law in physics
    that says you cannot know both the position and speed of a particle exactly.
    There's a similar law in software engineering that says you cannot know
    the effort and the requirements together exactly.

    Making decisions late in the game

    I found the following interesting discussion in an agile discussion forum, written by a software developer, John. The discussion was about XP's practice "make decisions as late as possible".

    Reminds me of an older paper I recently read (but whose title eludes me).

    According to the premise of the paper, the boundaries that define
    components should be drawn based on the difficulty of design choices.

    An example might help to illustrate this. Suppose Joe and Harry are
    contracted to write a spell-checker.

    The customer wants multilingual support, so Joe and Harry agree to use
    unicode. This decision is trivial (customer requirement), so there is
    no need to abstract the concept of text within the application---they
    can use unicode characters and strings.

    Now let's say Joe suggests using a map (hashtable) to perform spell-
    checking. This seems like an obvious choice to Joe because it makes it
    easy to see if a word is spelled correctly. Harry, however, points out
    that a map doesn't provide any means of suggesting a corrected
    spelling, and recommends a sorted list instead, because it's easier to
    find a closest match.

    The fact that Joe and Harry didn't come up with the same solution to
    the problem indicates the design of the spell-checking algorithm is
    more difficult than the question of how to represent text.
    Consequently, it's probably a good idea to encapsulate the
    spell-checking algorithm, isolating its implementation details from
    the rest of the application. This way, if they head down the wrong
    path, they can correct their mistake with minimal impact to the rest
    of the application.

    I agree with this general philosophy. If a design choice isn't obvious
    to you, that's because you don't have enough information. If you don't
    have enough information, and make a decision anyway, you're more
    likely to get it wrong. So in those areas where you can't immediately
    (or unanimously) decide, it's best to introduce enough abstraction so
    that if you choose poorly, you can correct your mistake without having
    to rewrite the whole application.
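    Joe and Harry's situation can be sketched in code: the agreed-upon choice (plain strings for text) stays concrete, while the contested spell-checking strategy hides behind a small interface. This is an illustrative Python sketch; the class and function names are mine, not from the paper:

```python
# Illustrative sketch: the hard, contested choice (how to spell-check)
# is encapsulated behind an interface, so a wrong first guess is
# cheap to correct; the easy choice (text representation) stays plain.
from abc import ABC, abstractmethod
import bisect

class SpellChecker(ABC):
    @abstractmethod
    def is_correct(self, word: str) -> bool: ...

class SetSpellChecker(SpellChecker):
    """Joe's idea: hash-based lookup, fast membership, no suggestions."""
    def __init__(self, words):
        self.words = set(words)

    def is_correct(self, word):
        return word in self.words

class SortedListSpellChecker(SpellChecker):
    """Harry's idea: a sorted list, which also enables closest match."""
    def __init__(self, words):
        self.words = sorted(words)

    def is_correct(self, word):
        i = bisect.bisect_left(self.words, word)
        return i < len(self.words) and self.words[i] == word

def misspelled(checker: SpellChecker, text: str):
    # The rest of the application sees only the interface, so the
    # implementation can be swapped with minimal impact.
    return [w for w in text.split() if not checker.is_correct(w)]

dictionary = ["hello", "world"]
for impl in (SetSpellChecker(dictionary), SortedListSpellChecker(dictionary)):
    assert misspelled(impl, "hello wrold world") == ["wrold"]
```

    Either implementation can be chosen late, because nothing outside the interface depends on the choice.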

    Monday, May 08, 2006

    Looking at the elephant from various directions

    I found this interesting piece of information on the agile forum from a member (Philip, no last name). This is what I always preach when somebody asks me about the relationship between the methods. It is also in sync with my thought that agile is a spirituality rather than a religion !!

    All the agile processes use automated tests and frequent releases.

    All the books about Agile development represent consultants telling
    specific teams what to do, and making up jargon. They are all blind
    men groping the same elephant, but each has named their part of it
    something different. It's all the same elephant.

    Sunday, May 07, 2006

    Crossing the chasm

    I found this nice article on the Dr. Dobb's portal, which talks about companies based on their ability to take risks and move towards agility.

    Wednesday, May 03, 2006

    Pros and Cons of using Index cards over computer

    Agile proposes that people make use of index cards during requirement analysis and estimation. The idea is to help the development team and other stakeholders change things quickly and ask questions without "handing off" documents and waiting for things to happen.

    Here are some of my thoughts on the pros and cons of using 3x5 index cards:

    Pros:
    1. Very agile. You can modify the information with ease.
    2. Sticking them to the wall keeps the information in front of your eyes all the time.
    3. Easy to carry from one place to another.

    Cons:
    1. The chances of cards getting lost are high.
    2. You may need good, secure storage space, because over time the paper becomes old, and the chances of it getting torn and the information being lost are high.

    Some key advantages of using a spreadsheet (in parallel with index cards) are:
    1. The quality of the information stays intact over a long period of time.
    2. Sharing the information with an onsite team in a distributed setup is easy.
    3. A spreadsheet is ideal for keeping track of information that changes quite frequently.

    Sunday, April 16, 2006

    Insert more bugs by working late hours !!!

    Here is what Ron Jeffries says about the impact of working late hours, and how it can result in programmers inserting more defects:

    If the risk of an accident is essentially doubled at twelve hours compared to eight, what about the risk of inserting bugs? By the time a measurable accident occurs, the worker has probably been fumbling around, working erratically, for quite some time, and finally got so far off track that an accident was the result. My educated guess would be that the insertion rate for defects increases far more rapidly than the risk of industrial accidents. What the accident statistics probably tell us is that a tired programmer has double the chances of putting in a very serious bug, but has probably also put in a much higher percentage of smaller ones.

    A recent Circadian study addressed productivity in white collar workers. They found that even as little as a ten percent increase in work hours could result in a 2.4 percent drop in productivity, while sixty hour weeks could result in a 25 percent drop in productivity.

    Saturday, March 25, 2006

    Blogger for dummies

    Found this interesting link for dummies:

    It is really cool

    Thursday, March 23, 2006

    Change request in XP

    Dave Rooney wrote the following on handling change requests in XP:

    A change is simply another Story or Stories, so use new cards. They can be estimated by the developers and prioritized by the Customer. During Release Planning, we've filled the iterations based on the team's velocity. So, in order to schedule a new story into an existing iteration, a story or stories of equal size must be removed. There's no 'We'll squeeze it in', or 'We'll try to get it done'. When you do that, the developers will start to fall off the testing bandwagon in a hurry. Then, the system's quality goes out the window. The key point is that time is not variable, but scope is.
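    Rooney's rule amounts to a small invariant, sketched below in Python (the story names and point values are invented): planned points never exceed velocity, so new scope only enters when equal scope leaves:

```python
# Sketch of the rule above (story names and point values invented):
# the iteration's planned points may never exceed velocity, so a new
# story only enters when stories of equal total size come out.

velocity = 10
iteration = {"login": 5, "search": 3, "reports": 2}  # 10 points planned

def schedule_change(iteration, new_story, points, removed):
    """A change is just another story; scope is the only variable."""
    for name in removed:
        del iteration[name]  # scope comes out first...
    if sum(iteration.values()) + points > velocity:
        raise ValueError("time is fixed; remove more scope instead")
    iteration[new_story] = points  # ...so the new scope can go in

schedule_change(iteration, "export-csv", 3, ["search"])  # 3 out, 3 in
assert sum(iteration.values()) == velocity  # still exactly 10 points
```

    Trying to "squeeze in" a story without removing one simply raises an error, which is the whole point.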

    Wednesday, March 22, 2006

    Documentation in agile

    In one of the agile discussion forums the following question was asked, and the answer made sense to me.

    Question: I have tried to limit documentation and verbally brief my engineers on what I want doing - normally a whiteboard discussion. I ask them to work in pairs, keep documentation to a minimum and get everyone to understand the design by verbal communication.

    Answer: Limiting documentation isn't the world's greatest idea. It's better to let them figure out what they actually need and what's a waste of time.

    Monday, February 27, 2006

    User Stories and What they are in "Simple terms"

    A Very good explanation from the Mayford technology website :

    A User Story is a short description of the behavior of the system from the point of view of the Customer. They are in the format of about three sentences of text written in the Customer’s terminology without technical jargon. In simplistic terms, user stories are the response to, "Tell me the stories of what the system will do. Write down the name of the story and a paragraph or two."

    The conversation takes place during Iteration Planning, at which time the Programmers ask questions of the Customer in order to flesh out the details of the Story, and then brainstorm the Tasks required to implement it.

    Typically, the stories are written on 3" x 5" index cards, although an electronic copy may be used. Since index cards are somewhat small, they automatically limit the amount of writing that can be put on them (which is a good thing). This forces the Customer to focus on making the text clear and concise, while being as simple as possible.

    Story cards are a great conceptual tool. When they are laid out on a table, the Customer can visualize the entire system. Since the cards can be easily moved around, the Customer can 'see' the system from different perspectives. They also work well during development, providing a very concrete reference to how much has been completed, and how much work is left.

    Once the Stories have been estimated and the initial Project Velocity set, the Customer can make tradeoffs about what to do early and what to do late and how various proposed releases relate to concrete dates.

    There has to be at least one story for each major feature in the system, and it must always be written by the users. The programmers shouldn't write the stories, but they have conversations with the users that are then attached to the stories together with pointers to supporting documentation.

    Different than Requirements

    A typical misunderstanding with user stories is how they differ from traditional requirements specifications. The biggest difference is in the level of detail. User stories should only provide enough detail to make a reasonably low risk estimate of how long the story will take to implement. When the time comes to implement the story developers will go to the Customer and receive a detailed description of the requirements face to face.

    Another difference between stories and a requirements document is a focus on user needs. Details of specific technology, data base layout, and algorithms should be avoided. Stories should be focused on user needs and benefits as opposed to specifying GUI layouts.

    How is a user story different from use cases?

    One approach that has been suggested is to define the Stories during the Planning Process, and then create a Use Case for each Story at the beginning of an Iteration. This allows the deferral of specifying the details of a Story until they're really needed. Some would argue that Use Cases aren't really needed, although it may be a requirement for your organization. At any rate, Stories and Use Cases aren't mutually exclusive.

    Friday, February 24, 2006

    Lean Manufacturing and Software

    A nice article on lean software development:

    Lean Manufacturing and Software: Value and Waste

    A good starting point is to consider what value is, and what waste is.

    What are things with value?

    * Raw materials
    * An end product that someone is willing to buy
    * Tools used by the team
    * The skills acquired by the team

    What about intermediate products? They have some value in the sense that they give us the option to create the final product more cheaply than if we started from the beginning. They have no value if nobody is willing to buy the end result. And intermediate products clearly have a cost.

    And what is waste?
    If it doesn't add value, it's waste.
    --Henry Ford

    Waste – anything other than the minimum amount of equipment, materials, parts, space, and worker's time, which are absolutely essential to add value to the product.
    --Fujio Cho, chief engineer at Toyota

    Taiichi Ohno is known as the father of lean manufacturing, and he identified seven types of waste. We'll consider them and find a software example of each:

    * Overproduction: Features that nobody ever uses.
    * Waiting Time: QA waiting for a feature to be written so they can test it.
    * Transportation Waste: Time waiting for files to copy.
    * Processing Waste: Software written that doesn't contribute to the final product.
    * Inventory Waste: A feature for which development has started, but nobody is currently working on it.
    * Wasted Motion: Doing a refactoring manually when the development environment can do it automatically.
    * Waste from Product Defects
    Ohno talked about a diagram of a boat that represents the pipeline transforming raw materials into a finished product, traveling across a sea of inventory. Every waste in the system is like a hidden rock at the bottom of this sea, which may be a hazard to travel over, and which raises the level of the water and makes the boat travel further.

    To make these rocks visible..."

    Monday, January 30, 2006

    Cost of Change Model

    I found this nice article on the web! It explains the cost of change and how it can be controlled through an agile process.


    Cost of Change, Part 1: Dispelling Myths

    Ultimately, the most important reason to use a software development methodology is to make as much money as possible. Even though we don't always think of it in those terms, every software development methodology is based on an economic model. The dominating economic model since the seventies has been the exponential Cost of Change model, usually attributed to Barry Boehm. Until the mid nineties, this model was rarely challenged. Over the past ten years there has been a mounting body of evidence that contradicts the exponential model. This article explores the exponential Cost of Change model, its main competitor, and the importance of choosing the right model as the basis for software development.
    The Exponential Cost of Change Model

    The Exponential Cost of Change model, sometimes known as the Boehm Cost of Change Model, states that the cost of changing software increases exponentially with time. This is often expressed as "if it costs €1 to make a change in the definition phase, then it will cost €10 to make the change during design, €100 during coding, and €1000 after the software has been delivered."

    The exponential Cost of Change curve, used as the economic basis for most current software development methodologies.

    This model has had a profound impact on software methodology: if change is caused by defects in the software, and if the cost of fixing a defect rises this dramatically, it makes sense to prevent defects from occurring at all. This is why rigorous methodologies recommend that a lot of effort is put into gathering requirements and doing as much design as possible before coding starts.

    This model meshes well with traditional engineering knowledge in other disciplines. For example, if you are an engineer responsible for building a bridge, you don't want to be told to move the bridge just a little bit to the left, when three quarters of it has already been built. Manufacturing industry has used a similar exponential model to estimate the cost of making changes since the early 20th century.

    This model seems so self evident that for a long time very few people questioned it. When software projects went wrong it was a natural assumption that not enough effort had been spent gathering requirements and designing before coding started.
    The Challenge

    There was one little problem: despite ever increasing efforts to gather requirements and make design up front, software projects kept failing with alarming frequency. The theoretical model and reality just did not match.

    In the year 2000 a well-known industry figure wrote a book that challenged the exponential cost of change model. The book was Extreme Programming Explained, and the author was Kent Beck.

    Beck claimed that the exponential curve was no longer valid. He claimed that with a combination of technology and programming practices the shape of the curve could be changed. The shape change, he claimed, could be dramatic. Under the right conditions, the Cost of Change curve could even be asymptotic, i.e. it flattens out so that the cost of change hardly rises at all throughout most of a development project.

    Beck's asymptotic Cost of Change curve.

    Beck's claims went contrary to established ideas. It was like being told that the Earth revolves around the Sun, when you have always believed it is the other way around. It was hard to ignore Beck, or dismiss him as a loon. He was well established in the field, a pioneer in the field of Design Patterns, co-creator of Class Responsibility Collaboration (CRC) cards, a well-established software design tool, and he had the support of industry luminaries like Erich Gamma, Ward Cunningham, and many others.

    Beck's book paved the way for a new breed of software development methodologies, the Agile methodologies. The Agile movement had existed for some time when Extreme Programming Explained was published, but the book won the agilists attention outside software conferences and elite software companies. A flood of other books, describing Extreme Programming, other Agile methods, and the basic principles of Agile software development, followed. Project teams and companies began to openly declare that they were using Agile practices. Today, Agile methodologies have a well-established hold among early adopters, and are well on their way into mainstream software practice.

    There are many Agile methodologies. One of the things that they all have in common is the belief that the Cost of Change curve can be altered through technology and sound development practices.
    Competing Belief Systems

    So far, I have described the two competing economic models as belief systems, without much regard to their conformance to reality. Of course, the Agile methodologies, and their underlying economic model, were born out of dissatisfaction with the methods based on the exponential Cost of Change model, but this does not prove that the economic model itself is at fault.

    To better be able to judge the relative merits of the two systems, it is helpful to look into their history. Under what circumstances did they emerge? Who created them? Does either of them have a sound scientific basis?
    History of the Exponential Cost of Change Model

    In 2002, in the book Agile Software Development Ecosystems, author Jim Highsmith made an astounding claim: the Boehm Cost of Change model does not exist, and it never has. A gigantic misunderstanding has shaped software methodologies for three decades.

    Would the software industry bank all their money on an economic model that is just a figment of the imagination, for decades? Can't happen! Outrageous! Nevertheless, Highsmith got support from the most unlikely of sources, Barry Boehm, supposedly the originator of the exponential Cost of Change model.

    In 1976 Boehm published an article in IEEE Transactions on Computers that presented data on the Cost to Change rate gathered from TRW, and corroborative data from IBM, GTE, and Bell Labs Safeguard program. This data is also in Boehm's book Software Engineering Economics, from 1981.

    Highsmith went straight to the source and contacted Barry Boehm to ask him about the data, and Boehm's interpretation. Boehm's data indicated a 100:1 cost growth on large projects, if they were badly run. It was also possible to identify and classify about 20% high-risk defects that caused about 80% of the cost of rework. Using this information, it was possible to change the shape of the Cost of Change curve. In an email to Highsmith, Barry Boehm stated that "On our best early '90s projects, this led to fairly flat cost-to-fix curves." In other words, the best of the large projects using Boehm's method in the early '90s had the flat curve Beck claimed to be able to achieve with more consistency for small projects in 2002.

    In reference to a later study, Boehm noted that large projects with risk management, where attention was also paid to architectural issues, usually had a cost growth of between 20:1 and 40:1.

    Boehm's data indicates that defect prevention is certainly worthwhile. On the other hand, Boehm never made the assumption, or drew the conclusion, that most changes are due to defects. The rigorous methodologies, on the other hand, tend to assume that nearly all changes are due to defects, either in the software, the design, or the requirements process.

    In 1987 Boehm introduced a spiral, risk-driven, change-tolerant management model that is designed to mitigate costs over the entire life-cycle of a project. In other words, the Boehm Spiral Model, as it is called, seeks to prevent unnecessary defects, but also strives to accommodate changes due to other reasons, such as changing requirements.

    Amazingly, Highsmith is right. Boehm's data indicates a cost increase of 20-40 times for a typical large, well-run project, with an almost flat curve for the best ones. This is far from the 100-1000 times increase most people believe Boehm's data indicates. On small to medium size projects (projects with 50 people or less), which are by far the majority of all software projects today, the curve can be expected to be even flatter than on a large project, which is exactly what Kent Beck and other agilists claim.

    Boehm's work also shows that it is possible to change the shape of the curve through good management practice. This is conceptually different from the rigorous methodology view that the curve is fixed and that management practices must adapt to it.

    How can the popular conception of Boehm's Cost of Change model be so different from the reality? Probably because most managers are familiar with the 1:10:100:1000 rule of thumb from engineering and the manufacturing industry, and they just assume the same is true for software development. Old ideas die hard, it is as simple as that. It was a long way from Galileo proving that the Earth orbits the Sun, to people in general believing it.
    History of the Flat Cost of Change Model

    Beck's flat Cost of Change curve didn't just spring into existence one day. The idea that costs can be mitigated during the course of a development project has roots that go back quite a while.

    Central to object oriented programming is the idea that if different parts of a program can be built so that they are independent of each other, then the parts can be changed independently of each other when the need arises. This is a way of mitigating the Cost of Change late in a project. Adding or changing an independent part costs no more the week before delivery than it does the first week of planning. For example, adding a new printer driver to an operating system can be done late in the development cycle without incurring extra costs. (Given of course that the resources to develop the driver are available.)
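    The printer-driver example can be sketched as a small plugin registry: the rest of the system depends only on an abstract interface, so a new driver can be added the week before delivery without touching existing code. All names below are hypothetical; this is a minimal illustration rather than a real driver model.

```python
from abc import ABC, abstractmethod

class PrinterDriver(ABC):
    """Abstract interface the rest of the system depends on."""
    @abstractmethod
    def print_page(self, text: str) -> str: ...

class PostscriptDriver(PrinterDriver):
    """Existing driver, written early in the project."""
    def print_page(self, text: str) -> str:
        return f"%!PS {text}"

class InkjetDriver(PrinterDriver):
    """New driver added late in the cycle; nothing else changes."""
    def print_page(self, text: str) -> str:
        return f"INKJET {text}"

DRIVERS = {"postscript": PostscriptDriver, "inkjet": InkjetDriver}

def render(driver_name: str, text: str) -> str:
    # The caller knows only the interface, never the concrete classes.
    return DRIVERS[driver_name]().print_page(text)
```

Adding a third driver means writing one new class and one registry entry; the cost is the same in week one and in the week before delivery.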

    The first object oriented language was Simula 67, designed in the '60s. In the early '80s the language Smalltalk was made widely available. Smalltalk programmers are credited with either inventing or promoting many important programming concepts. In 1987 Kent Beck and Ward Cunningham used Smalltalk to experiment with design patterns, a concept they had borrowed from building architect Christopher Alexander. Today, design patterns are ubiquitous in object oriented software design.

    Beck, as a design pattern pioneer, was of course well aware of the economic implications of using object oriented programming languages and designing loosely coupled software. So were many other people who were working on their own Agile methodologies at approximately the same time.

    The agilists also had inspiration from sources outside the software industry. In the late '40s and early '50s the Japanese worked hard to recover from WWII. At the time, Toyota wrestled with the problem of how to compete with American car companies that had vast resources and highly optimized mass production systems. They did figure it out, and created a system that would become world famous, the Toyota Production System. The Toyota Production System inspired Lean Manufacturing, and the practices and ideas of Lean Manufacturing found their way into the world of software development in the form of Agile methodologies.

    It took twenty years for Lean Manufacturing ideas to gain a foothold in manufacturing industries outside of Japan. Even today those ideas are often very poorly understood in the west. It should not be a great surprise that it wasn't until the ´90s that those same ideas made the jump to the software industry, or that they are still poorly understood today.

    Lean Manufacturing, and Lean Software Development, are complex methodologies, and their parts cannot be readily understood in isolation, no more than you can understand how a complex program works by studying a single class. Nevertheless, I have picked a few ideas that have a very direct bearing on the Cost of Change, and that are worth describing:

    * Options Thinking
    * Last Responsible Moment
    * Cost of Delay
    * Refactoring
    * Testing

    The same ideas recur in various Agile methodologies, sometimes under different names, and in slightly different contexts.
    Options Thinking

    Options Thinking is a technique for delaying irreversible decisions until uncertainty is reduced. Options thinking is common in the financial and commodities market, where an option is the right, but not an obligation, to buy stock in the future. Options thinking means keeping as many alternatives as possible open as long as possible. Decoupling software components keeps options open. So does training developers in many different skills, so that they can solve many different types of problems.

    Options Thinking reduces complexity by delaying decisions until as much information as possible is known. Rigorous methodologies do exactly the opposite; they try to reduce complexity by limiting the number of options as early as possible. Experience from the manufacturing industries indicates that when these two systems of decision making compete in complex and dynamic situations, Options Thinking wins out in economic terms. The risk is lower, wasteful effort is reduced, and the decisions are better.
    The Last Responsible Moment

    The Last Responsible Moment is the last moment when it is still possible to choose between options. Delaying a decision beyond that is procrastinating. Acting at the last responsible moment is akin to a karateka (karate practitioner) punching just at the moment when an opponent prepares to attack. Until the moment of the punch, the karateka maneuvers to keep as many options open as possible. If the karateka misses the crucial moment, he will have lost the initiative, and will be forced to respond to events instead of initiating them. Delaying commitment to just the right moment is a game of tactics, and agilists have an array of techniques at their disposal. Some rigorous methodologies, like RUP, use many of the same tactics as part of the design process. The difference is that the techniques are thought of as software design techniques only, and not as management tools. (Some of these techniques will be described in a future article.)

    It is worth noting that some decisions will have to be made early on. This is one of the points where Boehm and the agilists are in agreement. For example, automated unit testing works best if it is used from the start. Refactoring must be a part of the development method from day one for maximum effect, and a build machine should be ready for use when coding starts. Committing to a particular database implementation, to the exclusion of others, on the other hand, is a decision that can often be deferred forever, by using a suitable Object Relational Mapping (ORM) framework.
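    As a sketch of how the database commitment can be deferred, the domain code below depends only on a repository interface; a cheap in-memory implementation serves until the Last Responsible Moment, when an ORM-backed one with the same interface can be swapped in. The class and method names are illustrative, not from any particular framework.

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """The domain code depends only on this interface, never on a database."""
    @abstractmethod
    def save(self, order_id: int, total: float) -> None: ...
    @abstractmethod
    def get_total(self, order_id: int) -> float: ...

class InMemoryOrderRepository(OrderRepository):
    """Stand-in implementation; an ORM-backed class with the same
    interface can replace it later without touching the domain code."""
    def __init__(self) -> None:
        self._orders: dict[int, float] = {}

    def save(self, order_id: int, total: float) -> None:
        self._orders[order_id] = total

    def get_total(self, order_id: int) -> float:
        return self._orders[order_id]
```

The option to choose a database stays open for as long as every caller is written against `OrderRepository` rather than a concrete class.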

    Options Thinking is an important factor in flattening the Cost of Change curve, but it must be combined with the skill to identify and act at the Last Responsible Moment, or defeat will be snatched from the jaws of victory.
    Cost of Delay

    Most rigorous methods assume an even tradeoff between development time and development cost. Halve the development time, and you halve the development cost. Double the development time, and you double the development cost. This is a comfortably simple model. Unfortunately, in most cases it is wrong.

    When starting a software project, it is possible to create a simple economic model, a Profit & Loss Statement, that shows the expected economic benefit of the project. The Cost of Delay can then be calculated by adding a time delay to the model. In most (but not all) cases, the delay will have a much greater impact on profitability than just the cost of development. Time to market is usually the crucial factor. This is true even for software applications that are for internal use only, such as an economy system, intranet web site, or document management system. The earlier the system can be used, the earlier the company that uses it can start reaping the economic benefits, even if the system has only partial functionality at first.
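    A back-of-the-envelope version of such a model: every month of delay forfeits the benefit the system would have produced, on top of the extra team cost for that month. The figures below are purely illustrative.

```python
def cost_of_delay(monthly_benefit: float, monthly_team_cost: float,
                  delay_months: float) -> float:
    # Each month of delay loses the benefit the system would have produced
    # and adds another month of team cost.
    return (monthly_benefit + monthly_team_cost) * delay_months

# E.g. a system worth 100,000/month, built by a 40,000/month team:
# a three-month delay costs 420,000, far more than three months of salaries.
print(cost_of_delay(100_000, 40_000, 3))  # 420000
```

Even this crude model makes the asymmetry visible: the lost benefit usually dwarfs the development cost of the delay itself.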

    The upside is that if time to market can be reduced, for example by making an early release with reduced functionality, this will often have a great positive effect. It is not uncommon for a development project to pay for itself during the development time, if it can make an early delivery with partial functionality, followed by frequent partial releases. This is the reason for the Extreme Programming mantra "release early, release often".

    The Cost of Delay has a direct impact on management decisions. For example, if the developers want a tool that will reduce development time, the tool may be worth buying, even if it costs more than the direct cost of the development time saved. Conversely (and perhaps more commonly), forcing developers to use tools that are poorly suited to a particular job in order to impose a corporate "standard" is a far more expensive undertaking than management would ever imagine.

    Understanding the economic effects of delays and time gains is an important factor when minimizing the total cost of a project. There is also an impact on the Cost of Change. For example, it is possible to model the cost of feedback delays imposed by different testing strategies. Which is most economical: having a test phase at the end of the project, testing at the end of an iteration, or using automated tests that run every few minutes as part of the development cycle? With an understanding of the Cost of Delay, it is possible to construct an economic model and come up with the correct answer.


    Refactoring

    Refactoring is a technique for improving the quality of code without changing the functionality. Many managers shudder when developers tell them they want to refactor code. Why spend effort "beautifying" code that already works? It must be a waste of time and money. Wrong! The managers should rejoice instead, because the desire to refactor code shows they have a team that understands the detrimental effects of letting bad code remain in a system. And yes, the detrimental effects are "detrimental economic effects".

    In most projects code quality deteriorates rather rapidly over time, even if the code works. What does this mean? It means:


    * The code is tightly coupled, so changing one part of the system causes a cascade of changes in other parts of the system.
    * The code is unnecessarily complex.
    * There may be hidden bugs that will strike when least expected, for example after the system has gone into production.
    * Every time a developer reads the code, he must spend extra time and effort understanding it. This can cause serious time loss, because even though a piece of code is written only once, it is read many times during a development project.
    * Important cross-cutting functionality, like error handling, logging, and security management, may be poorly implemented, or not implemented at all. At best, this slows down both implementation of new features and changes to existing ones. At worst, it may stop a system from going into production.
    * Performance is poor, often due to unnecessary database accesses, poorly implemented search or sorting routines, the wrong choice of technology (for example parsing XML with SAX when DOM is better, or parsing XML with DOM when SAX is better), and so on. Poor performance in a system means reduced profitability, sometimes to the point where the system ends up costing more money to use than it saves.
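    The SAX-versus-DOM point can be made concrete with Python's standard library: a streaming (SAX) parser makes one pass in constant memory, which wins for huge documents, while a DOM parser loads the whole tree, which only pays off when nodes must be revisited. A small sketch, with a made-up two-order document:

```python
import xml.sax
import xml.dom.minidom

XML = "<orders><order total='5'/><order total='7'/></orders>"

# Streaming (SAX): one pass, constant memory; right for huge files.
class TotalHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.total = 0
    def startElement(self, name, attrs):
        if name == "order":
            self.total += int(attrs["total"])

handler = TotalHandler()
xml.sax.parseString(XML.encode("utf-8"), handler)

# DOM: whole tree in memory; right when nodes must be revisited at random.
doc = xml.dom.minidom.parseString(XML)
dom_total = sum(int(o.getAttribute("total"))
                for o in doc.getElementsByTagName("order"))

print(handler.total, dom_total)  # 12 12
```

Both compute the same answer; the economic difference only appears when the document is large (favoring SAX) or heavily navigated (favoring DOM).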

    Poor code quality may not be noticeable to a manager at first, but eventually it makes a project leak money like a sieve leaks water. Refactoring is a primary method of plugging the leaks. It is an important tool for bending the Cost of Change curve from the exponential disaster-in-waiting shape to the flat curve of a well managed project.
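    A hypothetical before-and-after, to show that refactoring preserves behaviour while reducing the cost of change: the discount rule is extracted from the loop into a table, so the totals are identical but adding a new item type no longer means editing the loop.

```python
# Before: the discount policy is buried inside the loop.
def invoice_total_before(items):
    total = 0
    for item in items:
        if item["type"] == "book":
            total += item["price"] * item["qty"] * 0.9   # 10% book discount
        elif item["type"] == "other":
            total += item["price"] * item["qty"]
    return total

# After: the policy is a table; behaviour is unchanged.
DISCOUNTS = {"book": 0.9}

def invoice_total(items):
    return sum(item["price"] * item["qty"] * DISCOUNTS.get(item["type"], 1.0)
               for item in items)
```

A new discounted item type is now one dictionary entry instead of a new branch, which is exactly the kind of change that keeps the Cost of Change curve flat.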

    Reviewing Software Programmers

    A nice article on reviewing software programmers.

    Hiring Software Programmers

    A nice link about hiring software programmers.

    Friday, January 27, 2006

    Agile Process and Impact on hiring

    Recently, while driving to a customer's place, I had a discussion with one of my colleagues about the agile process and its impact on hiring. After the discussion it became clear that there is indeed a relation between hiring good people and the software process.

    For example, it becomes very important to hire good people with hands-on experience when staffing an agile project. The reason is that in agile projects the developers are expected to start delivering small chunks of the application in 2-3 week iterations, so a person joining the project needs quick learning ability and hands-on experience.

    But in a traditional development method, where a product release happens once every six months or so, a new hire has enough breathing room to get acquainted with the process and technology.