
Monday, January 30, 2006

Cost of Change Model

I found this nice article on the henric.vrensk.com website. It explains the cost of change and how it can be controlled through an agile process.



Cost of Change, Part 1: Dispelling Myths

Ultimately, the most important reason to use a software development methodology is to make as much money as possible. Even though we don't always think of it in those terms, every software development methodology is based on an economic model. The dominant economic model since the seventies has been the exponential Cost of Change model, usually attributed to Barry Boehm. Until the mid-nineties, this model was rarely challenged. Over the past ten years there has been a mounting body of evidence that contradicts the exponential model. This article explores the exponential Cost of Change model, its main competitor, and the importance of choosing the right model as a basis for software development.
The Exponential Cost of Change Model

The Exponential Cost of Change model, sometimes known as the Boehm Cost of Change Model, states that the cost of changing software increases exponentially with time. This is often expressed as "if it costs €1 to make a change in the definition phase, then it will cost €10 to make the change during design, €100 during coding, and €1000 after the software has been delivered."

[Figure: The exponential Cost of Change curve, used as the economic basis for most current software development methodologies.]

This model has had a profound impact on software methodology: if change is caused by defects in the software, and if the cost of fixing a defect rises this dramatically, it makes sense to prevent defects from occurring at all. This is why rigorous methodologies recommend that a great deal of effort be put into gathering requirements and doing as much design as possible before coding starts.

This model meshes well with traditional engineering knowledge in other disciplines. For example, if you are an engineer responsible for building a bridge, you don't want to be told to move the bridge just a little bit to the left when three quarters of it has already been built. The manufacturing industry has used a similar exponential model to estimate the cost of making changes since the early 20th century.

This model seems so self-evident that for a long time very few people questioned it. When software projects went wrong, it was a natural assumption that not enough effort had been spent gathering requirements and designing before coding started.
The Challenge

There was one little problem: despite ever-increasing efforts to gather requirements and do design up front, software projects kept failing with alarming frequency. The theoretical model and reality just did not match.

In the year 2000 a well-known industry figure wrote a book that challenged the exponential Cost of Change model. The book was Extreme Programming Explained, and the author was Kent Beck.

Beck claimed that the exponential curve was no longer valid. He claimed that with a combination of technology and programming practices, the shape of the curve could be changed. The change, he claimed, could be dramatic. Under the right conditions, the Cost of Change curve could even be asymptotic, i.e. it flattens out so that the cost of change hardly rises at all throughout most of a development project.

[Figure: Beck's asymptotic Cost of Change curve.]

Beck's claims went contrary to established ideas. It was like being told that the Earth revolves around the Sun, when you have always believed it is the other way around. Yet it was hard to ignore Beck, or dismiss him as a loon. He was well established in the field: a pioneer of Design Patterns, co-creator of Class Responsibility Collaboration (CRC) cards, a widely used software design tool, and he had the support of industry luminaries like Erich Gamma, Ward Cunningham, and many others.

Beck's book paved the way for a new breed of software development methodologies, the Agile methodologies. The Agile movement had existed for some time when Extreme Programming Explained was published, but the book brought the agilists attention outside software conferences and elite software companies. A flood of other books, describing Extreme Programming, other Agile methods, and the basic principles of Agile software development, followed. Project teams and companies began to openly declare that they were using Agile practices. Today, Agile methodologies have a well-established hold among early adopters and are well on their way into mainstream software practice.

There are many Agile methodologies. One of the things they all have in common is the belief that the Cost of Change curve can be altered through technology and sound development practices.
Competing Belief Systems

So far, I have described the two competing economic models as belief systems, without much regard to their conformance to reality. Of course, the Agile methodologies, and their underlying economic model, were born out of dissatisfaction with the methods based on the exponential Cost of Change model, but this does not prove that the economic model itself is at fault.

To better judge the relative merits of the two systems, it is helpful to look into their history. Under what circumstances did they emerge? Who created them? Does either of them have a sound scientific basis?
History of the Exponential Cost of Change Model

In 2002, in the book Agile Software Development Ecosystems, author Jim Highsmith made an astounding claim: the Boehm Cost of Change model does not exist, and it never has. A gigantic misunderstanding has shaped software methodologies for three decades.

Would the software industry bank all its money, for decades, on an economic model that is just a figment of the imagination? Can't happen! Outrageous! Nevertheless, Highsmith got support from the most unlikely of sources: Barry Boehm, supposedly the originator of the exponential Cost of Change model.

In 1976 Boehm published an article in IEEE Transactions on Computers that presented data on the cost-to-change rate gathered from TRW, along with corroborative data from IBM, GTE, and the Bell Labs Safeguard program. This data also appears in Boehm's book Software Engineering Economics, from 1981.

Highsmith went straight to the source and contacted Barry Boehm to ask him about the data and his interpretation of it. Boehm's data indicated a 100:1 cost growth on large projects, if they were badly run. It was also possible to identify and classify about 20% high-risk defects that caused about 80% of the cost of rework. Using this information, it was possible to change the shape of the Cost of Change curve. In an email to Highsmith, Barry Boehm stated that "on our best early '90s projects, this led to fairly flat cost-to-fix curves." In other words, the best of the large projects using Boehm's method in the early '90s had the flat curve Beck claimed to be able to achieve with more consistency for small projects in 2002.

In reference to a later study, Boehm noted that large projects with risk management, where attention was also paid to architectural issues, usually had a cost growth of between 20:1 and 40:1.

Boehm's data indicates that defect prevention is certainly worthwhile. However, Boehm never made the assumption, or drew the conclusion, that most changes are due to defects. The rigorous methodologies, on the other hand, tend to assume that nearly all changes are due to defects, whether in the software, the design, or the requirements process.

In 1987 Boehm introduced a spiral, risk-driven, change-tolerant management model that is designed to mitigate costs over the entire life-cycle of a project. In other words, the Boehm Spiral Model, as it is called, seeks to prevent unnecessary defects, but also strives to accommodate changes due to other reasons, such as changing requirements.

Amazingly, Highsmith is right. Boehm's data indicates a cost increase of 20-40 times for a typical large, well-run project, with an almost flat curve for the best ones. This is far from the 100-1000 times increase most people believe Boehm's data indicates. On small to medium-sized projects (projects with 50 people or fewer), which are by far the majority of all software projects today, the curve can be expected to be even flatter than on a large project, which is exactly what Kent Beck and other agilists claim.

Boehm's work also shows that it is possible to change the shape of the curve through good management practice. This is conceptually different from the rigorous methodology view that the curve is fixed and that management practices must adapt to it.

How can the popular conception of Boehm's Cost of Change model be so different from the reality? Probably because most managers are familiar with the 1:10:100:1000 rule of thumb from engineering and the manufacturing industry, and they simply assume the same is true for software development. Old ideas die hard, it is as simple as that. It was a long way from Galileo showing that the Earth orbits the Sun to people in general believing it.
History of the Flat Cost of Change Model

Beck's flat Cost of Change curve didn't just spring into existence one day. The idea that costs can be mitigated during the course of a development project has roots that go back quite a while.

Central to object-oriented programming is the idea that if different parts of a program can be built so that they are independent of each other, then the parts can be changed independently of each other when the need arises. This is a way of mitigating the Cost of Change late in a project. Adding or changing an independent part costs no more the week before delivery than it does in the first week of planning. For example, adding a new printer driver to an operating system can be done late in the development cycle without incurring extra costs. (Provided, of course, that the resources to develop the driver are available.)
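To make the decoupling idea concrete, here is a minimal, hypothetical Java sketch (not from the original article; the PrinterDriver and PrinterRegistry names are invented): the rest of the system depends only on a small driver interface, so a driver added late in the project is a purely additive change.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration: the interface the rest of the system depends on.
interface PrinterDriver {
    String name();
    void print(String document);
}

// A driver written the week before delivery implements the same small
// interface as one written in the first week of the project.
class LaserDriver implements PrinterDriver {
    public String name() { return "laser"; }
    public void print(String document) {
        // send the document to the printer hardware (omitted in this sketch)
    }
}

// The rest of the system registers and looks up drivers by name and never
// needs to change when a new driver is added.
class PrinterRegistry {
    private final Map<String, PrinterDriver> drivers = new HashMap<String, PrinterDriver>();

    void register(PrinterDriver driver) {
        drivers.put(driver.name(), driver);
    }

    PrinterDriver lookup(String name) {
        return drivers.get(name);
    }
}
```

The cost of adding LaserDriver is roughly the same whether it happens in week one or the week before delivery, which is the flattening effect described above.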

The first object-oriented language was Simula 67, designed in the '60s. In the early '80s the language Smalltalk was made widely available. Smalltalk programmers are credited with either inventing or promoting many important programming concepts. In 1987 Kent Beck and Ward Cunningham used Smalltalk to experiment with design patterns, a concept they had borrowed from the building architect Christopher Alexander. Today, design patterns are ubiquitous in object-oriented software design.

Beck, as a design pattern pioneer, was of course well aware of the economic implications of using object-oriented programming languages and designing loosely coupled software. So were many other people who were working on their own Agile methodologies at approximately the same time.

The agilists also drew inspiration from sources outside the software industry. In the late '40s and early '50s the Japanese worked hard to recover from WWII. At the time, Toyota wrestled with the problem of how to compete with American car companies that had vast resources and highly optimized mass production systems. They did figure it out, and created a system that would become world famous: the Toyota Production System. The Toyota Production System inspired Lean Manufacturing, and the practices and ideas of Lean Manufacturing found their way into the world of software development in the form of Agile methodologies.

It took twenty years for Lean Manufacturing ideas to gain a foothold in manufacturing industries outside of Japan. Even today those ideas are often poorly understood in the West. It should not be a great surprise that it wasn't until the '90s that those same ideas made the jump to the software industry, or that they are still poorly understood today.

Lean Manufacturing and Lean Software Development are complex methodologies, and their parts cannot be readily understood in isolation, any more than you can understand how a complex program works by studying a single class. Nevertheless, I have picked a few ideas that have a very direct bearing on the Cost of Change and that are worth describing:

* Options Thinking
* Last Responsible Moment
* Cost of Delay
* Refactoring
* Testing

The same ideas recur in various Agile methodologies, sometimes under different names, and in slightly different contexts.
Options Thinking

Options Thinking is a technique for delaying irreversible decisions until uncertainty is reduced. Options thinking is common in the financial and commodities markets, where an option is the right, but not the obligation, to buy a stock in the future. Options thinking means keeping as many alternatives as possible open for as long as possible. Decoupling software components keeps options open. So does training developers in many different skills, so that they can solve many different types of problems.

Options Thinking reduces complexity by delaying decisions until as much information as possible is known. Rigorous methodologies do exactly the opposite: they try to reduce complexity by limiting the number of options as early as possible. Experience from the manufacturing industries indicates that when these two systems of decision making compete in complex and dynamic situations, Options Thinking wins out in economic terms. The risk is lower, wasteful effort is reduced, and the decisions are better.
The Last Responsible Moment

The Last Responsible Moment is the last moment when it is still possible to choose between options. Delaying a decision beyond that is procrastinating. Acting at the last responsible moment is akin to a karateka (karate practitioner) punching just at the moment when an opponent prepares to attack. Until the moment of the punch, the karateka manoeuvres to keep as many options open as possible. If the karateka misses the crucial moment, he will have lost the initiative, and will be forced to respond to events instead of initiating them. Delaying commitment to just the right moment is a game of tactics, and agilists have an array of techniques at their disposal. Some rigorous methodologies, like RUP, use many of the same tactics as part of the design process. The difference is that the techniques are thought of as software design techniques only, and not as management tools. (Some of these techniques will be described in a future article.)

It is worth noting that some decisions will have to be made early on. This is one of the points where Boehm and the agilists are in agreement. For example, automated unit testing works best if it is used from the start. Refactoring must be a part of the development method from day one for maximum effect, and a build machine should be ready for use when coding starts. Committing to a particular database implementation, to the exclusion of others, on the other hand, is a decision that can often be deferred indefinitely by using a suitable Object-Relational Mapping (ORM) framework.
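As a sketch of how such a commitment can be deferred, consider this hypothetical Java fragment (the names are invented for illustration and are not tied to any real ORM): business code talks to a small repository interface, so the decision about which database, or which mapping, to use stays open.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of deferring the database decision.
class Customer {
    private final long id;
    private final String name;
    Customer(long id, String name) { this.id = id; this.name = name; }
    long getId() { return id; }
    String getName() { return name; }
}

// Business logic depends only on this interface, not on any database product.
interface CustomerRepository {
    Customer findById(long id);
    void save(Customer customer);
}

// One possible implementation. Replacing it later with an ORM-backed or
// SQL-backed version is a local change behind the interface, so the database
// decision can wait until the last responsible moment.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<Long, Customer> store = new HashMap<Long, Customer>();

    public Customer findById(long id) { return store.get(id); }
    public void save(Customer customer) { store.put(customer.getId(), customer); }
}
```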

Options Thinking is an important factor in flattening the Cost of Change curve, but it must be combined with the skill to identify and act at the Last Responsible Moment, or defeat will be snatched from the jaws of victory.
Cost of Delay

Most rigorous methods assume an even tradeoff between development time and development cost. Halve the development time, and you halve the development cost. Double the development time, and you double the development cost. This is a comfortably simple model. Unfortunately, in most cases it is wrong.

When starting a software project, it is possible to create a simple economic model, a profit and loss statement, that shows the expected economic benefit of the project. The Cost of Delay can then be calculated by adding a time delay to the model. In most (but not all) cases, the delay will have a much greater impact on profitability than just the cost of development. Time to market is usually the crucial factor. This is true even for software applications that are for internal use only, such as a financial system, an intranet web site, or a document management system. The earlier the system can be used, the earlier the company that uses it can start reaping the economic benefits, even if the system has only partial functionality at first.
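As an illustration of the arithmetic (all figures below are invented for the sake of the example, not taken from the article), a delay costs both the extra months of development and the months of benefit that never arrive:

```java
// Illustrative sketch only: the figures are invented to show how a simple
// Cost of Delay model can be put together.
class CostOfDelayExample {

    // Profit over a fixed horizon: benefit accrues only after delivery,
    // development cost accrues for every month of development.
    static double profit(int devMonths, double monthlyBenefit,
                         double monthlyDevCost, int horizonMonths) {
        double income = monthlyBenefit * (horizonMonths - devMonths);
        double cost = monthlyDevCost * devMonths;
        return income - cost;
    }

    public static void main(String[] args) {
        double monthlyBenefit = 100000;  // value per month once the system is live
        double monthlyDevCost = 60000;   // team cost per month
        int planned = 6, delayed = 8, horizon = 24;

        double costOfDelay = profit(planned, monthlyBenefit, monthlyDevCost, horizon)
                           - profit(delayed, monthlyBenefit, monthlyDevCost, horizon);

        // Two months of lost benefit (200,000) plus two extra months of
        // development cost (120,000): far more than the development cost alone.
        System.out.println("Cost of a two month delay: " + costOfDelay);
    }
}
```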

The upside is that if time to market can be reduced, for example by making an early release with reduced functionality, this will often have a great positive effect. It is not uncommon for a development project to pay for itself during the development time, if it can make an early delivery with partial functionality, followed by frequent partial releases. This is the reason for the Extreme Programming mantra "release early, release often".

The Cost of Delay has a direct impact on management decisions. For example, if the developers want a tool that will reduce development time, the tool may be worth buying even if it costs more than the direct cost of the development time saved. Conversely (and perhaps more commonly), forcing developers to use tools that are poorly suited to a particular job in order to impose a corporate "standard" is a far more expensive undertaking than management would ever imagine.

Understanding the economic effects of delays and time gains is an important factor in minimizing the total cost of a project. There is also an impact on the Cost of Change. For example, it is possible to model the cost of the feedback delays imposed by different testing strategies. Which is most economical: a test phase at the end of the project, testing at the end of each iteration, or automated tests that run every few minutes as part of the development cycle? With an understanding of the Cost of Delay, it is possible to construct an economic model and come up with the correct answer.
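A sketch of what such a model might look like, with invented numbers and an assumed (not measured) rule that a defect becomes more expensive the longer feedback is delayed:

```java
// Illustrative only: defect counts and cost multipliers are assumptions
// made up for this sketch, not data from the article.
class FeedbackDelayExample {
    public static void main(String[] args) {
        int defects = 200;            // defects introduced over the project
        double fixNow = 50;           // cost to fix a defect caught within minutes
        double perIteration = 5;      // assumed multiplier when caught weeks later
        double endOfProject = 20;     // assumed multiplier when caught months later

        System.out.println("Automated tests every few minutes:    " + defects * fixNow);
        System.out.println("Testing at the end of each iteration: " + defects * fixNow * perIteration);
        System.out.println("Single test phase at project end:     " + defects * fixNow * endOfProject);
    }
}
```

Under these assumptions the continuous-feedback strategy wins by a wide margin; with different assumptions the model may give a different answer, which is exactly why building the model is worthwhile.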

Refactoring

Refactoring is a technique for improving the quality of code without changing the functionality. Many managers shudder when developers tell them they want to refactor code. Why spend effort "beautifying" code that already works? It must be a waste of time and money. Wrong! The managers should rejoice instead, because the desire to refactor code shows they have a team that understands the detrimental effects of letting bad code remain in a system. And yes, the detrimental effects are "detrimental economic effects".

In most projects code quality deteriorates rather rapidly over time, even if the code works. What does this mean? It means:

* The code is tightly coupled, so changing one part of the system causes a cascade of changes in other parts of the system.

* The code is unnecessarily complex.

* There may be hidden bugs that will strike when least expected, for example after the system has gone into production.

* Every time a developer reads the code, he must spend extra time and effort understanding it. This can cause serious time loss, because even though a piece of code is written only once, it is read many times during a development project.

* Important cross-cutting functionality, like error handling, logging, and security management, may be poorly implemented, or not implemented at all. At best, this slows down both the implementation of new features and changes to existing ones. At worst, it may stop a system from going into production.

* Performance is poor, often due to unnecessary database accesses, poorly implemented search or sorting routines, the wrong choice of technology (for example parsing XML with SAX when DOM is better, or parsing XML with DOM when SAX is better), and so on. Poor performance in a system means reduced profitability, sometimes to the point where the system ends up costing more money to use than it saves.

Poor code quality may not be noticeable to a manager at first, but eventually it makes a project leak money like a sieve leaks water. Refactoring is a primary method of plugging the leaks. It is an important tool for bending the Cost of Change curve from the exponential disaster-in-waiting shape to the flat curve of a well managed project.
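As a small, hypothetical before-and-after sketch of what refactoring means in practice (an invented example; the behaviour is unchanged): duplicated discount logic is pulled into one place, so a later change to the rule becomes cheap.

```java
// Hypothetical illustration of a refactoring: the behaviour is identical
// before and after, but the duplication is gone.
class PricingBefore {
    // The same discount rule is repeated wherever a price is computed.
    static double priceForInvoice(double amount, boolean loyalCustomer) {
        if (loyalCustomer) {
            return amount - amount * 0.05;
        }
        return amount;
    }

    static double priceForQuote(double amount, boolean loyalCustomer) {
        if (loyalCustomer) {
            return amount - amount * 0.05;
        }
        return amount;
    }
}

class PricingAfter {
    // After refactoring, the rule lives in one place; changing the discount
    // the week before delivery is now a one-line, low-risk change.
    static double discountedPrice(double amount, boolean loyalCustomer) {
        return amount - loyaltyDiscount(amount, loyalCustomer);
    }

    static double loyaltyDiscount(double amount, boolean loyalCustomer) {
        return loyalCustomer ? amount * 0.05 : 0.0;
    }
}
```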

Reviewing Software Programmers

A nice article on reviewing software programmers:

http://home.att.net/~geoffrey.slinker/maverick/MaverickReviews.html

Hiring Software Programmers

A nice link about hiring software programmers:

http://home.att.net/~geoffrey.slinker/maverick/MaverickHiring.html

Friday, January 27, 2006

Agile Process and Impact on hiring


Recently, while driving to a customer's place, I had a discussion with one of my colleagues about the agile process and its impact on hiring. After the discussion it became clear that there is indeed a relation between hiring good people and the software process.

Take, for example, hiring for an agile project: it becomes very important to hire good people with hands-on experience. The reason is that in agile projects the developers are expected to start delivering small chunks of the application in 2-3 week iterations, so anyone joining the project needs quick learning ability and hands-on experience.

But in a traditional development method, where a product release happens once every 6 months or so, a new hire has enough breathing room to get acquainted with the process and the technology.

Agile and Traditional Way Comparison


I picked up a good discussion in an agile forum comparing high-ceremony processes with agile processes:

Now replace one inexperienced developer with an experienced developer as team lead. Next add an otherwise good but non-agile, higher-ceremony process. This team will have low efficiency, but they could succeed. However, even with success, after a couple of years the dozen developers would still be quite inexperienced, because their lead ensured their quality instead of themselves.

In contrast, if one had eight inexperienced developers, four experienced developers, and a good agile process that promoted shared experiences, communication, and quick dissemination of information and ideas (say pairs who switched off every week), those developers would likely not only succeed, but their junior members would become experienced much more quickly.

Now it will also be true that the high-ceremony shop will increase its likelihood of success from having a greater percentage of experienced team members. With the greater percentage of experienced team members, the inexperienced developers might gain experience more quickly, or they might not, because high-ceremony processes do not inherently promote learning and sharing, as agile processes do.

Camille Bell says:

> Agile practices assume a high level of self discipline. High ceremony does not. Self discipline is easier for experienced developers and the type of discipline experienced developers are likely to impose on themselves often has a core of quality mined effective practices.

Ron Jeffries says:
I believe I would modify the thoughts above to include the notion of "team" discipline. The members of a well-functioning Agile team support each other in maintaining discipline. In that sense, it's not just "self". It remains a valid concern that a team without experience may not know what discipline to maintain, or how to maintain it.


Sunday, January 22, 2006

Maverick at GE and Agile


Workers without a boss, and the Maverick way of working

Somebody shared this link in the agile discussion group. The article refers to the “Maverick” way of working, taking its example from GE's jet engine lab.

In fact, the same principles can be applied when implementing an agile methodology in a company.

http://home.att.net/~geoffrey.slinker/maverick/generalelectric.html

Monday, January 16, 2006

>If the team had one day iterations, but found that they couldn't
>deliver something of value to business, would you recommend

> a) shortening the iteration, or
> b) lengthening it

Neither. I'd want to find out what the issue was. I might sit down
with them and try to do something of value in a day. But a week is
five! times longer than a day.

If a team had one millisecond iterations and couldn't deliver
anything of value, I'd definitely suggest that they lengthen it.

> I generally agree with most of the points you are making, I just try
> not to think in absolutes (i.e. shortening iterations is always the
> answer).

Well, with a team having week long iterations and difficulty
delivering business value, I absolutely would not make my first
remote suggestion be longer iterations. I would want to know whether
the issue was that the customer's expectations were too high;
whether the team has figured out how to split stories; what the
issues were in the development environment; and many other things.

After all that, it might be that longer would be better.

I think lengthening the iteration, for a team
that's already not clicking along nicely, is a risky strategy.
Getting XP / Agile clicking along involves learning and discomfort.
I don't want people tortured, certainly, but making the exercise too
easy isn't the first resort I'd choose either.

It's just that my first reaction to "my, this is difficult isn't it"
isn't "oh, well, don't do it then". I don't want to demoralize the
team, mind you. But not doing the hard stuff isn't the way to learn
to do the hard stuff.

How about trying to do just one story of value in the iteration, or
two, instead of one or two per programmer?