In his book Object-Oriented and Classical Software Engineering, Stephen Schach describes the relative proportion of cost for each phase of the life-cycle of a software product. He comes to a conclusion that some may find startling: 67% of the cost of software is in its maintenance, that is, in changes made to the software after the project is deemed complete. Some organizations with experience in maintaining large code bases, and by large here I mean millions of lines of code, place this number closer to 70% or even higher.
In "Does OO really match the way we think?", Les Hatton breaks this 67% into three categories: half of code maintenance is corrective (fixing bugs), and the other half is either adaptive (modifying due to changing requirements) or perfective (improving working code). So even if by some miracle we write perfect code (and in these days of compressed schedules, it would indeed be a miracle), the maintenance portion of the life-cycle pie is only reduced by half.
In his paper "Software Aging", computer scientist David Parnas writes about software entropy: software systems become more and more expensive to modify over time due to the cumulative effect of changes. Then perhaps it is no surprise that the bulk of the cost of developing software is in making these changes. This is as much a limit to system scalability as processor speed or network bandwidth. Increases in efficiencies in tools and processes must be applied not just to new code development, but to long-term code maintenance as well.
This is an area where code refactoring – the practice of substantially improving the design of existing code, including its maintainability, without altering its external behavior – will continue to play a major role. Likewise, this calls for more thought into designing new code to be easy to modify, since the effort spent changing code after the fact is the bulk of the cost of its development.
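As a minimal, hypothetical sketch of what that looks like in practice (the InvoicePrinter class and its methods are invented for illustration, not taken from any real code base), here is an extract-method refactoring in Java. The externally visible behavior of print() is identical before and after; what changes is that each concern now lives in a small method that can be read, tested, and modified on its own.

// Before: one method that mixes validation, totaling, and formatting.
class InvoicePrinter {
    String print(double[] amounts) {
        double total = 0;
        for (double amount : amounts) {
            if (amount < 0) {
                throw new IllegalArgumentException("negative amount: " + amount);
            }
            total += amount;
        }
        return String.format("TOTAL: %.2f", total);
    }
}

// After: same inputs, same outputs, same exceptions, but each concern is
// now a separate method that is easier to understand and to change later.
class RefactoredInvoicePrinter {
    String print(double[] amounts) {
        return format(total(amounts));
    }

    private double total(double[] amounts) {
        double total = 0;
        for (double amount : amounts) {
            validate(amount);
            total += amount;
        }
        return total;
    }

    private void validate(double amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("negative amount: " + amount);
        }
    }

    private String format(double total) {
        return String.format("TOTAL: %.2f", total);
    }
}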
In my experience, this issue is largely ignored among software developers except among the refactoring proponents. Most developers -- and truth be told their managers as well, if widely used development processes are any indication -- are happy if code passes unit testing and makes it into the code base anywhere near the delivery deadline. Long term maintenance is someone else's problem.
I find that this long term cost is seldom taken into account, and its omission shows up in sometimes surprising ways. I once heard a presentation from a software development outsourcing company. It happened to be based in India, but I am sure that there are plenty of home-grown culprits too. The company described several cost estimation techniques used by their ISO 9001-certified process, which was assessed at SEI CMM Level 5 and used Six Sigma methodology. None of the cost estimation techniques addressed the long term cost of code maintenance. Code maintenance after delivery of the product was simply billed on an hourly basis.
I almost leaped from my chair, not because I was angry, but to go found a software development company based on this very business model. The idea of low-balling the initial estimate then making a killing on the 67% of the software life-cycle cost pie was a compelling one. Only two things stopped me. First, I had already founded a software development company. And second, I had read a similar suggestion made by Dogbert in a recent Dilbert cartoon strip, so I knew that everyone else already had the same idea.
It is as if once we deliver a line of code to the code base, we think that the investment in that code is over. In fact, the numbers tell us it has just begun. Every single line of code added to a code base contributes to an ever increasing total cost of code ownership.
Sources
Les Hatton, "Does OO really match the way we think?", IEEE Software, vol. 15, no. 3, May/June 1998
David Parnas, "Software Aging", Proceedings of the 16th International Conference on Software Engineering, IEEE, May 1994
Stephen Schach, Object-Oriented and Classical Software Engineering, McGraw-Hill, 2002
Sunday, May 28, 2006
4 comments:
The book I'm currently reading, Agile Estimating and Planning (which I am smitten with), talks about the concept of debt in your software. It's basically a concept to use when deciding when to do certain things. As with normal debt, sometimes it makes sense to take it on. The question is what the interest is.
As far as making new code easy to modify goes, my default reaction is similar to my reaction to early performance optimization. I personally am rarely clever enough to do the right things up front to improve performance. Likewise, little red flags go off in my head when I hear a lot of talk about making systems highly flexible. The goal is a noble one, but sometimes you end up with solutions that are too enterprisey.
http://en.wikipedia.org/wiki/Enterprisey
You always hear about "feature debt", where you drop some features from your development plan in order to reduce time to market. Jack Ganssle talks about another kind of debt, where you make quick hacks or fixes in order to meet schedules. Unless you eventually pay off your debt, over time your software grows more and more expensive to modify and debug.
I've worked for organizations in which development was funded entirely by market managers. If you tried to get funding for refactoring or rearchitecting work, they would ask "And what difference will the customer see?" "Well, if we do everything absolutely perfectly, nothing!" You can guess the outcome of that conversation. This is the marketing equivalent of only looking at quarterly results.
I recall one eight-million-line code base which had one module featuring a four-thousand-line case statement. How do you get a four-thousand-line case statement? One case at a time, baby, one case at a time. No one plans such a monstrosity. (There is a sketch of the usual alternative just below.)
In all fairness, having the pendulum swing the other way -- funding a lot of pet projects as a crazy form of software developer welfare -- isn't a good idea either.
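To make that four-thousand-line case statement concrete, here is a small, hypothetical Java sketch (the message types, the handler classes, and the Dispatcher are all invented) of the refactoring usually offered as the alternative: replace the conditional with polymorphism, so that each new case becomes its own small class instead of yet another branch in one enormous module.

// Instead of a switch that grows by one branch per message type, each
// case gets its own handler class, and the dispatcher just looks it up.
interface MessageHandler {
    void handle(String payload);
}

class LoginHandler implements MessageHandler {
    public void handle(String payload) { /* process a login message */ }
}

class LogoutHandler implements MessageHandler {
    public void handle(String payload) { /* process a logout message */ }
}

class Dispatcher {
    private final java.util.Map<String, MessageHandler> handlers =
            new java.util.HashMap<String, MessageHandler>();

    Dispatcher() {
        // Adding a message type means one new small class plus one line
        // here, not another hundred lines in a single giant module.
        handlers.put("LOGIN", new LoginHandler());
        handlers.put("LOGOUT", new LogoutHandler());
    }

    void dispatch(String type, String payload) {
        MessageHandler handler = handlers.get(type);
        if (handler == null) {
            throw new IllegalArgumentException("unknown message type: " + type);
        }
        handler.handle(payload);
    }
}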
In my experience, refactoring is frowned upon due to risk. Risk that working features will break; risk that too much time will be spent, reducing ROI; risk that schedules will slip. I've also noticed that very simple refactorings are avoided for the same reasons.
I've seen code refactoring go awry. Someone doesn't understand the code well enough, or the code isn't supported by an adequate test suite, or the engineer is just sloppy, and stuff breaks.
A test suite should be a part of every software project. And a good-enough test suite will allow practically at-will refactoring by almost any software engineer. I've had the luxury of working in such an environment recently, and I gotta say that it's really refreshing to be able to add functionality and refactor existing code without fear of breaking anything (within reason, obviously).
Anyway, I think the answer is out there. A culture of test-first needs to be grown. If a piece of software starts out with 80-90% code coverage, the likelihood of breaking things goes down dramatically. Adding new functionality? Add a test. Fixing a bug? Add a test. In fact, write a test that tests the breakage, and write the code to the test.
It really works!
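Here is a minimal sketch of that write-a-test-for-the-breakage flow, using JUnit 4 and an invented PriceCalculator class; the bug and the numbers are hypothetical, but the shape of the workflow is the point: first a failing test that reproduces the report, then the fix, and the test stays behind as a regression guard.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Step 1: reproduce the reported breakage as a test. Against the
    // original code this fails: a 10% discount on 200.00 came back as 200.00.
    @Test
    public void tenPercentDiscountIsActuallyApplied() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(180.0, calc.applyDiscount(200.0, 10), 0.0001);
    }
}

class PriceCalculator {
    // Step 2: write the code to the test. The buggy line was
    //     return total * (1 - percent / 100);
    // where the integer division silently threw the discount away.
    double applyDiscount(double total, int percent) {
        return total * (1 - percent / 100.0);
    }
}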
As far as businessheads not understanding the value of quality, maintainable software... no idea. For many of them, it's all about numbers, and quality isn't a concern. The pie chart seems like a good start. Speak in terms that they can understand.
It's not hard as a developer to become a fan of test driven development. That plus a good source code control system makes you absolutely fearless in doing refactoring or making other changes. And tools like Eclipse further lower the cost of doing so.
I recall one project back in 1999 in which the project "architect" stated that we didn't have time for unit testing. I'm still not sure to this day what exactly he thought we were going to ship, but his faith in our ability to write perfect code without testing it fell somewhere between flattering and terrifying.
Most of my background is in real-time embedded development, although at the moment I'm doing Java message-oriented middleware. It's my not-so-secret goal in life to bring the embedded domain kicking and screaming into the more current practices that the Java folks have been enjoying for a decade or so.