
Frank was born in Ireland and now lives in the USA. His initial training was in Computer Science at Trinity College, Dublin (so they're really to blame for this). He also has a Ph.D. in Computational Neuroscience and, after realizing that computers were still as dumb as a bag of hammers, decided he had better go earn some money and stop fooling around. Right now he likes to think he is part manager, part developer, and part architect, developing applications for a major telecommunications firm, because "code monkey" just doesn't sound as good.

Book review of "Software Estimation"

09.25.2012
"Software Estimation" by Steve McConnell is a great read.
As a practitioner of the agile arts, I must say that reading it now, this book feels like the last great attempt to "fix" waterfall and "big design up front" (BDUF) methodologies, which were known for their very distinct phases of requirements, design, development, testing, and release. The kicker with these approaches was that the development and testing estimates were often VERY far off. Teams that followed the advice McConnell lays out in this book would have had more success.

Agile works around many of the problems McConnell tries to solve by focusing on short iterations (under 4 weeks) that produce new, releasable, functional software at the end of each one. In effect it avoids many of the risks inherent in Waterfall/BDUF by making the software cycle "too small to fail".

That said, there is still a great deal to be learned from Steve's book. Waterfall and BDUF are by no means dead, and even in agile settings there are lessons here about the inherent nature of (errors in) developer estimates. I have seen serious under-estimation (by a factor of 2x or 3x) even in agile, where a story that should have taken one sprint takes three or more. So we still have much to learn. However, the key theme this book drove home to me was essentially that "software estimation is so hard that we pretty much gave up and moved to short iterations because that's the most we can estimate." I am sure that wasn't Steve's point, but that was my inference, because since the book's release estimation has taken a back seat to story points, burn-down charts, stand-ups and sprints.
More people are becoming Scrum Masters and fewer are taking the PMP.
[Figure: job-trends graph comparing "Agile" and "PMP" job postings]

In this long blog article I try to capture some of the key things I learned from this book.

Part 1: Critical estimation concepts

CHAPTER 1: What Is an "Estimate"?

Tip #1: Distinguish between Estimates, Targets and Commitments

  • When business people are asking for an "estimate", they're really often asking for a commitment or a plan to meet a target
  • Estimation is not planning
  • When you see a "single point estimate" ask whether the number is an estimate or whether it's really a target
  • A common assumption is that the distribution of software project outcomes follows a bell curve. The reality is much more skewed.
  • What is a good estimate?
  • The approach should provide estimates that are within 25% of the actual results 75% of the time (a quick way to check your own track record against that bar is sketched after this chapter's notes)
  • Events that happen during the project always invalidate the assumptions that were used in the estimate.
  • The primary purpose of software estimation is to determine whether a project's targets are realistic enough to allow the project to be controlled to meet them.  The executives want a plan to deliver as many features as possible by a certain date.
  • "A good estimate is an estimate that provides a clear enough view of the project reality to allow the project leadership to make good decisions about how to control the project to hit its targets"

CHAPTER 2: How Good an Estimator are you?
  • Studies have confirmed that most people's intuitive sense of "90% confident" is really comparable to something closer to "30% confident" 
  • Where does the pressure to use narrow ranges come from? You or an external source?

CHAPTER 3: Value of Accurate Estimates?
  • Is it better to overestimate or underestimate?
  • If a project is overestimated, stakeholders fear that Parkinson's law will kick in - work will expand to fill the time allotted.
  • Another concern is given too much time, developers will procrastinate until late in the project.
  • A related motivation for underestimating is the desire to instill a sense of urgency
  • Figure 3.1: The penalties for underestimation are more severe than the penalties for overestimation
  • The Software Industry's Estimation Track Record
    • Failure rates
      • 1,000 LOC: 2%
      • 10,000 LOC: 7%
      • 100,000 LOC: 20%
      • 1,000,000 LOC: 48%
      • 10,000,000 LOC: 65%
  • The Software industry has an underestimation problem.
  • What top executives value most is predictability - businesses need to make commitments to customers, investors, suppliers, the marketplace and other stakeholders.

CHAPTER 4: Where Does Estimation Error Come From?
  • Four generic sources
    • Inaccurate information about the project being estimated
    • Inaccurate information about the capabilities of the organization that will perform the project
    • Too much chaos IN the project to support accurate estimation (i.e. try to estimate a moving target)
    • Inaccuracies arising from the estimation process itself
  • Simple example of a Telephone Number checker and the requirements questions / uncertainties that could result in very different design approaches.
  • The cone of uncertainty (a sketch applying these multipliers to a nominal estimate follows this chapter's notes)
    • Initial Concept  0.25x to 4x  (Range = 16x)
    • Approved Product Definition  0.5x to 2x (Range = 4x)
    • Requirements Complete 0.67x to 1.5x
    • UI Design Complete 0.8x to 1.25x
    • Detailed Design Complete 0.9x to 1.1x
  • The cone of uncertainty represents the BEST-case accuracy it is possible to have. It isn't possible to be more accurate - it's only possible to be more lucky.
  • The cone does not narrow itself - if a project is not well controlled you can end up with a cloud of uncertainty that contains even more estimation error. 
  • "What you give up with approaches that leave requirements undefined until the beginning of each iteration if long-range predictability"
  • Sources of project chaos
    • Requirements that were not investigated very well
    • Poor designs leading to lots of code rewrite
    • Poor coding practices leading to extensive bug fixing
    • Inexperienced personnel
    • Incomplete or unskilled project planning
    • Prima Donna team members
    • Abandoning planning under pressure
    • Developer gold-plating
    • Lack of source code control software
  • In practice, project managers often neglect to update their cost and schedule assumptions as requirements change.
  • Omitted Activities (p. 44)
    • Missing Requirements
      • Non-functional requirements: accuracy, modifiability, performance, scalability, security, usability, etc.
    • Missing software-development activities
      • Ramp-up time for team members
      • Mentoring
      • Build & Smoke Test support
      • Requirements clarification
      • Creating test data
      • Beta program management
      • Technical reviews
      • Integration work
      • Attendance at meetings
      • Performance tuning
      • Learning new tools
      • Answering questions
      • Reviewing technical documentation etc.
    • Missing non-software-development activities
      • Vacations, Holidays, Sick days, Training, Weekends(!?!?)
      • Company meetings, department meetings, setting up new workstations
  • Developer estimates tend to contain an optimism factor of 20 to 30%. Although managers complain that developers sandbag their estimates, the reverse is true. Boehm also found a "fantasy factor" of 1.33.
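
To make the cone of uncertainty concrete, here is a minimal sketch that applies the multipliers listed above to a single nominal estimate (the 12-staff-month figure is hypothetical):

```python
# Cone of uncertainty multipliers (low, high) per phase, as listed in the notes above.
CONE = [
    ("Initial Concept",             0.25, 4.0),
    ("Approved Product Definition", 0.50, 2.0),
    ("Requirements Complete",       0.67, 1.5),
    ("UI Design Complete",          0.80, 1.25),
    ("Detailed Design Complete",    0.90, 1.1),
]

nominal_estimate = 12.0  # hypothetical nominal estimate, in staff-months

for phase, low, high in CONE:
    print(f"{phase:30s} {nominal_estimate * low:5.1f} to {nominal_estimate * high:5.1f} "
          f"staff-months (range {high / low:.1f}x)")
```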

CHAPTER 5: Estimate Influences

  • The obvious one: Project Size
  • Diseconomies of scale: a 1M LOC project takes more than 10x the effort of a 100k LOC project.
  • The basic issue is that larger projects require coordination among larger groups of people, which requires more communication. As project size increases, the number of communication paths among people increases roughly as the SQUARE of the number of people on the project (see the sketch at the end of this chapter's notes).
  • Lines of code per staff per year
    • 10k LOC project: 2,000 to 25,000
    • 100k LOC project: 1,000 to 20,000
    • 1M LOC project: 700 to 10,000
    • 10M LOC project: 350 to 5,000
  • Other influences: The kind of software being developed
  • Personnel factors
    • According to Cocomo II on a 100k LOC project the combined effect of personnel factors can swing a project estimate by as much as a factor of 22!
    • The KEY personnel decision: Requirements Analyst Capability and only THEN the programmer
    • The magnitude of these factors has been confirmed in numerous other studies
  • Other influences: Programming Language
  • Lots of other adjustment factors: See table 5-4 on page 66
  • Key Learning: Small and Medium-sized projects can succeed largely on the basis of strong individuals. Large projects however still need strong individuals but project management, organizational maturity and how well the team coalesces are just as significant.
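
A quick sketch of the communication-path arithmetic behind the "square of the team size" claim; the path count is the standard n(n-1)/2 pairing formula, and the team sizes are illustrative:

```python
def communication_paths(team_size):
    """Number of distinct person-to-person communication paths on a team."""
    return team_size * (team_size - 1) // 2

for n in [2, 5, 10, 25, 50]:
    print(f"{n:3d} people -> {communication_paths(n):5d} paths")
# 2 -> 1, 5 -> 10, 10 -> 45, 25 -> 300, 50 -> 1225: roughly quadratic growth.
```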

PART II: Fundamental Estimation Techniques

CHAPTER 6: Introduction to Estimation Techniques
Considerations in choosing estimation techniques

  1. What's being estimated - features, schedule, effort
  2. Project Size 
    1. Small: < 5 total technical staff. The best estimates are usually "bottom-up" estimates created by the individuals who will do the actual work.
    2. Large: 25+ people, lasting 6 to 12 months or more. For these teams the best estimation approaches tend to be "top-down" approaches in the early stages. As the project progresses, more bottom-up techniques are introduced and the project's own historical data provides more accurate estimates.
    3. Medium: 5 to 25 people lasting 3 to 12 months. Can use any of the techniques above.
  3. Software Development Style: Iterative vs. Sequential
    1. Evolutionary Prototyping
    2. Extreme Programming
    3. Evolutionary Delivery
    4. Staged Delivery
    5. RUP
    6. Scrum
CHAPTER 7: Count, Compute, Judge
  • Count first
  • Count if at all possible; compute when you can't count. Use judgement alone ONLY as a last resort.
  • What to count? Find something to count that's highly correlated with the size of the software you are estimating, and find something to count that is available sooner rather than later in the development cycle (a minimal count-then-compute sketch follows this chapter's notes).
  • Historical data
    • Average effort hours per requirement for development
    • Average total effort hours per use case / story
    • Average dev/test/doc effort per change request 
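
A minimal sketch of the count-then-compute step, using invented historical averages of the kind listed above:

```python
# Hypothetical historical averages collected from past projects.
HOURS_PER_REQUIREMENT = 24.0    # average total effort hours per requirement
HOURS_PER_CHANGE_REQUEST = 9.0  # average dev/test/doc effort per change request

def compute_estimate(requirement_count, change_request_count):
    """Turn things you can count into an effort estimate via historical averages."""
    return (requirement_count * HOURS_PER_REQUIREMENT
            + change_request_count * HOURS_PER_CHANGE_REQUEST)

print(f"Estimated effort: {compute_estimate(80, 35):.0f} hours")  # 80*24 + 35*9 = 2235
```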

CHAPTER 8: Calibration and Historical Data
  • Used to convert counts to estimates - lines of code to effort, user stories to calendar time, requirements to number of test cases
  • Your estimates can be calibrated using any of three kinds of data
    • Industry data
    • Historical data
    • Project data
  • Using data helps avoid subjectivity, unfounded optimism and some biases.
  • It also helps reduce estimation politics
  • Start with a small set of data
    • Size (LOC)
    • Effort (Staff months)
    • Time (Calendar months)
    • Defects (classified by severity)
  • Be careful how you measure: e.g. are work days 8 hours? How about vacations? Overtime?
  • It is surprisingly difficult in many organizations to determine how long a particular project lasted
CHAPTER 9: Individual Expert Judgment
  • To create the task-level estimates, have the people who will actually do the work create the estimates
  • When estimating at the task level, decompose the work into tasks that will require no more than about 2 days of effort each. Tasks larger than that contain too many places where unexpected work can hide. Ending up with estimates at 0.25- to 0.5-day granularity is appropriate.
  • Use ranges to help identify risks and where things can (and often do) go wrong (one common way of turning the three cases into an Expected Case is sketched after this chapter's notes)
    • Best Case
    • Most Likely Case
    • Worst Case
    • Expected Case
  • Estimate Checklist
    • Is what's being estimated clearly defined?
    • Does the estimate include all the KINDS of work needed to complete the task?
    • Does the estimate include all the FUNCTIONALITY AREAS needed to complete the task?
    • Is the estimate broken down into enough detail to expose hidden work?
    • Have you looked at notes from past work rather than estimating from pure memory?
    • Is the estimate approved by the person who will actually do the work?
    • Is the productivity assumed in the estimate similar to what has been achieved on similar assignments in the past?
    • Does the estimate include a Best Case, Worst Case and Expected Case?
    • Have the assumptions in the estimate been documented?
    • Has the situation changed since the estimate was prepared?
  • Compare actual performance to estimated performance so that you can improve estimates over time.
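
One common way to turn a Best Case / Most Likely Case / Worst Case range into an Expected Case is the classic PERT weighting; treat this as an illustration rather than as McConnell's exact formula, and the day counts below are hypothetical:

```python
def expected_case(best, most_likely, worst):
    """Classic PERT weighting of a three-point estimate."""
    return (best + 4 * most_likely + worst) / 6

# Hypothetical task estimated at 2 / 4 / 10 days (best / most likely / worst).
print(f"Expected case: {expected_case(2, 4, 10):.1f} days")  # (2 + 16 + 10) / 6 = 4.7
```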

CHAPTER 10: Decomposition and Recomposition
  • The key is that if you create several smaller estimates, some of the estimation errors will be on the high side and some on the low side, so they tend to cancel each other out to some extent (a toy simulation follows these notes). Research has found that summing task durations was negatively correlated with cost and schedule overruns.
  • Since developers tend to give near-best-case estimates, schedule overruns often compound on one another, because the chance of every estimate coming in as scheduled is so very low.
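
A toy simulation (entirely illustrative numbers) of why summing many small estimates is safer than one big one: independent errors partially cancel. Note that this only helps with unbiased error - the systematic optimism mentioned in the bullet above does not cancel out.

```python
import random

random.seed(1)

def simulate(n_tasks, trials=10_000):
    """Each task is estimated at 1.0 but actually takes 1.0 +/- up to 50%.
    Returns the average relative error of the summed estimate."""
    errors = []
    for _ in range(trials):
        actual = sum(1.0 + random.uniform(-0.5, 0.5) for _ in range(n_tasks))
        errors.append(abs(n_tasks - actual) / actual)
    return sum(errors) / trials

for n in [1, 10, 50]:
    print(f"{n:3d} tasks -> average relative error {simulate(n):.1%}")
# The more independent pieces you sum, the smaller the relative error of the total.
```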

CHAPTER 11: Estimation by Analogy
  1. Get detailed size, effort and cost results for a similar previous project
  2. Compare the size of the new project to a similar past project
  3. Build up the estimate for the new project's size as a percentage of the old project's size
  4. Create an effort estimate based on the size of the new project compared to the previous project
  5. Check for consistent assumptions across the old and new projects (a worked sketch of all five steps follows)
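
A worked sketch of the five steps above, with entirely hypothetical numbers:

```python
# Step 1: results from a similar past project.
old_size_loc = 65_000
old_effort_staff_months = 20.0

# Steps 2-3: size the new project relative to the old one, piece by piece.
new_vs_old_size_ratio = 1.2  # e.g. new feature areas sum to ~120% of the old project's size
new_size_loc = old_size_loc * new_vs_old_size_ratio

# Step 4: scale effort from the old project (ignoring diseconomies of scale,
# which would push this up further for a much larger project).
new_effort_staff_months = old_effort_staff_months * new_vs_old_size_ratio

# Step 5: sanity-check assumptions (same team? same tech stack? same quality bar?).
print(f"New size ~{new_size_loc:,.0f} LOC, effort ~{new_effort_staff_months:.0f} staff-months")
```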

CHAPTER 12: Proxy-based Estimates
  • Fuzzy Logic
    • Very Small
    • Small
    • Medium
    • Large
    • Very Large
  • As a rule of thumb the differences in size between adjacent categories should be at least a factor of 2
  • Story Points e.g. Fibonacci sequence. 
  • Cautions about rating scales: the use of a numeric scale implies that you can perform numeric operations on the numbers - multiplication, addition, subtraction and so on. But if the underlying relationships aren't valid - that is, if a story worth 13 points doesn't really require 13/3 times as much effort as a story worth 3 points - then performing numeric operations on the 13 isn't any more valid than performing numeric operations on "Large" or "Very Large".
  • T-Shirt Sizing
    • Remember that the goal of software estimation is not pinpoint accuracy but estimates that are accurate enough to support effective project control
    • In this approach developers classify each feature's size relative to other features as Small, Medium, Large, XL etc.
    • This allows the business to make trade-offs and look for the features with the most business value and the lowest development cost (a toy sketch follows).
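
A toy sketch of the T-shirt-sizing trade-off (feature names, sizes, and values are invented). The earlier caution about doing arithmetic on coarse scales still applies - the ordering here is only meant to support the conversation:

```python
# (feature, development cost, business value) -- both on a coarse T-shirt scale.
features = [
    ("Export to CSV",  "S", "L"),
    ("Single sign-on", "L", "L"),
    ("Dark mode",      "M", "S"),
    ("Audit logging",  "M", "L"),
]

SCALE = {"S": 1, "M": 2, "L": 3, "XL": 4}

# Rank by value relative to cost; coarse buckets are enough to start the discussion.
for name, cost, value in sorted(features, key=lambda f: SCALE[f[2]] / SCALE[f[1]], reverse=True):
    print(f"{name:16s} cost={cost:2s} value={value:2s}")
```
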
CHAPTER 13: Expert Judgment in Groups
  • Group Reviews
    • Have each team member estimate pieces of the project individually, and then meet to compare estimates
    • Don't just average estimates - discuss the differences
    • Arrive at a consensus estimate that the whole group accepts
  • Individual estimates have an average Magnitude of Relative Error (MRE) of 55% (the MRE calculation is sketched after these notes).
  • Group-reviewed estimates average an error of only 30%
  • Studies have found that the use of 3 to 5 experts with different backgrounds seems to be sufficient.
  • Wideband-Delphi Technique
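
For reference, Magnitude of Relative Error is simply |actual - estimate| / actual; a minimal sketch with made-up numbers:

```python
def mre(estimate, actual):
    """Magnitude of Relative Error: how far off an estimate was, relative to the actual."""
    return abs(actual - estimate) / actual

# Hypothetical: one developer's solo estimate vs. the group-reviewed estimate.
print(f"Individual: {mre(estimate=10, actual=18):.0%}")  # ~44% off
print(f"Group:      {mre(estimate=15, actual=18):.0%}")  # ~17% off
```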

CHAPTER 14: Software Estimation Tools
  • Allows you to simulate different project outcomes
  • Data you'll need to calibrate tools
    • Effort in staff months
    • Schedule, in elapsed months
    • Size, in lines of code
  • Summary of available tools - see p. 163 (valid as of 2006)

CHAPTER 15: Use of Multiple Approaches
  • Use multiple estimation techniques and look for convergence or spread among the results

CHAPTER 16: Flow of Software Estimates on a Well-Estimated Project
  • When you reestimate in response to a missed deadline base the new estimate on the project's ACTUAL progress not on the project's planned progress.

CHAPTER 17: Standardized Estimation Procedures
Estimation should fit into a Stage-Gate process
  • Discovery
    • Approved preliminary business case
  • Scoping
    • Approved product vision
    • Approved marketing requirements
  • Planning
    • Approved software development plans
    • Approved budget
    • Approved final business case
  • Development
    • Approved software release plan
    • Approved marketing launch plan and operations plan
    • Approved software test plan
    • Pass release criteria
  • Testing and Validation 
    • Pass release criteria
  • Launch
The process should
  • Emphasize counting and computing rather than the use of judgement
  • Call for the use of multiple estimation approaches
  • Communicate a plan at predefined points
  • Contain a clear description of an estimate's inaccuracy
  • Define when an estimate can be used as the basis for a project budget
  • Define when an estimate can be used as the basis for internal or external commitments.

PART III: Specific Estimation Challenges
CHAPTER 18: Special Issues in Estimating Size
  • Using Lines of Code in Size estimation (data is easily collected but translation into "staff months" of effort is error prone)
  • Function-Point Estimation
    • The number of function points in a program is based on the number and complexity of (a rough unadjusted count is sketched after this list):
      • External inputs (e.g. screens, forms, dialog boxes)
      • External outputs (e.g. screens, reports, graphs, etc.)
      • External queries
      • Internal logical files
      • External interface files
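
A rough sketch of an unadjusted function-point count. The complexity weights below are the commonly published IFPUG-style values rather than necessarily the table McConnell uses, and the item counts are hypothetical:

```python
# Weight per item by complexity: (low, average, high).
WEIGHTS = {
    "external_inputs":          (3, 4, 6),
    "external_outputs":         (4, 5, 7),
    "external_queries":         (3, 4, 6),
    "internal_logical_files":   (7, 10, 15),
    "external_interface_files": (5, 7, 10),
}

# Hypothetical counts, using average complexity throughout for simplicity.
counts = {
    "external_inputs": 20,
    "external_outputs": 12,
    "external_queries": 8,
    "internal_logical_files": 6,
    "external_interface_files": 2,
}

unadjusted_fp = sum(counts[k] * WEIGHTS[k][1] for k in counts)
print(f"Unadjusted function points: {unadjusted_fp}")  # 20*4 + 12*5 + 8*4 + 6*10 + 2*7 = 246
```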

CHAPTER 19: Special Issues in Estimating Effort
  • Productivity varies widely among different kinds of software projects, which leads to very different effort and cost estimates per LOC.
CHAPTER 20: Special Issues in Estimating Schedule
  • Basic Schedule Equation
    • Schedule In Months = 3.0 x cubeRoot(StaffMonths) (a small sketch follows this chapter's notes)
  • Schedule compression and the shortened schedule
    • If the feature set is not flexible, shortening the schedule depends on adding staff to do more  work in less time
    • Numerous estimation researchers have investigated the effects of compressing a nominal schedule.
    • All researchers have concluded that shortening the nominal schedule will increase total development effort.
    • There is also an impossible zone and you can't beat it - the consensus of researchers is that schedule compression of more than about 25% from nominal is not possible
    • Similarly you can reduce costs by lengthening the schedule and conducting the project with a smaller team
    • Lawrence Putnam conducted fascinating research on the relationship between team size, schedule and productivity.
    • Schedule decreases (and effort increases) as you add team members - until you hit 5-7 on a team. After that, effort climbs much more quickly and the schedule ALSO starts to get longer.
    • Thus a team size of 5 to 7 people appears to be economically optimal for medium-sized business system projects.
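
A minimal sketch of the basic schedule equation; the 3.0 coefficient is the rule-of-thumb value from the notes above, and the effort figure is hypothetical:

```python
def schedule_months(staff_months, coefficient=3.0):
    """Basic schedule equation: schedule grows with the cube root of total effort."""
    return coefficient * staff_months ** (1 / 3)

effort = 64.0  # hypothetical total effort in staff-months
months = schedule_months(effort)
print(f"{effort:.0f} staff-months -> ~{months:.0f} calendar months, "
      f"average team of ~{effort / months:.0f} people")
# Compressing more than ~25% below this nominal schedule is, per the research cited, not feasible.
```

With 64 staff-months this lands at roughly 12 calendar months and an average team of about 5 people, which lines up with the 5-to-7-person sweet spot noted above.
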
CHAPTER 21: Estimating Planning Parameters
  • Estimating Architecture, Requirements, Management effort for projects of different sizes. The larger the project the more the architecture, test, requirements and management costs.
  • Developer-to-test ratio is settled more by planning than by estimation - that is it is determined more by what you think you SHOULD do than by what you predict you will do.
  • Good analogy for ideal time vs. planned time: a football game is 60 minutes of game time but 2 to 4 hours of elapsed time.
  • Defect Removal
    • Formal Design Inspections: 55% rate of removal (mode)
    • Informal design review: 35%
    • Formal code inspection: 60%
    • Informal code review: 25%
    • Low Volume (< 10 sites) Beta Test: 35%
    • High Volume (> 1,000 sites): 75%
    • System Test: 40%
  • Other rules of thumb
    • To go from one-company, one-campus development to multi-company, multi-city development, allow for a 25% increase in effort.
    • To go from one-company, one-campus development to international outsourcing, allow for a 40% increase in effort (both rules are applied in the sketch below).
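
A tiny sketch applying those two rules of thumb to a hypothetical baseline estimate:

```python
baseline_effort = 40.0  # hypothetical single-company, single-campus estimate, in staff-months

multi_company_multi_city = baseline_effort * 1.25  # +25% for multi-company, multi-city
international_outsource = baseline_effort * 1.40   # +40% for international outsourcing

print(f"Multi-company, multi-city: {multi_company_multi_city:.0f} staff-months")
print(f"International outsource:   {international_outsource:.0f} staff-months")
```
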
CHAPTER 22: Estimate Presentation Style
  • Communicating Estimate Assumptions
    • Which features are in scope
    • Which features are out of scope 
    • Availability of resources
    • Dependencies on 3rd-parties (and their performance)
    • Unknowns
  • Expressing Uncertainty
  • Try to present your estimate in units that are consistent with the estimate's underlying accuracy
  • Ranges are the most accurate way to reflect the inherent uncertainty in estimates at various points in the Cone of uncertainty.
  • Do not present a commitment as a range; a commitment needs to be specific

CHAPTER 23: Politics, Negotiation and Problem Solving
  • Estimate negotiations tend to be between introverted, more junior technical staff and seasoned professional negotiators.
  • Understand that executives are assertive by nature and by job description and plan your estimation discussions accordingly.
  • You can negotiate the commitment but do NOT negotiate the estimate
  • Educate nontechnical stakeholders about effective software estimation practices
  • Treat estimation discussions as problem solving, not negotiations. Recognize that all project stakeholders are on the same side of the table. Everyone wins, or everyone loses.
  • Getting to Yes
    • Separate the people from the problem
    • Focus on interests, not positions
    • Invent options for mutual gain
    • Insist on using objective criteria


Frank's Summary

This is a great book that I should have read a few years ago. Everyone should. Even if you are doing agile development, there are tons of great tips and tricks (e.g. the effectiveness of design & code inspections, using best/worst-case estimates, negotiation techniques) that are useful regardless.
Like I said above, I think the rise of agile techniques pretty much indicates, at least to me, that most software practitioners do not have the patience, determination and doggedness to follow the practices McConnell outlines. Because of that, their estimates (and thus their perceived performance in the eyes of their customers) are poor. Agile methods, especially Scrum and Kanban, have achieved success by trying to limit the cognitive planning and estimating load - keeping the process simple, light and "result" focused (ship!). What I like to call "too small to fail". Even so, a lot of organizations have trouble adopting agile and need help. There are various reasons for this, but they are the same reasons their other processes were flawed - the problem is not in the process itself but in how it is executed and in the ability of those doing the execution.
I just wish Steve McConnell were on Twitter - I could do with a daily dose of the knowledge and wisdom he puts into his books.
Published at DZone with permission of Frank Kelly, author and DZone MVB.