IS 556 Project Management
Week 5 - Project Estimating
Readings: On Time Within Budget, Ch. 12
Case: Vandelay Industries
David Lash
Last Week's Objectives • Project Plan • Work Breakdown Structure • PERT Chart • Gantt Chart • Dealing with Human Resources
Objectives - Estimation • Stepwise Estimation • Prototyping - engineering orientation • Constructive Cost Models • Estimate the project along 5 levels: 1. Level of personnel 2. Level of complexity 3. Project size 4. Development environment 5. Reliability level • Range of Estimates (normal distribution) • Hardware estimates - CPU, memory, disk, response time
Some things needing estimation • We estimate things we don't know for sure: • Project development cost • Estimated in dollars • Development schedule: time • Development team • Number of people • Type of people • Amount of software • LOC • Kilobytes • Better measures (e.g., function points, covered later) • Hardware resources • Time needed in the test lab • CPU usage • Disk, memory
Estimation Is Inherently Hard • Critical to the success of the schedule but hard to do: • R&D projects may be solving a "new" problem • New technology may use new, "unknown" productivity enhancements • What is the productivity of each developer? (What is an average worker?) • If one portion of the project slips, coordination of resources may throw the schedule off (e.g., equipment shows up and nothing is ready for it).
Estimating Techniques • Experience: what worked before • Works if we keep data • Metrics and measures of success • COCOMO: COnstructive COst MOdel • A quantitative estimation model • Continuously improved and recalibrated • Covers software life cycle costs
Stepwise Estimation: 4 Categories • Decompose the problem into small units (modules) and classify each one:
• Full experience - applications with lots of prior experience; an area of specialty (e.g., an Internet app or DB module); confident you can repeat past success.
• Partial experience - components similar to others you have built (e.g., developing cellular billing with experience in telephone billing); can you identify what is new and what is familiar within the component? Evolutionary steps; risk of miscalculation.
• New development - no significant experience; no reliable basis for estimation.
• Existing components - often well evaluated and tested (e.g., open-source modules, a math library).
Development Estimating Methods • Prototyping - engineering orientation • Reduced functionality • May be able to refine and reuse the prototype • The better the prototype, the more it costs to develop. You typically prototype to "learn" something. It might be requirements, how to implement something, or how to estimate the overall complexity of the module. The usefulness of the prototype (whether it can be reused) often depends on the reason it was built. If you implement a portion of the system to get a better estimate, it might very well be useful later. If you are learning how to work with the hardware…
Development Estimating Methods • Statistical - scientific orientation • The idea is to categorize modules, build a percentage of them, and extrapolate • Some steps: • Identify all new development software components. • Do an initial design of each component: identify the software modules that implement it. • Divide the modules into similar categories, according to: • complexity (degree of difficulty) • function (such as communications, database, human interface, etc.) • type (screen, operating system, library utility, service task, etc.) • From each category, select one module that is representative of the others. • Implement the selected modules. • Based on this experience, estimate the resources required for each category. • Combine the estimates for each category. The more "samples," the better the estimation accuracy, but more samples increase the cost of estimation! (A small extrapolation sketch follows.)
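A minimal sketch of the extrapolation step, assuming each category's representative module has been implemented and its effort measured. The category names, module counts, and sample efforts below are hypothetical.

    # Hypothetical data: for each category, how many modules it contains and
    # the measured effort (person-days) of its representative sample module.
    categories = {
        "communications": {"module_count": 8,  "sample_effort_days": 12.0},
        "database":       {"module_count": 5,  "sample_effort_days": 7.5},
        "user_interface": {"module_count": 11, "sample_effort_days": 4.0},
    }

    def extrapolate_total_effort(categories):
        """Scale each sampled module's effort to the number of modules in its
        category, then sum across categories."""
        total = 0.0
        for name, cat in categories.items():
            category_effort = cat["module_count"] * cat["sample_effort_days"]
            print(f"{name}: ~{category_effort:.1f} person-days")
            total += category_effort
        return total

    print(f"Estimated total: ~{extrapolate_total_effort(categories):.1f} person-days")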
Constructive Cost Models • Any development estimate is difficult • Skill - are all programmers equally skilled? (Consider the small case at the chapter start) • Complexity - how do you determine the complexity of a task? • Size - estimating the size by LOC, number of objects, type of work? • Reliability - highly reliable software is much more difficult • Environment - lots of process and meeting overhead? • Using estimation models • The most basic step is to gather relevant data. • Compare the estimated task and date to the actual completion • There are several different estimation models • COCOMO is the oldest
Constructive Cost Models • COCOMO - Dr. Barry Boehm, USC • 1981 - original COCOMO introduced in Software Engineering Economics. • Uses the previous 5 factors and estimates cost, schedule, and staff size • 1981-1996 - incremental developments • 1997 - calibration of COCOMO II is released by Boehm. • A more OO approach, using object points and function points to determine software size. From informit.com: Barry Boehm is among the most respected names in the software world. A TRW professor of software engineering and director of the USC Center for Software Engineering, he earlier served as director of the DARPA Information Science and Technology Office and as a chief scientist at TRW. His contributions to the field include the Constructive Cost Model (COCOMO), the Spiral Model of the software process, the Theory W (win-win) approach to software management and requirements determination, and his classic book, Software Engineering Economics (Prentice Hall, 1982).
COCOMO Overall Idea • Estimate the project along 5 levels: 1. Level of personnel 2. Level of complexity 3. Project size 4. Development environment 5. Reliability level • Need to gather data and "calibrate" the model over time. • We will overview some major factors in the model at a high level.
1. Level of Personnel • Productivity of developers varies widely (can be 400%). • Performance varies by many factors • As project size increases, the effect of individual performance decreases • PL < 1 is a good PL, while PL > 1 is not good • The personnel level (PL) multiplier is derived from the expected performance rating (assigned between 1 and 4) of each of the N engineers, for a project of a given size in KLOC.
Personnel Level Cost Multipliers • PL estimates the average expected performance of the team. Example projects:
• Project A: 10 KLOC, 3 people - 1 @ level 2, 1 @ level 3, 1 @ level 4
• Project B: 100 KLOC, 35 people - 9 @ level 1, 1 @ level 2, 6 @ level 3, 2 @ level 4
• Project C: 500 KLOC, 500 people - 20 @ level 1, 100 @ level 2, 65 @ level 3, 5 @ level 4
• Note: the predicted costs of A are about 1/3 of B. (A small calculation sketch follows.)
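A minimal sketch of turning a team's level mix into an average personnel level and a PL multiplier. The per-level multiplier values below are hypothetical (the real values come from calibrating the model with your own data), and the assumption that higher-numbered levels are more productive is also mine, not the slide's.

    # Hypothetical multiplier per performance level (1-4); calibrate from your
    # own project history. Here level 4 is assumed most productive (PL < 1 is good).
    LEVEL_MULTIPLIER = {1: 1.3, 2: 1.1, 3: 0.9, 4: 0.7}

    def personnel_level(team):
        """team maps performance level -> head count; returns the head-count
        weighted average level and the corresponding PL effort multiplier."""
        people = sum(team.values())
        avg_level = sum(level * count for level, count in team.items()) / people
        pl = sum(LEVEL_MULTIPLIER[level] * count for level, count in team.items()) / people
        return avg_level, pl

    # Project A from the slide: 3 people, one each at levels 2, 3, and 4.
    print(personnel_level({2: 1, 3: 1, 4: 1}))   # (3.0, 0.9)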
2. Levels of Software Complexity: Consider 4 classes of software complexity, each with its own formula for estimated Software Engineering Months (SEM) from size in KSLOC: • System (operating systems, communications, etc.): SEM = 3.6 x (KSLOC)^1.20 • Algorithmic (logic, scientific, sort): SEM = 3.2 x (KSLOC)^1.15 • Service (utilities, graphics): SEM = 2.8 x (KSLOC)^1.10 • Data Processing (DB, inventory, spreadsheet, etc.): SEM = 2.4 x (KSLOC)^1.05 • These formulas suggest that as size grows beyond 100 KSLOC the classification becomes more important; above 300 KSLOC the difference between classes can exceed 200%. (A quick calculation follows.)
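A quick sketch evaluating these SEM formulas at a few sizes; the coefficients and exponents are taken straight from the slide.

    # SEM = a * (KSLOC ** b), with (a, b) per complexity class as on the slide.
    CLASSES = {
        "system":          (3.6, 1.20),
        "algorithmic":     (3.2, 1.15),
        "service":         (2.8, 1.10),
        "data processing": (2.4, 1.05),
    }

    def sem(ksloc, a, b):
        """Estimated Software Engineering Months for a project of `ksloc` KSLOC."""
        return a * (ksloc ** b)

    for size in (10, 100, 300):
        estimates = {name: round(sem(size, a, b), 1) for name, (a, b) in CLASSES.items()}
        print(size, "KSLOC:", estimates)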
3. Reliability Levels • Reliability is not a cheap requirement • Must be formulated based on experience • Levels are based on the effects of system failure: • Slight inconvenience • Losses easily recovered • Losses moderately difficult to recover • High financial loss • Risk to human life • A reliability function, calibrated for your environment, generates the reliability multiplier.
4. Development Environment Productivity is impacted by DE factors such as: • OS • Programming language • Computer Aided Software Engineering (CASE) • Multipliers can be developed for these levels also
5. Subsystems • Subsystems are components that can be viewed as systems in their own right. • Decomposition can result in subsystems • Alternatives: • Use the model to calculate effort for the system as a whole • Or divide into subsystems and calculate each subsystem's effort separately.
Some Basic COCOMO Steps • Decompose the software into subsystems and then into modules • Use size estimation to estimate each module's size • Determine the effort multipliers • (personnel, size, reliability, development environment, module complexity) • Apply the multipliers to the modules and roll up into module and subsystem effort • Review for missed costs and any missed factors • Have another, independent estimate made. (A roll-up sketch follows.)
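A minimal sketch of the roll-up step, assuming a nominal effort has already been estimated per module; the module names and multiplier values are hypothetical placeholders, not calibrated figures.

    from math import prod

    # Each module: nominal effort (person-months) plus its effort multipliers.
    modules = [
        {"name": "billing_core", "nominal_pm": 6.0,
         "multipliers": {"personnel": 0.9, "reliability": 1.15, "complexity": 1.1}},
        {"name": "report_ui",    "nominal_pm": 3.0,
         "multipliers": {"personnel": 1.1, "reliability": 1.0,  "complexity": 0.9}},
    ]

    def module_effort(m):
        """Adjusted effort = nominal effort x product of all multipliers."""
        return m["nominal_pm"] * prod(m["multipliers"].values())

    for m in modules:
        print(f"{m['name']}: {module_effort(m):.1f} person-months")
    print(f"Subsystem total: {sum(module_effort(m) for m in modules):.1f} person-months")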
Function Point Analysis • There is some controversy about using KLOC as a size/complexity measure (C++ vs. Perl vs. VB code). • FPA is a measure of complexity based on problem size (amount of functionality) • The International Function Point Users Group (IFPUG) exists to promote FPA and software measurement • FPA can be used to • Compare project complexity • Compare effort to complete • Generate other measures such as KLOC • It is used as a component in COCOMO • There are also several software packages that calculate function points
Function Point Analysis • FPA is based on • UFP - unadjusted function points - complexity of the I/O components • CAF - complexity adjustment factor - overall complexity of the project • AFP - adjusted function points
Function Point Analysis - UFP • UFP - a measure of how I/O-dependent the system is • Identify the number of functions that are input/output dependent • E.g., user inputs/inquiries, external data output, control files/data, external shared files/data/control info • Classify these functions and rate them as • Simple (minimal I/O, minimal user involvement) • Average • Complex (many file accesses, many different data types, extensive user involvement) • Weighted values are assigned to each function type, calibrated for your environment. Remember: AFP = CAF x UFP
Function Point Analysis - CAF • CAF - Complexity Adjustment Factor - attributes of processing complexity • Look at the overall problem and estimate the attributes of processing complexity: • Data communication, performance, transaction rate, online data entry, reusability, operational ease, installation ease, and many more • Rate each complexity influence from 0 (none) to 5 (strong influence) • Calculate the CAF. The text suggests one formula might be: CAF = 0.65 + 0.01 x (sum of the influence ratings). (A sketch follows.)
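A minimal sketch of the UFP/CAF/AFP arithmetic, using the 0.65 + 0.01 x (sum of ratings) form of the adjustment mentioned above. The function counts, weights, and ratings are hypothetical; real weights come from an FPA reference or your own calibration.

    # Hypothetical UFP table: (function type, complexity class, count, weight).
    ufp_items = [
        ("user inputs",    "simple",  12, 3),
        ("user inquiries", "average",  5, 4),
        ("external files", "complex",  2, 10),
    ]
    ufp = sum(count * weight for _, _, count, weight in ufp_items)

    # Hypothetical influence ratings (0 = none ... 5 = strong influence).
    ratings = {"data communication": 3, "performance": 4, "online data entry": 2,
               "reusability": 1, "operational ease": 3}
    caf = 0.65 + 0.01 * sum(ratings.values())

    afp = caf * ufp        # adjusted function points
    print(f"UFP={ufp}, CAF={caf:.2f}, AFP={afp:.1f}")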
Range of Estimates • The idea is to ask several engineers to estimate the effort. • The estimates will form a normal distribution (if N is large enough); the shape of the distribution is important. • The idea is to approximate a normal distribution so you have some statistical basis for determining an estimation range. • Can calculate the expected value • Standard deviation: • 1 SD covers about 68% of the data • 2 SD covers about 95% of the data • 3 SD covers about 99.7% of the data • Idea: e.g., 95% confidence that the estimate is between 4 and 9 days
Using the Normal Distribution • Steps would be: 1. Get a solid estimate of the best and worst case 2. Get several estimates of the expected time • Probability of best case = 0.2 • Probability of worst case = 0.2 • Probability of most likely case = 0.6 • Expected value = sum of the probability-weighted estimates: E = 0.2 x best + 0.6 x most likely + 0.2 x worst • Can approximate the standard deviation as SD = P(worst) x worst - P(best) x best = 0.2 x (worst - best), and therefore can calculate the range out to 3 SD
How does that work? Remember: we assign probability 0.2 to the best and worst cases and 0.6 to the most likely case. • Assume the estimates are: • Best case - 4 weeks • Worst case - 12 weeks • Most likely case - 7 weeks • Expected value = 0.2 x 4 + 0.6 x 7 + 0.2 x 12 = 7.4 weeks • Approximate standard deviation = 0.2 x 12 - 0.2 x 4 = 1.6 weeks • Therefore you could say • about 68% confident the effort is between 5.8 and 9.0 weeks (1 SD) • about 95% confident the effort is between 4.2 and 10.6 weeks (2 SD). (A small sketch follows.)
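A minimal sketch of this three-point calculation, using the 0.2 / 0.6 / 0.2 weights and the example numbers from the slide.

    def estimate_range(best, likely, worst):
        """Three-point estimate: probability-weighted expected value and an
        approximate standard deviation, as described on the slide."""
        expected = 0.2 * best + 0.6 * likely + 0.2 * worst
        sd = 0.2 * worst - 0.2 * best          # = 0.2 * (worst - best)
        return expected, sd

    expected, sd = estimate_range(best=4, likely=7, worst=12)   # weeks
    print(f"Expected: {expected} weeks, SD: {sd} weeks")
    print(f"~68% range: {expected - sd:.1f} to {expected + sd:.1f} weeks")
    print(f"~95% range: {expected - 2*sd:.1f} to {expected + 2*sd:.1f} weeks")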
Hardware Resources • Sometimes an application targets specific hardware with a limited CPU • E.g., Palm Pilot, some network switches, real-time software • Sometimes you must design software to work within hardware limitations • We will consider CPU load, data storage, and response time
Hardware Resources - CPU Load • CPU load is difficult to measure with multi-processors, so it is often expressed as a percentage • Choose a specific time frame. • Why? E.g., many systems are on average 10-20% busy; we are concerned about the critical time when the application is running. • Usually real-time systems • For example, a communication system needs: • A main loop that reads 2 data inputs every 400 ms, requiring 60 ms each • Therefore, within 400 ms it consumes 120 ms • A fast loop that reads 2 inputs every 40 ms, requiring 10 ms each; that is, it consumes 20 ms every 40 ms • That would consume 200 ms within 400 ms • So within 400 ms: (120 + 200) / 400, or 80%, is consumed. (See the sketch below.)
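A small sketch of the same utilization arithmetic, generalized to any set of periodic loops; the numbers are the ones from the slide.

    # Each periodic task: (name, period in ms, inputs per period, ms of CPU per input).
    tasks = [
        ("main loop", 400, 2, 60),   # 120 ms of work every 400 ms
        ("fast loop",  40, 2, 10),   #  20 ms of work every  40 ms
    ]

    def cpu_utilization(tasks):
        """Sum each task's busy fraction (work per period / period length)."""
        return sum((inputs * ms_per_input) / period
                   for _, period, inputs, ms_per_input in tasks)

    print(f"CPU load: {cpu_utilization(tasks):.0%}")   # 0.30 + 0.50 = 80%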
Hardware Resources - Data Storage • Sometimes early estimates need to be based on user input • Estimate worst-case memory needs: • Decompose the system and estimate each part's memory needs • Identify indirect memory needs (work areas, etc.) • Identify the largest set of modules resident at any one time • Identify system-level needs (buffers, stack, resident tables) • Total OS memory utilization • Combine them all • Disk usage still comes up from time to time • E.g., disk needs for a DB application: • Establish an estimated number of tables and the variables within each • Estimate the number of records. (A small sizing sketch follows.)
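A minimal sketch of the database disk-sizing idea from the last bullet; the table names, record sizes, record counts, and overhead factor are hypothetical.

    # Hypothetical tables: (bytes per record, expected number of records).
    tables = {
        "customers": (250, 50_000),
        "orders":    (120, 400_000),
        "items":     (80,  1_500_000),
    }

    OVERHEAD = 1.5   # hypothetical allowance for indexes, logs, and free space

    def estimated_disk_bytes(tables, overhead=OVERHEAD):
        """Sum record_size x record_count per table, then apply an overhead factor."""
        raw = sum(size * count for size, count in tables.values())
        return raw * overhead

    print(f"Estimated DB disk need: {estimated_disk_bytes(tables) / 1e9:.2f} GB")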
Hardware Resources - Response Time • System feedback - user perception • Response time interval (event, process, response) • Estimate response times: • Average • Worst-case system usage • 95th percentile • (Table 12.7 in the text gives an example response time distribution table.) (A small sketch follows.)
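A small sketch of deriving those three figures from a set of measured or simulated response times; the sample values are hypothetical.

    import statistics

    # Hypothetical response-time samples in milliseconds.
    samples = [120, 135, 150, 160, 180, 200, 240, 300, 450, 900]

    average = statistics.mean(samples)
    worst_case = max(samples)
    # quantiles with n=20 gives 5% cut points; index 18 is the 95th percentile.
    p95 = statistics.quantiles(samples, n=20, method="inclusive")[18]

    print(f"average={average:.0f} ms, 95th percentile={p95:.0f} ms, worst case={worst_case} ms")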
Nondevelopment Overhead (Table 12.8: Overhead of Nondevelopment Activities) • Update estimates and report continuously • At the end of the design phase • At milestones • Before integration • Before testing • Update when requirements change • Use a safety factor or fudge factor (%) • Project overhead doesn't include senior management or sales. "It is better to withstand the assault on a schedule at the beginning of a project and be applauded when it is completed on time, than to be applauded at the beginning of a project for accepting an impossible schedule and being rebuked at the end for failing to deliver it on time."
Summary • Stepwise Estimation • Prototyping - engineering orientation • Constructive Cost Models • Estimate the project along 5 levels: 1. Level of personnel 2. Level of complexity 3. Project size 4. Development environment 5. Reliability level • Range of Estimates (normal distribution) • Hardware estimates - CPU, memory, disk, response time