Value-Based Software Engineering: Motivation and Key Practices. Barry Boehm, USC CS 510 Lecture, Fall 2011 ©USC-CSSE
Outline • Value-based software engineering (VBSE) motivation, examples, and definitions • VBSE key practices • Benefits realization analysis • Stakeholder Win-Win negotiation • Business case analysis • Continuous risk and opportunity management • Concurrent system and software engineering • Value-based monitoring and control • Change as opportunity • Conclusions and references ©USC-CSSE
Software Testing Business Case • Vendor proposition • Our test data generator will cut your test costs in half • We’ll provide it to you for 30% of your test costs • After you run all your tests for 50% of your original cost, you are 20% ahead • Any concerns with vendor proposition? ©USC-CSSE
Software Testing Business Case • Vendor proposition • Our test data generator will cut your test costs in half • We’ll provide it to you for 30% of your test costs • After you run all your tests for 50% of your original cost, you are 20% ahead • Any concerns with vendor proposition? • 34 reasons in 2004 ABB experience paper • Unrepresentative test coverage, too much output data, lack of test validity criteria, poor test design, instability due to rapid feature changes, lack of preparation and experience (automated chaos yields faster chaos), … • C. Persson and N. Yilmazturk, Proceedings, ASE 2004 • But one more significant reason ©USC-CSSE
Software Testing Business Case • Vendor proposition • Our test data generator will cut your test costs in half • We’ll provide it to you for 30% of your test costs • After you run all your tests for 50% of your original cost, you are 20% ahead • Any concerns with vendor proposition? • Test data generator is value-neutral* • Assumes every test case, defect is equally important • Usually, 20% of test cases cover 80% of business case * As are most current software engineering techniques ©USC-CSSE
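In the vendor's arithmetic, an original test budget C becomes 0.3C for the generator plus 0.5C to run the tests, or 0.8C in total, hence "20% ahead"; the catch, shown on the next two charts, is that this only pays off if every test is equally worth running.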
[Chart] 20% of Features Provide 80% of Value: Focus Testing on These (Bullock, 2000). Percent of value for correct customer billing vs. customer type; an automated test generation tool treats all tests as having equal value. ©USC-CSSE
[Chart] Value-Based Testing Provides More Net Value. Net value NV vs. percent of tests run: value-based testing peaks near (30, 58), while the test data generator ends near (100, 20) after all tests are run. ©USC-CSSE
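A minimal sketch of how those two curves arise, with made-up numbers that only mimic the 80/20 value skew above (none of them are the Bullock data): running the highest-value tests first pushes net value to a peak long before all tests are run, while a value-neutral tool accrues value at the average rate per test.

```python
# Illustrative sketch only: value-based vs. value-neutral test ordering.
# All numbers are hypothetical; they just mimic an 80/20 value distribution.

def net_value_curve(per_test_values, cost_per_test):
    """Cumulative (value gained - cost spent) as tests are run in the given order."""
    total, curve = 0.0, []
    for i, v in enumerate(per_test_values, start=1):
        total += v
        curve.append(total - i * cost_per_test)
    return curve

values = [8.0] * 10 + [0.22] * 90   # ~80% of the value sits in 10% of the tests
cost_per_test = 0.8                  # running all 100 tests costs 80 value units

# Value-based testing: run the highest-value tests first.
value_based = net_value_curve(sorted(values, reverse=True), cost_per_test)

# Value-neutral testing (e.g., auto-generated tests treated as equally important):
# value accrues at the average rate per test.
average = sum(values) / len(values)
value_neutral = net_value_curve([average] * len(values), cost_per_test)

print("value-based net value after 30% of tests:", round(value_based[29], 1))   # peaks early
print("net value after running all 100 tests:   ", round(value_neutral[99], 1)) # much lower
```

With these invented figures the value-based peak lands near (30, 60) and the run-everything endpoint near (100, 20), roughly the shape of the chart above.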
Value-Based Reading (VBR) Experiment (Keun Lee, ISESE 2005) • Group A: 15 IV&V personnel using VBR procedures and checklists • Group B: 13 IV&V personnel using previous value-neutral checklists • The value-neutral checklists yielded significantly higher numbers of trivial typo and grammar faults ©USC-CSSE
Motivation for Value-Based SE • Current SE methods are basically value-neutral • Every requirement, use case, object, test case, and defect is equally important • Object-oriented development is a logic exercise • “Earned Value” systems don’t track business value • Separation of concerns: SE’s job is to turn requirements into verified code • Ethical concerns separated from daily practices • Value-neutral SE methods are increasingly risky • Software decisions increasingly drive system value • Corporate adaptability to change achieved via software decisions • System value-domain problems are the chief sources of software project failures ©USC-CSSE
Why Software Projects Fail ©USC-CSSE
The “Separation of Concerns” Legacy • “The notion of ‘user’ cannot be precisely defined, and therefore has no place in CS or SE.” - Edsger Dijkstra, ICSE 4, 1979 • “Analysis and allocation of the system requirements is not the responsibility of the SE group but is a prerequisite for their work” - Mark Paulk et al., SEI Software CMM* v.1.1, 1993 *Capability Maturity Model ©USC-CSSE
[Cartoon] Resulting Project Social Structure: the software group, wondering “when they'll give us our requirements,” sits alongside the Aero., Elec., G&C, Mgmt., Mfg., Comm., and Payload groups. ©USC-CSSE
[Chart] 20% of Fires Cause 80% of Property Loss: Focus Fire Dispatching on These? Percent of property loss vs. percent of fires. ©USC-CSSE
Missing Stakeholder Concerns:Fire Dispatching System • Dispatch to minimize value of property loss • Neglect safety, least-advantaged property owners • English-only dispatcher service • Neglect least-advantaged immigrants • Minimal recordkeeping • Reduced accountability • Tight budget; design for nominal case • Neglect reliability, safety, crisis performance ©USC-CSSE
Key Definitions • Value (from Latin “valere”, to be worth) • A fair return or equivalent in goods, services, or money • The monetary worth of something • Relative worth, utility, or importance • Software validation (also from Latin “valere”) • Validation: Are we building the right product? • Verification: Are we building the product right? ©USC-CSSE
Conclusions So Far • Value considerations are software success-critical • “Success” is a function of key stakeholder values • Risky to exclude key stakeholders • Values vary by stakeholder role • Non-monetary values are important • Fairness, customer satisfaction, trust • Value-based approach integrates ethics into daily software engineering practice ©USC-CSSE
Outline • Value-based software engineering (VBSE) motivation, examples, and definitions • VBSE key practices • Benefits realization analysis • Stakeholder Win-Win negotiation • Business case analysis • Continuous risk and opportunity management • Concurrent system and software engineering • Value-based monitoring and control • Change as opportunity • Conclusions and references ©USC-CSSE
DMR/BRA* Results Chain • Assumption: Order to delivery time is an important buying criterion • Initiative: Implement a new order entry system • Contribution: Reduce time to process order • Intermediate outcome: Reduced order processing cycle • Contribution: Reduce time to deliver product • Outcome: Increased sales *DMR Consulting Group’s Benefits Realization Approach ©USC-CSSE
The Model-Clash Spider Web: Master Net - Stakeholder value propositions (win conditions) ©USC-CSSE
Projecting Yourself Into Others’ Win Situations • Counterexample: The Golden Rule • Do unto others … as you would have others do unto you ©USC-CSSE
Projecting Yourself Into Others’ Win Situations • Counterexample: The Golden Rule • Do unto others … as you would have others do unto you • In the computer science world (compilers, OS, etc.), users are programmers • So we build computer systems to serve users and operators … assuming users and operators like to write programs and know computer science • But in the applications world, users are pilots, doctors, tellers ©USC-CSSE
EasyWinWin OnLine Negotiation Steps ©USC-CSSE
Red cells indicate lack of consensus. Oral discussion of cell graph reveals unshared information, unnoticed assumptions, hidden issues, constraints, etc. ©USC-CSSE
[Chart] Example of Business Case Analysis. ROI = Present Value[(Benefits − Costs)/Costs], plotted as return on investment vs. time for Option A and Option B. ©USC-CSSE
[Chart] Example of Business Case Analysis. ROI = Present Value[(Benefits − Costs)/Costs], plotted as return on investment vs. time for Option A, Option B, and Option B-Rapid. ©USC-CSSE
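A minimal sketch of the ROI formula on these charts, assuming simple end-of-year cash flows and a discount rate (all figures are hypothetical, not the options plotted above):

```python
# Hypothetical sketch of ROI = Present Value[(Benefits - Costs) / Costs]
# for two investment options; cash flows and discount rate are made up.

def present_value(cash_flows, discount_rate):
    """Discount yearly amounts (year 1, 2, ...) back to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

def roi(benefits, costs, discount_rate):
    pv_benefits = present_value(benefits, discount_rate)
    pv_costs = present_value(costs, discount_rate)
    return (pv_benefits - pv_costs) / pv_costs

rate = 0.08
option_a = {"costs": [100, 20, 20, 20], "benefits": [40, 120, 130, 140]}  # modest, early payoff
option_b = {"costs": [250, 60, 30, 30], "benefits": [0, 150, 350, 450]}   # bigger, later payoff

for name, opt in (("Option A", option_a), ("Option B", option_b)):
    print(name, "ROI:", round(roi(opt["benefits"], opt["costs"], rate), 2))
```

The charts plot ROI cumulatively over time; the same functions evaluated on growing prefixes of the cash-flow lists would trace those curves, showing why an option with a larger, later payoff can start below zero and still end up ahead.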
[Charts] Examples of Utility Functions: Response Time. Value vs. time curves for: Mission Planning, Competitive Time-to-Market; Real-Time Control, Event Support (critical region); Event Prediction (Weather, Software Size); Data Archiving, Priced Quality of Service. ©USC-CSSE
[Chart] How Much Testing is Enough? Risk due to low dependability shown for Early Startup, Commercial, and High Finance cases, together with risk due to market share erosion; the combined curves define a Sweet Spot. ©USC-CSSE
Value-Based Defect Reduction Example: Goal-Question-Metric (GQM) Approach • Goal: Our supply chain software packages have too many defects; we need to get their defect rates down • Question: ? ©USC-CSSE
Value-Based GQM Approach – I • Q: How do software defects affect system value goals? Ask why the initiative is needed • Problem areas: too much downtime on the order-processing operations critical path; too many defects in operational plans; too many new-release operational problems • G: New system-level goal: decrease software-defect-related losses in operational effectiveness, with the high-leverage problem areas above as specific subgoals • New Q: ? ©USC-CSSE
Value-Based GQM Approach – II • New Q: Perform system problem-area root cause analysis: ask why problems are happening, via models • Example: downtime on the order-processing critical path (order, validate order, validate items in stock, schedule packaging and delivery, prepare delivery packages, produce status reports, deliver items) • Where are the primary software-defect-related delays? Where are the biggest improvement-leverage areas? • Reducing software defects in the Scheduling module • Reducing non-software order-validation delays • Taking Status Reporting off the critical path • Downstream, getting a new Web-based order entry system • Ask “why not?” as well as “why?” ©USC-CSSE
Value-Based GQM Results • Defect tracking weighted by system-value priority • Focuses defect removal on highest-value effort • Significantly higher effect on bottom-line business value, and on customer satisfaction levels • Engages software engineers in system issues • Fits increasing system-criticality of software • Strategies often helped by quantitative models (COQUALMO, iDAVE) ©USC-CSSE
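A minimal sketch of what "defect tracking weighted by system-value priority" might look like, assuming each logged defect carries a hypothetical business-impact estimate and fix cost (neither field is prescribed by the slide):

```python
# Hypothetical sketch: order defect removal by business impact per unit of fix effort,
# instead of treating every defect as equally important.

defects = [
    # (id, estimated business impact if unfixed, estimated fix cost) -- made-up figures
    ("D-101", 90.0, 3.0),   # Scheduling-module defect on the operations critical path
    ("D-102",  5.0, 1.0),   # cosmetic report-formatting defect
    ("D-103", 60.0, 6.0),   # intermittent downtime in order validation
    ("D-104",  2.0, 0.5),   # typo on a help screen
]

# Value-based ordering: highest impact per unit of fix effort first.
by_value = sorted(defects, key=lambda d: d[1] / d[2], reverse=True)

cumulative_impact = 0.0
for defect_id, impact, cost in by_value:
    cumulative_impact += impact
    print(f"{defect_id}: impact/cost = {impact / cost:5.1f}, "
          f"cumulative impact addressed = {cumulative_impact}")
```

Weighting the backlog this way is what focuses removal effort on the highest-value defects first; quantitative models such as COQUALMO and iDAVE, mentioned above, can help calibrate such estimates.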
Outline • Value-based software engineering (VBSE) motivation, examples, and definitions • VBSE key practices • Benefits realization analysis • Stakeholder Win-Win negotiation • Business case analysis • Continuous risk and opportunity management • Concurrent system and software engineering • Value-based monitoring and control • Change as opportunity • Conclusions and references ©USC-CSSE
Is This A Risk? • We just started integrating the software • and we found out that COTS* products A and B just can’t talk to each other • We’ve got too much tied into A and B to change • Our best solution is to build wrappers around A and B to get them to talk via CORBA** • This will take 3 months and $300K • It will also delay integration and delivery by at least 3 months *COTS: Commercial off-the-shelf **CORBA: Common Object Request Broker Architecture ©USC-CSSE
Is This A Risk? • We just started integrating the software and we found out that COTS products A and B just can’t talk to each other • We’ve got too much tied into A and B to change • No, it is a problem, being dealt with reactively • Risks involve uncertainties and can be dealt with proactively • Earlier, this problem was a risk ©USC-CSSE
Earlier, This Problem Was A Risk • A and B are our strongest COTS choices • But there is some chance that they can’t talk to each other • Probability of loss P(L) • If we commit to using A and B • And we find out in integration that they can’t talk to each other • We’ll add more cost and delay delivery by at least 3 months • Size of loss S(L) • We have a risk exposure of RE = P(L) * S(L) ©USC-CSSE
How Can Risk Management Help You Deal With Risks? • Buying information • Risk avoidance • Risk transfer • Risk reduction • Risk acceptance ©USC-CSSE
Risk Management Strategies: Buying Information • Let’s spend $30K and 2 weeks prototyping the integration of A’s and B’s HCIs • This will buy information on the magnitude of P(L) and S(L) • If RE = P(L) * S(L) is small, we’ll accept and monitor the risk • If RE is large, we’ll use one or more of the other strategies ©USC-CSSE
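A minimal sketch of the buying-information decision on this slide, with hypothetical probabilities and losses: the $30K prototype is worth it only if the risk exposure it removes exceeds its cost.

```python
# Hypothetical sketch: risk exposure RE = P(L) * S(L) and the value of buying information.
# All probabilities and dollar figures below are made up for illustration.

def risk_exposure(p_loss, size_loss):
    return p_loss * size_loss

# Committing to A and B with no further information:
p_incompatible = 0.4            # assumed chance that A and B can't talk to each other
wrapper_loss = 300_000          # wrapper cost if they can't (schedule impact ignored here)
re_commit = risk_exposure(p_incompatible, wrapper_loss)           # 120,000

# Buying information: a $30K, 2-week prototype reveals whether they are compatible,
# letting us switch to a cheaper mitigation (say, COTS product C) in the bad case.
prototype_cost = 30_000
loss_if_forewarned = 80_000     # assumed cost of the fallback, chosen before commitment
re_with_info = risk_exposure(p_incompatible, loss_if_forewarned)  # 32,000

value_of_information = re_commit - re_with_info
print("RE if we just commit:       ", re_commit)
print("RE after buying information:", re_with_info)
print("Prototype worth its cost?   ", value_of_information > prototype_cost)
```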
Other Risk Management Strategies • Risk Avoidance • COTS product C is almost as good as B, and its HCI is compatible with A • Delivering on time is worth more to the customer than the small performance loss • Risk Transfer • If the customer insists on using A and B, have them establish a risk reserve, to be used to the extent that A and B have incompatible HCIs to reconcile • Risk Reduction • If we build the wrapper right now, we add cost but minimize the schedule delay • Risk Acceptance • If we can solve the A and B HCI compatibility problem, we’ll have a big competitive edge on future procurements • Let’s do this on our own money, and patent the solution ©USC-CSSE
Is Risk Management Fundamentally Negative? • It usually is, but it shouldn’t be • As illustrated in the Risk Acceptance strategy, it is equivalent to Opportunity Management: Opportunity Exposure OE = P(Gain) * S(Gain) = Expected Value • Buying information and the other risk strategies have their opportunity counterparts • P(Gain): Are we likely to get there before the competition? • S(Gain): How big is the market for the solution? ©USC-CSSE
What Else Can Risk Management Help You Do? • Determine “How much is enough?” for your products and processes • Functionality, documentation, prototyping, COTS evaluation, architecting, testing, formal methods, agility, discipline, … • What’s the risk exposure of doing too much? • What’s the risk exposure of doing too little? • Tailor and adapt your life cycle processes • Determine what to do next (specify, prototype, COTS evaluation, business case analysis) • Determine how much of it is enough • Examples: Risk-driven spiral model and extensions (win-win, anchor points, RUP, MBASE, CeBASE Method) • Get help from higher management • Organize management reviews around top-10 risks ©USC-CSSE
Example Large-System Risk Analysis: How Much Architecting is Enough? • Large system involves subcontracting to over a dozen software/hardware specialty suppliers • Early procurement package means early start • But later delays due to inadequate architecture • And resulting integration rework delays • Developing thorough architecture specs reduces rework delays • But increases subcontractor startup delays ©USC-CSSE
[Chart] How Soon to Define Subcontractor Interfaces? Risk exposure RE = Prob(Loss) * Size(Loss); loss due to rework delays plotted against time spent defining and validating architecture: little architecting means many interface defects (high P(L)) and critical interface defects (high S(L)); more architecting means few interface defects (low P(L)) and minor ones (low S(L)). ©USC-CSSE
[Chart] How Soon to Define Subcontractor Interfaces? Two risk-exposure curves vs. time spent defining and validating architecture: loss due to rework delays (many/critical interface defects at low architecting effort, few/minor ones at high effort) and loss due to late subcontractor startups (few/short delays at low effort, many/long delays at high effort). ©USC-CSSE
[Chart] How Soon to Define Subcontractor Interfaces? Sum of the two risk exposures (rework delays plus late subcontractor startups) vs. time spent defining and validating architecture; the minimum of the summed curve is the Sweet Spot. ©USC-CSSE
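A minimal sketch of the sweet-spot calculation behind these charts, using invented curves: risk exposure from rework falls as architecting time grows, risk exposure from late subcontractor startups rises, and the sweet spot is wherever their sum is smallest.

```python
# Hypothetical sketch of the architecting sweet spot: total risk exposure is the sum of
# RE(rework from interface defects) and RE(late subcontractor startups).
# Both curves below are made up for illustration; only their shapes matter.

def re_rework(weeks):
    # Little architecting: many, critical interface defects (high P(L), high S(L)).
    p_loss = max(0.05, 0.9 - 0.04 * weeks)
    size_loss = max(100, 1000 - 30 * weeks)
    return p_loss * size_loss

def re_late_startup(weeks):
    # More architecting: more and longer subcontractor startup delays.
    p_loss = min(0.95, 0.05 + 0.045 * weeks)
    size_loss = 40 * weeks
    return p_loss * size_loss

totals = {w: re_rework(w) + re_late_startup(w) for w in range(0, 31)}
sweet_spot = min(totals, key=totals.get)
print("sweet spot (weeks spent on architecture):", sweet_spot)
print("total risk exposure at the sweet spot:   ", round(totals[sweet_spot], 1))
```

Scaling up the rework curve (more subcontractors means many more interfaces, so higher P(L) and S(L)) moves the minimum toward more architecting time, which is the Many-Subs vs. Mainstream comparison on the next chart.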
[Chart] How Soon to Define Subcontractor Interfaces? With very many subcontractors there are many more interfaces, so P(L) and S(L) are higher and the Many-Subs Sweet Spot sits at more architecting time than the Mainstream Sweet Spot. ©USC-CSSE
[Chart] How Much Architecting Is Enough? A COCOMO II analysis: percent of project schedule devoted to initial architecture and risk resolution vs. total % added schedule (including added schedule devoted to rework, the COCOMO II RESL factor), for 10 KSLOC, 100 KSLOC, and 10,000 KSLOC projects, each with its own Sweet Spot; drivers: rapid change moves the sweet spot leftward, high assurance moves it rightward. ©USC-CSSE
Outline • Value-based software engineering (VBSE) motivation, examples, and definitions • VBSE key practices • Benefits realization analysis • Stakeholder Win-Win negotiation • Business case analysis • Continuous risk and opportunity management • Concurrent system and software engineering • Value-based monitoring and control • Change as opportunity • Conclusions and references ©USC-CSSE
[Chart] Sequential Engineering Neglects Risk. Cost vs. response time (1 to 5 sec) for two architectures: Arch. A (custom, many cache processors, $100M) and Arch. B (modified client-server, $50M), shown against the original spec and the requirement after prototyping. ©USC-CSSE
Change As Opportunity: Agile Methods • Continuous customer interaction • Short value-adding increments • Tacit interpersonal knowledge (stories, planning game, pair programming) vs. explicit documented knowledge that is expensive to change • Simple design and refactoring vs. Big Design Up Front ©USC-CSSE