
What is SAS?

Discover what SAS (NASA's Software Assurance Symposium) is and how it supports process improvement efforts. Explore best practices, proven techniques, and tools that can help optimize software development projects, with insights from industry experts and researchers.




Presentation Transcript


  1. What is SAS? Tim Menzies, West Virginia University, tim@menzies.us

  2. What is SAS? • SAS is a zoo? • Don’t feed the funny animals, er, researchers

  3. What else is SAS? • A spectator blood sport? • Come see the dueling paradigms? Process improvement or death! Let none deny us our formal methods!

# scored projects: 2003: 34; 2004: 48 • The research infusion team: defects found during the initial training session!

  5. SAS recognizes good research • Grade = “A” • Candidates for “best project” award • Penetration factor = 9

  6. Anything else happens at SAS? • Where OSMA looks for new answers • So many tools… how do they compare? • When won’t they work? (external validity) • [Figure: cost vs. benefit of methods 1-3 across projects 1-N]

  7. VxWorks@NASA / CASPER@JPL • Q: what are our best answers? Examples of proven “best practices”? • Welcome to “knowledge elicitation by irritation” • Here are some example “best practices” (Timm’s views only) • Your homework (for Day 3): what would you add? • E.g. #1: Attach research to commonly used platforms • e.g. MDP (Chapman, Galaxy Global; Menzies/Cukic, WVU) http://mpd.ivv.nasa.gov • e.g. VxWorks (Beims, SAIC) • e.g. CASPER (Smith, JPL; Offutt, Interface & Control Systems Inc) • VxWorks@NASA examples: Stardust; PROBA (ESA); Mars Odyssey; X-38 (space station “lifeboat”); RHESSI (Reuven Ramaty High-Energy Solar Spectroscopic Imager); Swift (Gamma Ray Observatory; RHESSI heritage); GLAST (Gamma-ray Large Area Space Telescope; Swift heritage); Mars Pathfinder Rover; Mars Exploration Rovers; etc. • CASPER@JPL examples: Autonomous Spacecraft - 3C3; Autonomous Spacecraft - TS-21; Rover Sequence Generation; Distributed Rovers; CLEaR (Closed Loop Execution and Recovery); etc.

  8. Other example “best practices”? • E.g. #2: Process maturity reduces the amount of avoidable rework • Says: [Shull02] • If so: then demand higher levels of process maturity • E.g. #3: Peer reviews catch >= 50% of defects • Says: [Shull02]; [SEI03]: SEI workshop on software risk at NASA, Pittsburgh, 2003; [McConnell00]: “The Best Influences on Software Engineering”, Boasson, Billinger, Card, Cochran, Ebert, Glass, Ishida, Mead, Mello, Moitra, Strigel, Wiegers, IEEE Software, Jan 2000 • If so: then demand peer reviews on software artifacts • Already in NPR 7150.x (SWE-097, SWE-098) • Fuel for thought: are any of our great tools more cost-effective than “mere” manual peer reviews?

  9. Yet more examples of “best practices”? • Low-cost defect detection methods • E.g. #4: Thrashing • Just crank it up and let it rip • Berens, GRC; Powell, JPL • E.g. #5: V&V of SQA via static defect measures • Menzies, WVU • Detectors stable across multiple NASA projects • Sampling policy to check where else to place your effort • E.g. #6: Temporal queries over control/data-flows in C programs • Beims, SAIC (tool = CodeSurfer) • Why low cost? No need to abstract code to a formal model. • Other important ideas: • E.g. #7: Bidirectional traceability matrix between requirements, test cases, code modules • Hayes, UK • E.g. #8: Automated test suites
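The bidirectional traceability matrix of E.g. #7 can be sketched in a few lines. This is a minimal illustration, not the tool from the talk; the artifact names (REQ-1, TEST-1, mod_nav.c) and the class interface are invented for the example.

```python
# Minimal sketch of a bidirectional traceability matrix (E.g. #7).
# Artifact names and API are hypothetical, for illustration only.

class TraceMatrix:
    """Links requirements <-> test cases <-> code modules, queryable both ways."""

    def __init__(self):
        self.links = set()  # (src, dst) pairs, stored in both directions

    def link(self, a, b):
        self.links.add((a, b))
        self.links.add((b, a))  # bidirectional: store the reverse link too

    def trace(self, item):
        """Everything directly linked to `item`, in either direction."""
        return sorted(dst for src, dst in self.links if src == item)

    def orphans(self, items):
        """Artifacts with no links at all, e.g. untested requirements."""
        linked = {src for src, _ in self.links}
        return [i for i in items if i not in linked]

m = TraceMatrix()
m.link("REQ-1", "TEST-1")
m.link("TEST-1", "mod_nav.c")
m.link("REQ-2", "TEST-2")

m.trace("TEST-1")              # -> ['REQ-1', 'mod_nav.c']
m.orphans(["REQ-1", "REQ-3"])  # -> ['REQ-3']
```

Because every link is stored in both directions, the same query answers “which tests cover this requirement?” and “which requirement does this test trace to?”, and the orphan check flags gaps in coverage.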

  10. Hence, SAS ’04 • Day 1, 2: • Morning: 1 track; executive summaries (short) • Afternoon: 5 parallel tracks; all the gory details (longer briefings) • Day 3: • Morning: 1 track; report back from tracks/discussions • Lunch: your table lists its 3 best “best practices” • Afternoon: 1 track; build “the” list of best practices; leave early! • A good summary attracts an audience to the afternoon sessions • Afternoon sessions: it is vital that they start and end at advertised times (so folks can jump between them) • So, if you run over time, STOP! If you run under time, WAIT!

  11. Slides = 15 to 20 minutes • Good slides generate lively discussion • After the slides, 20 minutes of discussion • Topic: “In the future, what to do more? What to do less?” • Important note: the report-back material can cover MORE than just the SAS-presented work; the SAS work is a set of examples from a field, and the report back should try to sketch that field • Day 3 report-back slides; any or all of: • Brief notes on the field, e.g. potential benefits to NASA; the high-water mark in this area • Brief notes on the track presentations • Readiness and guidelines, e.g. technology readiness levels for various tools (see next slide) and methodological guidelines • Opportunities, potentials, and road-blocks, e.g. any factors inhibiting this work? Standard traps, costs and benefits, limits to the technologies, hot topics, open issues • Key players, e.g. NASA groups, university research groups, or commercial companies that can support this kind of thing • War stories, e.g. projects that have used this stuff successfully • Where to find more information, e.g. tutorials, manuals, landmark papers; supply URLs if possible

  12. A plea for more empiricism • From NPR 7150.x: “The requirements in this NPR are easily traceable to … proven NASA experience in software engineering.” • “This idea isn’t even false” -- Niels Bohr • So what is true about engineering quality software?

  13. More pleas for more empiricism • Delphi statement: “I think, they think” • Empirical statement: “I saw, they saw” • Claudius Galenus of Pergamum (131-201 AD) wrote the definitive 2nd-century anatomy text • Galen’s authority dominated medicine all the way to the 16th century: the experimenter’s disciples did not bother to experiment, and studies of physiology and anatomy stopped, since Galen had already written about everything • Note: Galen mainly dissected animals, not humans • 1300 years! Galen’s views prevailed until Andreas Vesalius (1514-1564) had the gall (pun intended) to descend into the dissection pit and perform his own anatomical experiments, and there he found that much of Galen’s writings were actually wrongings

  14. empiricism@sarp.nasa.gov • Process conclusions • Q: When does cost estimation get accurate? A: After a dozen projects • Demonstrably adequate; repeatable (data sets on the web); refutable: a better cost-estimation method would tune faster and decrease variance faster • Learning software cost estimation models (Menzies, WVU): 30 repeats, shuffling the order of 60 NASA projects; plot of estimation variance vs. projects seen • Learning an aircraft controller: max/min confidence interval in a learner; failure case: loss of left stabilizer, 50% missing surface; Lyapunov stability analysis and on-line monitoring (Cukic, WVU + an ISR collaboration, “A Methodology for V&V of Neural Networks”) • See more! Learn more! Tell more!
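The slide’s refutability test (shuffle the project order, repeat 30 times, and watch estimation variance shrink as training data grows) can be sketched roughly as follows. The toy linear cost model and the synthetic data are assumptions for illustration; the slide’s experiment used 60 real NASA projects.

```python
# Rough sketch of the refutability experiment: shuffle the project list,
# train a toy cost model on growing prefixes, and measure the spread of
# its test-set errors. The "effort = rate * size" model and the synthetic
# data below are assumptions; the original study used real NASA projects.
import random
import statistics

random.seed(1)
# Synthetic stand-in for 60 projects: (size_kloc, actual_effort_months)
projects = [(s, 2.5 * s + random.gauss(0, 5)) for s in range(1, 61)]

def fit_rate(train):
    """Toy estimator: a single productivity rate = mean(effort / size)."""
    return statistics.mean(e / s for s, e in train)

def error_spread(n_train, shuffles=30):
    """Std-dev of test errors when training on n_train projects, repeated
    over several random shuffles of the project order."""
    errs = []
    for _ in range(shuffles):
        shuffled = random.sample(projects, len(projects))
        train, test = shuffled[:n_train], shuffled[n_train:]
        rate = fit_rate(train)
        errs += [abs(e - rate * s) for s, e in test]
    return statistics.stdev(errs)

# A better estimator would make this spread settle after fewer projects:
for n in (3, 12, 30):
    print(n, round(error_spread(n), 2))
```

The point of the sketch is the experimental design, not the model: because the procedure is repeatable and its variance measurable, the claim “accurate after a dozen projects” becomes refutable, exactly as the slide argues.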
