Are the days of textual requirements numbered? (e.g., by “model based”)
(Textual) Requirements Assistant*

WHAT:
- Hayes: works on repositories of textual documents (requirements, designs, failure reports, …). Scale: readily works with CM1 (in MDP).
- Nikora: works on repositories of textual requirements. Scale: studies of 1300+ requirements.

WHY:
- Automate onerous yet important tasks related to requirements tracing.
- RETRO tool – coming soon to the SARP site!

HOW:
- Hayes: well-chosen, modest linguistic processing; trainable from user inputs; thorough, thoughtful metrics. (A toy tracing sketch follows below.)
- Nikora: scripts to automate search (pattern-based), closure (of the tracing graph), and comparison (of result sets); identifies the frequency of various expressive styles.

* akin to the “Apprentice” name used by Rich, Waters & Reubenstein
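The slides do not spell out the linguistic processing behind a tracing assistant like RETRO, so the following is only a minimal sketch of the general idea, assuming a standard TF-IDF vector-space model with cosine similarity to propose candidate trace links. All inputs, names, and the threshold are illustrative, not taken from any actual NASA repository.

import math
from collections import Counter

def tf_idf_vectors(docs):
    """Turn each document into a sparse TF-IDF vector (dict: term -> weight)."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    num = sum(u[t] * v[t] for t in set(u) & set(v))
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def trace(requirements, designs, threshold=0.05):
    """Propose candidate (requirement, design) trace links, best first.

    The threshold is an assumed, arbitrary cutoff; a real tool would be
    trained from user feedback on accepted/rejected links.
    """
    vecs = tf_idf_vectors(requirements + designs)   # one shared vocabulary
    req_vecs, des_vecs = vecs[:len(requirements)], vecs[len(requirements):]
    links = [(r, d, cosine(rv, dv))
             for r, rv in enumerate(req_vecs)
             for d, dv in enumerate(des_vecs)]
    return sorted((l for l in links if l[2] >= threshold),
                  key=lambda l: -l[2])

reqs = ["the arbiter shall grant the bus to one client at a time"]
dsgn = ["bus arbiter grant logic selects one requesting client per cycle",
        "telemetry formatter packs sensor frames"]
print(trace(reqs, dsgn))   # links requirement 0 to design element 0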
Are models really going to be as prevalent as we think? (…but am *I* too old to learn?)
Analyst’s Assistant

WHAT:
- Menzies: where models exist, subsumes “heuristic” activities (debugging, diagnosis, optimization, tuning, …). Scale: no fear! Adoptability: taught to students.
- Powell: property validation of designs expressed in a formal model, including C code. Scale: MER arbiter. Adoptability: 3rd-party experimenter.

WHY:
- Analysis results for the masses – timely, scalable, meaningful.

HOW:
- Menzies: random search; simple yet expressive language; underlying language mechanism. (A toy random-search sketch follows below.)
- Powell: utilizing LURCH (David Owen), in turn based on Menzies’ ideas; the bugs’ connection to the code is obvious.
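Neither LURCH’s input language nor its search procedure is detailed on the slide; the sketch below assumes the simplest possible setting: a random walk over an explicit state graph that reports any path reaching a property-violating state. The tiny "arbiter" example is illustrative only, not the MER model.

import random

def random_search(initial, transitions, violates, runs=1000, depth=100):
    """Random walk over a state graph; return a path to a violating state, or None.

    transitions: dict mapping state -> list of successor states
    violates:    predicate marking property-violating states
    """
    for _ in range(runs):
        state, path = initial, [initial]
        for _ in range(depth):
            successors = transitions.get(state, [])
            if not successors:      # deadlock: nothing enabled from here
                break
            state = random.choice(successors)
            path.append(state)
            if violates(state):
                return path         # counterexample trace, readable as-is
    return None                     # nothing found -- NOT a proof of correctness

# Toy arbiter in which mutual exclusion can be broken.
ts = {"idle": ["c1_holds", "c2_holds"],
      "c1_holds": ["idle", "both_hold"],
      "c2_holds": ["idle", "both_hold"],
      "both_hold": []}
print(random_search("idle", ts, lambda s: s == "both_hold"))

Random search gives up completeness in exchange for scale ("Scale: no fear!"): it needs no stored state space, so very large models cost only time, and any violation it finds comes with a concrete trace back to the design.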
Systems, Interfaces, COTS
Designer’s Assistant

WHAT:
- Whittle: assists in the progression of requirements, use cases, scenarios, and design. Applicability: event-driven systems, UML. Scale: 10 dense pages of requirements, 40 scenarios.
- Beimes: model checking of interfaces (proper use of: OS calls, sequencing, synchronization, commanding). Applicability: interfaces, including those of COTS. Scale: CM1 study; VxWorks study results imminent.

WHY:
- Help do/check design.

HOW:
- Whittle: prioritize among use cases; elicit off-nominal scenarios; form relationships among scenarios; generalize scenarios to state machines.
- Beimes: model checking, static-analysis platform; control-flow graph; properties easily expressed (C-like language, plus standard templates). (A toy template-checking sketch follows below.)
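The property language and checking platform are not shown on the slide. As an assumed illustration, the sketch below encodes one "standard template" (balanced semTake/semGive, VxWorks-style names) as a small automaton and runs it over call sequences; in a real tool those sequences would be enumerated from the control-flow graph rather than supplied by hand.

# One interface-usage rule expressed as a small automaton:
# (state, observed call) -> next state; "error" marks a violation.
PROPERTY = {
    ("released", "semTake"): "held",
    ("held", "semGive"): "released",
    ("released", "semGive"): "error",   # give without a matching take
    ("held", "semTake"): "error",       # double take on a binary semaphore
}

def check_path(calls, start="released"):
    """Run one call sequence (e.g., one CFG path) through the automaton."""
    state = start
    for i, call in enumerate(calls):
        state = PROPERTY.get((state, call), state)  # unrelated calls ignored
        if state == "error":
            return f"violation at call {i}: {call}"
    return "ok" if state == start else "warning: semaphore still held at exit"

print(check_path(["semTake", "msgQSend", "semGive"]))  # ok
print(check_path(["semGive"]))                         # violation at call 0

Because the template mentions only the calls it cares about, the same check applies to a COTS interface whose internals are unavailable: only the observable call sequence matters.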
Early days of NASA mission use of OO; perception of increase – correct?
OO Metrician’s Assistant

WHAT:
- Lateef: critiquing of use cases and beyond. Application: attitude control system; OO models.
- Etzkorn: semantic metrics (ignore syntactic variability). Application: works on OO designs before code exists. Scale: (anyone contemplating working with 20 CD-ROMs’ worth of data does not fear scale…).

WHY:
- Help measure/validate OO designs.

HOW:
- Lateef: OO-specific risks; prioritization of critical (essential) use cases, etc.; OO metrics; BBN model to combine multiple metrics. (A toy metric-combination sketch follows below.)
- Etzkorn: leverage significant prior work on program understanding (conceptual graphs for knowledge representation); empirical and comparative evaluations.
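The structure of the BBN is not given on the slide. To show only the flavor of combining several metric indicators into a single risk estimate, the sketch below substitutes a naive-Bayes combination over binary "metric flagged" indicators; every number and metric name is an assumption, not data from the project.

# Assumed base rate of high-risk classes (illustrative).
PRIOR_RISKY = 0.2

# Assumed P(metric flags the class | risky) and P(flags | not risky).
LIKELIHOODS = {
    "high_coupling":    (0.80, 0.20),
    "low_cohesion":     (0.70, 0.30),
    "deep_inheritance": (0.60, 0.25),
}

def risk_posterior(flags):
    """P(risky | observed metric flags), combining metrics naively."""
    p_risky, p_safe = PRIOR_RISKY, 1 - PRIOR_RISKY
    for metric, flagged in flags.items():
        p_f_risky, p_f_safe = LIKELIHOODS[metric]
        if flagged:
            p_risky *= p_f_risky
            p_safe *= p_f_safe
        else:
            p_risky *= 1 - p_f_risky
            p_safe *= 1 - p_f_safe
    return p_risky / (p_risky + p_safe)

print(risk_posterior({"high_coupling": True,
                      "low_cohesion": True,
                      "deep_inheritance": False}))  # ~0.54

A real BBN would model dependencies between the metrics instead of assuming independence, which is precisely why it is preferred to this naive combination.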
Obstacles*
- Information access: Foreign Nationals & ITAR, etc. The “Sanitizer” helps, but sometimes discards information needed for study; MDP helps.
- Challenge: getting enough case studies.
- SARP quarterly reporting is more onerous than NSF’s (though not DARPA’s or DoD’s): it can be discouraging/daunting to typical academics. However, NASA cares: one university was thrilled by a visit from NASA!
- Ominous trend: the level of documentation in the average project is decreasing. The forthcoming NPR may remedy this, but its effect is still several years in the future; also, beware of old and unaudited information.

* SARP complaint line
Publishability a problem? Apparently not.
- Credibility: nicely documents past successes, key to gaining the interest of future customers!
- SARP values publications: called for in the quarterly report; published impact ratings for venues (conferences, journals); AWARD!
- Outreach: via the SARP web site!
- SARP encourages empiricism* (application and evaluations), key to (e.g.) an experience paper.
- Note: may need a longer lead time for release approval. NASA is a source of real problems, which researchers value.
- QUESTION: Is there a NASA strategy for (e.g.) conference participation?

* Empirically based work is new territory for some. Recommended reading: Basili, Selby & Hutchens, TSE, 1986.
Aunt Aardvark’s Advice Column*: “Working with projects”
- Initially: $ – pay for the time of project personnel. Gets you the data/insight/feedback; gets you an inside advocate. However, you’re often their lowest-priority task.
- Having established credibility: offer the technique to the project, in return for their supply of, and guidance on, their data.
- Interactions: make few rather than many queries of the project; be willing to wait; formulate each query to be of interest (address significant issues).
- Free: offer free-for-experimentation / limited-time use.
- Pitfalls: don’t force your process on the user(s); anticipate (& prepare for) how your tool will be used (in ways you could hardly imagine!!!).

* SARP quarterly newsletter
QUESTIONS
- Is it better to solve a new problem than to improve upon an old solution?
- “Lines in the sand” – is anyone going the same way?
- What NEW tools would we wish for? It’s like asking what requirements we are missing…
- What is the SARP product (or products)? Should we have a common goal (or a few goals)? E.g., multiple SARP projects working towards a single tool/process/course/…
- SARP is for reporting out – what about reporting in to SARP?