An Industrial Case Study on Requirements Volatility Measures Annabella Loconsole, Department of Computing Science, Umeå University, Sweden, bella@cs.umu.se
Requirements: how do we get them? A requirement is something that the product must do or a quality that the product must have. [Diagram: the Customer ("I would like a system that does…") talks to the Requirements Engineer, who writes the Requirements Specification in natural language (Req. 1: …, Req. 2: …, Req. 3: …).]
Requirements: what happens to them? [Diagram: the Customer's requirements are captured in a requirements specification (a use case model with an actor and use cases 1 and 2), which feeds into design, then source code ("begin if a<b then … else …"), and finally the final product.]
Definitions of volatility • Volatile = easily changing [Hornby] • Intensity and distribution of changes [Rosenberg et al.] • Ratio of requirements changes (additions, deletions, and modifications) to the total # of requirements for a given period of time [Stark et al.]
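Stark et al.'s ratio is straightforward to compute. A minimal Python sketch; the change counts and the period are invented for illustration, not taken from the case study:

```python
# Volatility in the sense of Stark et al.: the ratio of requirement
# changes (additions, deletions, modifications) to the total number
# of requirements for a given period of time.
# The figures below are illustrative only.

def volatility(added: int, deleted: int, modified: int, total_reqs: int) -> float:
    """Requirements volatility for one period (Stark et al.)."""
    if total_reqs <= 0:
        raise ValueError("total number of requirements must be positive")
    return (added + deleted + modified) / total_reqs

# e.g. 3 additions, 1 deletion, 6 modifications against 50 requirements
print(volatility(3, 1, 6, 50))  # 0.2
```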
Entity, attributes, measures (in general) [Diagram: an entity (here, a person) has attributes; the internal attribute Age is quantified by measures such as number of years, number of months, number of grey hairs, …, while Wisdom is an external attribute.]
Entity, attributes, measures (in this study) [Diagram: the entity is the use case model (UCM), consisting of an actor and use cases 1 and 2. Its internal attributes are size of UCM, measured by # lines, # words, # use cases, and # actors, and size of change of UCM, measured by total # changes, # minor/moderate/major changes, and # revisions. Volatility is an external attribute.]
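The internal size measures are simple counts over the use case model documents. A toy illustration, assuming a plain-text UCM and naive counting rules (not the instruments actually used in the study):

```python
# Illustrative computation of the size-of-UCM measures
# (# lines, # words, # use cases, # actors) on a tiny, invented
# plain-text use case model.

ucm_text = """Actor: Customer
Use case 1: Place order
Use case 2: Cancel order
"""

lines = ucm_text.strip().splitlines()
measures = {
    "# lines": len(lines),
    "# words": len(ucm_text.split()),
    "# use cases": sum(1 for l in lines if l.startswith("Use case")),
    "# actors": sum(1 for l in lines if l.startswith("Actor")),
}
print(measures)  # {'# lines': 3, '# words': 12, '# use cases': 2, '# actors': 1}
```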
So, what have I done? • An empirical validation of a subset of measures from my previous work [Loconsole 2001, Loconsole and Börstler 2003] • Investigating whether the size of requirements affects their volatility • Why: to find out if the measures can be predictors of volatility • How: an industrial case study • Results: • Data analysis did not support the empirical validation • There is a significant correlation between the size of the use case model and the measure total number of changes
Context of the study • The company • The project • Size, people, process, time… • Subjects • Objects • Variables • Instruments
Hypotheses formulation Null hypotheses H0: • There is no significant correlation between the measures of size of change and size of UCM and the subjects' rating of volatility of the UCMs. In formulas: • volatility ≠ f(size of UCM) • volatility ≠ f(size of change of UCM) • There is no significant correlation between the size of change and the size of UCM: • size of change of UCM ≠ f(size of UCM)
Spearman correlation coefficients [Table of coefficients omitted: one set for the first hypothesis (size measures vs. subjects' rating of volatility) and one for the second hypothesis (size of UCM vs. size of change of UCM).]
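The hypotheses are tested by computing Spearman's rank correlation between each size measure and the other variable. A pure-Python sketch using the no-ties formula; the data points are invented for illustration and are not the study's actual coefficients:

```python
# Spearman's rho via the no-ties formula: rho = 1 - 6*sum(d^2) / (n*(n^2-1)),
# where d is the difference between the ranks of paired observations.

def spearman_rho(xs, ys):
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# size of UCM (# words) vs. total # changes for five hypothetical UCMs
sizes = [120, 340, 80, 510, 260]
changes = [4, 9, 2, 14, 7]
print(spearman_rho(sizes, changes))  # 1.0: the rank orders are identical
```

In practice a library routine such as `scipy.stats.spearmanr` would also handle ties and report a p-value for the chosen significance level.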
Surprising results!!! • We accepted H0: there is no consistent relationship between the two internal attributes of size and the subjects' rating of volatility of the UCMs. • So what is volatility then?? • We rejected H0 (at significance level 0.05) in the case of total number of changes: there is a strong relationship between the size of a use case model and the total number of changes to it.
Threats … • to conclusion validity • Size of the sample. • to construct validity • Minimal, because we used the Goal Question Metric approach and theoretical validation. • to external validity • Minimal, industrial study (real data, real projects…). • to internal validity • Accuracy of the collected data and of the responses, subjects' motivation, plagiarism.
Conclusions • In this case study the measures are not validated • The bigger the use case model, the greater the total number of changes: larger use case models are more volatile than smaller ones. • These results are preliminary; more studies are needed.
Future work • Analyse the data in more detail • By phase, by reason of change, … • At different levels of abstraction • Investigate multiple correlation: • volatility = f(size of UCM, size of change of UCM) • Validate the results through a replication of the study
References • Hornby A.S., The Oxford Advanced Learner's Dictionary of Current English, Oxford University Press, 1980. • A. Loconsole, "Measuring the Requirements Management Key Process Area", Proceedings of ESCOM - European Software Control and Metrics Conference, London, UK, April 2001. • A. Loconsole and J. Börstler, "Theoretical Validation and Case Study of Requirements Management Measures", Umeå University Internal Report UMINF 03.02, July 2003. • L. Rosenberg and L. Hyatt, "Developing a Successful Metrics Program", The Second Annual Conference on Software Metrics, Washington, DC, June 1996. • G. Stark, P. Oman, A. Skillicorn, and R. Ameele, "An Examination of the Effects of Requirements Changes on Software Maintenance Releases", Journal of Software Maintenance: Research and Practice, Vol. 11, 1999, pp. 293-309.
Selected publications • Loconsole, A. and Börstler, J. (2005) An Industrial Case Study on Requirements Volatility Measures, submitted to RE'05, IEEE International Conference on Requirements Engineering, Paris, France, 10 pages. • Loconsole, A. (2004) Empirical Studies on Requirements Management Activities, ICSE '04, 26th IEEE/ACM International Conference on Software Engineering, Edinburgh, Scotland, UK, May 2004, doctoral symposium, 3 pages. • Loconsole, A. and Börstler, J. (2004) A Comparison of Two Academic Case Studies on Cost Estimation of Changes to Requirements - Preliminary Results, SMEF, 28-30 January 2004, Rome, Italy, 10 pages. • Loconsole, A. and Börstler, J. (2003) Theoretical Validation and Case Study of Requirements Management Measures, Umeå University Internal Report UMINF 03.02, July 2003, 20 pages. • Loconsole, A. (2002) Non-Empirical Validation of Requirements Management Measures, in Proceedings of WoSQ - Workshop on Software Quality, ICSE, Orlando, FL, May 2002, 4 pages. • Loconsole, A. (2001) Measuring the Requirements Management Key Process Area - Application of the Goal Question Metric to the Requirements Management Key Process Area of the Capability Maturity Model, Proceedings of ESCOM - European Software Control and Metrics Conference, London, UK, April 2001, 10 pages. • Loconsole, A. (2001) Measuring the Requirements Management Process Area, in Proceedings of the Thirty-Second SIGCSE Technical Symposium on Computer Science Education, Charlotte, NC, 21-25 February 2001, abstract.
Other publications • Jürgen Börstler, Annabella Loconsole (editors) (2002): Proceedings of Umeå's Sixth Student Conference in Computing Science, Technical Report UMINF-02.06, Department of Computing Science, Umeå University, Sweden, June 2002. • Loconsole, A., Rodriguez, D., Börstler, J., Harrison, R. (2001) Report on Metrics 2001: The Science & Practice of Software Metrics Conference, Software Engineering Notes - ACM SIGSOFT Newsletter, vol. 26, num. 6, November 2001. • Jürgen Börstler, Annabella Loconsole, and Thomas Pederson (editors) (2001): Proceedings of USCCS'01, Umeå's Fifth Student Conference in Computing Science, Technical Report UMINF-01.10, Department of Computing Science, Umeå University, Sweden, May 2001. • Loconsole, A. (2000) Application of the Goal Question Metrics to the Requirements Management Key Process Area, Proceedings of USCC&I´00 - Umeå's 4th Student Conference in Computing Science & Informatics, Umeå University Report, UMINF 00.08, ISSN 0348-0542, 32-43. • Loconsole, A. (1998) Generazione automatica di metafore per le interfacce utente di database multimediali (Automatic generation of metaphors for multimedia database user interfaces), MSc thesis, Bari University report.
Research background • Software requirements are a key issue for project success • It is important to control changes, in order to be able to anticipate and respond to change requests • Software measurement can guide requirements management activities by quantifying changes to requirements and by predicting the costs related to changes • Few empirical studies have been performed in this field…
Outline: Introduction • Measures Definition • Measures Validation • Case Study
What precisely was your contribution? • What question did you answer? • What kind of questions do software engineers investigate? A method for analysis or evaluation (how can I evaluate the quality of the requirements?) • Why should the reader care? • What larger question does this address? A clear statement of the specific problem you solved, and an explanation of how the answer will help solve an important software engineering problem
What is your new result? • What new knowledge have you contributed that the reader can use elsewhere? • What previous work do you build on? • What do you provide a superior alternative to? • How is your work different from, and better than, this prior work? • What, precisely and in detail, is your new result? • What kinds of results do software engineers produce, and which are the most common?
Why should the reader believe your result? • What standard should be used to evaluate your claim? • What concrete evidence shows that your result satisfies your claim? • There is no other set of validated RM measures • Given a different goal, I will show that my measures are better predictors of volatility
The problem I am trying to solve • Poor management of requirements in industry (poor measurement) • Measures can help to control, understand, and predict requirements • Low formality in software engineering • In particular, in software measurement there is a lack of validated measures. Given a different goal, the problem is: can we predict the volatility and stability of requirements through my measures? Can we improve the predictions?
What I have done (and how) • Definition (or collection) of 38 RM measures in paper 1 (ESCOM) • How? By applying the GQM to the RM KPA of the CMM • Theoretical validation of 10 of the 38 measures in paper 2 (internal report, not published internationally) • How? By applying 2 theoretical validation definitions to the measures
What I am doing (and how) • Performing an empirical validation of those measures connected to volatility • Showing the connection between the measures and the attributes associated with them. In particular, I am showing that some (how many?) of the 38 measures are connected to the attribute volatility of requirements. • How? I am measuring the requirements of 2 historical projects at a company and checking the volatility of the requirements. I am creating a mapping between (my) RM measures and what is measurable at the company, and performing the data collection
What is left to do • Finish the empirical validation • Once the data collection on the 2 projects is completed, I will predict the volatility of the requirements of a third project based on the historical data and check how accurate my predictions are. • Write a journal paper which includes the definition and validation of the measures • This is the way for me to validate my results
Questions/problems I have… • Are my (eventual) results enough for a PhD thesis? • If not, I could demonstrate that some of my measures are better predictors of stability/volatility and compare my results with some other results • Doubts about abstraction levels and atomic requirements