

  1. NTNU OORT Experiment, March 2000 -- Some qualitative observations
     Reidar Conradi
     Software Engineering Group
     Dept. of Computer and Information Science (IDI), NTNU

  2. Background
  • Repeated OORT experiment, taken from the CS735 course at UMD, Fall 1999.
  • All artifacts and instructions in English.
  • Part of a 4th-year QA/SPI course, taught by the local course staff; the material was adapted by R. Conradi in the USA.
  • Students get pass/no-pass on the assignment.
  • Main change: the OORT instructions were operationalized as Qij.x questions.

  3. Overall impressions
  • Big variation in effort, dedication, and results:
    • E.g. some teams did not report effort data, and some even did the wrong OORTs.
  • Big variation in UML expertise.
  • Students felt frustrated by the extent of the assignment and thought the indicated effort estimates were too low -- they felt cheated.
  • Lengthy and tedious pre-annotation of the artifacts was needed before real defect detection could start. Many defects were discovered already during annotation, including some that remained unreported.
  • Are the OORTs too "heavy" for the given (small) artifacts?
  • Some confusion about the assignments: what was to be done, how, and on which artifacts?

  4. OORT results
  • Many defects were found that had not previously been reported:
    • Loan Arranger: 30 (13+17) seeded defects & 23 more + 26 comments.
    • Parking Garage: 32 (21+11) seeded defects & 14 more + 30 comments.
  • Defects actually reported (4 groups for LA, 5 for PG), given as average and range (a sketch of this computation follows the slide):
    • LA: 11 (7..14) seeded & 13 (3..27) more + 9 (6..16) comments.
    • PG: 7 (4..10) seeded & 4 (0..9) more + 10 (0..21) comments.
  • Effort spent:
    • LA: 5-6 hours.
    • PG: 10-13 hours.
  • Still lacking access to the background/questionnaire data (delayed).
  • In general: more data analysis to come.
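The "average (min..max)" figures above summarize counts across the participating groups. As a minimal sketch of how such summaries are computed -- the per-group counts below are hypothetical, since the presentation only gives the aggregates -- in Python:

    # Hypothetical per-group counts of reported seeded defects; the real
    # per-group numbers are not given in this presentation.
    la_seeded = [7, 11, 12, 14]   # 4 LA groups
    pg_seeded = [4, 6, 8, 7, 10]  # 5 PG groups

    def summarize(counts):
        """Format a list of counts as 'average (min..max)', as on this slide."""
        mean = sum(counts) / len(counts)
        return f"{mean:.0f} ({min(counts)}..{max(counts)})"

    print("LA seeded:", summarize(la_seeded))  # -> LA seeded: 11 (7..14)
    print("PG seeded:", summarize(pg_seeded))  # -> PG seeded: 7 (4..10)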

  5. OORT comments
  • Some unclear instructions: the Executor/Observer roles, Norwegian file names, file access, some typos. Should the RD be read first?
  • Some unclear concepts: service, constraint, condition, …
  • UML was not familiar to some groups.
  • Technical comments on the artifacts and OORTs:
    • Add comments/rationale to the diagrams: the UC and CDia are too brief.
    • The CDe is hard to navigate in -- add separators.
    • The SqD had method parameters, but the CDia did not -- how to check them against each other?
    • Several artifacts (also the RD) are needed to understand some OORT questions.
    • Many trivial checks could have been done by an automatic UML tool (see the sketch after this slide).
  • Many trivial typos and naming defects in the artifacts:
    • The Parking Garage artifacts need more work.
    • LA vs. Loan Arranger vs. LoanArranger, gate vs. Gate, CardReaders vs. Card_Readers.
    • Fanny May = Loan Arranger? Lot = Parking Garage?
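Naming inconsistencies like gate vs. Gate and CardReaders vs. Card_Readers are exactly the kind of trivial check a tool could automate. A minimal sketch in Python, assuming the identifiers have already been extracted from the diagrams (the names below come from the slide; the extraction step is not shown):

    from collections import defaultdict

    def canonical(name):
        """Normalize an identifier: drop underscores and ignore case."""
        return name.replace("_", "").lower()

    def find_name_clashes(identifiers):
        """Group identifiers that differ only in case or underscores."""
        groups = defaultdict(set)
        for name in identifiers:
            groups[canonical(name)].add(name)
        return [variants for variants in groups.values() if len(variants) > 1]

    # Identifiers as they might be extracted from the Parking Garage artifacts.
    names = ["gate", "Gate", "CardReaders", "Card_Readers", "Ticket"]
    for variants in find_name_clashes(names):
        print("Inconsistent naming:", sorted(variants))
    # -> Inconsistent naming: ['Gate', 'gate']
    # -> Inconsistent naming: ['CardReaders', 'Card_Readers']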
