
Software Testing and QA - ARC middleware


Presentation Transcript


  1. Jozef Černák (jcernak@upjs.sk), Marek Kočan, Martin Savko, P. J. Šafárik University in Košice, Slovak Republic. Software Testing and QA - ARC middleware

  2. The software development process • Basic models • Code-and-Fix model • Waterfall model • Spiral model • Complex models • Combinations of basic models

  3. Code-and-Fix model: the NorduGrid middleware, predecessor of ARC (2001-2006)

  4. Waterfall model

  5. Complex model: ARC middleware (2006-2010), the KnowARC project, http://www.knowarc.eu

  6. Testing • Each stage of the development process can be tested • Ideas and design: with a model analyzer, for example the Alloy project, http://alloy.mit.edu/ • Development: • unit tests (a sketch follows below) • build tests • functional tests, performance tests, interoperability tests • usability tests
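As a minimal illustration of the unit-test level listed above, the sketch below uses Python's standard unittest module. The parse_job_id helper is a hypothetical stand-in for illustration, not actual ARC code.

```python
import unittest

def parse_job_id(url):
    """Hypothetical helper: extract a job ID from a submission URL."""
    if not url.startswith("gsiftp://"):
        raise ValueError("unsupported scheme: %s" % url)
    return url.rstrip("/").rsplit("/", 1)[-1]

class ParseJobIdTest(unittest.TestCase):
    def test_extracts_trailing_id(self):
        self.assertEqual(parse_job_id("gsiftp://ce.example.org/jobs/1234"), "1234")

    def test_rejects_unknown_scheme(self):
        with self.assertRaises(ValueError):
            parse_job_id("http://ce.example.org/jobs/1234")

if __name__ == "__main__":
    unittest.main()
```

Tests at this level are cheap to run on every revision, which is what makes them the natural first stage in the pipeline the following slides describe.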

  7. Tests in KnowARC • Testing started at the coding stage • Pieces of software: unit tests (a small number of tests) • Building binaries from sources was tested for each revision • Functional tests were performed for every nightly build (a build-and-test loop is sketched below) • Performance tests were performed only twice during the 3.5 years • Usability test (one, at the end of the project) • The outcome of the design stage was not systematically tested
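The per-revision build tests and nightly functional tests described above could be driven by a loop like the following. This is a sketch under assumptions: the repository URL, build commands, and run_functional_tests.sh script are placeholders, not the actual KnowARC infrastructure.

```python
import subprocess, sys, datetime

def run(cmd, cwd=None):
    """Run one command, echoing it; abort the nightly run on failure."""
    print(">>", " ".join(cmd))
    if subprocess.run(cmd, cwd=cwd).returncode != 0:
        sys.exit("step failed: %s" % " ".join(cmd))

def nightly():
    stamp = datetime.date.today().isoformat()
    workdir = "arc-nightly-%s" % stamp
    # 1. Check out the revision under test (placeholder repository URL).
    run(["svn", "checkout", "http://svn.example.org/arc/trunk", workdir])
    # 2. Build binaries from source -- exercised for every revision.
    run(["./autogen.sh"], cwd=workdir)
    run(["./configure"], cwd=workdir)
    run(["make", "-j4"], cwd=workdir)
    # 3. Unit tests, then functional tests (hypothetical script name).
    run(["make", "check"], cwd=workdir)
    run(["./tests/run_functional_tests.sh"], cwd=workdir)

if __name__ == "__main__":
    nightly()
```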

  8. QA • We concentrated on setting up procedures for the two main outcomes of the project: • software, • deliverables. • We defined the process for release management, http://wiki.nordugrid.org/index.php/Release_management • Testing (test plan, metrics). • Analysis of bug status (a tallying sketch follows below). • We could not implement all QA best practices due to the limited number of project participants, many of whom had multiple roles in the project.
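The bug-status analysis step could be as simple as tallying states from a bug-tracker export. The sketch below assumes a CSV dump with a "status" column; that format is an assumption for illustration, not the actual NorduGrid Bugzilla workflow.

```python
import csv
from collections import Counter

def bug_status_summary(path):
    """Count bugs per status from a tracker CSV export (assumed 'status' column)."""
    with open(path, newline="") as f:
        return Counter(row["status"] for row in csv.DictReader(f))

if __name__ == "__main__":
    for status, n in bug_status_summary("bugs.csv").most_common():
        print("%-12s %d" % (status, n))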

  9. Testing and bug-fixing processes

  10. What have we learned? • The most significant bugs were found during performance tests (a memory leak, scalability issues; a simple leak check is sketched below). • The new features of new releases were in some cases not well defined. • Functional tests on a larger infrastructure (operational tests) were not realised; the pilot Grid system (an infrastructure of about 10 servers) did not fulfil initial expectations. • Testing was under permanent stress due to the accumulation of delays from previous activities (fixing of critical bugs). • Small number of beta testers. • We were looking at other frameworks, for example an internal review of ETICS. • Our test framework is capable of detecting many bugs on multiple platforms: Linux, Mac OS, and Windows.
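Because the memory leak only surfaced during the rare performance tests, a cheap repeated-operation check such as the one below could flag it earlier. This is a sketch: the monitored operation and the growth threshold are placeholders, and it relies on the Unix-only resource module.

```python
import resource

def peak_rss_kb():
    """Peak resident set size (Linux reports ru_maxrss in kB)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def check_for_leak(operation, iterations=10000, tolerance_kb=1024):
    """Run `operation` repeatedly; warn if peak memory keeps climbing."""
    operation()            # warm-up: a first call may allocate caches legitimately
    baseline = peak_rss_kb()
    for _ in range(iterations):
        operation()
    growth = peak_rss_kb() - baseline
    if growth > tolerance_kb:
        print("possible leak: peak RSS grew by %d kB" % growth)
    else:
        print("no significant growth (%d kB)" % growth)

if __name__ == "__main__":
    check_for_leak(lambda: "x" * 100000)  # placeholder operation under test
```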

  11. EMI testing • Will all projects use the same testing tools? • What is the procedure for proposing test cases? • Interoperability tests? • Usability tests? • Performance tests?
