Construction of testing processes from scratch. The history of one project. Anna Tymofeeva and Ivanka Mykulovych
About Anna
• Almost 12 years in IT, 6 of them as a QA Lead
• 4 years at SoftServe, QMO Partner
• Graduated from the SoftServe Leadership Development Program
• Hillel IT School coach
• Volunteer at the "Open Eyes" Foundation
• Mother of a teenage girl
• e-mail: atymof@gmail.com, skype: ann_alen
About Ivanka
• 2+ years in IT, working at SoftServe
• Discovering the test automation world over the last two years
• Working with the team on adapting interesting and useful things to project needs
• Main passion: travelling
• e-mail: x-iva@ukr.net, skype: ivano4ka_hiii
Why Test Automation? The client's perspective:
• Reduce operational cost
• Increase revenue
"We have Automated Tests!"
• Tests start failing randomly
• When you need a test to run, it never works
Problem, eh?
• Test execution is slow
• Tests are fragile
• Tests are not reusable
• Tests need more infrastructure
• A test works on my box but fails on the daily run
What?
• More time spent figuring out why the tests are failing
• Resources tied up just reviewing failures
• Maintenance takes way too much time
How to fix it?
• Prepare the project for automation
• Cooperate with the manual QC teams
Acceptance criteria
• Preconditions are generated separately
• The script covers all of the scenario's steps
• The automation test is stable (it is better to have 50 stable test cases than 500 fragile ones that break regularly)
• The automation test can be run in parallel
• The test takes no more than the accepted time (set for every test by the QA Lead / Manual QA / Product Owner)
• The test is independent and isolated
• The test passes during the first regression without issues
Preconditions are generated separately
• Split test steps and test preconditions
• Use an API (if possible) to generate test preconditions, as in the sketch below
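A minimal sketch of API-generated preconditions with a pytest fixture; the endpoint, payload fields, and API_TOKEN variable are illustrative assumptions, not the project's real API:

```python
import os

import pytest
import requests

API_URL = "https://app.example.com/api"      # hypothetical endpoint
TOKEN = os.getenv("API_TOKEN", "dev-token")  # hypothetical auth token


@pytest.fixture
def test_user():
    """Create the precondition (a user) through the API instead of UI steps."""
    resp = requests.post(
        f"{API_URL}/users",
        json={"name": "qa-user", "role": "viewer"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    user = resp.json()
    yield user  # the test body runs here
    # clean up after yourself once the test finishes
    requests.delete(
        f"{API_URL}/users/{user['id']}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )


def test_user_can_log_in(test_user):
    # the UI test starts from a ready precondition, with no set-up clicks
    assert test_user["name"] == "qa-user"
```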
The automation test is stable
• Avoid repeating the same UI steps across a bunch of tests
• Split a long test into a few smaller ones
• Move often-used hardcoded test data out to a test data file (see the sketch below)
• Use test fixtures (set-up, teardown, etc.)
• Clean up after yourself!
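A sketch of pulling shared test data out of the test body into one file; the file name, keys, and the `browser` fixture are assumptions for illustration:

```python
import json
from pathlib import Path

# one place for the data; many tests reuse it instead of hardcoding values
TEST_DATA = json.loads(Path("test_data.json").read_text())


def test_home_page_title(browser):  # "browser" is an assumed Selenium fixture
    browser.get(TEST_DATA["base_url"])
    assert TEST_DATA["expected_title"] in browser.title
```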
Test Fixtures
• Eliminate code duplication (test set-up and teardown)
• Provide a fixed test environment (a sketch follows)
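A minimal pytest fixture sketch: set-up and teardown live in one place and are shared by every test that requests `browser`; Selenium is an assumption here:

```python
import pytest
from selenium import webdriver


@pytest.fixture
def browser():
    driver = webdriver.Chrome()  # set-up: a fresh browser for every test
    driver.implicitly_wait(5)
    yield driver                 # the test body runs here
    driver.quit()                # teardown runs even if the test failed
```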
The test is independent and isolated
• Avoid chaining: a test should not serve as the precondition for the next test
• Use a separate test environment for each test
The automation test can be run in parallel
• Keep separate parallel and non-parallel test scopes
• Rerun parallel-scope failures after the first run in one flow, as sketched below
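A sketch of splitting the scopes with pytest marks; the marker names are our own convention, and the parallel run assumes the pytest-xdist plugin:

```python
import pytest


@pytest.mark.parallel
def test_search(browser):
    ...  # isolated, safe to run next to other tests


@pytest.mark.non_parallel
def test_global_settings(browser):
    ...  # touches shared state, so it runs alone


# Typical invocations (shell commands, shown as comments):
#   pytest -m parallel -n 4            # parallel scope on 4 workers (pytest-xdist)
#   pytest -m non_parallel             # non-parallel scope in one flow
#   pytest -m parallel --last-failed   # rerun first-run failures in one flow
```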
No more than the accepted time
• Smoke vs. regression test scope (per app / per feature)
• A limited number of test steps
• Test tags (a timeout sketch follows)
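Where the budget must be enforced mechanically, a timeout mark can do it; a minimal sketch assuming the pytest-timeout plugin, with an illustrative 60-second value:

```python
import pytest


@pytest.mark.timeout(60)  # the accepted time in seconds, agreed per test
def test_checkout_smoke(browser):
    ...
```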
Tags
• Feature
• Environment
• Test type
• Browser
• Parallel / non-parallel
One test = a few tags
• Keep the test marks list file in the project
• Align test marks with test suites; it will make life easier (see the sketch below)
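A sketch of one test carrying several marks, mirroring the categories above; mark names are illustrative, and the pytest.ini fragment plays the role of the project-wide marks list file:

```python
import pytest


@pytest.mark.search    # feature
@pytest.mark.smoke     # test type
@pytest.mark.chrome    # browser
@pytest.mark.parallel  # parallel / non-parallel
def test_search_returns_results(browser):
    ...


# pytest.ini keeps the marks list in one project file and aligns
# marks with suites:
#   [pytest]
#   markers =
#       search: search feature
#       smoke: smoke suite
#       chrome: runs on Chrome
#       parallel: safe for parallel runs
```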
Tests are reusable
• Browser / OS support
• Test parameterization (sketched below)
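A sketch of reusing one test body across browsers via a parameterized fixture; the browser list, local Selenium drivers, and URL are assumptions:

```python
import pytest
from selenium import webdriver


@pytest.fixture(params=["chrome", "firefox"])
def browser(request):
    factories = {"chrome": webdriver.Chrome, "firefox": webdriver.Firefox}
    driver = factories[request.param]()
    yield driver
    driver.quit()


def test_home_page_loads(browser):
    browser.get("https://app.example.com")  # hypothetical URL
    assert browser.title                    # same test body, every browser
```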
The test passes during the first regression without issues. 2b || !2b
Test Planning
• Create the project roadmap
• Develop documentation:
 – Collect typical use cases and associated scenarios
 – Formalize functional and non-functional requirements to the system
 – Develop a heat map with a business value per feature
 – Create the list of scenarios for automation (only critical test cases)
 – Develop a test plan per sprint for automation
• Formalize the story definition of done:
 – All issues are fixed
 – A test scenario is created (can be used for automation)
 – The test scenario is added to the regression/functional scope
 – A unit test is developed
 – The heat map is updated
• Estimates: estimate the time for developing automation tests according to the automation test plan
Automate critical UI regression tests
• Set the automation priority from the regression scope
• Automated tests will be run every regression
• Manual tests will be run using the heat map
Automation process
1. The test scenario is developed by the QA Lead / Manual QA
2. The script is developed by the Automation QA
3. The developed script is reviewed by the QA Lead
4. The test is added to the automation regression scope
5. New tests are run in the regression scope
6. Once a test is stable in parallel runs, it can be added to smoke
Automation run results
• A test plan should be created for every regression
• Automation run results should be recorded in the test plan
• Every failure should be analyzed, and a bug should be created in the bug-tracking system
• The system will track the load time of every page
Automation Result TOOL
Implementation
• Test teardown
• Custom report methods
• Tool API endpoints
• A switch to enable tool reporting
• APIClient
Usage
• Report the test run status to the test plan
• Report test failures to the bug-tracking tool
A sketch of this reporting client follows.
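A minimal sketch of the reporting idea: an API client that pushes results to the test-plan and bug-tracking tool behind a switch. The endpoints, payload fields, and the REPORTING_ENABLED flag are assumptions, not the project's real API:

```python
import os

import requests

# the SWITCH that enables tool reporting (hypothetical env variable)
REPORTING_ENABLED = os.getenv("REPORTING_ENABLED", "false") == "true"


class APIClient:
    def __init__(self, base_url, token):
        self.base_url = base_url
        self.headers = {"Authorization": f"Bearer {token}"}

    def report_status(self, test_plan_id, test_name, status):
        """Report the test run status to the test plan."""
        if not REPORTING_ENABLED:
            return
        requests.post(
            f"{self.base_url}/test-plans/{test_plan_id}/results",
            json={"test": test_name, "status": status},
            headers=self.headers,
            timeout=10,
        )

    def report_failure(self, test_name, error):
        """Create a bug in the tracking tool for a failed test."""
        if not REPORTING_ENABLED:
            return
        requests.post(
            f"{self.base_url}/bugs",
            json={"title": f"Automation failure: {test_name}",
                  "description": str(error)},
            headers=self.headers,
            timeout=10,
        )
```

In this shape the client would typically be called from the test teardown, so every run reports its status and every failure files a bug without touching the test bodies.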
Supported Browsers
• Chrome is the main browser for smoke tests
• Firefox and IE are run during regression
• Safari should be run for Mac (can be run locally)