Automated Testing of Massively Multi-Player Games: Lessons Learned from The Sims Online
Larry Mellon, Spring 2003
Context: What Is Automated Testing?
Classes Of Testing: Random Input, System Stress, Feature Regression, Load; spanning both QA and Developer use.
Automation Components
Startup & Control, repeatable synchronized test inputs, and Collection & Analysis wrap around each System Under Test.
What Was Not Automated?
Startup & control, repeatable synchronized inputs, and results analysis were automated; validating visual effects was not.
Lessons Learned: Automated Testing (60 Minutes)
• 1/3: Design & Initial Implementation (architecture, scripting tests, test client, initial results)
• 1/3: Fielding: Analysis & Adaptations
• 1/3: Wrap-up & Questions (what worked best, what didn't; Tabula Rasa: MMP / SPG)
Design Constraints
Load, regression, and churn rate drove the design toward automation (repeatable, synchronized input; data management) and strong abstraction.
Single, Data-Driven Test Client
One test client behind a single API, with reusable scripts & data, serves both load and regression testing.
Data-Driven Test Client
The same client, scripts, and data drive both load testing ("testing system performance") and regression testing ("testing feature correctness"). Configurable logs & metrics capture key game states, pass/fail results, and responsiveness.
Problem: Testing Accuracy
• Load & regression: inputs must be accurate and repeatable
• Churn rate: logic/data in constant motion
• How do we keep the test client accurate?
• Solution: the game client becomes the test client
  • Exact mimicry
  • Lower maintenance costs
Test Client == Game Client
The test client and the game client differ only at the top: test control replaces the game GUI. Both drive the same state and commands through the Presentation Layer into the client-side game logic.
Game Client: How Much To Keep?
The game client stack is View, Presentation Layer, Logic. Which layers does the test client keep?
What Level To Test At?
Driving mouse clicks into the View layer is too brittle for regression (a pixel shift breaks tests) and too bulky for load.
What Level To Test At?
Driving internal events straight into the logic is also too brittle for regression: churn in logic & data breaks tests.
Gameplay: Semantic Abstractions
Test at the Presentation Layer, using gameplay verbs such as buy_lot, enter_lot, buy_object, use_object. Basic gameplay changes less frequently than UI or protocol implementations. The NullView client drops the View (~¾ of the client), keeping only the Presentation Layer and Logic (~¼).
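To make the abstraction concrete, here is a minimal sketch (Python used as illustrative pseudocode; every class and method name is an assumption, not the shipped TSO API) of a presentation layer exposing semantic gameplay verbs that the GUI and the test scripts can both drive:

class GameLogic:
    """Stand-in for the client-side game logic; here it just records requests."""
    def send_request(self, name, **params):
        print(f"-> {name} {params}")

class PresentationLayer:
    """Semantic gameplay verbs; callers never see UI widgets or wire protocol."""
    def __init__(self, logic):
        self.logic = logic

    def enter_lot(self, lot_name):
        self.logic.send_request("enter_lot", lot=lot_name)

    def buy_object(self, kind, x, y):
        self.logic.send_request("buy_object", kind=kind, x=x, y=y)

    def use_object(self, kind, interaction):
        self.logic.send_request("use_object", kind=kind, interaction=interaction)

# Both the real GUI and a headless test script drive this same API:
session = PresentationLayer(GameLogic())
session.enter_lot("alpha_chimp")
session.buy_object("chair", 10, 10)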
Scriptable User Play Sessions
• SimScript
  • Collection: Presentation Layer "primitives"
  • Synchronization: wait_until, remote_command
  • State probes: arbitrary game state (avatar's body skill, lamp on/off, …)
• Test scripts: specific, ordered inputs
  • Single-user play session
  • Multi-user play session
Scriptable User Play Sessions
• Scriptable play sessions: big win
  • Load: tunable, based on actual play
  • Regression: constantly repeat hundreds of play sessions, validating correctness
• Gameplay semantics: very stable
  • UI / protocols shifted constantly
  • Gameplay remained (about) the same
SimScript: Abstract User Actions
include_script setup_for_test.txt
enter_lot $alpha_chimp
wait_until game_state inlot
chat I’m an Alpha Chimp, in a Lot.
log_message Testing object purchase.
log_objects
buy_object chair 10 10
log_objects
SimScript: Control & Sync
# Have a remote client use the chair
remote_cmd $monkey_bot use_object chair sit
set_data avatar reading_skill 80
set_data book unlock
use_object book read
wait_until avatar reading_skill 100
set_recording on
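The wait_until primitive is essentially a poll-with-timeout. A minimal sketch, assuming the harness can poll a game-state probe (the probe callable, poll interval, and timeout default are all assumptions):

import time

def wait_until(probe, expected, timeout_s=60.0, poll_s=0.5):
    """Block the script until a game-state probe reports the expected value.
    Raising on timeout lets the harness run its timeout diagnostics instead
    of silently hanging the whole play session."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe() == expected:
            return
        time.sleep(poll_s)
    raise TimeoutError(f"state probe never reached {expected!r}")

# The SimScript line "wait_until avatar reading_skill 100" would map to e.g.:
# wait_until(lambda: client.probe("avatar", "reading_skill"), 100)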
Composable Client
Event generators (scripts, cheat console, GUI) feed the Presentation Layer, which drives the game logic.
Composable Client
Event generators (scripts, console, GUI) and viewing systems (console, lurker, GUI) plug into the Presentation Layer atop the game logic. Any / all components may be loaded per client instance.
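Per-instance composition might look like the following sketch (a hypothetical registry in Python; the real Maxis mechanism is not described in the talk):

class Component:
    def __init__(self, logic):
        self.logic = logic            # every component plugs into the same game logic

class GuiView(Component): pass        # full graphical view
class NullView(Component): pass       # headless view for load/regression clients
class CheatConsole(Component): pass   # developer event generator
class ScriptRunner(Component): pass   # SimScript event generator

REGISTRY = {"gui": GuiView, "nullview": NullView,
            "console": CheatConsole, "scripts": ScriptRunner}

def build_client(names, logic):
    """Assemble one client instance from whichever components this run needs."""
    return [REGISTRY[name](logic) for name in names]

# A load-test client:    build_client(["nullview", "scripts"], logic)
# A developer's client:  build_client(["gui", "console"], logic)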
Lesson: View & Logic Entangled
In the original game client, view and logic were tightly interwoven.
Few Clean Separation Points
The entangled view and logic offered few clean points at which to insert a presentation layer.
Solution: Refactored for Isolation
The client was refactored so a well-defined Presentation Layer cleanly separates View from Logic.
Lesson: NullView Debugging
Without the (legacy) view system attached, tracing failures through the Presentation Layer and Logic was "difficult".
Solution: Embedded Diagnostics
Diagnostics (timeout handlers, state dumps, …) were embedded throughout the Presentation Layer and Logic.
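One plausible shape for such a hook, sketched in Python (the wrapper and the dump_state callable are assumptions; the talk does not detail the real handlers):

import functools
import logging

log = logging.getLogger("diagnostics")

def diagnose_on_failure(dump_state):
    """Wrap a logic entry point so any failure dumps internal game state,
    replacing the visual cues a headless NullView client no longer has."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                log.error("%s failed; state dump: %s", fn.__name__, dump_state())
                raise
        return wrapper
    return decorator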
Talk Outline: Automated Testing (60 Minutes)
• 1/3: Design & Initial Implementation (architecture & design, test client, initial results)
• 1/3: Lessons Learned: Fielding
• 1/3: Wrap-up & Questions
Mean Time Between Failure
• Random event, log & execute
• Record client lifetime / RAM
• Worked: just not relevant in the early stages of development
  • Most failures / leaks found were not high priority at that time, when weighed against server crashes
Monkey Tests
• Constant repetition of simple, isolated actions against servers (sketched below)
• Very useful:
  • Direct observation of servers under constant, simple input
  • Server processes "aged" all day
• Examples:
  • Login / logout
  • Enter house / leave house
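A monkey test reduces to a tiny loop. A sketch, with hypothetical action/teardown callables standing in for primitives like login/logout:

import time

def monkey(client, action, teardown, pause_s=1.0):
    """Hammer one isolated primitive all day so the server processes age
    under constant, simple, directly observable input."""
    while True:
        action(client)
        teardown(client)
        time.sleep(pause_s)

# monkey(client, action=login, teardown=logout)
# monkey(client, action=enter_house, teardown=leave_house)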
QA Test Suite Regression
• High false-positive rate & high maintenance
  • New bugs vs. old bugs
  • Shifting game design
  • "Unknown" failures
• Not helping in day-to-day work.
Talk Outline: Automated Testing (60 Minutes)
• ¼: Design & Initial Implementation
• ½: Fielding: Analysis & Adaptations (non-determinism, maintenance overhead, solutions & results: monkey / sniff / load / harness)
• ¼: Wrap-up & Questions
Analysis: Critical Path
Test case: can an avatar sit in a chair? The call chain runs login() → create_avatar() → buy_house() → enter_house() → buy_object() → use_object(). Failures on the critical path block access to much of the game.
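The chain can be expressed directly in a harness, as in this sketch (the step functions are trivial stubs standing in for the real primitives):

def login(c):         return c.get("login", True)
def create_avatar(c): return c.get("create_avatar", True)
def buy_house(c):     return c.get("buy_house", True)
def enter_house(c):   return c.get("enter_house", True)
def buy_object(c):    return c.get("buy_object", True)
def use_object(c):    return c.get("use_object", True)

CRITICAL_PATH = [login, create_avatar, buy_house,
                 enter_house, buy_object, use_object]

def run_critical_path(client):
    """Each step gates the next, so report the first broken step rather
    than the avalanche of downstream failures it would cause."""
    for step in CRITICAL_PATH:
        if not step(client):
            return f"critical path blocked at: {step.__name__}"
    return "critical path clear"

print(run_critical_path({"buy_house": False}))   # blocked at: buy_house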
Solution: Monkey Tests
• Primitives placed in monkey tests
  • Isolate as much as possible, repeat 400x
  • Report only aggregate results, e.g. Create Avatar: 93% pass (375 of 400); see the sketch below
• A "poor man's" unit test
  • Feature-based, not class-based
  • Limited isolation
  • Easy failure analysis / reporting
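A sketch of the aggregate-only reporting, with a stubbed primitive whose pass rate mimics the slide's example (numbers illustrative):

import random

def monkey_report(primitive, runs=400):
    """Repeat one isolated primitive many times and report only the total,
    e.g. 'create_avatar: 93% pass (375 of 400)'."""
    passes = sum(1 for _ in range(runs) if primitive())
    return f"{primitive.__name__}: {100 * passes // runs}% pass ({passes} of {runs})"

def create_avatar():
    return random.random() < 0.93    # stand-in for the real primitive

print(monkey_report(create_avatar))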
Talk Outline: Automated Testing (60 Minutes)
• 1/3: Design & Initial Implementation
• 1/3: Lessons Learned: Fielding (non-determinism, maintenance costs, solution approaches: monkey / sniff / load / harness)
• 1/3: Wrap-up & Questions
Analysis: Maintenance Cost
• High defect rate in game code
  • Code coupling: "side effects"
  • Churn rate: frequent changes
  • Critical path: fatal dependencies
• High debugging cost
  • Non-deterministic, distributed logic
Turnaround Time
Tests ran too long after the defects they caught were introduced.
Solution: Sniff Test
Pre-checkin regression: don't let broken code into the mainline.
Solution: Hourly Diagnostics
• SniffTest stability checker
  • Emulates a developer: every hour, sync / build / test (sketched below)
  • Critical-path monkeys ran non-stop: a constant "baseline"
• Traffic generation
  • Keep the pipes full & the servers aging
  • Keep the DB growing
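The hourly cycle itself is simple. A sketch follows; the sync/build/test commands are placeholders, since the talk names only the steps, not the tools:

import subprocess
import time

# Placeholder commands; substitute the project's real sync/build/test steps.
STEPS = [["p4", "sync"], ["make", "game"], ["run_critical_path_monkeys"]]

def sniff_cycle():
    """Emulate a developer: sync, build, test. Any failing step means
    the mainline baseline is broken."""
    return all(subprocess.run(step).returncode == 0 for step in STEPS)

while True:
    print("baseline OK" if sniff_cycle() else "MAINLINE BROKEN")
    time.sleep(3600)    # once an hour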
Analysis: CONSTANT SHOUTING IS REALLY IRRITATING
• Bugs spawned many, many emails
• Solution: report managers (sketched below)
  • Aggregate / correlate results across tests
  • Filter known defects
  • Translate common failure reports to their root causes
• Solution: data managers
  • Information overload: automated workflow tools are mandatory
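The core of a report manager is aggregation plus a known-defect filter, as in this sketch (failure strings and the known-defect list are purely illustrative):

from collections import Counter

KNOWN_DEFECTS = {"db timeout on lot save"}    # already filed, already triaged

def daily_report(failure_messages):
    """Collapse a flood of per-test failure mails into one summary:
    drop known defects, then aggregate duplicates toward a root cause."""
    fresh = (msg for msg in failure_messages if msg not in KNOWN_DEFECTS)
    return Counter(fresh).most_common()

print(daily_report(["avatar create timeout", "avatar create timeout",
                    "db timeout on lot save"]))
# -> [('avatar create timeout', 2)]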
ToolKit Usability
• Workflow automation
• Information management
• Developer / tester "push button" ease of use
  • XP flavour: make tests increasingly easy to run
  • Must be easier to run tests than to avoid running them
• Must solve problems "on the ground now"
Load Testing: Goals
• Expose issues that occur only at scale
• Establish hardware requirements
• Establish that responsiveness stays playable at scale
• Emulate user behaviour
  • Use server-side metrics to tune test scripts against observed Beta behaviour (see the sketch below)
• Run full-scale load tests daily
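One way to tune a script's action mix against observed metrics, sketched with illustrative weights (the real weights would come from the server-side Beta metrics):

import random

# Action mix as measured from Beta traffic (numbers invented for illustration).
ACTION_WEIGHTS = {"chat": 0.45, "use_object": 0.30,
                  "buy_object": 0.15, "enter_lot": 0.10}

def next_action():
    """Draw the next scripted action with the same mix real players produce,
    so load tests stress the servers the way live traffic does."""
    actions, weights = zip(*ACTION_WEIGHTS.items())
    return random.choices(actions, weights=weights, k=1)[0]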
Load Testing: Data Flow
The load control rig drives banks of test clients, several per test-driver CPU, sending game traffic into the server cluster. Client metrics, server metrics, internal system probes, and monitors feed resource and debugging data back to the load testing team.
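A sketch of how a control rig might spread clients across the driver CPUs (host names and the round-robin policy are assumptions):

import itertools

def assign_clients(num_clients, driver_hosts):
    """Spread headless test clients round-robin across the driver machines;
    the load control rig then starts a scripted play session on each."""
    assignment = {host: [] for host in driver_hosts}
    for client_id, host in zip(range(num_clients), itertools.cycle(driver_hosts)):
        assignment[host].append(client_id)
    return assignment

# e.g. assign_clients(4000, ["driver01", "driver02", "driver03"])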
Load Testing: Lessons Learned
• Very successful
  • "Scale & break": up to 4,000 clients
• Some requirements conflict with regression
  • Continue-on-fail
  • Transaction tracking
• NullView client a little "chunky"
Current Work
• QA test suite automation
• Workflow tools
• Integrating testing into the design/development process for new features
• Planned work
  • Extend Esper toolkit for general use
  • Port to other Maxis projects
Talk Outline: Automated Testing (60 Minutes)
• 1/3: Design & Initial Implementation
• 1/3: Lessons Learned: Fielding
• 1/3: Wrap-up & Questions (biggest wins / losses, reuse, Tabula Rasa: MMP & SSP)
Biggest Wins
• Presentation Layer abstraction
  • NullView client
  • Scripted play sessions: powerful for regression & load
• Pre-checkin sniff test
• Load testing
• Continual usability enhancements
• Team
  • Upper management commitment
  • Focused group, senior developers