Tapping the Collective Creativity of Your Team
Marcin Chady, Technical Director, Radical Entertainment (Activision Blizzard)
Acknowledgment: George Mawle
Paradox
• We expect team members to be "passionate" about games
  • Not just about animation or physics programming or lighting
• Yet we give them highly specialised roles
  • With little influence on other areas
• Now, what do we actually want them to do with this passion?
• What happens when creativity and passion breach the role boundaries?
  • "They are smart, passionate and creative. Let them figure it out."
Economics of Making a Pitch
The value of a new idea (as subjectively perceived)
vs.
the cost of pitching it (preparation, figuring out who to talk to, getting their attention)
vs.
the risk of it being dismissed (because it's lame, impractical, or because someone already thought of it)
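Read as a back-of-the-envelope decision rule, the slide amounts to the inequality below. The notation is mine, not the talk's; it simply names the three quantities on the slide plus a perceived chance of acceptance.

```latex
% A hedged formalisation of the pitch decision (my notation, not the talk's):
% an idea gets pitched only when its expected payoff beats the expected cost.
\[
  p \cdot V \;>\; C \;+\; (1 - p) \cdot R
\]
% where
%   V = perceived value of the new idea
%   C = cost of pitching it (preparation, finding the right person, getting attention)
%   R = pain of having the idea dismissed
%   p = perceived probability that the idea is accepted
```

The next slide describes exactly this inequality failing: with too little information, people inflate C and R, the left side can no longer win, and the idea goes unspoken.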
Disenfranchisement
• If we don't give them enough to make an informed calculation, e.g.
  • Who to talk to?
  • Has this already been discussed?
• They exaggerate the cost or the risk*
• The calculation comes out negative
  • "Not worth making a fuss over such a small detail"
  • "It's not my job to worry about this, after all"

* That's human nature: see e.g. Steven Pinker, How the Mind Works
Should We Care?
Design by Committee
• Listen to the team
• Distraction
• Lateral thinking
• Might get the lowest common denominator
Ivory Tower Syndrome
• Don't listen to the team
• Stay focused
• Tunnel vision
• Missed opportunities
• Might get a polished turd
?
The Goal
• Retain independent creative decision making
• Help team members make informed assessments of new ideas
• A way to give and receive feedback that is:
  • Easy
  • Accessible
  • Public
  • Unintimidating
  • Unimposing
  • Impersonal
  • Structured
  • Constructive
Other Goals
• A home for suggestions
  • The awkward, unwanted bastard sibling of bugs
  • Rarely tracked in a bug database
  • Usually forgotten unless they have a champion
• Engage QA more in the game-making process
  • Quality is more than being bug-free
  • Testers are hypersensitive to issues
  • An indicator of game quality
Prior Art
• Encouragement
• Q&A sessions
• Suggestion boxes
• Surveys
• All of them are manual:
  • Cost time
  • The first to go when deadlines loom
General Idea
• Hierarchical feature break-down
• Comments
• Ratings
• Subscriptions
• Database-backed
  • Ability to update one's comment or rating
  • Agree/disagree option (the data model is sketched below)
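To make those moving parts concrete, here is a minimal data-model sketch in Python. The class and field names are my invention; the talk only names the concepts (hierarchical features, comments, ratings, subscriptions, agree/disagree), not a schema.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A node in the hierarchical feature break-down, e.g. Missions > Helicopter."""
    name: str
    owner: str                                      # developer responsible for the feature
    children: list["Feature"] = field(default_factory=list)
    subscribers: list[str] = field(default_factory=list)   # emailed on new feedback

@dataclass
class Comment:
    feature: str                                    # path of the feature commented on
    author: str
    text: str
    votes: dict[str, bool] = field(default_factory=dict)   # person -> agree/disagree

# Ratings keyed by (feature, author): re-rating overwrites the old score,
# which gives the "one score per feature per person" rule from the next slide.
ratings: dict[tuple[str, str], int] = {}
ratings[("Missions/Helicopter", "alice")] = 65      # alice rates the feature...
ratings[("Missions/Helicopter", "alice")] = 80      # ...then updates, replacing the old score
```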
Implementation
• About 5 days' worth of work
• Embedded in the build tool
  • As opposed to a separate tool or a website
• Feature breakdown and subscriptions stored in a simple YAML file (sketched below)
  • Feature breakdown owned by QA (a major headache solved)
  • People can manage their own subscriptions
  • Version controlled
• Control spam
  • One score, but multiple comments, per feature per person
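For illustration, the version-controlled file might look roughly like this. YAML is what the talk names; every key below is my guess at a plausible layout, not the tool's real schema.

```yaml
# features.yaml - feature breakdown owned by QA, subscriptions managed by each user
# (all field names are illustrative)
Missions:
  owner: alice
  subscribers: [bob, carol]
  children:
    Helicopter:
      owner: bob
      subscribers: [alice, dave]
Camera:
  owner: carol
  subscribers: [alice]
```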
Scoring Scheme
• 0 to 100 percent or 1 to 5 stars?
  • Is mediocre 50 or 70?
  • No clear answer
• Our choice: a 0-100 scale with 5-point increments (enforced as sketched below)
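Enforcing such a scale is a one-liner. In this sketch the snap-to-nearest behaviour is my choice; the talk only states the scale and the increments.

```python
def normalise_score(raw: int) -> int:
    """Clamp a score to 0-100 and snap it to the nearest 5-point increment."""
    clamped = max(0, min(100, raw))
    return 5 * round(clamped / 5)

assert normalise_score(73) == 75     # snapped to the nearest increment
assert normalise_score(-10) == 0     # clamped to the bottom of the scale
```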
Policy
• Build trust that feedback will be read and considered
  • Producer role critical
• Feature owners free to decide whether to act on feedback
  • But prodded by producers to at least respond to new concerns
Roll-Out
• Initial resistance
  • "Unnecessary distraction"
  • "Leave it to professionals"
  • "Too technocratic – what we need is a change in attitudes"
• Concerns about the name: BetterCritic → Feedback Tool
• Need buy-in and commitment from the top – the ratings feature!
• Devolve ownership quickly
  • QA engagement
  • Users to control their own subscriptions
Results
• Between Oct 2010 and Oct 2011 we collected:
  • 4000 operations (adding or modifying feedback, or agreeing/disagreeing with it): 10 per day, 30 per person
  • 3275 comments: 9 per day, 26 per person
  • 2561 scores: 7 per day, 20 per person
• Every comment resulted in an email sent to 10+ people
• Most were followed up with at least one reply-to-all email
Goals Achieved
• Most of the team has used it
• Lots of positive feedback on the tool itself
• Feature owners appreciated getting regular feedback
• Some used it to discern patterns and trends
  • E.g. polarised opinions often indicate an interesting gameplay mechanic (see the sketch below)
• Catalyst for brainstorming
• Translated directly into increased quality of the game:
  • Targeting
  • Gliding
  • Camera
  • Story
  • Helicopter controls
  • Profanity
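One cheap way to surface such polarised features is to look for a wide spread of scores. The standard-deviation metric and the threshold below are my illustration; the talk does not say how the patterns were actually detected.

```python
from statistics import stdev

def polarised_features(scores_by_feature: dict[str, list[int]],
                       min_spread: float = 30.0) -> list[str]:
    """Return features whose 0-100 scores are widely spread,
    i.e. the team strongly disagrees about them."""
    return [feature for feature, scores in scores_by_feature.items()
            if len(scores) >= 2 and stdev(scores) >= min_spread]

# Example: Gliding divides the team, Camera does not.
scores = {"Gliding": [10, 95, 20, 90], "Camera": [70, 75, 65, 70]}
print(polarised_features(scores))    # ['Gliding']
```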
A Life of Its Own
• Ideas for the next game
• Ideas for tools and tech improvements
• Feedback on the studio, e.g.
  • Overtime food
  • Air conditioning
Other Benefits
• Water cooler effect
• Catalyst effect
• Critical mass effect
• Calling out unsung heroes
• Taking the team's temperature
• A repository of evidence
  • "Is this an issue? It clearly is."
  • "Does this mission suck? Apparently not."
What Didn't Work
• Reading feedback
  • UI left much to be desired
  • Feature owners often relied entirely on email notifications
• Many people didn't like the scoring part
  • Onerous – "Why do I have to give a score?"
  • Unclear – "Not sure how to score"
  • Threatening – "Not fair that someone rated my mission 3/10"
  • Generally unreliable
In Defence of Mandatory Ratings
• Part of the package
• Not meant to be absolute
  • Is the game score going up?
  • What areas are dragging it down?
  • Are our efforts in area X paying off?
• Encourages people to give the feature full consideration
• Makes feedback more readable and easier to organise
  • E.g. when faced with 200 comments, start with the highest- and lowest-rated ones (sketched below)
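A small sketch of that triage order, alternating between the most enthusiastic and the most critical voices. Pairing each comment with its author's score is my assumption about how the data would be joined.

```python
def triage_order(comments: list[tuple[int, str]]) -> list[tuple[int, str]]:
    """Order (score, comment) pairs so the most extreme ratings come first:
    highest and lowest alternating toward the middle."""
    ranked = sorted(comments, key=lambda c: c[0])   # ascending by score
    order = []
    while ranked:
        order.append(ranked.pop())                  # highest remaining
        if ranked:
            order.append(ranked.pop(0))             # lowest remaining
    return order

feedback = [(95, "love the gliding"), (15, "camera fights me"),
            (50, "story is fine"), (5, "helicopter controls are broken")]
print([score for score, _ in triage_order(feedback)])   # [95, 5, 50, 15]
```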
Future Work
• "Feed-forward" to close the loop, i.e.
  • Pre-empt suggestions that have already been dismissed
  • Pre-empt feedback on issues that are already being addressed
  • Could take the form of a special comment from the feature owner
• Better UI
• Data-mining tools
  • Charts and reports
  • Search
• Binary ratings