Data Driven Evaluation of Crowds Presenter: Robin van Olst
The Authors • Professor Ariel Shamir • Assistant Professor Yiorgos Chrysanthou • Professor Daniel Cohen-Or • PhD Alan Lerner
What is it about? • Crowd simulation quality is usually judged subjectively • Based on ‘look-&-feel’ • Multiple definitions of ‘natural behavior’ are possible • Authors propose an objective approach
Previous work • State-action examples • Group Behavior from Video: A Data-Driven Approach to Crowd Simulation – Lee et al. • Crowds by Example – Lerner et al. • Analyzing motion data for validation • Pedestrian Reactive Navigation for Crowd Simulation: A Predictive Approach – Paris et al. • Related work from the vision community • Doesn't look at the quality of trajectory segments
Concept • State-action examples • One for each agent, at a specific time and place • Holds data • Position, speed and direction of the agent • Positions of nearby agents • Input videos • Analysis produces state-action examples • These are entered into a database • Evaluator • Everything is known (state attributes, trajectories) • Compares the action performed vs. the action that should have been performed • Rates similarity to the most similar state-action example
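The paper ships no code, so below is a minimal Python sketch of how such an evaluator could work. All names (StateActionExample, state_distance, rate_action) and the distance metric itself are assumptions for illustration; the authors' actual similarity measure is not reproduced here.

```python
from dataclasses import dataclass
import math

# Hypothetical encoding of one state-action example: the agent's own
# kinematic state, the offsets of nearby agents, and the action it
# actually performed at that moment.
@dataclass
class StateActionExample:
    position: tuple   # (x, y) position of the agent in the scene
    speed: float      # current scalar speed
    direction: float  # heading angle in radians
    neighbors: list   # (x, y) offsets of nearby agents
    action: tuple     # observed action, e.g. the next-step velocity

def state_distance(a: StateActionExample, b: StateActionExample) -> float:
    """Crude state similarity; a stand-in for the paper's actual metric."""
    d = abs(a.speed - b.speed) + abs(a.direction - b.direction)
    # Match each neighbor to its closest counterpart in the other state;
    # differences in crowd density add a fixed penalty per unmatched agent.
    for na in a.neighbors:
        d += min(math.dist(na, nb) for nb in b.neighbors) if b.neighbors else 1.0
    d += abs(len(a.neighbors) - len(b.neighbors))
    return d

def rate_action(query: StateActionExample, database: list) -> float:
    """Find the most similar stored example and rate how close the
    simulated agent's action is to the action observed in the video."""
    best = min(database, key=lambda ex: state_distance(query, ex))
    return 1.0 / (1.0 + math.dist(query.action, best.action))

# Toy usage: one database entry extracted from video, one simulated query.
db = [StateActionExample((0, 0), 1.2, 0.0, [(1.0, 0.5)], (1.2, 0.0))]
q = StateActionExample((5, 5), 1.1, 0.1, [(0.9, 0.6)], (1.0, 0.3))
print(rate_action(q, db))  # a score near 1.0 means "looks like the video data"
```

This mirrors the evaluator described above: because all state attributes and trajectories are known, rating a simulated action reduces to a nearest-neighbour lookup followed by an action comparison.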
Assessment • Positive points • Negative points • Conclusion
Positive points • Appears to be one of the first papers on objective crowd simulation judgement • Takes advantage of empirical data • Is able to find 'curious' behavior
Assessment • Positive points • Negative points • Conclusion
Performance • Analysis performance • 12 minutes of a sparse crowd, 343 trajectories • Took almost an hour • 3.5 minutes of a dense crowd, 434 trajectories • Took more than an hour • Each state-action example is ~1 KB • Unknown how much data is generated in total • Impossible to check large crowds, or crowds over an extended time? • Requires more video data
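To make the "unknown how much data" point concrete, here is a rough back-of-envelope estimate. The ~1 KB per example comes from the slide; the sampling rate and the assumption that every agent is visible for the whole clip are guesses, so treat the output as an upper bound:

```python
# Rough storage estimate for the dense-crowd clip above.
CLIP_SECONDS = 3.5 * 60     # 3.5-minute dense-crowd video (slide)
TRAJECTORIES = 434          # tracked trajectories in that clip (slide)
SAMPLES_PER_SECOND = 1      # assumed example-sampling rate per agent
BYTES_PER_EXAMPLE = 1024    # ~1 KB per state-action example (slide)

examples = CLIP_SECONDS * SAMPLES_PER_SECOND * TRAJECTORIES
megabytes = examples * BYTES_PER_EXAMPLE / 2**20
print(f"~{examples:,.0f} examples, ~{megabytes:.0f} MiB")
# -> ~91,140 examples, ~89 MiB
```

Even at ten samples per second the clip would stay under a gigabyte, which suggests the hour-plus running times are dominated by processing rather than by sheer data volume.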
Captured video data Technical issues: • Field of view must be fairly small • A wide or distant view may be too inaccurate • No obstructions are allowed • These constraints rarely hold in real-life footage • Can existing data be used?
Captured video data Practical issues: • Manual tracking is tedious • Is automatic tracking accurate enough? • Comparison against video offers verification, not falsification
Concept • Doesn't consider grouping? • Only works for existing environments • New environments require new videos • Doesn't indicate how the tested crowd simulation should be improved • Can't be used to compare crowd simulation methods • Evaluation quality depends on the quality of the input video
Conclusion • No video or meaningful results published • Referenced by one paper: Context-Dependent Crowd Evaluation – Lerner et al. • Which is itself referenced by none • Does not appear in any of the authors' publication lists • Verdict: only useful for checking your own crowd simulation • Even that is cumbersome