Rationale
• Existing system vs. culture as a function propagated by the masses
• Generate predictive data
• Increase public awareness for the arts
• Increase responsibility of grant makers
Sub-domain examples
• Cinema
• Museum exhibition
• City art project/installation
Film festival
• Predicting the top box office and the cumulative gross over some period
• Categorized profile of each person
  • in dimensions such as genre preference, age, sex, etc.
• Type of each movie
  • in dimensions such as genre, nationality, big stars shown, etc.
• For classifying:
  • survey: preferences on different genres, identifying the genre of each movie, and predictions of the numbers for already-known results of films that had similar success
  • data collection: the movies they watched in the festival
• Observing how people's different profiles influence predictions on different types of movies
  • adjusting the sensitivity and bias of each person's prediction for a more accurate prediction result
• Depending on the true result, participants get rewards
[Diagram: each person's prediction & betting information feeds the prediction system (sensitivity and bias parameters, etc.), which outputs the prediction result; the deviation from the true result drives parameter tuning, while profile classification (e.g. survey, data collection) provides initialization and influence, inferring each person's profile and the type of each item.]
• The parameters in the prediction system can be learned by adjusting for the difference between the prediction result and the true result (a sketch follows below).
• Deciding the number of dimensions describing the personal profile and the item type (model selection by Occam's razor).
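A minimal sketch of one way each participant's "sensitivity and bias" could be tuned from the deviation between past predictions and true results. The linear calibration model, the function names, and the toy numbers are illustrative assumptions, not a design taken from the slides:

```python
import numpy as np

def fit_calibration(past_preds, past_truths):
    """Fit a per-participant sensitivity (slope) and bias (intercept)
    so that: truth ≈ sensitivity * prediction + bias.
    past_preds: (n_events, n_people); past_truths: (n_events,)."""
    n_people = past_preds.shape[1]
    sens, bias = np.ones(n_people), np.zeros(n_people)
    for i in range(n_people):
        # ordinary least squares on this participant's history
        sens[i], bias[i] = np.polyfit(past_preds[:, i], past_truths, 1)
    return sens, bias

def aggregate(new_preds, sens, bias):
    """Calibrate each participant's new prediction and average them."""
    return float(np.mean(sens * new_preds + bias))

# toy usage: two past films, three participants (numbers are made up)
past = np.array([[40.0, 55.0, 30.0], [60.0, 80.0, 50.0]])  # predicted gross
truth = np.array([50.0, 70.0])                              # actual gross
s, b = fit_calibration(past, truth)
print(aggregate(np.array([45.0, 65.0, 40.0]), s, b))        # calibrated forecast
```

The same per-person parameters could in principle be grouped by profile dimension (genre preference, age, etc.), which is where the model-selection question about the number of dimensions would enter.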
Exhibitions in Museums
Predicting the success of exhibitions held in museums
Museum administrators and project managers
• Allocating exhibition space and time
• Decision making on extending or closing the exhibition
Participants
• Reward for predicting the true result
• Opportunity to contribute to public arts
Exhibitions at the Boston MFA
• Paris Collections 2006: Dec 16, 2006 - Jun 17, 2007
• Michael Mazur: The Art of the Paint: Jan 10, 2007 - Oct 8, 2007
• Women of Renown: Female Heroes and Villains in the Prints of Utagawa Kuniyoshi (1797-1861): Jan 24, 2007 - Jul 8, 2007
• Donatello to Giambologna: Italian Renaissance Sculpture at the Museum of Fine Arts, Boston: Mar 10, 2007 - Aug 1, 2007
• Through Six Generations: The Weng Collection of Chinese Painting and Calligraphy
Body World 2 at the Boston Science Museum
• Fish, Fads, and Fireflies
• Is Algae In Your Future?
• Dinosaurs
• Lighthouse
• Human Body Connection
• Investigate!
• Natural Mysteries
• Science in the Park
• Theater of Electricity
• Discovery Center
• Current Science & Technology
• Cahners ComputerPlace
• Gilliland Observatory
• Welcome to the Universe
Exhibitions in Museums
• Predicting the number of cumulative visitors within a month
• Participants' preferences
  • survey
  • data collection through records and sensors
• Correcting error by observing how participants' preferences influence predictions on different types of exhibitions (see the sketch below)
• Rewards to participants based on the true result
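A small sketch of one possible error-correction scheme for the visitor forecasts above: track each participant's signed error per exhibition category and subtract it from their later predictions. The class name, the category grouping, and the simple averaging rule are hypothetical choices, not specified in the slides:

```python
from collections import defaultdict

class VisitorForecast:
    """Corrects each participant's visitor predictions using their
    past average error for a given exhibition category."""

    def __init__(self):
        # (participant, category) -> list of signed errors (predicted - actual)
        self.errors = defaultdict(list)

    def record(self, participant, category, predicted, actual):
        self.errors[(participant, category)].append(predicted - actual)

    def corrected(self, participant, category, prediction):
        history = self.errors[(participant, category)]
        avg_error = sum(history) / len(history) if history else 0.0
        return prediction - avg_error

    def forecast(self, category, predictions):
        """predictions: {participant: raw predicted cumulative visitors}."""
        adjusted = [self.corrected(p, category, v) for p, v in predictions.items()]
        return sum(adjusted) / len(adjusted)
```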
Art Projects
• Predicting the budget allocation and success
• Limiting the number of prediction tickets for each project
• Making a prediction of success per ticket
• Rewards to participants based on the true result
How
[Diagram: a multi-stage grid of funding levels versus different projects, with buy-in cost changing over time.]
• Multi-stage betting system: predict the success of each project
• Periodically publish results
• Adjust the buy-in to reflect the availability of information (see the sketch below)
• A project installation closes the house at a given cell
• Success evaluated by an independent agency
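A rough sketch of the multi-stage buy-in and settlement idea above. The growth rate of the buy-in and the proportional payout rule are illustrative assumptions; the slides only state that the buy-in should rise as more information becomes available:

```python
def buy_in_cost(base_cost, stage, n_stages, growth=2.0):
    """Buy-in rises at later stages, when periodically published results
    make prediction easier (the growth factor is illustrative)."""
    return base_cost * growth ** (stage / max(n_stages - 1, 1))

def settle(bets, actual_success):
    """bets: list of (participant, stake, predicted_success).
    Split the pooled stakes in proportion to stake * closeness,
    so accurate bets placed early (cheaply) earn the most."""
    pot = sum(stake for _, stake, _ in bets)
    scores = {p: stake / (1.0 + abs(pred - actual_success)) for p, stake, pred in bets}
    total = sum(scores.values())
    return {p: pot * s / total for p, s in scores.items()}

# toy usage: three bets placed at different stages on one project
bets = [("ann", buy_in_cost(10, 0, 3), 0.8),
        ("bob", buy_in_cost(10, 1, 3), 0.4),
        ("eve", buy_in_cost(10, 2, 3), 0.7)]
print(settle(bets, actual_success=0.75))
```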
Success
[Diagram: funding levels versus different projects.]
• In multiple domains: property value, attendance (technological implications)
Can't go dynamic
[Diagram: funding levels versus different projects over time, with buy-in cost and share price determined dynamically; a transaction wipes cards clean.]
• The punishment for predicting early is too great
• An individual's prediction matrix is too complex to convey in an ask-bid
• Wiping inordinately punishes buying early
Goal: Predict the impact of funding
• Many possible projects
• Many possible levels of funding
• Problem: Not all projects get funded -- no feedback on predictions for unfunded projects
Reward based on some projects only
• Collect predictions (of attendance, approval) for all projects
• Assign reward based on available data only, i.e. only the projects that are ultimately funded (see the sketch below)
• Alternative: reward based on outcomes measurable irrespective of the funding choice (neighborhood income, property value)
• Are people who are already good at predicting local economic trends likely to be better at predicting the impact of arts projects on those trends?
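A sketch of the scoring rule implied above: predictions are collected for every project, but rewards are computed only over the projects that were actually funded. The reward formula itself (inverse of average absolute error) is an illustrative assumption:

```python
def score(predictions, outcomes, funded):
    """predictions: {participant: {project: predicted attendance}}
    outcomes: {project: actual attendance}, known only for funded projects.
    Only funded projects contribute to the reward; predictions for
    unfunded projects are collected but never scored."""
    rewards = {}
    for person, preds in predictions.items():
        errors = [abs(preds[proj] - outcomes[proj])
                  for proj in funded if proj in preds and proj in outcomes]
        # smaller average error -> larger reward (illustrative mapping)
        rewards[person] = 1.0 / (1.0 + sum(errors) / len(errors)) if errors else 0.0
    return rewards

# toy usage: three projects predicted, only A and C get funded
preds = {"ann": {"A": 900, "B": 300, "C": 1200},
         "bob": {"A": 500, "B": 800, "C": 700}}
actual = {"A": 1000, "C": 1100}
print(score(preds, actual, funded={"A", "C"}))
```

Note that predictions for project B never affect either participant's reward, which is exactly the feedback gap named in the previous slide.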
Possible strategy
• If I want project B to get funded, I give an accurate estimate of project B's benefit and a slightly lower estimate for all other projects.
Model
• For the model, remove the funding-level variable:
• "If given reasonable funding, how much relative benefit comes from each project?"