
Understanding What Users Need to Understand Us (and Our Data): Crowdsourcing Metadata

This study explores the impact of video length on user-generated tags through a mixed-methods approach, providing insight into the challenges and opportunities of metadata crowdsourcing. Participants' tagging behavior, and a comparison of their tags against existing metadata, show how crowdsourced tags describe content, identify people and places, and capture emotional responses to online videos.


Presentation Transcript


  1. Understanding What Users Need to Understand Us (and Our Data): Crowdsourcing Metadata Edward Benoit, III, School of Library & Information Science, LSU

  2. Previous Research & Focus • Tagging studies have mainly focused on still images or text • User-generated content as a time-saving mechanism • Challenges and opportunities of moving images • Initial research question: What effect does online video length have on crowdsourced metadata/user-generated tags?

  3. Methodology • Mixed-methods, quasi-experimental two-group design • 40 participants randomly assigned to two groups, tagging either: • One 30-minute video -OR- • Three 10-minute videos • Analysis: • Open coding • Descriptive statistics
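
A minimal sketch of the descriptive-statistics step, assuming per-participant tag counts are collected for each group; the group labels and numbers below are placeholders, not the study's data.

# Descriptive statistics (mean and sample standard deviation) per group.
# All values are hypothetical placeholders.
import statistics

group_tag_counts = {
    "long video": [4, 6, 2, 50, 12, 8, 3, 70, 10, 5],
    "short videos": [25, 31, 22, 40, 18, 27, 35, 29, 20, 26],
}

for group, counts in group_tag_counts.items():
    mean = statistics.mean(counts)
    sd = statistics.stdev(counts)  # sample standard deviation
    print(f"{group}: M={mean:.1f}, SD={sd:.1f}")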

  4. Sample Video(s): LDMA (http://www.ladigitalmedia.org/)

  5. Participants • Gender: 22.5% Male, 77.5% Female • Age: 19-65, mean 30.9 • Location: 65% Louisiana • Race: 77.5% White, 10% Black, 5% Hispanic, 5% Asian, 2.5% Other

  6. Number of Tags: Short Videos

  7. Sample Video(s): LDMA • Statistically significant difference between the number of tags generated for the long video (M=16.1, SD=22.6) and the short videos (M=27.75, SD=12.1); t(38)=-3.11, p=0.004.
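
For readers reproducing this comparison, a minimal sketch of an independent-samples t-test on per-participant tag counts, assuming the counts for each group are held in two Python lists; the numbers below are placeholders, not the study's data (SciPy's pooled-variance default matches the reported df of 38 for two groups of 20).

# Sketch of an independent-samples t-test on tag counts.
# Tag counts are illustrative placeholders, NOT the study's data.
from scipy import stats

long_video_counts = [4, 6, 2, 50, 12, 8, 3, 70, 10, 5,
                     9, 7, 4, 40, 11, 6, 30, 15, 14, 16]        # group 1 (n=20), hypothetical
short_videos_counts = [25, 31, 22, 40, 18, 27, 35, 29, 20, 26,
                       33, 24, 30, 28, 23, 45, 21, 32, 19, 27]  # group 2 (n=20), hypothetical

t_stat, p_value = stats.ttest_ind(long_video_counts, short_videos_counts)
df = len(long_video_counts) + len(short_videos_counts) - 2      # pooled degrees of freedom
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}")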

  8. Top Tags • Long Video: Cajun Christmas (19), Louisiana (17), Christmas (16), Papa Noel (16), Cajun (12), Bayou (11), Bayou Parade (11), Cajun Night Before Christmas (10), Food (9) • Short Videos: Christmas (14), Papa Noel (13), cajun (10), Louisiana (9), Cajun Night Before Christmas (6), Lafayette (6), Natchitoches (6), Bayou (5), Cajun Christmas (5), Fireworks (5)

  9. Metadata Comparison • Compiled a list of existing metadata words • After stop-list removal, 89 words remained • Compared with generated tags: • Full Video: 41.6% matching • Short Videos: 48.3% matching
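
One way such a matching percentage could be computed is a case-insensitive overlap between the compiled metadata word list (after stop-list removal) and the pooled user tags; the stop list and word lists below are illustrative placeholders, not the study's vocabulary.

# Share of existing metadata terms that also appear among user-generated tags.
# Stop list and word lists are hypothetical placeholders.
STOP_WORDS = {"the", "a", "an", "of", "and", "in", "on", "at"}

def matching_rate(metadata_words, user_tags):
    metadata = {w.lower() for w in metadata_words} - STOP_WORDS
    tags = {t.lower() for t in user_tags}
    if not metadata:
        return 0.0
    return 100.0 * len(metadata & tags) / len(metadata)

metadata_words = ["Christmas", "Papa", "Noel", "Bayou", "the", "parade", "Cajun"]
user_tags = ["christmas", "papa", "noel", "bayou", "cajun", "fireworks"]
print(f"{matching_rate(metadata_words, user_tags):.1f}% matching")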

  10. Main Types of Tags • Content description • Identification (persons, places, music) • Emotional responses

  11. Future Directions • Comparison of amateur and professional online videos • Tagging enticement techniques

  12. Thank you! Questions?
