Results from the User Survey
Tobias Hossfeld
WG2 TF "Crowdsourcing"
https://www3.informatik.uni-wuerzburg.de/qoewiki/qualinet:crowd
Summary • Apps of interest (in decreasing order) • Adaptive streaming, 2D video, VoIP, images, web browsing • Interests and contributions by VIPs • High interest: design of test, statistical analysis • Very few VIPs: implementation and execution • Time concerns by VIPs; limited resources for doing tests • Focus on existing (lab and crowdsourcing) data sets • Discussion in phone conference, see Doodle link • Crowdsourcing data available / VIPs available for all steps (test design, implementation, execution, analysis) • Web browsing: data available (Martin, Lea, Toni, Tobias) • VoIP and image: VIPs available for all steps • Lab results available / VIPs available • Available: images, 2D video • VoIP: will be executed • Web browsing: only implementation missing
Which Application? Your Contribution? • Crowdsourcing: How will you contribute to the crowdsourcing experiment? • Application: Which application do you prefer for the JOC? • Laboratory: How will you contribute to the lab experiment?
Detailed View: Contributions • Of interest, with contributions: images, web browsing, VoIP, adaptive streaming, 2D video • Out of scope (too many problems): file storage, radio streaming, other
Research Questions • Develop and apply the methodology • Derive a QoE model for the selected app • Analyze the impact of the crowdsourcing environment • Provide a database with crowdsourcing results • Do results obtained via crowdsourcing platforms differ from results of a test using a dedicated panel, and in which sense? What does this imply for QoE assessment and the tools we (can) use? • Do results obtained via crowdsourcing differ from results of controlled lab experiments (and, in a next step, possibly even more realistic home environments)? (A minimal comparison sketch follows below.)
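The last two research questions amount to comparing rating distributions collected via crowdsourcing with those from a dedicated panel or a controlled lab test. The sketch below shows one possible way to quantify such a comparison; the data layout, the example ratings, and the choice of Pearson correlation, RMSE, and per-condition Mann-Whitney tests are illustrative assumptions, not decisions of the TF.

```python
# Illustrative sketch: comparing crowdsourcing and lab ratings collected
# for the same test conditions. All data below is hypothetical.
import numpy as np
from scipy import stats

# Hypothetical ratings on a 5-point ACR scale, keyed by test condition.
crowd = {"c1": [4, 5, 3, 4, 4, 5], "c2": [2, 3, 2, 1, 2, 3], "c3": [5, 4, 5, 5, 4, 4]}
lab   = {"c1": [4, 4, 5, 4, 5],    "c2": [2, 2, 3, 2, 2],    "c3": [5, 5, 4, 5, 5]}

conditions = sorted(crowd)
mos_crowd = np.array([np.mean(crowd[c]) for c in conditions])
mos_lab = np.array([np.mean(lab[c]) for c in conditions])

# Agreement between the two panels on the level of per-condition MOS values.
r, _ = stats.pearsonr(mos_crowd, mos_lab)
rmse = np.sqrt(np.mean((mos_crowd - mos_lab) ** 2))
print(f"Pearson r = {r:.3f}, RMSE = {rmse:.3f}")

# Per-condition check whether the two rating distributions differ.
for c in conditions:
    u, p = stats.mannwhitneyu(crowd[c], lab[c], alternative="two-sided")
    print(f"{c}: Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```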
Individual Comments • Contributions • We are currently developing two applications of possible interest: one is a VoIP client within WebRTC, and the other is an inter-media synchronization application similar to HbbTV (broadcast/broadband TV), which we also hope to deploy on a WebRTC platform. Both are still at the development stage, so perhaps I am being a bit optimistic! • I can do data analysis for the first two options as well. • The chosen app, and the link to ongoing activities, will determine how much I can be involved. Depending on the app, I could also link up to the iMinds panel. • Problems • Heterogeneous, possibly time-variant user connections • I am a complete novice with everything related to the implementation, but I see some methodological challenges related to cross-device use (and how this links up to QoE) of, e.g., personal cloud storage apps and adaptive video streaming. • No time
Next Steps • Summary via mailing list / wiki • Your interests • Your contributions • Collective decision within TF • Collect info from all TF participants • Google survey form • Online meeting • Decision on concrete application, platform, research questions • Allocation of work for VIPs • Rough time schedule • Time plan • 15/03/2013: summary • 22/03/2013: Google survey sent around • 31/03/2013: TF fills in survey • Mid-April: online meeting
Summary from Breakout Session
Contributions by Participants • Design of user test • Source contents for tests (video, images): Marcus Barkowsky • Test design: Lucjan Janowski, Katrien de Moor, Miguel Rios-Quintero • Implementation of test • Lab test for image quality: Judith Redi, Filippo Mazza • Lab test for VoIP: Christian Hoene • Online test for VoIP: Christian Hoene • Crowdsourcing test for images/video: Christian Keimel • Crowdsourcing test for HTTP video streaming: Andreas Sackl, Michael Seufert, Tobias Hossfeld • Crowdsourcing platform with screen quality measurements: Bruno Gardlo • Crowdsourcing micro-task platform: Babak Naderi, Tim Polzehl • Execution of test • Crowdsourcing: Tobias Hossfeld • Online panel: Katrien de Moor • Lab test for image quality: Judith Redi, Filippo Mazza • Lab test for VoIP: Christian Hoene • Crowdsourcing test for images/video: Christian Keimel • Crowdsourcing test for HTTP video streaming: Andreas Sackl, Michael Seufert, Tobias Hossfeld • Data analysis • Identification of key influence factors and modeling: Tobias Hossfeld, Judith Redi • Comparison between crowdsourcing and lab: Tobias Hossfeld, Marcus Barkowsky, Katrien de Moor, Martin Varela, Lea Skorin-Kapov • Model validation: Marcus Barkowsky
Summary of Interests
Summary of Contributions
Input collected before Novi Sad meeting
Interest in Joint Qualinet Experiment • Filippo Mazza, Patrick le Callet, Marcus Barkowsky: comparison of lab and crowdsourcing experiments considering model validation; directly related to the "Validation TF" • Martin Varela, Lea Skorin-Kapov: impact of the crowdsourcing environment on user results and QoE models, e.g. incentives and payments, on the example of Web QoE; directly related to the "Web/Cloud TF" • Christian Keimel: impact of the crowdsourcing environment on user results and QoE models, e.g. demographics • Andreas Sackl, Michael Seufert: impact of content/consistency questions on QoE ratings, e.g. for HTTP video streaming; directly related to the "Web/Cloud TF" • Bruno Gardlo: currently working on an improved crowdsourcing platform with screen quality measurement etc.; interest in incentive design, gamification; platform may be used for the experiment, e.g. for videos or images • Katrien de Moor: contribution to questionnaire development/refinement and/or by setting up a comparative lab test • Babak Naderi: development of a crowdsourcing micro-task platform which may be used for the joint experiment; incentives, data quality control, effects of platform-dependent and user-dependent factors on motivation and data quality (see the screening sketch below)
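One of the interests above, data quality control for crowdsourced ratings, can be made concrete with a minimal screening sketch: drop raters whose scores barely correlate with the condition-wise MOS of the remaining panel. The ratings matrix and the 0.75 threshold are hypothetical assumptions for illustration, not choices made by the TF or by the platform developers named above.

```python
# Illustrative sketch: leave-one-out reliability screening of crowd raters.
# All values below are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical ratings matrix: rows = raters, columns = test conditions (ACR 1..5).
ratings = np.array([
    [5, 4, 2, 1, 3],
    [4, 4, 2, 2, 3],
    [1, 5, 5, 1, 2],   # likely unreliable rater
    [5, 3, 2, 1, 4],
])

MIN_CORRELATION = 0.75  # assumed screening threshold

reliable = []
for i, row in enumerate(ratings):
    others_mos = np.delete(ratings, i, axis=0).mean(axis=0)  # leave-one-out MOS
    r, _ = stats.pearsonr(row, others_mos)
    print(f"rater {i}: correlation with remaining panel = {r:.2f}")
    if r >= MIN_CORRELATION:
        reliable.append(i)

print("raters kept for analysis:", reliable)
```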