EdTech 2009 Improving the Quality of Flexible Learning Daire Ó Broin and Siobhán Clarke Distributed Systems Group TCD 21st May 2009
Introduction
• What is Flexible Learning?
  • enables learners to choose where, when, and how they learn
• What is Quality?
  • the “degree to which a set of inherent characteristics fulfils requirements” [ISO 9000:2005]
• Improving the quality of flexible learning
  • different learners have different requirements
  • meet learners’ increasingly demanding requirements
Introduction
• Many desirable requirements stem from a content repository adapting to the user’s context:
  • examples that explain material using subjects the user is interested in
  • learning tailored to the learner’s preferred learning style
  • an experience that is enjoyable and motivating
Outline
• The flow model
• An approach to creating the key conditions of flow
• An evaluation of the approach
• Conclusion and future work
The Flow Model
• Flow
  • an immensely enjoyable mental state characterised by “a complete immersion in what one is doing” [Csikszentmihalyi, 1990]
• 3 key conditions of flow:
  • a challenging task that requires skills, where the person believes his or her skills match the challenges
  • clear goals
  • feedback
Research Question: how can we produce the 3 key conditions of flow?
• Our approach:
  • a large repository of tasks
  • model the student’s skills
  • recommend suitable tasks from the content repository based on the skill model
  • supply or enhance feedback
• Sample application: Inka
  • a mobile teaching-assistant tool for Java programming
Recommender Systems
• Recommender systems estimate a user’s ratings of items the user has not yet seen
• Approaches are usually classified as:
  • collaborative, e.g. Amazon
  • content-based, e.g. online news sites
  • hybrid [Kim, 2006]
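As a rough illustration (not from the talk), a collaborative estimate predicts a user’s rating of an unseen item as a similarity-weighted average of other users’ ratings of that item. The profile vectors, task name, and similarity measure below are invented for the sketch:

```python
# Illustrative collaborative-filtering estimate: average other users'
# ratings of an item, weighted by how similar their profiles are to
# the target user's profile. All data shapes here are assumptions.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length profile vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def collaborative_estimate(target, others, item):
    """Predict the target user's rating of `item` as a
    similarity-weighted average of other users' ratings of it."""
    num = den = 0.0
    for profile, ratings in others:
        if item in ratings:
            sim = cosine(target, profile)
            num += sim * ratings[item]
            den += abs(sim)
    return num / den if den else None
```

A content-based estimate would instead compare the item’s own features to the user’s profile; the hybrid approach combines both.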
Task Recommender Systems
• Recommender systems have been built to recommend a diverse range of items
• Main difference when recommending learning tasks:
  • items such as movies are recommended on the assumption of shared taste, which changes little over time
  • recommendations of learning tasks are based on many properties, some of which change rapidly, e.g. the learner’s skills
Multi-criteria Recommendation
• Most recommendation problems solved to date have involved single-criterion rating systems [Adomavicius and Kwon, 2007]
• A multi-criteria approach is imperative for recommending learning tasks, given the importance of context
• Three criteria:
  • balance of skills and challenges
  • clear goals
  • feedback
• Calculate a score for each single-criterion problem, then compute an overall rating
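The score-per-criterion-then-aggregate idea can be sketched as follows. The weights, task names, and per-criterion scores below are invented for illustration and are not taken from the Inka system:

```python
# Illustrative multi-criteria aggregation: each task has a score per
# flow criterion; the overall rating is a weighted average of the
# three. The weights are an assumption, not the talk's actual values.
WEIGHTS = {"skill_balance": 0.5, "clear_goal": 0.25, "feedback": 0.25}

def overall_rating(criterion_scores):
    """Weighted average of per-criterion scores (each in [0, 1])."""
    return sum(WEIGHTS[c] * s for c, s in criterion_scores.items())

# Hypothetical per-criterion scores for two candidate tasks.
tasks = {
    "recursion-ex1": {"skill_balance": 0.9, "clear_goal": 0.8, "feedback": 0.6},
    "arrays-ex4":    {"skill_balance": 0.4, "clear_goal": 0.9, "feedback": 0.9},
}
ranked = sorted(tasks, key=lambda t: overall_rating(tasks[t]), reverse=True)
```

Giving the skills/challenges balance the largest weight reflects its status as the hardest flow condition to satisfy, but any weighting could be used.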
MCR Approach Overview
• The user gets a list of recommended tasks
• The user does a task
• The user rates the task along the three criteria
• The ratings are used to improve recommendation performance
MCR Approach
• Clear goals and feedback
  • similar to taste, which changes slowly
  • a collaborative approach can be used
• Challenges and skills
  • skills can change quickly
  • a collaborative approach cannot be used
Challenges and skills
• In flow experiments, the balance is usually measured by asking a person to rate:
  • their level of skills in the activity (0 to 9)
  • the level of challenges in the activity (0 to 9)
• Measuring skills and challenges in this way is ambiguous
  • it is not clear which specific challenges and skills are being measured
Challenges and skills
• We therefore measure the challenges/skills ratio indirectly
• The perception of a balance of challenges and skills can be viewed as a person’s “confidence regarding what [he/she is] able to do in a situation” [Jackson and Eklund, 2004]
Estimating challenges and skills
• A domain expert indexes each task with a vector of weights, one per skill
• Modelling confidence in a skill:
  • [Figure: graphs of confidence levels, from 1 = definitely can’t do the task to 5 = definitely can]
Estimating challenges and skills
• A small section of the Inka skill model: a skill is rated by rating confidence on a set of tasks that require the skill
• A task’s recommendation score is calculated from its index of skills and the student’s skill model
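A minimal sketch of how such a score might be computed, assuming each task carries expert-assigned skill weights and the student model stores a 1–5 confidence per skill. The “sweet spot” target and all names are assumptions for illustration, not the actual Inka formula:

```python
# Illustrative task-scoring sketch: favour tasks whose heavily
# weighted skills sit near a moderate confidence level, i.e.
# challenging but believed doable. TARGET and MAX_DIST are assumptions.
TARGET = 3.5    # assumed sweet spot on the 1-5 confidence scale
MAX_DIST = 2.5  # largest possible distance from TARGET on [1, 5]

def task_score(task_weights, student_confidence):
    """Weighted closeness of the task's required skills to the
    confidence sweet spot; returns a value in [0, 1]."""
    score = total = 0.0
    for skill, w in task_weights.items():
        conf = student_confidence.get(skill, 1.0)  # unseen skill: lowest confidence
        score += w * (1.0 - abs(conf - TARGET) / MAX_DIST)
        total += w
    return score / total if total else 0.0
```

Under this sketch a task stressing a skill the student is moderately confident in outranks one stressing a skill the student believes is far beyond (or far beneath) them.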
Evaluation
• Consumer recommender systems have several standardised data sets, e.g. Book-Crossing, EachMovie
  • these data sets can be used to evaluate recommendation algorithms
• In Technology Enhanced Learning (TEL) there are no standardised data sets [Drachsler et al, 2009]
  • ratings are context-specific
• Our evaluation was small-scale, but successful
  • TCD: 15 computer science students, 2 × 90-minute sessions
  • mean flow score: 95% confidence interval [18.1, 19.3]; the minimum flow score for the 3 conditions to be present is 16
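The reported interval can be illustrated with the standard t-based confidence interval for a mean. The sample ratings below are invented, since the study’s raw data are not in the slides:

```python
# Illustrative 95% confidence interval for a mean flow score from a
# small sample: mean +/- t * s / sqrt(n), with the t critical value
# for n-1 degrees of freedom. The scores below are invented data.
from statistics import mean, stdev
from math import sqrt

def flow_ci(scores, t_crit):
    """95% CI for the mean flow score of a small sample."""
    m, s, n = mean(scores), stdev(scores), len(scores)
    half = t_crit * s / sqrt(n)
    return (m - half, m + half)

# Invented ratings for 15 hypothetical participants.
scores = [19, 18, 20, 17, 19, 18, 21, 19, 18, 20, 17, 19, 18, 19, 18]
low, high = flow_ci(scores, t_crit=2.145)  # t for 14 degrees of freedom
```

The key point of the slide is that the whole interval lies above 16, the minimum score at which all 3 flow conditions are present.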
Improving quality
• As ratings are gathered, poorly rated tasks are flagged
• Clear goals and feedback:
  • the task is given to the content developer to improve
• Skills/challenges:
  • the task index is automatically modified
  • simulations have shown this to be effective
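A sketch of how the flagging step might work, assuming a task is flagged on a criterion once that criterion’s mean rating falls below a threshold over enough ratings. The thresholds and data shapes are assumptions, not the talk’s actual rules:

```python
# Illustrative flagging of poorly rated tasks: once a task has enough
# ratings on a criterion, flag it if the mean rating on that criterion
# is below a threshold. MIN_RATINGS and THRESHOLD are assumptions.
from statistics import mean

MIN_RATINGS = 5
THRESHOLD = 2.5  # assumed cutoff on a 1-5 rating scale

def flag_poor_tasks(ratings):
    """ratings: {task_id: {criterion: [ratings]}} ->
    {task_id: [poorly rated criteria]}."""
    flagged = {}
    for task, by_criterion in ratings.items():
        bad = [c for c, rs in by_criterion.items()
               if len(rs) >= MIN_RATINGS and mean(rs) < THRESHOLD]
        if bad:
            flagged[task] = bad
    return flagged
```

Tasks flagged on clear goals or feedback would go to the content developer; tasks flagged on skills/challenges would have their skill index adjusted automatically.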
Conclusion and future work
• The requirements can be met in the Java domain
• Longitudinal study of the Java domain
• Develop applications for different domains
• The abundance of content
  • can the requirements of a particular flexible learner be met?
  • searching repositories by context
  • skills are one of the most important contexts, but indexing them takes huge effort and requires agreement on how to decompose domains
References
• [Adomavicius and Kwon, 2007] New Recommendation Techniques for Multicriteria Rating Systems, IEEE Intelligent Systems, 22(3), 2007
• [Csikszentmihalyi, 1990] Flow: The Psychology of Optimal Experience, Harper & Row, 1990
• [Drachsler et al, 2009] Identifying the Goal, User Model and Conditions of Recommender Systems for Formal and Informal Learning, Journal of Digital Information, 10(2), 2009
• [Jackson and Eklund, 2004] The Flow Scales Manual, Fitness Information Technology, 2004
• [Kim, 2006] What is a Recommender System?, Proceedings of Recommenders06.com, pp. 1–21, 2006