
Coding and interpreting log stream data


Presentation Transcript


  1. Coding and interpreting log stream data Patrick Griffin Assessment Research Centre, MGSE Presented at the Annual Meeting of AERA, April 6th, 2014, Philadelphia

  2. Data coding in computer-based problem solving assessments • Current coding processes mainly use a dichotomous success/failure (present or absent) scoring system. • Greiff, Wustenberg & Funke (2012) identified three measures that represent dynamic problem solving (DPS): • Model Building, • Forecasting, and • Information Retrieval. • These are applied to a series of steps in complex problems, and • students are scored as false (0) or true (1) on the task. • The ATC21S project draws inferences about how students solve problems as well as about the outcome, • using a series of automated dichotomous scores, rubrics and partial credit approaches.

  3. ATC21S approach ATC21S defines five broad components of collaborative problem solving (CPS) (Hesse, 2014), grouped into two skill classes: • social skills (participation, perspective taking, social regulation); • cognitive skills (task regulation and knowledge building). Within these five components, students are assessed on three ordered levels of performance on 19 elements.

  4. Purpose of the assessments • 11 assessment tasks tapping into different and overlapping skills within this framework • Provides teachers with • information to interpret students’ capacity in collaborative problem solving subskills, • a profile of each student’s performance for formative instructional purposes

  5. Unobtrusive assessments Problem solving: Zoanetti (2010) • Moved individual problem solving from maths to games • Recorded interactions between the problem solver and the task environment in an unobtrusive way ATC21S (2009-2014) • Collaborative problem solving tasks capture interactions between: • the problem solvers working together • the individual problem solver and the task

  6. Activity log files • Following Zoanetti, the files generated as automatic records of these types of student–task interactions are referred to as 'session log files'. • They contain free-form 'process stream data': delimited strings of text stored in text files.
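
The exact field layout of the ATC21S session log files is not shown here, so the Python sketch below assumes a simple comma-delimited layout (timestamp, student, action, detail) purely to illustrate how such a free-form process-stream line might be parsed into structured event records.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Event:
    timestamp: datetime   # when the action occurred
    student: str          # e.g. "A" or "B"
    action: str           # e.g. "dropShute", "chat"
    detail: str           # free-form payload (object id, chat text, ...)

def parse_session_log(lines: List[str]) -> List[Event]:
    """Parse delimited process-stream lines into Event records.

    Assumes one event per line, e.g.:
        2014-04-06T10:15:32,A,dropShute,ball_3
    The real ATC21S field layout may differ.
    """
    events = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        ts, student, action, detail = line.split(",", 3)
        events.append(Event(datetime.fromisoformat(ts), student, action, detail))
    return events
```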

  7. Process stream data • A MySQL database architecture recorded interactions with the task environment to describe solution processes in an unobtrusive way. • Process stream data describe distinct keystrokes and mouse events such as typing, clicking, dragging, cursor movements, hovering time and action sequences, each recorded with a timestamp. • Sequential numbering of events, together with the timestamps, enabled analysis of action sequences and periods of inactivity. • 'Process stream' data refers to this time-stamped record (Zoanetti, 2010).
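
A minimal sketch of how sequential numbering and timestamps could support analysis of action sequences and inactivity, building on the Event record sketched above; the 30-second pause threshold is an arbitrary illustration, not a value from the project.

```python
def number_events(events):
    """Attach a sequence number to each event, mirroring the
    sequential numbering described for the process stream."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    return list(enumerate(ordered, start=1))

def inactivity_gaps(events, threshold_seconds=30):
    """Return (previous_event, next_event, gap_in_seconds) for every pause
    longer than the threshold; the threshold is illustrative only."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    gaps = []
    for prev, nxt in zip(ordered, ordered[1:]):
        gap = (nxt.timestamp - prev.timestamp).total_seconds()
        if gap > threshold_seconds:
            gaps.append((prev, nxt, gap))
    return gaps
```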

  8. Common and unique indicators • ‘Common’ or ‘Global’ events apply to all tasks; • ‘Unique’ or ‘Local’ events are unique to specific tasks due to the nature of the behaviours and interactions those tasks elicit.
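
One simple way to separate common ('global') events from task-specific ('local') ones is a lookup against a set of event names; the names below are hypothetical, since the actual ATC21S event vocabulary is not listed in this presentation.

```python
# Hypothetical global event names; the real vocabulary may differ.
GLOBAL_EVENTS = {"chat", "login", "pageLoad", "taskComplete"}

def split_events(events):
    """Partition events into those common to all tasks (global) and
    those unique to a specific task (local)."""
    global_events = [e for e in events if e.action in GLOBAL_EVENTS]
    local_events = [e for e in events if e.action not in GLOBAL_EVENTS]
    return global_events, local_events
```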

  9. Application example: Laughing Clowns

  10. Interpreting the log stream data 1

  11. Interpreting log stream data

  12. Session logs and chat stream • Process and click stream data are accumulated and stored in session logs. • A chat box tool captures text exchanged between students, stored in string data format. • All chat messages were recorded with a timestamp.
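
Because chat messages carry the same timestamps as actions, the two streams can be merged into a single ordered sequence for later interpretation. A minimal sketch, assuming the Event record from the earlier sketch and an analogous ChatMessage record; neither is the project's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChatMessage:
    timestamp: datetime   # when the message was sent
    student: str          # sender, e.g. "A" or "B"
    text: str             # chat payload stored as a string

def merged_stream(actions, chats):
    """Interleave action events and chat messages by timestamp so that
    chat can later be related to the actions around it."""
    return sorted(list(actions) + list(chats), key=lambda item: item.timestamp)
```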

  13. Recording action and chat data

  14. Interpreting counts and chats • Each task's process log stream was examined for behaviours indicative of the cognitive and social skills defined by Hesse (2014) that could be captured algorithmically. • Indicators were coded as rule-based indicators through an automated algorithmic process similar to that described by Zoanetti (2010). • Zoanetti showed how process data (e.g., counts of actions) could be interpreted as indicators of behavioural variables (e.g., error avoidance or learning from mistakes). • For example, in the Laughing Clowns task a count of the 'dropShute' actions (dropping the balls into the clown's mouth) can indicate how well the student managed their resources (the balls).
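
Following the Laughing Clowns example, a rule-based indicator can be as simple as a per-student count of a named action. The sketch below illustrates the idea; it is not the project's actual scoring algorithm.

```python
from collections import Counter

def count_action(events, action_name="dropShute"):
    """Count occurrences of a named action (e.g. dropping a ball into the
    clown's mouth) for each student in the process stream."""
    return dict(Counter(e.student for e in events if e.action == action_name))

# A low dropShute count might suggest careful resource management,
# while a high count might suggest trial-and-error use of the balls.
```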

  15. Direct and inferred indicators • Indicators that can be captured in all tasks are labelled ‘global’. • They included total response time, response time to partner questions, action counts, and other behaviours that were observed regardless of the task. • Indicators that are task-specific were labelled ‘local’. • There are two categories of local indicators: direct and inferred. • Direct indicators represent those that can be identified clearly, such as a student performing a particular action. • Inferred indicators relate to such things as sequences of action/chat within the data. Patterns of indicators are used to infer the presence of behaviour indicative of elements in the Hesse conceptual framework.
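
As an illustration of a global indicator, the sketch below measures response time to a partner's chat: the delay from each partner message to the acting student's next event. It assumes the merged action/chat stream sketched earlier; the windowing rule is an assumption, not the project's definition.

```python
def response_times_to_partner(stream, student):
    """For each chat message sent by the partner, measure the delay (in
    seconds) until `student`'s next event, whether an action or a reply."""
    delays = []
    pending = None  # timestamp of the most recent unanswered partner message
    for item in stream:
        is_chat = hasattr(item, "text")
        if is_chat and item.student != student:
            pending = item.timestamp
        elif item.student == student and pending is not None:
            delays.append((item.timestamp - pending).total_seconds())
            pending = None
    return delays
```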

  16. Coding indicative actions • Each indicator was coded with a unique ID code. Using the example of the unique ID code 'U2L004A': • 'U2' identifies the Laughing Clowns task; • 'L' marks a 'local' indicator specific to that task ('G' would mark a global indicator applicable to all tasks); • '004' is the fourth indicator created for this task; • 'A' means the indicator applies to student A. • Programming algorithms search for and capture the coded data from the process stream log files. • Counts of actions within indicators are converted into either dichotomous or partial credit scores. • Panels used an iterative process to map indicators onto the Hesse framework until a stable allocation was agreed upon.
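
The indicator ID scheme can be unpacked mechanically. The sketch below assumes every ID follows the pattern shown for 'U2L004A' (task code, L/G scope, three-digit indicator number, student letter); other formats would need a different rule.

```python
import re

def parse_indicator_id(code):
    """Split an indicator ID such as 'U2L004A' into its parts:
    task ('U2'), scope ('L' local / 'G' global), indicator number, student."""
    match = re.fullmatch(r"(U\d+)([LG])(\d{3})([AB])", code)
    if match is None:
        raise ValueError(f"unrecognised indicator ID: {code}")
    task, scope, number, student = match.groups()
    return {
        "task": task,                              # e.g. U2 = Laughing Clowns
        "scope": "local" if scope == "L" else "global",
        "indicator": int(number),                  # e.g. 4 = fourth indicator
        "student": student,                        # "A" or "B"
    }

# parse_indicator_id("U2L004A")
# -> {'task': 'U2', 'scope': 'local', 'indicator': 4, 'student': 'A'}
```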

  17. Algorithms and scoring rules

  18. Coded data and variable identification

  19. Defining indicators

  20. Using indicator data • Scores from a set of indicators function similarly to a set of conventional test items, requiring stochastic independence of the indicators. • Most indicators were scored '1' if the behaviour was present and '0' if it was absent for each student. In the clowns task, for example, a player needs to leave a minimum number of balls for his/her partner in order for the task to be completed successfully: if true, '1'; if not, '0'. • Frequency-based indicators could be converted into polytomous scores based on threshold values and an iterative judgement and calibration process.
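
Two illustrative scoring rules matching the description above: a dichotomous rule for the 'balls left for the partner' condition and a threshold rule that turns a frequency count into a polytomous score. The minimum and the cut points are assumptions for illustration, not the project's calibrated values.

```python
def score_balls_left(balls_left, minimum_required=2):
    """Dichotomous indicator: 1 if the player left at least the minimum
    number of balls for the partner, otherwise 0 (minimum is illustrative)."""
    return 1 if balls_left >= minimum_required else 0

def score_frequency(count, thresholds=(1, 3, 6)):
    """Polytomous indicator: convert a frequency count into a score of
    0..len(thresholds) by comparing it against ordered cut points."""
    return sum(1 for cut in thresholds if count >= cut)

# score_frequency(4) -> 2  (meets the first two cut points but not the third)
```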

  21. Forming a dichotomous indicator from frequency data

  22. Polytomous indicator from frequency data

  23. Separating the players - scoring A and B • Collaboration cannot be summarised by a single indicator such as 'students communicated'; it involves communication, cooperation and responsiveness. • For collaboration, the presence of chat linked to action (before and after a chat event) was used to infer collaboration, cooperation or responsiveness linked to the Hesse framework, that is, the patterns of player-partner (A-B) interaction. • A series of three player-partner interactions was found to be adequate, yielding the following possible player-partner combinations: 1) A, B, A; 2) A, B, B; 3) A, A, B. • These combinations apply only from the perspective of the initiating student (A). Each student was coded separately in the data file, so the perspective changed when the other student (B) was scored.
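
A sketch of how the three-event player-partner combinations might be detected in the merged action/chat stream, from the initiating student's perspective. The sliding-window logic is an assumption about the general approach, not the project's exact algorithm.

```python
def interaction_patterns(stream, initiator="A", partner="B"):
    """Slide a three-event window over the merged stream and keep the
    windows that match the valid player-partner combinations, i.e.
    initiator-partner-initiator, initiator-partner-partner and
    initiator-initiator-partner (A,B,A / A,B,B / A,A,B from A's view)."""
    valid = {
        (initiator, partner, initiator),
        (initiator, partner, partner),
        (initiator, initiator, partner),
    }
    actors = [item.student for item in stream]
    return [
        tuple(actors[i:i + 3])
        for i in range(len(actors) - 2)
        if tuple(actors[i:i + 3]) in valid
    ]

# Scoring student B simply swaps the roles: interaction_patterns(stream, "B", "A")
```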

  24. Assigning to A and B

  25. Mapping indicators to Hesse framework The empirical data were checked against the relevant skill in the conceptual framework (Hesse, 2014): • relative difficulty was consistent with the framework; • each indicator was mapped to the relevant skill it was intended to measure; • the definition of each indicator was refined to clarify the link between the algorithm and the construct; • frequency was used as a proxy measure of difficulty.
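
Frequency as a proxy for difficulty can be read as: the smaller the proportion of students who show an indicator, the harder that indicator is assumed to be. A minimal sketch over a hypothetical student-by-indicator score table.

```python
from collections import defaultdict

def indicator_frequency(scores):
    """Return, for each indicator, the proportion of students scoring 1.
    Rarely observed indicators (low proportion) are treated as harder.

    `scores` is assumed to map student id -> {indicator id -> 0 or 1}.
    """
    totals = defaultdict(int)
    observed = defaultdict(int)
    for student_scores in scores.values():
        for indicator, value in student_scores.items():
            totals[indicator] += 1
            observed[indicator] += value
    return {ind: observed[ind] / totals[ind] for ind in totals}
```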

  26. Indicator review cycle • IRT calibration yielded a hierarchy of the descriptors. • The substantive order was checked for meaning within a broader collaborative problem solving framework. • An iterative review process ensured that the conceptual descriptors were supported by the empirical item locations, which in turn inform the construct continuum.

  27. Domains of indicators: social and cognitive • Clusters of indicators were interpreted to identify levels of progression. • The indicators were divided into their two dimensions (social or cognitive) based on their previous mapping, • and then into the five components. • Skills within each dimension were identified to represent the progression from novice to expert.

  28. Parameter invariance and fit • Multiple calibrations allowed for comparison and analysis of item parameters. • The stability of the parameters remained after the number of indicators was reduced from over 450 to fewer than 200. • The removal of poorly fitting indicators reduced the standard errors of the item parameters, while maintaining the reliability of the overall set.
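
One simple check of parameter invariance across calibrations (for example, before and after dropping poorly fitting indicators, or across country samples) is to correlate the difficulty estimates of the indicators common to both runs. A sketch under that assumption; it is not the analysis actually reported.

```python
from statistics import correlation  # available in Python 3.10+

def invariance_check(difficulties_run1, difficulties_run2):
    """Pearson correlation between difficulty estimates for the indicators
    present in both calibrations; values near 1 suggest stable parameters.

    Each argument is assumed to map indicator id -> estimated difficulty.
    """
    common = sorted(set(difficulties_run1) & set(difficulties_run2))
    if len(common) < 2:
        raise ValueError("need at least two common indicators")
    x = [difficulties_run1[k] for k in common]
    y = [difficulties_run2[k] for k in common]
    return correlation(x, y)
```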

  29. Calibration of Laughing Clowns task

  30. Stability of indicator difficulty estimates across countries

  31. Challenges for the future • Scaling all 11 tasks • One, two and five dimensions • Stability of indicator estimates over language, curriculum and other characteristics • Simplifying the coding process • Using chat that includes grammatical errors, non-standard syntax, abbreviations, and synonyms or 'text-speak' • Capturing these text data in a coded form • Complexity and simplicity without loss of meaning, built into task construction as an a priori design feature • Design templates for task development and scoring
