Configurable, Incremental and Re-structurable Contributive Learning Environments Dr Kinshuk Information Systems Department Massey University, Private Bag 11-222 Palmerston North, New Zealand Tel: +64 6 350 5799 Ext 2090 Fax: +64 6 350 5725 Email: kinshuk@massey.ac.nz URL: http://fims-www.massey.ac.nz/~kinshuk/
Reusability • Benefits are widely known • However, early promises of time and cost savings have not materialised • In software reuse, only trivial pieces of code can be reused in another context without significant effort
CIRCLE Architecture • The only way to increase usability, and in the process automatically increase reusability, is to allow the implementing teacher to contribute by: • configuring the learning space • incrementally adding to and re-structuring the scope and functionality of IES components • Early adoption: HyperITS
HyperITS • No pre-defined sequence of operations • Concepts linked in an interrelationship network • Inconsistency results in graded feedback that leads the learner gradually back to the starting point • Misconceptions and missing conceptions are identified.
HyperITS • Emphasis on cognitive skills development • Uses a cognitive apprenticeship approach to develop cognitive skills: • Observation • Imitation • Dynamic feedback during learning • Interpretation of data • Static feedback from testing
HyperITS • Granular design • Domain concepts are acquired in the context of their inter-related concepts • Interfaces are brought up to give: • Another perspective on the data set • A fine-grained interface giving the details behind a coarse-grained presentation • A fine-grained basic application for revising steps at a more advanced level
HyperITS • Process modelling • Overcomes the shortcomings of the overlay model • Helps in understanding the learner’s mental processes • Allows finding optimal and sub-optimal paths in the learning process
Domain Layer • Static domain content provided by the designing teacher: • Concepts, the smallest learning units • Relationships among concepts • Priorities associated with the relationships • Custom operator definitions • Constraints on backward chaining, if desired
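As an illustration only (not part of the original slides), the static domain content could be captured in structures roughly like the ones below. The class names and the speed/time/distance example are hypothetical, a minimal sketch of concepts, relationships, priorities, operators and chaining constraints.

import java.util.List;

// Hypothetical sketch of the static domain content supplied by the designing teacher.
public class DomainLayer {

    // The smallest learning unit.
    public record Concept(String id, String description) {}

    // A directed relationship: the target concept can be derived from the source concepts.
    public record Relationship(String targetConceptId,
                               List<String> sourceConceptIds,
                               String operator,              // custom operator definition, e.g. "multiply"
                               int priority,                 // used later by the local optimiser
                               boolean allowBackwardChaining // constraint on backward chaining
                               ) {}

    public record Domain(List<Concept> concepts, List<Relationship> relationships) {}

    public static void main(String[] args) {
        Domain kinematics = new Domain(
            List.of(new Concept("speed", "average speed"),
                    new Concept("time", "travel time"),
                    new Concept("distance", "distance covered")),
            List.of(new Relationship("distance", List.of("speed", "time"),
                                     "multiply", 1, true)));
        System.out.println(kinematics);
    }
}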
Teacher Model Layer • Consists of the pedagogy base reflecting various tutoring strategies and scaffolding provided by the implementing teacher • Optional problem bank created by the implementing teacher to situate the concepts in a particular context • The teacher can also provide additional diverse contexts
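A rough sketch, again with hypothetical names, of what the implementing teacher might contribute: a pedagogy base holding tutoring strategies with scaffolding levels, and an optional problem bank that situates the concepts in concrete contexts.

import java.util.List;
import java.util.Map;

// Hypothetical sketch of the teacher model layer contents.
public class TeacherModelLayer {

    public record ScaffoldingStep(int level, String messageTemplate) {}

    public record TutoringStrategy(String name, List<ScaffoldingStep> scaffolding) {}

    // A situated problem: fixed values for some concepts plus a narrative context.
    public record Problem(String context, Map<String, Double> givenValues) {}

    public record PedagogyBase(List<TutoringStrategy> strategies, List<Problem> problemBank) {}

    public static void main(String[] args) {
        PedagogyBase base = new PedagogyBase(
            List.of(new TutoringStrategy("graded-scaffolding",
                List.of(new ScaffoldingStep(1, "Try again."),
                        new ScaffoldingStep(2, "Use the relationship %s."),
                        new ScaffoldingStep(3, "Here is the calculation data: %s."),
                        new ScaffoldingStep(4, "Full worked calculation: %s.")))),
            List.of(new Problem("A cyclist rides at constant speed.",
                                Map.of("speed", 20.0, "time", 1.5))));
        System.out.println(base.strategies().get(0).name());
    }
}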
Contextual Layer • Contains the current goals and structural information of current tasks: • system’s solution to current problem; • system’s problem solving approach; • immediate goals. • This information is dynamically updated along with the learner’s progress in problem solving.
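One possible reading of the contextual layer, sketched with hypothetical names: the system’s current solution plus a stack of immediate goals, updated as the learner works through the problem.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the contextual layer: current expert solution,
// problem-solving approach and immediate goals, updated dynamically.
public class ContextualLayer {
    // Expert solution so far: concept id -> derived value.
    final Map<String, Double> expertSolution = new HashMap<>();
    // Immediate goals, e.g. concepts still to be derived on the current approach.
    final Deque<String> immediateGoals = new ArrayDeque<>();

    void update(String conceptId, double value) {
        expertSolution.put(conceptId, value);   // extend the system's solution
        immediateGoals.remove(conceptId);       // goal satisfied
    }

    public static void main(String[] args) {
        ContextualLayer context = new ContextualLayer();
        context.immediateGoals.push("distance");
        context.update("distance", 30.0);
        System.out.println(context.expertSolution + " " + context.immediateGoals);
    }
}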
Initialization functionality • Domain representation initialiser • initialises the system according to the current learning goal for all types of problems. • Random problem generator • randomly selects concepts to treat as independents and creates their instances by randomly generating values within specified boundaries.
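A minimal sketch of the random problem generator idea described above, assuming the teacher supplies per-concept value boundaries; class and method names are illustrative.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical sketch: pick concepts to treat as independents and give them
// random values within the specified boundaries.
public class RandomProblemGenerator {

    public record Bounds(double min, double max) {}

    private final Random random = new Random();

    public Map<String, Double> generate(List<String> independentConcepts,
                                        Map<String, Bounds> bounds) {
        Map<String, Double> instance = new HashMap<>();
        for (String concept : independentConcepts) {
            Bounds b = bounds.get(concept);
            // uniform value inside the specified boundaries
            instance.put(concept, b.min() + random.nextDouble() * (b.max() - b.min()));
        }
        return instance;
    }

    public static void main(String[] args) {
        var generator = new RandomProblemGenerator();
        var problem = generator.generate(List.of("speed", "time"),
            Map.of("speed", new Bounds(5, 30), "time", new Bounds(0.5, 3)));
        System.out.println("Generated problem instance: " + problem);
    }
}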
Initialization functionality • Prediction boundary initialiser • initialises the boundaries for the overlay model (how far the student’s solution may deviate from the expert solution). • These boundaries are used later to evaluate a learner’s action.
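The slides do not specify how the boundaries are computed; the sketch below simply assumes a relative tolerance around each expert value, as one plausible instantiation.

// Hypothetical sketch of the prediction boundary initialiser.
public class PredictionBoundaryInitialiser {

    public record Boundary(double expected, double tolerance) {
        boolean accepts(double learnerValue) {
            return Math.abs(learnerValue - expected) <= tolerance;
        }
    }

    // e.g. a 5% relative tolerance around the expert solution
    public static Boundary initialise(double expertValue, double relativeTolerance) {
        return new Boundary(expertValue, Math.abs(expertValue) * relativeTolerance);
    }

    public static void main(String[] args) {
        Boundary b = initialise(30.0, 0.05);
        System.out.println(b.accepts(30.9));  // true: within 5% of 30
        System.out.println(b.accepts(33.0));  // false: too far from the expert value
    }
}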
If independent variable introduced • Contextual dependency finder • identifies the dependent concepts that can be derived in the current state of the problem space. • Dependency activator (client side) • activates the instances of the contextually dependent concepts and invokes the dependency calculator at the server to update their current status in the expert solution.
If independent variable introduced • Dependency calculator (server side) • provides values for the dependent concepts based on domain layer and pedagogy base to update the expert solution. • This functionality allows a learner to adopt a different route to the solution than the one currently adopted by the system.
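A sketch of how the dependency finder and the server-side calculator could work together: whenever all source concepts of a relationship are known, the dependent concept is derived and added to the expert solution. The relationship format and the speed/time/distance example are assumptions for illustration.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of contextual dependency propagation into the expert solution.
public class DependencyCalculator {

    // Target concept derivable from the listed sources (a simplified relationship).
    public record Relationship(String target, List<String> sources,
                               java.util.function.ToDoubleFunction<double[]> formula) {}

    // Repeatedly derive every concept whose sources are all known.
    public static void propagate(Map<String, Double> solution, List<Relationship> relationships) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Relationship r : relationships) {
                if (solution.containsKey(r.target())) continue;
                if (!r.sources().stream().allMatch(solution::containsKey)) continue;
                double[] args = r.sources().stream().mapToDouble(solution::get).toArray();
                solution.put(r.target(), r.formula().applyAsDouble(args));
                changed = true;
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Double> expertSolution = new HashMap<>(Map.of("speed", 20.0, "time", 1.5));
        List<Relationship> rels = List.of(
            new Relationship("distance", List.of("speed", "time"), v -> v[0] * v[1]));
        propagate(expertSolution, rels);
        System.out.println(expertSolution);  // distance = 30.0 now in the expert solution
    }
}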
Setting validation bounds for dependent variables • Prediction boundary updater • updates the prediction boundaries used in comparing a learner’s solution with the expert solution. The updater fine-tunes the system’s initial prediction boundaries to match the route to solution adopted by a learner.
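One way to read the updater, sketched under the assumption that fine-tuning means re-centring the boundary on the value the learner’s own route would produce, while keeping the same relative tolerance; names are illustrative.

// Hypothetical sketch of the prediction boundary updater.
public class PredictionBoundaryUpdater {

    public record Boundary(double expected, double tolerance) {}

    public static Boundary recentre(Boundary initial, double valueOnLearnerRoute) {
        // keep the same relative tolerance, but around the learner-route value
        double relative = initial.expected() == 0 ? 0
                : initial.tolerance() / Math.abs(initial.expected());
        return new Boundary(valueOnLearnerRoute, Math.abs(valueOnLearnerRoute) * relative);
    }

    public static void main(String[] args) {
        Boundary initial = new Boundary(30.0, 1.5);     // 5% around the default route
        System.out.println(recentre(initial, 29.4));    // 5% around the learner's route
    }
}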
Validation of learner’s input to dependent variables • Discrepancy evaluator • evaluates the validity of a learner’s attempt by matching it with the expert solution within the prediction boundaries.
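A minimal sketch of the evaluation step, assuming a numeric comparison against the expert solution inside the prediction boundary; the verdict categories are hypothetical.

// Hypothetical sketch of the discrepancy evaluator.
public class DiscrepancyEvaluator {

    public enum Verdict { CORRECT, DISCREPANT, NOT_YET_DERIVABLE }

    public static Verdict evaluate(Double learnerValue, Double expertValue, double tolerance) {
        if (expertValue == null) return Verdict.NOT_YET_DERIVABLE;  // intermediate concepts missing
        return Math.abs(learnerValue - expertValue) <= tolerance
                ? Verdict.CORRECT : Verdict.DISCREPANT;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(30.2, 30.0, 1.5));  // CORRECT
        System.out.println(evaluate(45.0, 30.0, 1.5));  // DISCREPANT: triggers feedback
    }
}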
Validation of learner’s input to dependent variables • Dynamic feedback generator • provides context-based feedback to the learner. The messages are generated dynamically to improve semantics and to prevent monotony. • A granular approach is used to identify the source of error and to provide feedback.
Validation of learner’s input to dependent variables • Dynamic feedback generator • i. Basic misconceptions, where the learner fails to derive a variable due to misconceptions about the critical concepts. In such cases, graded scaffolding is used (a sketch follows the three feedback cases): • ask the learner to try again; • suggest the relationship to be used; • provide the calculation data; • show the full calculation, and allow the learner to proceed.
Validation of learner’s input to dependent variables • Dynamic feedback generator • ii. Missing conceptions, where the learner unsuccessfully tries to derive a variable that requires the derivation of intermediate variables, the error arising from missing knowledge about intermediate relationships. • The system suggests that the learner derive the intermediate concept first.
Validation of learner’s input to dependent variables • Dynamic feedback generator • iii. If the learner unsuccessfully tries to derive some complex concept, the system advises the learner to use a finer-grained interface. • The finer-grained interface deconstructs the complex concept into its components to capture the misconceptions at a fine-grained level.
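The graded scaffolding for basic misconceptions (case i above) could look roughly like this: each repeated failure on the same concept moves the feedback one level up, from "try again" to showing the full calculation. Message wording and method names are illustrative only.

// Hypothetical sketch of graded scaffolding in the dynamic feedback generator.
public class DynamicFeedbackGenerator {

    public static String feedback(int attempt, String relationship, String data, String calculation) {
        return switch (Math.min(attempt, 4)) {
            case 1 -> "That doesn't look right. Please try again.";
            case 2 -> "Hint: use the relationship " + relationship + ".";
            case 3 -> "Here is the data you need: " + data + ".";
            default -> "Full calculation: " + calculation + ". You may now proceed.";
        };
    }

    public static void main(String[] args) {
        for (int attempt = 1; attempt <= 4; attempt++) {
            System.out.println(feedback(attempt,
                "distance = speed x time", "speed = 20, time = 1.5", "20 x 1.5 = 30"));
        }
    }
}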
Evaluating learner’s process of deriving solution • Local optimiser • identifies the possible relationships and determines the best relationship to use based on the priorities specified in the domain layer. • It allows the system to identify any sub-optimal approach adopted by the learner.
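A sketch of the optimisation step, assuming that lower priority numbers in the domain layer mean more preferred relationships; if the learner applies an applicable but lower-priority relationship, the system can flag the path as sub-optimal.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the local optimiser.
public class LocalOptimiser {

    public record Relationship(String target, List<String> sources, int priority) {}

    public static Optional<Relationship> best(String target, List<Relationship> applicable) {
        return applicable.stream()
                .filter(r -> r.target().equals(target))
                .min(Comparator.comparingInt(Relationship::priority)); // lower value = higher priority
    }

    public static void main(String[] args) {
        var direct   = new Relationship("distance", List.of("speed", "time"), 1);
        var indirect = new Relationship("distance", List.of("acceleration", "time"), 2);
        System.out.println(best("distance", List.of(indirect, direct)).orElseThrow());
        // If the learner uses the priority-2 relationship instead, the system notes a sub-optimal path.
    }
}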
Finally… Adequate technologies are rapidly emerging that can be harnessed to deploy the CIRCLE architecture, for example the Distributed Component Object Model (DCOM) for Microsoft development tools, or Remote Method Invocation (RMI) for Java.
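As one illustration of the RMI option, a server-side component such as the dependency calculator could be exposed through a remote interface along these lines; the interface and method names are hypothetical.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.Map;

// Hypothetical RMI interface for the server-side dependency calculation.
public interface DependencyService extends Remote {

    // Given the values known so far, return the updated expert solution
    // with all contextually derivable dependent concepts filled in.
    Map<String, Double> updateExpertSolution(Map<String, Double> knownValues) throws RemoteException;
}

The client-side dependency activator would then invoke this remote service whenever the learner introduces a new independent variable, keeping the contextual layer up to date across the network.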