Artificial Grammar Learning (AGL)
• First developed by Reber in 1967
• Standard procedure: subjects (Ss) are shown a series of letter strings that follow particular complex rules. Ss are initially not told about the rules and complete an unrelated task (e.g. a short-term memory task). After this "training phase", Ss are told about the existence of rules and then have to classify the next set of strings as ruleful or unruleful ("test phase").
• AGL rules are usually very complex finite-state grammar rules
Figure 1: The finite-state artificial grammar created by Reber (1967)
• Examples of ruleful & unruleful strings: VXVS, VXXXS, TPTXVS, TPTPS
• Typical classification performance at test is significantly above chance
• Subjects are unaware of their knowledge and cannot verbalise the rules
• Reber (1967) concluded that participants implicitly learn the abstract rules of the grammar
• However, there is considerable evidence that people learn only fragments of the letter strings, such as bigrams and trigrams (e.g. Perruchet & Pacteau, 1990)
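To make the idea of a finite-state grammar concrete, the minimal Python sketch below checks string "rulefulness" against a transition table. The table is a hypothetical toy grammar over the same letters, not the actual Reber (1967) grammar from Figure 1, so its verdicts on the example strings are illustrative only.

```python
# Illustrative finite-state grammar checker. The transition table is a
# hypothetical stand-in, NOT the actual Reber (1967) grammar (whose diagram
# appears in Figure 1 but is not reproduced here). A string is "ruleful" if
# following its letters through the table never falls off the diagram and
# ends in an accepting state.

# (state, letter) -> next state
TRANSITIONS = {
    (0, "T"): 1, (0, "V"): 2,
    (1, "P"): 2, (1, "T"): 1,
    (2, "X"): 2, (2, "V"): 3, (2, "S"): 4,
    (3, "S"): 4,
}
ACCEPTING = {4}  # reaching state 4 after the last letter = ruleful


def is_ruleful(string: str) -> bool:
    """Return True if the string is generated by the toy grammar above."""
    state = 0
    for letter in string:
        nxt = TRANSITIONS.get((state, letter))
        if nxt is None:
            return False
        state = nxt
    return state in ACCEPTING


if __name__ == "__main__":
    for s in ["VXVS", "VXXXS", "TPTXVS", "TPTPS"]:
        print(s, is_ruleful(s))  # outputs reflect the toy grammar only
```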
Categorisation
• The hypothesis that Ss categorise test strings according to their similarity to the training strings has been supported in many instances and with various AGs (e.g. Johnstone & Shanks, 1999)
• To be able to learn a rule and to categorise efficiently, it is necessary to sample both positive (i.e. ruleful) and negative (i.e. unruleful) examples of the rule (e.g. Bruner, Goodnow, & Austin, 1956; Gold, 1967)
• In the standard AGL procedure, only positive (ruleful) strings are presented during training, and Ss then have to discriminate between positive (ruleful) and negative (unruleful) strings in the test phase
Experiment
• An experiment was conducted with the standard AGL procedure, i.e. in the training phase Ss were told that it was a short-term memory experiment. After training they were told about the existence of rules and that they now had to decide which of the test strings followed the same rules and which did not.
• Experimental group:
  - Training phase: 50% positive and 50% negative strings, differentiated by background colour (green and red respectively)
  - Test phase: strings on a white background
• Control group:
  - Training phase: 100% negative strings on a random red or green background
• The experiment was conducted to see what effect negative instances have: if AGL experiments are true rule-learning experiments, negative instances should have a beneficial effect on learning
Mean % correct answers in the test phase for rulefulness and similarity
• Ss in the experimental and control groups did not perform significantly differently from each other in either case
Conclusions
• The inclusion of negative instances in the experimental group interfered both with learning of the rules (ruleful/unruleful) and with learning of the fragments (similar/dissimilar)
• Standard AGL experiments are not very good examples of rule-learning experiments, since the rules to be learnt seem to be so complex that Ss have to resort to memorisation if they are to learn anything at all. Rather, AGL experiments seem to be merely rote-memorisation experiments.
• Thus, in this experiment, the inclusion of negative strings in the training phase only increased the memory load: instead of just memorising the strings, Ss also had to memorise the colour of presentation, i.e. which strings were red and which were green
Are AGs learnable in the first place?
• In general, AGL research has been concerned with establishing the existence of implicit learning, i.e. learning without conscious awareness. However, it has not been established whether the AGs are learnable in the first place.
• If an AG is not learnable, then neither explicit nor implicit learning of the rules will occur
• Several pilot studies are being conducted in order to find three learnable AGs: an easy, a medium, and a hard AG. Easy is arbitrarily defined as 'learnable in approx. 5 minutes', medium as 'learnable in approx. 20 minutes', and hard as 'learnable in approx. 45 minutes'
• Initially, we adapted the pilot AG from the AG used in the experiment above. However, this proved to be much too complex to learn in the given number of trials
Pilot 1
• Ss' task is to learn to recognise which of the 8-syllable strings are ruleful and which are unruleful within 80 trials. Ss are given all possible help, i.e. positive and negative instances, corrective feedback during training, and comprehensive instructions.
• Easy grammar: Da(2), Ha(6)
  e.g. TaDaMaNa-GaHaVaBa
• Medium grammar: Da(3,7) <-> Ha(7,3); Fa(4,8) <-> Ka(8,4)
  e.g. TaVaDaKa-RaWaHaFa
• Hard grammar: Da(1,5) <-> Ha(5,1); Ga(2,6) <-> Ja(6,2); Fa(4,8) <-> Ka(8,4)
  e.g. DaJaVaFa-HaGaMaKa
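The position notation above is compact, so here is one possible reading, sketched in Python: 'Da(2) Ha(6)' is taken to fix syllables at absolute positions 2 and 6, and 'Da(3,7) <-> Ha(7,3)' to mean that the pair may occupy the two listed positions in either order. This interpretation is an assumption inferred from the example strings, not stated explicitly on the original slides.

```python
# Sketch of rule checks for the Pilot 1 grammars, under the assumed reading
# of the position notation described above. Positions are counted 1-8
# across the hyphenated 8-syllable string.

def syllables(string: str) -> list[str]:
    """Split e.g. 'TaVaDaKa-RaWaHaFa' into its eight two-letter syllables."""
    letters = string.replace("-", "")
    return [letters[i:i + 2] for i in range(0, len(letters), 2)]


def paired(s, a, i, b, j) -> bool:
    """True if syllable a is at position i and b at position j, or vice versa."""
    return (s[i - 1] == a and s[j - 1] == b) or (s[i - 1] == b and s[j - 1] == a)


def easy_ruleful(string: str) -> bool:        # Da(2), Ha(6)
    s = syllables(string)
    return s[1] == "Da" and s[5] == "Ha"


def medium_ruleful(string: str) -> bool:      # Da(3,7) <-> Ha(7,3); Fa(4,8) <-> Ka(8,4)
    s = syllables(string)
    return paired(s, "Da", 3, "Ha", 7) and paired(s, "Fa", 4, "Ka", 8)


def hard_ruleful(string: str) -> bool:        # three interchangeable pairs
    s = syllables(string)
    return (paired(s, "Da", 1, "Ha", 5) and
            paired(s, "Ga", 2, "Ja", 6) and
            paired(s, "Fa", 4, "Ka", 8))


if __name__ == "__main__":
    print(easy_ruleful("TaDaMaNa-GaHaVaBa"))    # expected True
    print(medium_ruleful("TaVaDaKa-RaWaHaFa"))  # expected True
    print(hard_ruleful("DaJaVaFa-HaGaMaKa"))    # expected True
```

Under this reading, all three example strings from the slide come out ruleful, which is some support for the interpretation, though it remains an assumption.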
Pilot studies
Since Pilot 1 proved much too complex to learn in 80 trials (NB: the Pilot 1 rules are already much easier than the rules of most AGL experiments), the subsequent pilots were severely scaled down. For example, Pilot 3:
Easy = the string starts with the syllable Da
Medium = the string contains Da
Hard = the string contains Da or Ha
However, the easy rule proved to be too easy, while Ss were still not able to learn the medium and hard rules in the given number of trials. Once we started working with repetitions, the rules became too easy, as in Pilot 4.
Pilot 4
Easy = the string starts with Da
Medium = the string starts with the same letter as it ends with, e.g. TaJaKaLa-YaPaMaTa
Hard = both halves of the string start with the same syllable, e.g. YaPaNaVa-YaDaZaGa
In Pilot 4, Ss learned the rules within the first 10 trials, if not already in the five practice trials! However, it is encouraging that the rules have now become too easy rather than too difficult to learn.
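By contrast with Pilot 1, the Pilot 4 rules reduce to simple string predicates. A minimal sketch follows, assuming 8-syllable strings written as two hyphen-separated halves as in the examples above (since every syllable ends in 'a', checking the syllable is equivalent to checking its first letter).

```python
# Sketch of the much simpler Pilot 4 rules (the easy rule is shared with
# Pilot 3). Strings are 8 syllables written as two hyphen-separated halves,
# e.g. "YaPaNaVa-YaDaZaGa".

def syllables(string: str) -> list[str]:
    letters = string.replace("-", "")
    return [letters[i:i + 2] for i in range(0, len(letters), 2)]


def easy(string: str) -> bool:      # the string starts with Da
    return syllables(string)[0] == "Da"


def medium(string: str) -> bool:    # starts and ends with the same syllable
    s = syllables(string)
    return s[0] == s[-1]


def hard(string: str) -> bool:      # both halves start with the same syllable
    s = syllables(string)
    return s[0] == s[4]


if __name__ == "__main__":
    print(medium("TaJaKaLa-YaPaMaTa"))  # expected True
    print(hard("YaPaNaVa-YaDaZaGa"))    # expected True
```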
Future work
Once an easy, a medium, and a hard grammar have been found, a series of experiments will be conducted with these three grammars. The experiments will investigate:
• The effect of active responding versus passive feedback
• The effect of positive and negative instances versus positive-only or negative-only instances in the training phase
• The effect of prior knowledge of the rules, versus knowledge of the existence of a rule (or rules), versus no knowledge of rules
These experiments can then be compared to the results found in standard AGL experiments, and some conclusions may be drawn about the validity of AGL experiments as a means of showing implicit rule learning.