Theory development: Kinds of semantic coordination
• Concept-level coordination: coordinating available concepts (meanings)
 • Modify an existing concept
 • Create a new concept
• Register-level coordination: coordinating on a register, i.e. a vocabulary (set of expressions), an ontology (concept system), and a mapping between them
 • Select one of several predefined registers to use
 • Create a new register
  • Map a vocabulary onto an existing ontology
  • Map a vocabulary onto a new ontology: may involve concept-level coordination
• (Feature-level coordination: modify the globally available feature set)
Is it really a correction?
• Can also be seen as providing more specific information
 • In this case, no revision of “bear”
• But why would B say “panda” unless B thinks it is more appropriate?
 • Social norm: use the same term if possible
• But perhaps too strong to say “bear” is inappropriate; just that “panda” is more appropriate
• Counterexample?
 • A: “It’s a nice drink”
 • B: “Yes, it’s a nice Darjeeling”
• So, only corrective under certain circumstances?
• Ambiguous between corrective and non-corrective use?
Representations and updates: Semantic features and updates
• Semantics using features and values
• Usage pattern = set of concepts (polysemy)
 • [bear] = { bear-animal, bear-toy }
• Concept = record type
 • Panda = [ x: Ind, c1: animate(x), c2: bw(x) ]
 • Animate = [ x: Ind, c: animate(x) ]
• Predicate panda
 • panda(a) if ∃r:Panda & r.x = a
 • (could make “panda” a detector predicate by building a PANDA detector from the BW and ANIMATE detectors)
• Do abstract or concrete predicates have detectors? Which are created first in children? What happens then: abstraction or concretion? Functional equivalence?
• Global feature set = all possible features, given by the model
• “Detector” predicates: animate, bw
 • Detectors: ANIMATE, BW
 • animate(x) if ANIMATE(x, 1)
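A minimal sketch of this setup in Python, assuming record types are modelled as dicts and detectors as boolean functions. The toy detector bodies (and the choice of dicts) are illustrative assumptions; only the names Panda, ANIMATE and BW come from the slide.

```python
# Record types as dicts, detectors as functions; toy stand-ins for the slide's model.

def ANIMATE(x):            # hypothetical detector: does x register as animate?
    return x in {"a"}      # toy model: only object "a" is animate

def BW(x):                 # hypothetical detector: does x register as black-and-white?
    return x in {"a"}

# Concept = record type: field labels mapped to constraint detectors.
Panda = {"x": "Ind", "c1": ANIMATE, "c2": BW}

# Usage pattern = set of concepts (polysemy): [bear] = {bear-animal, bear-toy}
lexicon = {"bear": {"bear-animal", "bear-toy"}, "panda": {"panda-animal"}}

def holds(record_type, a):
    """panda(a) iff there is a record r:Panda with r.x = a,
    i.e. every constraint field is verified by its detector."""
    return all(det(a) for label, det in record_type.items() if label != "x")

print(holds(Panda, "a"))   # True: ANIMATE(a) and BW(a) both succeed
```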
Feature-value update
• Modify the value of a feature
 • bear-animal.colour := (brown OR black)
• Bear : [ x: Ind, c: brown(x), ... ]
• Bear.c := brown(x) OR black(x)
• Requires dynamic types in record types
 • If T1 and T2 are types, T1 OR T2 is a type
• Create the join type Bear′ = [ x: Ind, c: brown(x) ] OR [ x: Ind, c: black(x) ]
• Simplify Bear′ to [ x: Ind, c: brown(x) OR black(x) ]
• Bear.c := Bear′.c
• (idea: apply(Bear, c, λy. y OR black(R.x)))
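A sketch of this update, assuming constraint fields are stored as sets of alternatives so that the join type T1 OR T2 simplifies to set union; that representation is my assumption, not part of the slide.

```python
# Feature-value update via a join type, with constraints as sets of alternatives.

Bear = {"x": "Ind", "c": {"brown"}}          # Bear : [x:Ind, c: brown(x)]

def join_update(concept, field, alternatives):
    """Bear.c := brown(x) OR black(x): form the join type
    [x, c:brown(x)] OR [x, c:black(x)] and simplify it to
    [x, c: brown(x) OR black(x)] by taking the union of alternatives."""
    updated = dict(concept)
    updated[field] = concept[field] | set(alternatives)
    return updated

Bear2 = join_update(Bear, "c", {"black"})
print(Bear2)   # {'x': 'Ind', 'c': {'brown', 'black'}}
```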
Concept-feature update
• Add/delete a feature for a concept
 • addFeature(bear-animal.habitat : Habitat)
• Requires a dynamic local feature set
Meaning-concept update
• Add/remove a concept from a usage pattern
• Requires a dynamic set of concepts
Concept and meaning creation
• panda-animal := { animate = 1, colour = (black AND white) }
• [panda] := { panda-animal }
Global feature set update
• Create / modify / delete a global feature
• Requires a dynamic global feature set
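The remaining update types in the same toy representation; the data structures and the guard on the global feature set are assumptions made for illustration.

```python
# Concept-feature update, concept creation, and global feature set update.

concepts = {"bear-animal": {"animate": 1, "colour": {"brown", "black"}}}
lexicon  = {"bear": {"bear-animal"}}
global_features = {"animate", "colour"}        # dynamic global feature set

def add_feature(concept, feature, value):      # addFeature(bear-animal.habitat : Habitat)
    assert feature in global_features, "feature must exist in the global set first"
    concepts[concept][feature] = value

def create_concept(name, spec):                # e.g. panda-animal := {...}
    concepts[name] = spec

global_features.add("habitat")                 # global feature set update
add_feature("bear-animal", "habitat", "Habitat")
create_concept("panda-animal", {"animate": 1, "colour": {"black", "white"}})
lexicon["panda"] = {"panda-animal"}            # [panda] := { panda-animal }
```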
• Panda type inherits from bear type: add panda as a subtype of bear, and modify COLOUR of bear to include black-and-white
• Alternative: panda type created by modifying bear type (COLOUR value of panda different from that of bear)
• Meanings in terms of records based on an OWL ontology
• [x: panda-animal] if [ x: Ind, c1: animate(x), c2: colour(x, black-and-white) ]
• x : panda-animal iff panda-animal(x)
• panda-animal = (λx : [ x: Ind, c1: animate(x), c2: colour(x, black-and-white) ]) [ c3: panda-animal(x) ]
• Panda : [ x: Ind, c1: animate(x), c2: colour(x, black-and-white) ]
• panda(a) if ∃r:Panda & r.x = a
 • short version: a :: R iff ∃r:R & r.x = a
• “the panda is nice” → [ x: Ind, c1: panda(x), c2: nice(x) ]
• panda(x) is appropriate if x : Panda
• connection between the predicate “panda” and the type “Panda”???
• NOT the same as (∃r:Panda & r.x = a) : panda(a)
Panda : [ x: Ind, c1: animate(x), c2: colour(x, black-and-white) ]
• panda(a) if ∃r:Panda & r.x = a
• panda(x) if [ x: Ind, r: Panda, c: eq(Ind, x, r.x) ]
• a : panda(a) iff ∃r:Panda & r.x = a
• Shared context: [ a: Ind ]
• A’s bear-detector outputs 1 when directed to a
 • → A has model <{a}, F>
• A: that’s a nice bear
 • [ c1: bear(a), c2: nice(a) ]
• (assuming this is a correction:)
• B’s bear-detector outputs 0, but B’s panda-detector outputs 1
• B: yes, it’s a nice panda
 • [ c3: panda(a) ]
• A modifies its bear-detector to output 0 when directed to a
 • e.g. by modifying the value of the feature “colour”
• A creates a panda-detector
 • Panda-detector?
• A record type is supported if there is a way of creating records of that type
→ A has model <{a}, F>
• F provides proofs
 • F(panda(a)) = panda-detector(a, 1)
• panda(x) requires colour(x, bw)
 • panda-detector(x, 1) requires colour-detector(x, bw, 1)
• [ x = a, c = PDa ] : panda(a)???
• [ x = a, c = BW(a) ] : bw(a)
 • BW = black-and-white detector
 • F(bw(a)) = BW(a, 1)
• “a is black and white” → ∃r:[ x: Ind, c: bw(x) ] & a = r.x
• ∃r:R means “the record type R is supported by r in the model if r can be created”
• Assume we have detectors BW and ANIMATE
 • F(animate(a)) = ANIMATE(a, 1)
• ? panda(a)
 • Panda = [ x: Ind, c1: animate(x), c2: bw(x) ]
 • panda(a) if ∃r:Panda & r.x = a
 • ? ∃r:[ x: Ind, c1: animate(x), c2: bw(x) ] & r.x = a
 • In the model: ? a ∈ F(Ind) & ANIMATE(a, 1) & BW(a, 1)
Panda = [ x: Ind, c1: animate(x), c2: bw(x) ]
• panda(a) if ∃r:Panda & r.x = a
• ? ∃r:[ x: Ind, c1: animate(x), c2: bw(x) ] & r.x = a
• [ x = a, c1 = ANIMATE(a, 1), c2 = BW(a, 1) ]
• ANIMATE(x, 1) : animate(x) [this mapping is done by F]
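A sketch of the check panda(a) in the model <{a}, F>: we try to build a witness record for the type Panda by consulting the detectors, mirroring F mapping constraints like bw(a) to detector outputs BW(a, 1). The detector bodies and proof encoding are toy assumptions.

```python
# Witness construction: panda(a) iff a record r:Panda with r.x = a can be created.

def ANIMATE(x): return 1 if x == "a" else 0    # toy detector
def BW(x):      return 1 if x == "a" else 0    # toy detector

Panda = {"c1": ANIMATE, "c2": BW}              # [x:Ind, c1:animate(x), c2:bw(x)]
domain = {"a"}                                 # F(Ind) = {a}

def witness(record_type, a):
    """Return a record r with r.x = a if the type is supported, else None."""
    if a not in domain:
        return None
    r = {"x": a}
    for label, detector in record_type.items():
        if detector(a) != 1:
            return None                        # some constraint has no proof
        r[label] = (detector.__name__, a, 1)   # e.g. c2 = BW(a, 1), as provided by F
    return r

print(witness(Panda, "a"))
# {'x': 'a', 'c1': ('ANIMATE', 'a', 1), 'c2': ('BW', 'a', 1)}
```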
“X-och-X”
• The “X-och-X” construction is encoded as icm:challenge_use(X)
• Example (Lindström 1996, 2001)
 • L: du har haft många inspelade samtal eller? (“you have had many recorded conversations, right?”)
  • ask(“F har haft många inspelade samtal?”)
 • F: nja, många och många, men de e nåra stycken (“well, ‘many’ and ‘many’, but there are a few”)
  • icm:challenge_use(“många”)
  • MAX-QUD: ?x.replaces(x, “många”)
  • S-[“många”] :=+ s
  • icm:replace(“många”, “några stycken”)
  • FACTS = { “F har haft några (stycken) inspelade samtal”, … }
  • S+[“några stycken”] :=+ s
Example:
• B: du har drivhus? (“you have a greenhouse?”)
 • ask(“A har drivhus?”)
• A: ja, drivhus och drivhus, ja sår små frön så kommer dom upp (“well, ‘greenhouse’ and ‘greenhouse’, I sow little seeds and then they come up”)
 • icm:challenge_use(“drivhus”)
 • MAX-QUD: ?x.replaces(x, “drivhus”)
 • S+[“drivhus”] :=- s
 • icm:replace(“A har drivhus”, “A sår små frön så kommer dom upp”)
 • accommodate ?x.replaces(x, “A har drivhus”)
 • S-[“drivhus”] :=+ s
Example:
• A: (…) där han söker efter ord väldigt länge. (.) eller väldigt å väldigt länge, de tar i alla fall rätt lång tid (“… where he searches for words for a very long time. (.) or ‘very’ and ‘very’ long, anyway it takes quite a long time”)
 • assert(“(han) söker efter ord väldigt länge”)
 • icm:challenge_use(“väldigt”)
 • icm:replace(“A gör H väldigt länge”, “H (utförd av A) tar lång tid”) (“A does H for a very long time” replaced by “H (performed by A) takes a long time”)
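A sketch of the S+/S- bookkeeping behind icm:challenge_use and icm:replace across these examples, assuming S+ and S- are per-word sets of situations where a use is appropriate or inappropriate; the Python representation is an assumption for illustration.

```python
# Usage-set updates triggered by the X-och-X construction.

from collections import defaultdict

S_plus, S_minus = defaultdict(set), defaultdict(set)   # S+ and S- per word

def challenge_use(word, s):
    """icm:challenge_use(word): the use of `word` in situation s is challenged."""
    S_minus[word].add(s)                 # S-["många"] :=+ s

def replace(old, new, s):
    """icm:replace(old, new): the replacement counts as an appropriate use."""
    S_plus[new].add(s)                   # S+["några stycken"] :=+ s

s = "recorded-conversations-dialogue"    # hypothetical situation label
challenge_use("många", s)
replace("många", "några stycken", s)
```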
Contextual interpretation and update: Example
• [“two o’clock”] = { [“two o’clock”]_clock, [“two o’clock”]_dir-giv }
 • “clock” activity (basic) → 02:00 AM or PM
 • “direction-giving” activity → east-northeast direction
• In the Map Task activity, A utters “sort of two o’clock”
• B activates [“two o’clock”]^B_dir-giv
• B instantiates [“two o’clock”]^B_dir-giv to arrive at a contextual interpretation [“two o’clock”]^B_s
• [“two o’clock”]^B_dir-giv ○=+ [“two o’clock”]^B_s
 • assuming [“two o’clock”]^B_direction-giving ⊢ s
Accommodation: example
• Assume [“two o’clock”]^B = { [“two o’clock”]^B_clock }
• A, in a Map Task situation: “sort of two o’clock”
• We have [“two o’clock”]^B ⊬ s
• Assume B makes a situated interpretation [“two o’clock”]^B_s
• We get [“two o’clock”]^B ○*+ [“two o’clock”]^B_s
 • create [“two o’clock”]^B_dir-giv
 • [“two o’clock”]^B = { [“two o’clock”]^B_clock, [“two o’clock”]^B_dir-giv }
 • [“two o’clock”]^B_dir-giv ○=+ [“two o’clock”]^B_s
• After this, [“two o’clock”]^B_dir-giv ⊢ s
• And consequently, [“two o’clock”]^B ⊢ s
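A sketch of this accommodation step, assuming a word's meaning for B is a map from activities to usage patterns (sets of supported situations); the `supports` function is a hypothetical stand-in for the ⊢ relation.

```python
# Accommodation: create an activity-specific usage pattern when no stored one supports s.

def supports(pattern, s):
    return s in pattern                  # toy stand-in for  pattern ⊢ s

meaning_B = {"two o'clock": {"clock": {"s_clock1"}}}   # only the clock reading

def accommodate(word, activity, s):
    patterns = meaning_B[word]
    if not any(supports(p, s) for p in patterns.values()):   # [w]^B ⊬ s
        # create [w]^B_dir-giv and update it with the situated interpretation (○=+)
        patterns.setdefault(activity, set()).add(s)

accommodate("two o'clock", "dir-giv", "s_maptask")
# Now ["two o'clock"]^B_dir-giv ⊢ s, hence ["two o'clock"]^B ⊢ s
assert supports(meaning_B["two o'clock"]["dir-giv"], "s_maptask")
```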
Meaning accommodation, cont’d
• Example
 • Jules: On my paper round this morning I porched all the papers without getting off my bike!
 • Jim: Congratulations.
• Jules uses “porch” in a situation s to mean approximately “successfully throw onto a porch”
• This is a common usage for Jules: [“porch”]^Jules ⊢ s
• Jim is not familiar with this use: [“porch”]^Jim ⊬ s
• In this case, Jim was able to create a situated interpretation [“porch”]^Jim_s by using contextual cues, world knowledge, commonsense background, and the ability to reason by analogy and metaphor.
Meaning accommodation, cont’d
• Also, Jim chooses to accept this novel use of “porch” and provides feedback displaying understanding and acceptance
• This thus counts as a case of accommodated conservative use in the table on the previous slide
• As a consequence (or so our theory predicts), the corresponding updates are made
 • [“porch”]^Jim ○*+ [“porch”]^Jim_s
• (Note that Jules’ judgement of appropriateness may have taken into account Jules’ beliefs about Jim’s ability to understand “porch” in this way)
Explicit negotiation
• If Jim had not been able to understand Jules’ use of “porch”, or if Jim had understood it but not accepted it, explicit negotiation would be expected
 • B: “What do you mean, porch?”
 • A: “Like, you know, throwing it onto the porch”
• Typically triggered by negative feedback or clarification requests
• In general, in cases where a use is rejected or not understood (unsuccessful uses), explicit negotiation of the meaning and proper usage of an expression c may well occur
• Explicit negotiation = overt meta-linguistic negotiation of the proper usage of words, including e.g. cases where explicit verbal or ostensive definitions are proposed (and possibly discussed)
• Although semantic negotiation typically has the goal of coordinating language use, it may in general be either antagonistic or cooperative
Ostensive definition: Incremental updates
• Steels and Belpaeme (2005) investigate the effect of social linguistic interaction on the learning and coordination of colour categories.
• Robot agents play a language game of referring and pointing to colour samples.
• The language system of an individual agent is modelled as a set of categories in the form of neural nets that respond to sensory data from colour samples, and a lexicon connecting words to categories.
• This is clearly a case of semantic plasticity and semantic negotiation, as categories are updated as a result of language use.
• Semantic negotiation here takes the form of explicit and cooperative negotiation.
• There is also an asymmetry with respect to the roles within each game: one agent is the speaker and the other is the hearer.
• The interaction follows a predefined “guessing game” script; essentially, a language game of guessing and ostensive definition.
• The situation, as perceived by the agents, is a set of objects (colour samples), where one is the topic object.
Ostensive definition, cont’d
• Below is a possible interaction between two agents playing the guessing game, and the corresponding updates in terms of the model presented in this paper.
• The context is a set O of objects (colour samples), O = {o1, …, oN}, where one object ot is the topic object.
• The speaker (A) knows which object is the topic object but the hearer (B) does not.
• The goal of the game is for B to correctly identify ot from O based on the interaction with A.
• A first utters the word (here: “wabaku”) that A associates with ot, i.e. the word associated with the semantic category (in the form of a neural network) that uniquely discriminates ot.
• In terms of our framework, we identify the neural network with the usage pattern, which for the speaker is [“wabaku”]^A.
• Concretely, the usage pattern is stored as an array of numbers representing weights in the neural network.
“Guessing game” (Steels & Belpaeme 2005) as an FSA
• Moves in the game script:
 • S: “x” (speaker names the topic)
 • H: (points to assumed topic), or H: fb-und-neg (hearer cannot interpret “x”)
 • S: fb-acc-pos (guess accepted), or S: fb-acc-neg + (points to topic) (guess rejected, ostensive correction)
 • S: (points to topic) (after fb-und-neg)
• Negotiation of the meaning of colour terms through an ostensive language game
 • A: blue
 • B: (points to colour sample X)
 • A: no (points to colour sample Y)
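The same script as a small finite-state machine in Python; the state names are assumptions added for illustration, while the move labels (fb-und-neg, fb-acc-pos, fb-acc-neg) come from the slide.

```python
# The guessing-game protocol as a transition table over (state, move) pairs.

fsa = {
    ("start",   'S: "x"'):                            "named",
    ("named",   "H: (points to assumed topic)"):      "guessed",
    ("named",   "H: fb-und-neg"):                     "failed",     # hearer cannot interpret
    ("guessed", "S: fb-acc-pos"):                     "success",
    ("guessed", "S: fb-acc-neg + (points to topic)"): "corrected",  # ostensive repair
    ("failed",  "S: (points to topic)"):              "corrected",
}

state = "start"
for move in ['S: "x"', "H: (points to assumed topic)",
             "S: fb-acc-neg + (points to topic)"]:
    state = fsa[(state, move)]
print(state)   # corrected
```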
Ostensive definition, cont’d
• A starts off the dialogue game
 • A: wabaku
• B now looks up “wabaku” in its own lexicon and finds an associated semantic category [“wabaku”]^B
• B then produces a situated interpretation [“wabaku”]^B_s
 • Given the architecture of the agents in this experiment, this is equivalent to B’s sensory impression of an object (og) that B (in this example) is able to uniquely pick out from the context using the semantic category (neural network) [“wabaku”]^B
• B displays this interpretation, so that A can give feedback on whether it is correct
 • B: (points at object og)
Ostensive definition, cont’d
• Unfortunately, B has made a wrong guess, which leads to negative feedback and a correction:
 • A: (feedback indicating rejection of B’s answer)
 • A: (points to topic object ot)
• B now produces a new situated interpretation
 • [“wabaku”]^B_s′ = (the sensory impression from) object ot
• As a consequence of this game of semantic negotiation, B then revises its meaning pattern (neural network) [“wabaku”]^B by adjusting it to better match this new situated meaning; in our theory:
 • [“wabaku”]^B ○*+ [“wabaku”]^B_s′
• Note that accommodation is not an option in this game
 • The context is not rich enough to allow the hearer to infer the intended referent in problematic situations without consulting the speaker
 • (accommodation requires redundancy)
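A sketch of the final update [“wabaku”]^B ○*+ [“wabaku”]^B_s′: the hearer's category is nudged toward the sensory impression of the pointed-out topic. A colour prototype stands in for the slide's neural network, and the learning rate and RGB encoding are assumptions, not part of the original model.

```python
# Adjust the hearer's stored usage pattern toward the new situated meaning.

def adjust(category, situated, rate=0.1):
    """Move each weight a small step toward the situated interpretation."""
    return [w + rate * (t - w) for w, t in zip(category, situated)]

wabaku_B  = [0.2, 0.2, 0.8]    # B's current category (array of weights)
sample_ot = [0.1, 0.4, 0.9]    # sensory impression of the pointed-out topic ot

wabaku_B = adjust(wabaku_B, sample_ot)
print(wabaku_B)                # ≈ [0.19, 0.22, 0.81]
```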
Providing definitions (second language learners)
• Example (Varonis & Gass 1985)
 • A: kött i Sverige mycke annelunda (“meat in Sweden very different”) [U]
 • B: mycke? (“very?”) [I]
 • A: mycke annelunda (“very different”) [R]
 • B: annolunda? (“different?”) [I]
 • A: smakar inte samma (“doesn’t taste the same”) [R]
 • B: ahh [RR]
Second language learners
• Example (Varonis & Gass 1985)
 • A: kött i Sverige mycke annelunda
  • assert(“kött i Sverige (är) mycket annorlunda”)
 • B: mycke?
  • icm:per*neg:follows(“mycke?”)
 • A: mycke annelunda
  • icm:repeat(“mycket annelunda”)
 • B: annolunda?
  • icm:sem*neg:“annolunda”
  • MAX-QUD = ?x.meaning(“annolunda”, x)
 • A: smakar inte samma
  • answer(“smakar inte samma”) / icm:provide_definition(“smakar inte samma”)
  • interpreted (in light of MAX-QUD) as definition(“annolunda”, “smakar inte samma”)
  • this resolves ?x.meaning(“annolunda”, x)
  • defs([“annorlunda”]) :=+ “smakar inte samma”
 • B: ahh
  • icm:acc*pos
What is a definition?
• It connects a term with the other terms of the situlect, and possibly with superordinate situlects
• It adds to the “theory”
• It does not extend usage sets directly, but via the definition
• The corresponding usage set update is
 • S+([“annorlunda”], context: food_property) :=∪ S+[“smakar inte samma”]
• I.e., extend the set of appropriate uses of “annorlunda” in the context of assigning properties to food with the appropriate-set for “smakar inte samma”
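A sketch of this definition-mediated update, assuming :=∪ is update by set union and that appropriate-use sets S+ are keyed by (word, context) pairs; both the keying scheme and the situation labels are assumptions.

```python
# Definitions extend usage sets indirectly: store the definition, then take the
# union with the appropriate-set of the defining expression.

from collections import defaultdict

defs   = defaultdict(list)
S_plus = defaultdict(set)                 # S+ keyed by (word, context)

S_plus[("smakar inte samma", "food_property")] = {"s1", "s2"}   # toy situations

def add_definition(word, definition, context):
    defs[word].append(definition)         # defs(["annorlunda"]) :=+ definition
    # S+([word], context) :=∪ S+[definition] in the same context
    S_plus[(word, context)] |= S_plus[(definition, context)]

add_definition("annorlunda", "smakar inte samma", "food_property")
print(S_plus[("annorlunda", "food_property")])   # {'s1', 's2'}
```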
Implementation: Ontological Coordination in City Navigation
• S: Danny’s deli is just north of your current location.
 • [absolute spatial ontology: north etc.]
• U: Uh, I don’t know where that is!
• S: Turn to face the church. Walk straight ahead for one block, then make a right turn and walk 50 meters.
 • [blocks, buildings and streets ontology]
• U: Oh, it’s on High Street?
 • [user adapts and switches to a street-name ontology]
• S: No. It’s on South Bridge, number 45.
 • [system adapts to using the street-name ontology]
• U: Sorry, I don’t know where South Bridge is.
Implementation: Ontological Coordination in City Navigation
• S: Danny’s deli is on Clerk Street, near the bridge in the city centre.
 • [different street-name ontology]
 • South Bridge and Clerk Street are different sections of the same street, but the whole street is sometimes referred to as Clerk Street
• U: Sorry, I don’t get it.
• S: Turn to face the church and walk along Meadow Street towards the north, then at the junction with Clerk Street make a right turn and walk along Clerk Street for about 50 meters until it changes name to South Bridge, then continue on South Bridge until you reach number 45.
 • [combining several micro-ontologies in one utterance]
• U: You lost me there.
• S: Have a look at this map and follow the direction arrow.
 • [using an alternative spatial ontology for multimodal interaction; talking about maps and arrows]
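A sketch of the repair strategy the dialogue illustrates: the system tries one micro-ontology at a time and falls back when the user signals non-understanding. The ontology names follow the dialogue annotations, but the fallback ordering and the `understood` predicate are assumptions for illustration, not the implemented system's logic.

```python
# Ontology switching as fallback through a preference-ordered list.

ontologies = ["absolute-spatial",      # "north of your current location"
              "blocks-and-buildings",  # "turn to face the church ..."
              "street-names",          # "it's on South Bridge, number 45"
              "map-multimodal"]        # "have a look at this map ..."

def give_directions(understood):
    """`understood` is a hypothetical predicate: did the user accept a
    description phrased in this ontology?"""
    for ontology in ontologies:
        if understood(ontology):
            return ontology            # coordinate on this ontology
    return None                        # no shared spatial ontology found

# In the example dialogue, the user only follows the multimodal map description:
print(give_directions(lambda o: o == "map-multimodal"))   # map-multimodal
```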