
Part III Learning structured representations Hierarchical Bayesian models


Presentation Transcript


  1. Part III: Learning structured representations. Hierarchical Bayesian models

  2. Universal Grammar → Grammar → Phrase structure → Utterance → Speech signal. The grammars are hierarchical phrase structure grammars (e.g., CFG, HPSG, TAG).

  3. Outline • Learning structured representations (grammars, logical theories) • Learning at multiple levels of abstraction

  4. A historical divide: structured representations and innate knowledge (Chomsky, Pinker, Keil, ...) vs. unstructured representations and learning (McClelland, Rumelhart, ...).

  5. Structured representations + innate knowledge: Chomsky, Keil. Unstructured representations + learning: McClelland, Rumelhart. Structure learning combines structured representations with learning.

  6. Representations: causal networks (e.g., asbestos → lung cancer → coughing, chest pain), grammars, logical theories.

  7. Representations (continued): phonological rules; semantic networks (e.g., a network relating chemicals, diseases, bio-active substances, and biological functions via relations such as cause, affect, interact with, and disrupt).

  8. How to learn a representation R • Search for the R that maximizes P(R | D) ∝ P(D | R) P(R) • Prerequisites: put a prior over a hypothesis space of Rs, and decide how observable data are generated from an underlying R.

  9. The recipe applies when R is anything: search for the R that maximizes P(R | D) ∝ P(D | R) P(R), after putting a prior over a hypothesis space of Rs and deciding how observable data are generated from an underlying R.
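
The recipe on slides 8-9 can be stated in a few lines of code. A minimal sketch, assuming an enumerable hypothesis space and user-supplied log-prior and log-likelihood functions (the function and argument names are illustrative, not from the talk):

```python
# Minimal sketch of the Bayesian recipe on slides 8-9: score each candidate
# representation R by log P(D | R) + log P(R) and keep the best one.
# `hypotheses`, `log_prior`, and `log_likelihood` are assumed inputs.

def best_hypothesis(hypotheses, log_prior, log_likelihood, data):
    """Return the R maximizing log P(D | R) + log P(R), i.e. the MAP hypothesis."""
    return max(hypotheses, key=lambda R: log_likelihood(data, R) + log_prior(R))
```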

  10. Context-free grammar. Rules: S → N VP; VP → V; VP → V N; N → “Alice”; N → “Bob”; V → “scratched”; V → “cheered”. Example parse trees: [S [N Alice] [VP [V cheered]]] and [S [N Alice] [VP [V scratched] [N Bob]]].

  11. Probabilistic context-free grammar. Rules with probabilities: S → N VP (1.0); VP → V (0.6); VP → V N (0.4); N → “Alice” (0.5); N → “Bob” (0.5); V → “scratched” (0.5); V → “cheered” (0.5). The probability of a parse is the product of the probabilities of the rules it uses: for “Alice cheered”, 1.0 × 0.5 × 0.6 × 0.5 = 0.15; for “Alice scratched Bob”, 1.0 × 0.5 × 0.4 × 0.5 × 0.5 = 0.05.
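
A short sketch of the computation on slide 11: the probability of a parse tree is the product of the probabilities of the rules in its derivation. The rule table is taken from the slide; the data structures and function name are illustrative.

```python
# Sketch: a PCFG assigns a tree the product of its rule probabilities.
from math import prod

PCFG = {
    ("S", ("N", "VP")): 1.0,
    ("VP", ("V",)): 0.6,
    ("VP", ("V", "N")): 0.4,
    ("N", ("Alice",)): 0.5,
    ("N", ("Bob",)): 0.5,
    ("V", ("scratched",)): 0.5,
    ("V", ("cheered",)): 0.5,
}

def tree_probability(rules_used):
    """Multiply the probabilities of the rules in a derivation."""
    return prod(PCFG[r] for r in rules_used)

# "Alice cheered": S -> N VP, N -> Alice, VP -> V, V -> cheered
print(tree_probability([
    ("S", ("N", "VP")), ("N", ("Alice",)), ("VP", ("V",)), ("V", ("cheered",)),
]))  # 1.0 * 0.5 * 0.6 * 0.5 = 0.15

# "Alice scratched Bob": S -> N VP, N -> Alice, VP -> V N, V -> scratched, N -> Bob
print(tree_probability([
    ("S", ("N", "VP")), ("N", ("Alice",)), ("VP", ("V", "N")),
    ("V", ("scratched",)), ("N", ("Bob",)),
]))  # 1.0 * 0.5 * 0.4 * 0.5 * 0.5 = 0.05
```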

  12. The learning problem. Grammar G: S → N VP (1.0); VP → V (0.6); VP → V N (0.4); N → “Alice” (0.5); N → “Bob” (0.5); V → “scratched” (0.5); V → “cheered” (0.5). Data D: Alice scratched. Bob scratched. Alice scratched Alice. Alice scratched Bob. Bob scratched Alice. Bob scratched Bob. Alice cheered. Bob cheered. Alice cheered Alice. Alice cheered Bob. Bob cheered Alice. Bob cheered Bob.

  13. Grammar learning • Search for the G that maximizes P(G | D) ∝ P(D | G) P(G) • Prior: P(G) • Likelihood: P(D | G), assuming that sentences in the data are independently generated from the grammar. (Horning, 1969; Stolcke, 1994)
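
A hedged sketch of the scoring function implied by slide 13. The likelihood treats sentences as independent samples from the grammar, as stated on the slide; the simplicity prior that penalizes the number of rules is an assumption standing in for whatever prior Horning (1969) and Stolcke (1994) used, and `sentence_logprob` and `rule_count` are hypothetical helpers.

```python
# Sketch of the grammar-learning objective: log P(G | D) up to a constant.
# The simplicity prior below is an assumption, not the talk's exact prior.

def log_posterior(grammar, data, sentence_logprob, rule_count):
    """log P(G | D) + const = log P(G) + sum_i log P(sentence_i | G)."""
    log_prior = -rule_count(grammar)          # assumed simplicity prior over grammars
    log_likelihood = sum(sentence_logprob(s, grammar) for s in data)
    return log_prior + log_likelihood

# Search: evaluate log_posterior for each candidate grammar and keep the best.
```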

  14. Experiment • Data: 100 sentences ... (Stolcke, 1994)

  15. [Figure: the generating grammar and the grammar recovered by the model.]

  16. Predicate logic • A compositional language. For all x and y, if y is the sibling of x then x is the sibling of y: ∀x ∀y Sibling(y, x) → Sibling(x, y). For all x, y, and z, if x is the ancestor of y and y is the ancestor of z, then x is the ancestor of z: ∀x ∀y ∀z Ancestor(x, y) ∧ Ancestor(y, z) → Ancestor(x, z).

  17. Learning a kinship theory Theory T: Sibling(victoria, arthur), Sibling(arthur,victoria), Ancestor(chris,victoria), Ancestor(chris,colin), Parent(chris,victoria), Parent(victoria,colin), Uncle(arthur,colin), Brother(arthur,victoria) Data D: … (Hinton, Quinlan, …)

  18. Learning logical theories • Search for the T that maximizes P(T | D) ∝ P(D | T) P(T) • Prior: P(T) • Likelihood: P(D | T), assuming that the data include all facts that are true according to T. (Conklin & Witten; Kemp et al., 2008; Katz et al., 2008)
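
The likelihood assumption on slide 18 ("the data include all facts that are true according to T") can be sketched as a hard constraint. `entailed_facts` is a placeholder for whatever procedure closes the theory under its rules; the models cited on the slide may differ in detail.

```python
# Sketch of the "data = all true facts" likelihood: P(D | T) is nonzero only
# when the observed facts coincide with the deductive closure of T.

def log_likelihood(data_facts, theory, entailed_facts):
    """log P(D | T) under the assumption that D lists exactly the facts entailed by T."""
    return 0.0 if set(data_facts) == set(entailed_facts(theory)) else float("-inf")
```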

  19. Theory-learning in the lab. Observed facts: R(c,b), R(k,c), R(f,c), R(c,l), R(f,k), R(k,l), R(l,b), R(f,l), R(l,h), R(f,b), R(k,b), R(f,h), R(b,h), R(c,h), R(k,h). (cf. Krueger, 1979)

  20. Theory-learning in the lab. A transitive theory compresses the 15 observed pairs into five base facts plus one rule: R(f,k). R(k,c). R(c,l). R(l,b). R(b,h). R(X,Z) ← R(X,Y), R(Y,Z).
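
As a sanity check on slide 20, the five base facts plus the transitivity rule do entail exactly the 15 observed pairs. A small sketch (the function is illustrative, not from the talk):

```python
# Close the base facts under R(X,Z) <- R(X,Y), R(Y,Z) and count the result.

def transitive_closure(facts):
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

base = {("f", "k"), ("k", "c"), ("c", "l"), ("l", "b"), ("b", "h")}
print(sorted(transitive_closure(base)))  # all 15 entailed pairs
print(len(transitive_closure(base)))     # 15
```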

  21. [Plots: learning time as a function of theory complexity, measured by theory length (cf. Goodman), for transitive and transitive-with-exception theories. (Kemp et al., 2008)]

  22. Conclusion: Part 1 • Bayesian models can combine structured representations with statistical inference.

  23. Outline • Learning structured representations (grammars, logical theories) • Learning at multiple levels of abstraction

  24. Vision (Han and Zhu, 2006)

  25. Motor Control (Wolpert et al., 2003)

  26. Causal learning at three levels. Schema: chemicals cause diseases; diseases cause symptoms. Causal models: asbestos → lung cancer → coughing, chest pain; mercury → Minamata disease → muscle wasting. Contingency data: Patient 1: asbestos exposure, coughing, chest pain; Patient 2: mercury exposure, muscle wasting. (Kelley; Cheng; Waldmann)

  27. Universal Grammar → P(grammar | UG) → Grammar → P(phrase structure | grammar) → Phrase structure → P(utterance | phrase structure) → Utterance → P(speech | utterance) → Speech signal. The grammars are hierarchical phrase structure grammars (e.g., CFG, HPSG, TAG).

  28. Hierarchical Bayesian model. Universal Grammar U → Grammar G (via P(G | U)) → phrase structures s1, ..., s6 (via P(s | G)) → utterances u1, ..., u6 (via P(u | s)). A hierarchical Bayesian model specifies a joint distribution over all variables in the hierarchy: P({u_i}, {s_i}, G | U) = P({u_i} | {s_i}) P({s_i} | G) P(G | U).
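
The factorization on slide 28 translates directly into a sum of log-probabilities, one term per level of the hierarchy. A sketch, with the per-level models left as placeholder functions:

```python
# Sketch of the slide-28 joint: one log term per conditional in the chain
# U -> G -> {s_i} -> {u_i}. The component log-probability functions are
# placeholders for whatever models sit at each level.

def log_joint(U, G, phrase_structures, utterances,
              log_p_grammar, log_p_structure, log_p_utterance):
    """log P({u_i}, {s_i}, G | U) = sum_i log P(u_i | s_i) + sum_i log P(s_i | G) + log P(G | U)."""
    return (sum(log_p_utterance(u, s) for u, s in zip(utterances, phrase_structures))
            + sum(log_p_structure(s, G) for s in phrase_structures)
            + log_p_grammar(G, U))
```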

  29. Top-down inferences: infer {s_i} given {u_i} and G: P({s_i} | {u_i}, G) ∝ P({u_i} | {s_i}) P({s_i} | G).

  30. Bottom-up inferences: infer G given {s_i} and U: P(G | {s_i}, U) ∝ P({s_i} | G) P(G | U).

  31. Simultaneous learning at multiple levels: infer G and {s_i} given {u_i} and U: P(G, {s_i} | {u_i}, U) ∝ P({u_i} | {s_i}) P({s_i} | G) P(G | U).
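
Slides 29-31 all use the same joint score; only the variables being searched over change. A brute-force sketch of the simultaneous case, assuming small enumerable candidate sets (the enumerators and the score function are hypothetical placeholders):

```python
# Enumerative sketch: maximize log P(G, {s_i} | {u_i}, U) over candidate
# grammars and candidate parses. `log_score` should return the slide-28 joint.
from itertools import product

def infer_simultaneously(utterances, candidate_grammars, candidate_parses, log_score):
    """Return the (G, {s_i}) maximizing log P(G, {s_i} | {u_i}, U) by enumeration."""
    best_score, best = float("-inf"), None
    for G in candidate_grammars:
        for parses in product(*(candidate_parses(u, G) for u in utterances)):
            score = log_score(G, list(parses), utterances)
            if score > best_score:
                best_score, best = score, (G, list(parses))
    return best
```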

  32. Word learning. Knowledge about words in general (whole-object bias, shape bias) → individual words (car, monkey, duck, gavagai) → data.

  33. A hierarchical Bayesian model of coins. Physical knowledge → global parameters F_H, F_T → per-coin bias θ_i ~ Beta(F_H, F_T) for coins 1, ..., 200 → flip data d1, ..., d4 for each coin. • Qualitative physical knowledge (symmetry) can influence estimates of the continuous parameters F_H, F_T. • Explains why 10 flips of each of 200 coins are better than 2000 flips of a single coin: they are more informative about F_H, F_T.
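
A generative sketch of the coin model on slide 33, with assumed values F_H = F_T = 2. Sampling from it makes the point about the two data sets: 200 coins with 10 flips each expose many independent draws of θ, which is what constrains (F_H, F_T), while 2000 flips of one coin pin down a single θ.

```python
# Generative sketch: (F_H, F_T) shared across coins, theta_i ~ Beta(F_H, F_T)
# per coin, each flip ~ Bernoulli(theta_i). F_H = F_T = 2 is an assumed value.
import random

def simulate(num_coins, flips_per_coin, F_H=2.0, F_T=2.0, rng=random.Random(0)):
    heads_counts = []
    for _ in range(num_coins):
        theta = rng.betavariate(F_H, F_T)      # coin-specific bias
        heads_counts.append(sum(rng.random() < theta for _ in range(flips_per_coin)))
    return heads_counts                         # heads per coin

many_coins = simulate(num_coins=200, flips_per_coin=10)   # spread of theta is visible
one_coin = simulate(num_coins=1, flips_per_coin=2000)     # only one theta is observed
print(many_coins[:10], one_coin)
```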

  34. Word Learning. “This is a dax.” “Show me the dax.” • 24-month-olds show a shape bias • 20-month-olds do not. (Landau, Smith & Gleitman)

  35. Is the shape bias learned? • Smith et al. (2002) trained 17-month-olds on labels (“lug,” “wib,” “zup,” “div”) for 4 artificial categories • After 8 weeks of training, 19-month-olds show the shape bias: “This is a dax.” “Show me the dax.”

  36. Learning about feature variability (cf. Goodman).

  37. Learning about feature variability, continued (cf. Goodman).

  38. A hierarchical model. Meta-constraints M ("color varies across bags but not much within bags") → knowledge about bags in general → individual bag proportions (mostly yellow, mostly green, mostly red, mostly brown, ..., and a new bag: mostly blue?) → data.

  39. A hierarchical Bayesian model. Meta-constraints M; within-bag variability = 0.1; bags in general = [0.4, 0.4, 0.2]; bag proportions ≈ [1,0,0], [0,1,0], [1,0,0], [0,1,0], ..., [.1,.1,.8]; data = [6,0,0], [0,6,0], [6,0,0], [0,6,0], ..., [0,0,1].

  40. A hierarchical Bayesian model. Meta-constraints M; within-bag variability = 5; bags in general = [0.4, 0.4, 0.2]; bag proportions ≈ [.5,.5,0], [.5,.5,0], [.5,.5,0], [.5,.5,0], ..., [.4,.4,.2]; data = [3,3,0], [3,3,0], [3,3,0], [3,3,0], ..., [0,0,1].
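
One way to realize slides 39-40 is a Dirichlet whose mean is the "bags in general" vector and whose concentration plays the role of within-bag variability; the exact parameterization in the talk may differ. A sketch:

```python
# Sketch: bag proportions ~ Dirichlet(alpha * beta), where beta = [0.4, 0.4, 0.2]
# is "bags in general" and alpha stands in for within-bag variability.
# Small alpha (0.1) yields near-pure bags; large alpha (5) yields mixed bags.
import random

def sample_bag(alpha, beta, rng):
    """Draw bag proportions via normalized gamma variates (a Dirichlet sample)."""
    draws = [rng.gammavariate(alpha * b, 1.0) for b in beta]
    total = sum(draws)
    return [round(d / total, 2) for d in draws]

rng = random.Random(1)
beta = [0.4, 0.4, 0.2]
print([sample_bag(0.1, beta, rng) for _ in range(3)])  # each bag mostly one color
print([sample_bag(5.0, beta, rng) for _ in range(3)])  # more mixed bags, centred on beta
```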

  41. Shape of the Beta prior

  42. A hierarchical Bayesian model. Meta-constraints M → bags in general → bag proportions → data.

  43. A hierarchical Bayesian model, continued. Meta-constraints M → bags in general → bag proportions → data.

  44. Learning about feature variability. Meta-constraints M → categories in general → individual categories → data.

  45. “wib” “lug” “zup” “div”

  46. “wib” “lug” “zup” “div” “dax”

  47. Model predictions. [Bar chart: choice probability for the query “Show me the dax.”]

  48. Where do priors come from? Meta-constraints M → categories in general → individual categories → data.

  49. Knowledge representation

  50. Children discover structural form • Children may discover that • Social networks are often organized into cliques • The months form a cycle • “Heavier than” is transitive • Category labels can be organized into hierarchies
