Presented by Patrick Dallaire – DAMAS Workshop, November 2nd, 2012 Infinite Dynamic Bayesian Networks (Doshi et al. 2011)
PROBLEM DESCRIPTION • Consider precipitation measured by 500 different weather stations in the USA • Observations were discretized into 7 values • The dataset consists of a time series of 3,287 daily measurements • How can we learn the underlying weather model that produced the sequence of precipitation?
HIDDEN MARKOV MODEL • Observations $o_t$ are produced based on the hidden state $s_t$ • The hidden state evolves according to some dynamics • The Markov assumption says that $s_t$ summarizes the state history and is thus enough to generate $s_{t+1}$ • The learning task is to infer the transition model $P(s_{t+1} \mid s_t)$ and the observation model $P(o_t \mid s_t)$ from data
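To make the generative process concrete, here is a minimal sketch in Python, assuming two discrete hidden states and two observation symbols; the matrices T and O are illustrative placeholders, not parameters from the paper.

```python
# Minimal HMM generative-process sketch (illustrative parameters):
# the hidden state s_t evolves via T, and o_t is emitted via O.
import numpy as np

rng = np.random.default_rng(0)

T = np.array([[0.9, 0.1],   # P(s_{t+1} | s_t): row = current state
              [0.3, 0.7]])
O = np.array([[0.8, 0.2],   # P(o_t | s_t): row = hidden state
              [0.1, 0.9]])

def sample_hmm(T, O, length, s0=0):
    states, obs = [], []
    s = s0
    for _ in range(length):
        states.append(s)
        obs.append(rng.choice(O.shape[1], p=O[s]))  # emit given current state
        s = rng.choice(T.shape[1], p=T[s])          # Markov transition
    return states, obs

states, obs = sample_hmm(T, O, length=10)
```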
TRANSITION MODEL • A regular DBN is a directed graphical model • The state at time $t$ is represented through a set of factors $f_t^1, \dots, f_t^K$ • The next state is sampled according to: $f_{t+1}^k \sim P\big(f_{t+1}^k \mid \mathrm{pa}(f_{t+1}^k)\big)$, where $\mathrm{pa}(f_{t+1}^k)$ represents the values of the parent factors at time $t$
OBSERVATION MODEL • The state of a DBN is generally hidden • State values must be inferred from a set of observable nodes • The observations are sampled from: $o_t^n \sim P\big(o_t^n \mid \mathrm{pa}(o_t^n)\big)$, where $\mathrm{pa}(o_t^n)$ is the values of the parent factors at time $t$
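The two conditionals above define one time step of the DBN. Below is a minimal sketch of that step, assuming binary factors and hand-specified tabular CPTs; the dictionaries (parents_T, cpt_T, parents_O, cpt_O) are hypothetical names and values chosen only to illustrate the factored sampling.

```python
# One DBN time step, assuming binary factors and tabular CPTs.
# parents_T[k] lists the factors at time t feeding factor k at time t+1;
# cpt_T[k] maps the tuple of parent values to P(f^k_{t+1} = 1).
import numpy as np

rng = np.random.default_rng(1)

parents_T = {0: [0], 1: [0, 1]}                 # transition structure
cpt_T = {0: {(0,): 0.1, (1,): 0.8},
         1: {(0, 0): 0.2, (0, 1): 0.6, (1, 0): 0.5, (1, 1): 0.9}}

parents_O = {0: [1]}                            # observation structure
cpt_O = {0: {(0,): 0.3, (1,): 0.7}}             # P(o^0 = 1 | parents)

def step(f):
    """Sample the next factored state and its observation from f_t."""
    f_next = {k: int(rng.random() < cpt_T[k][tuple(f[p] for p in parents_T[k])])
              for k in parents_T}
    o = {n: int(rng.random() < cpt_O[n][tuple(f_next[p] for p in parents_O[n])])
         for n in parents_O}
    return f_next, o

f_next, o = step({0: 1, 1: 0})
```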
LEARNING THE STRUCTURE • The number of hidden factors is unknown • The state transition structure is unknown • The state observation structure is unknown
PRIOR OVER DBN STRUCTURES • A Bayesian nonparametric prior is specified over structures with Indian buffet processes (IBP) • We specify a prior over observation connection structures: $Z^O \sim \mathrm{IBP}(\alpha^O)$ • We specify a prior over transition connection structures: $Z^T \sim \mathrm{IBP}(\alpha^T)$
IBP ON OBSERVATION STRUCTURE • The $i$-th observation node selects an existing parent factor $k$ with probability $\frac{m_k}{i}$, where $m_k$ is the number of nodes already connected to factor $k$ • It then samples $\mathrm{Poisson}\!\big(\frac{\alpha^O}{i}\big)$ new parent factors
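This is the standard sequential ("restaurant") construction of the IBP. A minimal sketch of a sampler for the resulting binary connection matrix (function name and defaults are mine, for illustration):

```python
# IBP construction: node i takes existing parent factor k with probability
# m_k / i, then samples Poisson(alpha / i) brand-new parent factors.
import numpy as np

def sample_ibp(n_nodes, alpha, rng=None):
    rng = rng or np.random.default_rng(2)
    counts = []                                   # m_k for each factor k
    rows = []
    for i in range(1, n_nodes + 1):
        row = [int(rng.random() < m / i) for m in counts]  # reuse old factors
        for k, z in enumerate(row):
            counts[k] += z
        new = rng.poisson(alpha / i)              # number of new factors
        row += [1] * new
        counts += [1] * new
        rows.append(row)
    Z = np.zeros((n_nodes, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z_obs = sample_ibp(n_nodes=5, alpha=1.0)          # observation structure draw
```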
IBP ON TRANSITION STRUCTURE • The $k$-th hidden factor selects an existing parent factor $j$ with probability $\frac{m_j}{k}$, where $m_j$ is the number of factors already connected to factor $j$ • It then samples $\mathrm{Poisson}\!\big(\frac{\alpha^T}{k}\big)$ new parent factors
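The two IBP draws together define the DBN graph: each row of a sampled matrix picks out the parent set of one node. A small hypothetical helper makes this reading explicit:

```python
# Reading a sampled binary matrix as parent sets: Z[i, j] = 1 means
# factor j is a parent of node i. (Hypothetical helper, for illustration.)
import numpy as np

def parents_from_matrix(Z):
    """Map each child node to the list of parent factors its row selects."""
    return {i: [j for j in range(Z.shape[1]) if Z[i, j]]
            for i in range(Z.shape[0])}

Z_O = np.array([[1, 0],      # e.g. two factors feeding three observations
                [1, 1],
                [0, 1]])
print(parents_from_matrix(Z_O))   # {0: [0], 1: [0, 1], 2: [1]}
```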
LEARNING MODEL DISTRIBUTIONS • The observation distribution is unknown • The transition distribution is unknown
PRIOR OVER DBN DISTRIBUTIONS • A Bayesian prior is specified over observation distributions: $\theta^O \sim H$, where $H$ is a prior base distribution • A Bayesian nonparametric prior is specified over transition distributions: $\theta^T_k \sim \mathrm{DP}(\lambda, \bar{\theta})$, where $\mathrm{DP}$ is a Dirichlet process and $\bar{\theta} \sim \mathrm{Stick}(\gamma)$ is a stick-breaking distribution
PRIOR ON OBSERVATION MODEL • For each observable variable $o^n$, we can draw an observation distribution from: $\theta^{o^n} \sim H$ • Assuming $o^n$ is discrete, $P(o^n \mid \mathrm{pa}(o^n))$ could be a multinomial and the prior $H$ could be a Dirichlet • The posterior is obtained by counting how many times specific observations occurred
EXAMPLE • [Figure: counting red and blue observations to update the Dirichlet posterior]
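A worked version of that red/blue example, with made-up counts: under a Dirichlet prior, the posterior simply adds the observed counts to the prior pseudo-counts.

```python
# Dirichlet-multinomial conjugacy: posterior = prior pseudo-counts + counts.
# The prior and the observation sequence below are illustrative.
import numpy as np

prior = np.array([1.0, 1.0])          # Dirichlet pseudo-counts for (red, blue)
obs = ["red", "red", "blue", "red"]   # hypothetical observations
counts = np.array([obs.count("red"), obs.count("blue")])

posterior = prior + counts            # Dirichlet(4, 2)
posterior_mean = posterior / posterior.sum()   # E[P(red)] = 4/6
```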
PRIOR ON TRANSITION MODEL • First, we sample the expected factor transition distribution for infinitely many factors: $\bar{\theta} \sim \mathrm{Stick}(\gamma)$ • For each active hidden factor, we sample an individual transition distribution: $\theta_k \sim \mathrm{DP}(\lambda, \bar{\theta})$, where $\lambda$ controls the (inverse) variance around $\bar{\theta}$ • The posterior is again obtained by counting
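A minimal sketch of this hierarchical prior, assuming a truncated stick-breaking representation (the truncation level K and the hyperparameter values are mine, for illustration): the shared mean $\bar{\theta}$ is drawn by stick-breaking, and a finite-dimensional DP draw around it reduces to a Dirichlet with concentration $\lambda \bar{\theta}$, so larger $\lambda$ keeps each $\theta_k$ closer to $\bar{\theta}$.

```python
# Truncated sketch of the transition prior: theta_bar ~ Stick(gamma),
# then per-factor theta_k ~ DP(lambda, theta_bar).
import numpy as np

rng = np.random.default_rng(3)

def stick_breaking(gamma, truncation):
    """Truncated stick-breaking weights: beta_j * prod_{l<j}(1 - beta_l)."""
    betas = rng.beta(1.0, gamma, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

gamma, lam, K = 1.0, 10.0, 20
theta_bar = stick_breaking(gamma, K)
theta_bar /= theta_bar.sum()          # renormalize the truncated weights

# A finite-dimensional DP draw around theta_bar is Dirichlet(lam * theta_bar);
# lam plays the role of an inverse variance around the shared mean.
theta_k = rng.dirichlet(lam * theta_bar)
```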