Introduction of Probabilistic Reasoning and Bayesian Networks Hongtao Du Group Presentation
Outline • Uncertain Reasoning • Probabilistic Reasoning • Bayesian Network (BN) • Dynamic Bayesian Network (DBN)
Reasoning • The activity of guessing the state of the domain from prior knowledge and observations. • Causal reasoning • Diagnostic reasoning • Combinations of these two
Uncertain Reasoning (Guessing) • Some aspects of the domain are often unobservable and must be estimated indirectly through other observations. • The relationships among domain events are often uncertain, particularly the relationship between the observables and non-observables.
• The observations themselves may be unreliable. • Even when events are observable, we often lack sufficient resources to observe all relevant events. • Even when the relations among events are certain, it is often impractical to analyze all of them.
Probabilistic Reasoning • Methodology founded on Bayesian probability theory. • Events and objects in the real world are represented by random variables. • Probabilistic models: • Bayesian reasoning • Evidence theory • Robust statistics • Recursive operators
Graphical Model • A tool that visually illustrates conditional independence among variables in a given problem. • Consists of nodes (random variables or states) and edges (connecting two nodes, directed or undirected). • The lack of an edge represents conditional independence between variables.
Key terms: chain, path, cycle, Directed Acyclic Graph (DAG), parents and children.
Bayesian Network (BN) • Also called probabilistic network, belief network, or causal network. • A specific type of graphical model that is represented as a Directed Acyclic Graph.
BN consists of • A set of variables (nodes) V = {1, 2, …, k} • A set of dependencies (edges) D • A set of probability distribution functions (pdfs), one per variable, P • Assumptions • P(X) = 1 if and only if X is certain • If X and Y are mutually exclusive, then P(X v Y) = P(X) + P(Y) • Joint probability: P(X, Y) = P(X|Y) P(Y)
• X represents the hypothesis • Y represents the evidence • P(Y|X) is the likelihood • P(X|Y) is the posterior probability • If X and Y are conditionally independent given Z: P(X|Z, Y) = P(X|Z)
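These quantities can be illustrated with a small numeric sketch of Bayes' rule; all the probability values below are hypothetical, chosen only to show how the prior, likelihood, and posterior relate.

```python
# Bayes' rule with hypothetical numbers: X = hypothesis, Y = evidence.
p_x = 0.01                 # prior P(X)
p_y_given_x = 0.90         # likelihood P(Y|X)
p_y_given_not_x = 0.20     # P(Y|not X)

# Marginal P(Y) by total probability, then the posterior P(X|Y).
p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)
posterior = p_y_given_x * p_x / p_y
print(posterior)
```

Note that the posterior is much smaller than the likelihood here: a strong likelihood does not override a small prior.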
Given some certain evidence, BN operates by propagating beliefs throughout the network. For the chain Z -> Y -> U -> V, P(Z, Y, U, V) = P(Z) P(Y|Z) P(U|Y) P(V|U); in general, P(X1, …, Xk) is the product over all i of P(Xi | pa(Xi)), where pa(Xi) is the set of parents of node Xi. • Explaining away • If a node is observed, its parents become dependent. • Two causes (parents) compete to explain the observed data (child).
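The explaining-away effect can be checked numerically by brute-force enumeration over a tiny two-cause network A -> C <- B; every CPT number below is hypothetical.

```python
from itertools import product

# Two-cause network A -> C <- B with hypothetical CPT numbers.
p_a, p_b = 0.3, 0.3
p_c = {(0, 0): 0.05, (0, 1): 0.8, (1, 0): 0.8, (1, 1): 0.95}  # P(C=1 | A, B)

def bern(p, x):
    """Probability that a Bernoulli(p) variable takes value x."""
    return p if x else 1 - p

def joint(a, b, c):
    # Factorization over the DAG: P(A) P(B) P(C | A, B)
    return bern(p_a, a) * bern(p_b, b) * bern(p_c[(a, b)], c)

# P(A=1 | C=1): observing the child raises belief in cause A above its prior.
p_a_given_c = (sum(joint(1, b, 1) for b in (0, 1)) /
               sum(joint(a, b, 1) for a, b in product((0, 1), repeat=2)))

# P(A=1 | C=1, B=1): also observing rival cause B "explains away" A.
p_a_given_cb = joint(1, 1, 1) / sum(joint(a, 1, 1) for a in (0, 1))
```

With these numbers, P(A=1 | C=1) exceeds the prior P(A=1), but drops once B=1 is also observed, which is exactly the competition between parents described above.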
Tasks in Bayesian Network • Inference • Learning
Inference • Inference is the task of computing the probability of each state of a node in a BN when other variables are known. • Method: dividing set of BN nodes into non-overlapping subsets of conditional independent nodes.
Example • Given that Y is the observed variable, the goal is to find the conditional pdf of the remaining variables given Y.
Learning • Goal: completing the missing beliefs in the network. • Adjusting the parameters of the Bayesian network so that the pdfs defined by the network sufficiently describe the statistical behavior of the observed data.
• M: a BN model • θ: parameters of the probability distributions • D: observed data • Goal: estimating θ to maximize the posterior probability P(θ | D, M)
• Assume P(θ | D, M) is highly peaked around the maximum likelihood estimate of θ
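As a minimal sketch of this idea: with fully observed data, the maximum likelihood parameters of a CPT reduce to frequency counts. The samples below are made up, and the single edge Z -> Y stands in for a full network.

```python
from collections import Counter

# Hypothetical fully observed samples (z, y) for a single edge Z -> Y.
data = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0), (1, 1), (0, 0)]
counts = Counter(data)

def mle_p_y1_given_z(z):
    """Maximum likelihood estimate of P(Y=1 | Z=z): a simple frequency ratio."""
    return counts[(z, 1)] / (counts[(z, 0)] + counts[(z, 1)])
```

When some variables are hidden, these counts are no longer available directly and iterative schemes (such as expectation-maximization) are used instead.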
Dynamic Bayesian Network (DBN) • A Bayesian network extended over time to represent temporal dependencies. • Dynamically changing or evolving over time. • A directed graphical model of stochastic processes. • Aimed especially at time series modeling. • Satisfies the Markovian condition: the state of the system at time t depends only on its immediate past state at time t-1.
Representation • The network is divided into time slices t1, t2, …, tk. • The transition matrix that represents these time dependencies is called the Conditional Probability Table (CPT).
Description • T: the time boundary we are investigating • O_t: observable variables • H_t: hidden-state variables • P(H_t | H_{t-1}): state transition pdfs, specifying the time dependencies between states • P(O_t | H_t): observation pdfs, specifying the dependencies of the observation nodes on other nodes at time slice t • P(H_1): initial state distribution
Tasks in DBN • Inference • Decoding • Learning • Pruning
Inference • Estimating the pdf of the unknown states from the given observations and initial probability distributions. • Goal: finding P(H_1:T | O_1:T) • O_1:T: a finite set of T consecutive observations • H_1:T: the set of corresponding hidden variables
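A minimal sketch of this filtering-style inference, in the special case of a two-state DBN with a single hidden chain (i.e. a hidden Markov model); the transition matrix A, observation matrix B, and initial distribution pi are all hypothetical.

```python
# Two-state DBN (HMM form) with hypothetical parameters.
A  = [[0.7, 0.3], [0.4, 0.6]]   # A[i][j] = P(H_t = j | H_{t-1} = i)
B  = [[0.9, 0.1], [0.2, 0.8]]   # B[i][o] = P(O_t = o | H_t = i)
pi = [0.5, 0.5]                 # initial state distribution P(H_1)

def forward(obs):
    """Belief over the last hidden state, P(H_T | O_1:T), via the forward recursion."""
    alpha = [pi[i] * B[i][obs[0]] for i in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(2)) * B[j][o]
                 for j in range(2)]
    z = sum(alpha)              # normalize to get a proper distribution
    return [a / z for a in alpha]

belief = forward([1, 1, 1])
print(belief)
```

Each step folds the transition pdf into the previous belief and reweights by the observation pdf, which is exactly the propagation of time dependencies described above.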
Decoding • Finding the best-fitting probability values for the hidden states that generated the known observations. • Goal: determining the sequence of hidden states with the highest probability.
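Decoding is classically solved with the Viterbi algorithm; the source does not name a specific method, so the following is a sketch for a two-state model with hypothetical parameters.

```python
# Hypothetical two-state model: transition A, observation B, initial pi.
A  = [[0.7, 0.3], [0.4, 0.6]]   # A[i][j] = P(H_t = j | H_{t-1} = i)
B  = [[0.9, 0.1], [0.2, 0.8]]   # B[i][o] = P(O_t = o | H_t = i)
pi = [0.5, 0.5]

def viterbi(obs):
    """Most probable hidden-state sequence for the given observations."""
    n = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(n)]  # best path score per state
    back = []                                         # backpointers per step
    for o in obs[1:]:
        prev, new = [], []
        for j in range(n):
            best = max(range(n), key=lambda i: delta[i] * A[i][j])
            prev.append(best)
            new.append(delta[best] * A[best][j] * B[j][o])
        back.append(prev)
        delta = new
    # Trace the best final state back through the stored pointers.
    state = max(range(n), key=lambda i: delta[i])
    path = [state]
    for prev in reversed(back):
        state = prev[state]
        path.append(state)
    return path[::-1]

path = viterbi([0, 0, 1, 1])
```

Unlike the filtering recursion, Viterbi replaces the sum over predecessor states with a max, so it tracks the single best path rather than the full belief.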
Learning • Given a number of observations, estimating the parameters of the DBN that best fit the observed data. • Goal: maximizing the joint probability of the observations under the model • θ: the model parameter vector
Pruning • An important but difficult task in DBN. • Distinguishing which nodes are important for inference, and removing the unimportant nodes. • Actions: • Deleting states from a particular node • Removing the connection between nodes • Removing a node from the network
Time slice t • W: designated world nodes, a subset of the nodes, representing the part we want to inspect. • If the state of a node X is known, X = x, then the nodes that influence the rest of the network only through X are no longer relevant to the overall goal of the inference. Thus, (1) delete all such nodes, and (2) incorporate the knowledge that X = x as evidence.
Future work • Probabilistic reasoning in multiagent systems. • Different DBNs and applications. • Discussion of DBN problems.