Research at the Decision Making Lab
Fabio Cozman, Universidade de São Paulo
Research tree
• Bayes nets
  – Anytime, anyspace (embedded systems)
  – Classification
  – MCMC algorithms
  – Applications: medical decisions
• Sets of probabilities
  – Algorithms: independence, inference & testing
  – Applications: MDPs, robustness analysis, auctions
• Robotics (a bit)
Decisions in medical domains (with the University Hospital)
• Idea: improve decisions at medical posts in poor urban areas
• We are building networks that represent cardiac arrest, which can be caused by stress, cardiac problems, respiratory problems, etc.
• Supported by FAPESP
Embedded Bayesian networks
• Challenge: to implement inference algorithms compactly and efficiently
• Real challenge: to develop anytime, anyspace inference algorithms
• Idea: decompose networks and apply several algorithms (UAI 2002 workshop on RT)
• Supported by HP Labs
Decomposing networks
• How to decompose networks and assign algorithms so as to meet space and time constraints with reasonable accuracy?
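To make "anytime" concrete: such an algorithm keeps a best-so-far answer and refines it until its time budget expires, so it can be interrupted at any moment. Below is a minimal, hypothetical sketch of that control loop; the `refine` callback stands in for whatever approximate inference routine is assigned to a subnetwork, and none of this is the lab's actual code.

```python
import time

def anytime_inference(network, query, budget_s, refine):
    """Refine an approximate answer until the time budget expires.

    `refine` takes the current estimate (None at first) and returns an
    improved one, e.g. one more pass of an iterative approximation.
    """
    deadline = time.monotonic() + budget_s
    estimate = refine(network, query, None)          # cheap first answer
    while time.monotonic() < deadline:
        estimate = refine(network, query, estimate)  # improve while time allows
    return estimate
```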
Generating random networks
• The problem is easy to state but hard to solve: critical properties of DAGs are not known
• Method based on MCMC simulation, with constraints on induced width and degree
• Supported by FAPESP
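A minimal sketch of the MCMC idea, assuming the chain moves by toggling single arcs and enforcing only a degree bound (the induced-width constraint from the slide is omitted for brevity); this is an illustration, not the lab's algorithm:

```python
import random

def has_cycle(adj, n):
    """Detect a directed cycle by depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * n
    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in range(n))

def random_dag_mcmc(n, steps=10000, max_degree=4, seed=0):
    """Walk over DAGs on n nodes by adding/removing one arc at a time,
    rejecting moves that create a cycle or violate the degree bound."""
    rng = random.Random(seed)
    adj = {u: set() for u in range(n)}
    degree = lambda w: len(adj[w]) + sum(w in adj[x] for x in adj)
    for _ in range(steps):
        u, v = rng.sample(range(n), 2)
        if v in adj[u]:
            adj[u].remove(v)        # propose deleting the arc u -> v
        else:
            adj[u].add(v)           # propose adding the arc u -> v
            if has_cycle(adj, n) or degree(u) > max_degree or degree(v) > max_degree:
                adj[u].remove(v)    # reject: restore the previous DAG
    return adj
```

Since the toggle proposal is symmetric and invalid moves are simply rejected, the chain's stationary distribution is uniform over the constrained DAGs it can reach.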
Research tree (again)
• Bayes nets
  – Anytime, anyspace (embedded systems)
  – Classification
  – MCMC algorithms
  – Applications: medical decisions
• Sets of probabilities
  – Algorithms: independence, inference & testing
  – Applications: MDPs, robustness analysis, auctions
• Biorobotics (a bit of it)
Bayesian network classifiers
• Goal: use probabilistic models for classification, that is, to "learn" classifiers using labeled and unlabeled data
• Work with Ira Cohen, Alex Bronstein and Marsha Duro (UIUC and HP Labs)
Using Bayesian networks to learn from labeled and unlabeled data
• Suppose we want to classify events based on observations; we have recorded data that are sometimes labeled and sometimes unlabeled
• What is the value of unlabeled data?
The Naïve Bayes classifier
• A Bayesian-network-like classifier with excellent credentials: the class is the single parent of every attribute (Attribute 1, Attribute 2, …, Attribute N)
• Use Bayes rule to get the classification: p(Class | attr. 1, …, attr. N) ∝ p(Class) Π_{i=1…N} p(attr. i | Class)
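A minimal sketch of this rule for discrete attributes, with simple count-based estimates and add-one smoothing; the helper names are hypothetical, not the lab's code.

```python
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """Estimate p(class) and p(attribute_i | class) from labeled rows."""
    prior = Counter(labels)
    cond = defaultdict(Counter)   # (attribute index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, value in enumerate(row):
            cond[(i, y)][value] += 1
    return prior, cond

def classify(row, prior, cond):
    """Return argmax_c p(c) * prod_i p(attribute_i | c), add-one smoothed."""
    def score(c):
        s = prior[c]
        for i, value in enumerate(row):
            counts = cond[(i, c)]
            s *= (counts[value] + 1) / (sum(counts.values()) + len(counts) + 1)
        return s
    return max(prior, key=score)
```

With these estimates, a classifier trained on the three labeled rows of the database two slides below labels ("soccer", "rice and beans") as Brazilian.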
The TAN classifier
• As in Naïve Bayes, the class points to every attribute, but the attributes X1, X2, X3, …, XN are additionally connected by a tree among themselves (tree-augmented Naïve Bayes)
Now, let's consider unlabeled data
• Our database:
    American   baseball   hamburger
    Brazilian  soccer     rice and beans
    American   golf       apple pie
    ?          soccer     rice and beans
    ?          golf       rice and beans
• Question: how do we use the unlabeled data?
Unlabeled data can help…
• Example: learning a Naïve Bayes classifier on data generated from a Naïve Bayes model (10 attributes), where adding unlabeled data improves performance
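The standard way to fold unlabeled data into such a model is expectation-maximization: use the current classifier to soft-label the unlabeled rows, then re-estimate the parameters from all rows. A minimal sketch under that assumption (not necessarily the exact procedure used in this work):

```python
from collections import defaultdict

def em_naive_bayes(labeled, unlabeled, classes, n_iter=20):
    """Semi-supervised Naive Bayes via EM: unlabeled rows contribute
    fractional counts weighted by the current posterior p(c | row)."""
    def estimate(weighted_rows):
        prior = defaultdict(float)
        cond = defaultdict(lambda: defaultdict(float))   # (i, c) -> value -> weight
        for row, c, w in weighted_rows:
            prior[c] += w
            for i, v in enumerate(row):
                cond[(i, c)][v] += w
        return prior, cond

    def posterior(row, prior, cond):
        scores = {}
        for c in classes:
            s = prior[c]
            for i, v in enumerate(row):
                counts = cond[(i, c)]
                s *= (counts[v] + 1) / (sum(counts.values()) + len(counts) + 1)
            scores[c] = s
        z = sum(scores.values()) or 1.0
        return {c: s / z for c, s in scores.items()}

    prior, cond = estimate((row, c, 1.0) for row, c in labeled)
    for _ in range(n_iter):
        weighted = [(row, c, 1.0) for row, c in labeled]
        for row in unlabeled:                            # E-step: soft labels
            for c, w in posterior(row, prior, cond).items():
                weighted.append((row, c, w))
        prior, cond = estimate(weighted)                 # M-step: re-estimate
    return prior, cond
```

Whether this helps depends on the modeling assumptions, as the next slides show.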
… but unlabeled data may degrade performance!
• Surprising fact: more data may not help; more data may even hurt
Some math: asymptotic analysis
• Asymptotic bias: when the assumed model is incorrect, the classifier converges to a biased solution, and the labeled-only and labeled-plus-unlabeled estimators can converge to different limits
• Variance decreases with more data
A very simple example
• Consider the following situation: a "real" structure and an "assumed" structure over Class, X, and Y, where the assumed (Naïve Bayes) structure omits the dependency between X and Y
• X and Y are Gaussian given Class
Searching for structures
• Previous tests suggest that we should pay attention to modeling assumptions when dealing with unlabeled data
• In the context of Bayesian network classifiers, we must search for structures
• This is not easy; worse, existing algorithms do not focus on classification
Stochastic Structure Search (SSS)
• Idea: search for structures using classification error
• Hard: the search space is too messy
• Solution: Metropolis-Hastings sampling with underlying measure proportional to 1/p_error
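A hedged sketch of that sampler: with a symmetric structure proposal, accepting a move with probability min(1, e_old/e_new) yields a stationary measure proportional to 1/p_error, so low-error structures are visited most often. Here `propose` and `classification_error` are hypothetical stand-ins, not the published implementation.

```python
import random

def sss(initial, propose, classification_error, steps=1000, seed=0):
    """Metropolis-Hastings over classifier structures with stationary
    measure proportional to 1 / p_error; returns the best structure seen."""
    rng = random.Random(seed)
    current = best = initial
    e_cur = e_best = max(classification_error(initial), 1e-12)
    for _ in range(steps):
        candidate = propose(current, rng)           # symmetric move, e.g. toggle one arc
        e_new = max(classification_error(candidate), 1e-12)
        if rng.random() < min(1.0, e_cur / e_new):  # ratio (1/e_new) / (1/e_cur)
            current, e_cur = candidate, e_new
        if e_cur < e_best:
            best, e_best = current, e_cur
    return best
```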
Some words on unlabeled data
• Unlabeled data can improve performance or degrade it: a really hard problem!
• Current understanding of this problem is shaky
• People often attribute the problem to outliers or to mismatches between the labeled and unlabeled data
Research tree (once again)
• Bayes nets
  – Anytime, anyspace (embedded systems)
  – Classification
  – MCMC algorithms
  – Applications: medical decisions
• Sets of probabilities
  – Algorithms: independence, inference & testing
  – Applications: MDPs, robustness analysis, auctions
• Biorobotics (a bit of it)
Sets of probabilities
• Instead of "the probability of rain is 0.2", say "the probability of rain is in [0.1, 0.3]"
• Instead of "the expected value of the stock is 10", admit "the expected value of the stock is in [0, 1000]"
An example
• Consider a set of probability distributions over three atoms q1, q2, q3: each distribution in the set assigns values p(q1), p(q2), p(q3), and the set can be pictured as a region of the probability simplex
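To make this computational: the lower and upper expectations of a payoff under interval constraints on p(q1), p(q2), p(q3) are linear programs over the set. A minimal sketch with hypothetical numbers:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical credal set: p(q1) in [0.1, 0.3], p(q2) in [0.2, 0.6],
# p(q3) in [0.2, 0.7], with the three probabilities summing to one.
f = np.array([10.0, 0.0, -5.0])        # payoff of q1, q2, q3
bounds = [(0.1, 0.3), (0.2, 0.6), (0.2, 0.7)]
A_eq, b_eq = [[1.0, 1.0, 1.0]], [1.0]

low = linprog(f, A_eq=A_eq, b_eq=b_eq, bounds=bounds)    # minimize E[f]
high = linprog(-f, A_eq=A_eq, b_eq=b_eq, bounds=bounds)  # maximize E[f]
print(low.fun, -high.fun)  # expected value is the interval [-2.5, 2.0]
```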
Why?
• More realistic and quite expressive as a representation language
• Excellent tool for:
  – robustness/sensitivity analysis
  – modeling incomplete beliefs (probabilistic logic)
  – group decision-making
  – analysis of economic interactions, for example to study arbitrage and to design auctions
What we have been doing
• Trying to formalize and apply "interval" reasoning, particularly independence
• Building algorithms for manipulating these intervals and sets, to deal with independence and networks
• JavaBayes is the only available software that can deal with this (to some extent!)
Credal networks
• Using graphical models to represent sets of joint probabilities (example network with nodes Family In?, Dog Sick?, Lights On?, Dog Out?, Dog Barking?)
• Question: what do the networks represent?
• Several open questions and a need for algorithms
Concluding
• To summarize, we want to understand how to use probabilities in AI, and then we add a bit of robotics
• Support from FAPESP and HP Labs has been generous
• Visit the lab on your next trip to São Paulo