www.poulinhugin.com
Overview • Brief Project History • Hugin Expert A/S and Bayesian Technology Discussion • Poulin Automation Tool Discussion
Hugin Software? • Product maturity and optimisation produce the world’s fastest Bayesian inference engine • State-of-the-art capabilities based on internal and external research and development • Practical experience and theoretical excellence combined form the basis of further product refinement • High-performance and mission-critical systems in numerous areas are constructed using Hugin software
Hugin Expert A/S? • The market leader for more than a decade • Highly skilled researchers and developers • Strategic cooperation with international partners • A clear strategy for maintaining its leadership as tool and technology provider • Part of one of the world’s largest Bayesian research groups • Experience from numerous large-scale, international R&D projects
Client List
Raytheon Systems Company, Kana Communications, Sandia National Laboratories, GE Global Research, Westhollow Technology Center
Great Britain: Rolls-Royce Aerospace Group, Philips Research Laboratories, USB AG, Motorola, Defence Research Agency, Nuclear Electric Plc, Marconi Simulation, Lucas Engineering & Systems Ltd, Lloyd's Register, BT Laboratories, Brown & Root Limited, Silsoe Research Institute, Aon Risk Consultants, Railtrack, Shell Global Solutions
Germany: Siemens AG, Volkswagen AG, DaimlerChrysler AG, GSF Medis, Reutlingen Kinderklinik
France: PGCC Technologie, Protectic, Objectif Technologies, Usinor
Canada: Decision Support Technologies
Italy: ENEA CRE Casassia, C.S.E.L.T.
Israel: IBM Haifa Research Laboratory
Australia: Department of Defence (DSTO), National Australian Bank
Netherlands: Shell International E&P
Japan: Sumitomo Metal Industries, Dentsu Inc.
Scandinavia: Defence Research Agency, Danish Defence Research Establishm., Aalborg Portland, Danish Agricultural Advisory Center, COWI, FLS Automation, Judex Datasystemer, AON Denmark, ABB, Nykredit, Swedpower
South Africa: CSIR
USA: Hewlett-Packard, Intel Corporation, Dynasty, DrRedDuke, Xerox, Lockheed Martin, NASA/Johnson Space Center, Boeing Computer Service, USDA Forest Service, Information Extraction & Transport Inc., Pacific Sierra Research, Price Waterhouse, Swiss Bank Corporation, Bellcore, ISX Corporation, Lam Research Corporation, Orincon Corporation, Integrate IT, Charles River Analytics, Northrop Grumman, CHI Systems Inc, Voyan Technology, Los Alamos National Laboratory, Rockwell Science Center, Citibank, Perkin Elmer Corporation, Inscom, Honeywell Software Initiative, Aragon Consulting Group
Bayes’ Theorem • Rev. Thomas Bayes (1702-1761), an 18th-century Presbyterian minister from England • The theorem, as generalized by Laplace, is the basic starting point for inference problems using probability theory as logic – it assigns degrees of belief to propositions
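For reference, the theorem in its standard form, for a hypothesis H and evidence E:

P(H | E) = P(E | H) P(H) / P(E)

i.e. the posterior degree of belief in H is the prior P(H) reweighted by how well H predicts the observed evidence.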
Bayesian Technology • Probabilistic graphical models • Model-based approach to decision support • Compact and intuitive graphical representation • Sound & coherent handling of uncertainty • Reasoning and decision making under uncertainty • Bayesian networks and influence diagrams
A Bayesian Network • A Bayesian network consists of: • A set of nodes and a set of directed edges between nodes • The nodes together with the directed edges form a directed acyclic graph (DAG) • Each node has a finite set of states • Attached to each node X with parents Y1, …, Yn is a conditional probability table P(X | Y1, …, Yn) • A knowledge representation for reasoning under uncertainty
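Together, the DAG and the conditional probability tables define a joint probability distribution over all variables via the chain rule for Bayesian networks:

P(X1, …, Xn) = ∏i P(Xi | pa(Xi))

where pa(Xi) denotes the set of parents of Xi in the DAG.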
Bayesian Expert Systems (diagram: generative distribution) • Induce structure of the graphical representation • Fusion of data & expert knowledge • Estimate parameters • Fusion of data & expert knowledge
Implementation • Cause and effect relations represented in an acyclic, directed graph • Strengths of relations are encoded using probabilities • Compute probabilities of events given observations on other events • Fusion of data and domain knowledge • Analyse results using techniques like conflict & sensitivity analysis
Technology Summary • A compact and intuitive graphical representation of causal relations • Coherent and mathematically sound handling of uncertainty and decisions • Construction and adaptation of Bayesian networks based on data sets • Efficient solution of queries against the Bayesian network • Analysis tools such as • Data conflict, Explanation, Sensitivity, Value of information analysis
What Does This Do For You? • Reasoning and decision making under uncertainty supporting • Diagnosis • Prediction • Process analysis and supervision • Filtering & classification • Control • Troubleshooting • Predictive maintenance • …
Bayesian Applications • Medicine – forensic identification, diagnosis of muscle and nerve diseases, antibiotic treatment, diabetes advisory system, triage (AskRed.com) • Software – software debugging, printer troubleshooting, safety and risk evaluation of complex systems, help facilities in Microsoft Office products • Information Processing – information filtering, display of information for time-critical decisions, fault analysis in aircraft control • Industry – diagnosis and repair of on-board unmanned underwater vehicles, prediction of parts demand, control of centrifugal pumps, process control in wastewater purification • Economy – prediction of default, credit application evaluation, portfolio risk and return analysis • Military – NATO Airborne Early Warning & Control Program, situation assessment • Agriculture – blood typing and parentage verification of cattle, replacement of milk cattle, mildew management in winter wheat
Hugin Products • General purpose decision support • Hugin Explorer • Hugin graphical user interface • Hugin Developer • Hugin graphical user interface • Hugin decision engine • APIs (C, C++, Java) and ActiveX server • Troubleshooting • Hugin Advisor • A suite of tools for troubleshooting • Data mining • Hugin Clementine Link
Vision • To create an application that provides automation for the Hugin Decision Engine • Focus on the main Bayesian inference capabilities • Build an automation-capable command line tool • Build a data parser for formatting of structured/unstructured data • Divide the problem space and build a meta-database • Integrate with the Hugin GUI for human-based knowledge discovery
Methodology • The Naive Bayes Model • Structure, variables, and states • Discretization using the Principle of Maximum Entropy • Parameter estimation using EM • The Tree Augmented Naive Bayes Model • Interdependence relations between information variables based on mutual information • (extra step compared to NBM) • Model update by adding new nodes as in NBM • Value of information (variables and cases) • Evidence sensitivity analysis (what-if)
Functionality (command-line interface; pipeline: Data → Model → Analysis) • Data preparation • Model construction - build Naive Bayes Model or Tree Augmented NBM • Model update - add additional information variables • Inference - compute probability of target given evidence • What-if analysis - robustness of probabilities • Value of Information analysis • Which case is most informative • Which observation is most informative
Features • An application for construction of a Naive Bayes Model • Updating a Naive Bayes Model • Construction of a Tree Augmented Naive Bayes Model • Inference based on a case • What-if sensitivity analysis (a single piece of evidence) • Value-of-information analysis (cases and observations) • Error handling and tracing have been kept to a minimum • Implemented in C++ using Hugin Decision Engine 6.3 • Runs on Windows 2k, Red Hat Linux, Sun Solaris • Program documentation in HTML
Tools • The POULIN-HUGIN package consists of a set of tools for • Data preparation: • dat2hcs, class2net, class2dat, weather2dat, pull, struct2dat, ustruct2dat, ustruct2hcs • Model construction & update: • ph • Inference: • ph • Analysis: • ph
Data Sample • Source data are from the Global Summary of the Day (GSOD) database archived by the National Climatic Data Center (NCDC). • Used average daily temperature (the average of 24 hourly temperature readings) in 145 US cities measured from January 1, 1995 to December 29, 2003. • Data of 3,255 cases split into subsets for learning, updating, cases, and case files. • learning: 2000 cases • update: 1000 cases • cases: 10 cases • case files: 245 cases • 2,698 missing values out of 471,975 entries, i.e. approximately 0.57% missing values.
Discretization • Measurements of average daily temperature are continuous by nature. • Continuous variables can be represented as discrete variables through discretization. • Determining intervals: how many, what width, equally sized, …? • We discretize using the principle of maximum entropy, but can easily use equidistant (uniform-width) discretization instead.
Discretization • Entropy can be considered a measure of information; the goal is to obtain an uninformative distribution given the current information. • Principle of Maximum Entropy • By choosing to use the distribution with the maximum entropy allowed by our information, the argument goes, we are choosing the most uninformative distribution possible. To choose a distribution with lower entropy would be to assume information we do not possess; to choose one with a higher entropy would violate the constraints of the information we do possess. Thus the maximum entropy distribution is the only reasonable distribution. • Discretize variables to have a uniform distribution based on the data, as in the sketch below.
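As an illustration only (not the POULIN-HUGIN implementation), equal-frequency binning produces intervals carrying an approximately uniform, i.e. maximum-entropy, distribution over a sample; a minimal C++ sketch:

```cpp
#include <algorithm>
#include <vector>

// Cut points for an equal-frequency (maximum-entropy) discretization
// of a sample into `bins` intervals: each interval receives roughly
// the same number of cases, giving a near-uniform distribution.
std::vector<double> equalFrequencyCuts(std::vector<double> data, int bins) {
    std::sort(data.begin(), data.end());
    std::vector<double> cuts;
    for (int i = 1; i < bins; ++i)
        cuts.push_back(data[i * data.size() / bins]);
    return cuts;
}

// Map a continuous value to its interval index given the cut points.
int discretize(double x, const std::vector<double>& cuts) {
    return static_cast<int>(
        std::upper_bound(cuts.begin(), cuts.end(), x) - cuts.begin());
}
```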
Model Specification • A Bayesian network consists of a • qualitative part, the graph structure (DAG G), and a • quantitative part, the conditional probability distributions (P). • Model specification thus consists of these two parts. • A Bayesian network N is minimal if and only if, for every node X and for every parent Y, X is not independent of Y given the other parents of X.
Naive Bayes Model • A well-suited model for classification tasks and tasks of the following type • An exhaustive set of mutually exclusive hypotheses h1, …, hn is of interest • Measures on indicators I1, …, In are used to predict hi • The Naive Bayes Model • h1, …, hn are represented as states of a hypothesis variable H • Information variables I1, …, In are children of H • The fundamental assumption is that I1, …, In are pairwise independent when H is known (see below) • Computationally and representationally a very efficient model that provides good results in many cases.
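In symbols, the naive assumption states that the joint likelihood of the information variables factorizes given the hypothesis:

P(I1, …, In | H) = ∏i P(Ii | H)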
Naive Bayes Model • The Naive Bayes Model in more detail • Let the possible hypotheses be collected into one hypothesis variable H with prior P(H). • For each information variable I, acquire P(I | H) = L(H | I). • For any set of observations ε = {i1, …, in} calculate the posterior P(H | ε) = µ P(H) ∏k P(Ik = ik | H), where µ = 1/P(ε) is a normalization constant, as sketched below • The conclusion may be misleading as the independence assumption may not hold
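A minimal sketch of this computation in C++ (illustrative only; the actual tool performs inference through the Hugin Decision Engine):

```cpp
#include <vector>

// Posterior over the states of hypothesis variable H given observations.
// prior[h] = P(H = h); likelihoods[k][h] = P(I_k = observed value | H = h).
std::vector<double> nbmPosterior(
        const std::vector<double>& prior,
        const std::vector<std::vector<double>>& likelihoods) {
    std::vector<double> post = prior;
    for (const auto& l : likelihoods)          // multiply in each observation
        for (std::size_t h = 0; h < post.size(); ++h)
            post[h] *= l[h];
    double z = 0.0;                            // z = P(evidence) = 1/mu
    for (double p : post) z += p;
    for (double& p : post) p /= z;             // normalize to sum to one
    return post;
}
```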
NBM Model Construction ph -nbm <data> <target> <states> <iterations> [-verbose] • This command builds a NBM model from the data contained in <data> with <target> as the hypothesis variable. • All variables will have a maximum of <states> states. • As many as <iterations> iterations of the EM algorithm will be performed. • The model constructed is saved in file "nbm.net", which can be loaded into the Hugin Graphical User Interface for inspection. • Example: ph -nbm model.dat MDWASHDC 2 1
Binary NBM Model Construction ph -boolnbm <data> <target> <states> <iterations> [-verbose] • This command builds a Boolean NBM model from the data contained in <data> with <target> as the hypothesis variable. • All variables will be Boolean, indicating the presence of a word (the word represented by a variable is equal to the label of the variable). • As many as <iterations> iterations of the EM algorithm will be performed. • The model constructed is saved in file "boolnbm.net", which can be loaded into the Hugin Graphical User Interface for inspection. • Example: ph -boolnbm model.dat MDWASHDC 2 1
Tree-Augmented NBM Model • Let M be a Naive Bayes Model with hypothesis H and information variables I = {I1, …, In} • We can use I(Ii; Ij | H) to measure the conditional dependency between two information variables Ii, Ij conditional on H (see the definition below). • After computing I(Ii; Ij | H) for all pairs Ii, Ij, we use Kruskal’s algorithm to find a maximum weight spanning tree T on I. • The edges of T are directed such that no variable has more than two parents (H and one other I). • Complexity of inference becomes polynomial in the number of information variables.
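For discrete variables, the conditional mutual information used as edge weight is

I(Ii; Ij | H) = ∑h ∑x ∑y P(x, y, h) log [ P(x, y | h) / (P(x | h) P(y | h)) ]

which is zero exactly when Ii and Ij are conditionally independent given H, so heavy edges mark the strongest dependencies for the spanning tree.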
TAN Model Construction ph -tan <data> <target> <states> <iterations> [-verbose] • This command builds a Tree-Augmented Naive Bayes model (TAN) from the data contained in <data> with <target> as the hypothesis variable. • All variables will have a maximum of <states> states. • As many as <iterations> iterations of the EM algorithm will be performed. • The model constructed is saved in file "tan.net", which can be loaded into Hugin Graphical User Interface for inspection. • Example: ph -tan model.dat MDWASHDC 2 1
Binary TAN Model Construction ph -booltan <data> <target> <states> <iterations> [-verbose] • This command builds a Tree-Augmented Boolean Naive Bayes model (TAN) from the data contained in <data> with <target> as the hypothesis variable. • All variables will be Boolean, indicating the presence of a word (the word represented by a variable is equal to the label of the variable). • As many as <iterations> iterations of the EM algorithm will be performed. • The model constructed is saved in file "booltan.net", which can be loaded into the Hugin Graphical User Interface for inspection. • Example: ph -booltan model.dat MDWASHDC 2 1
Model Updates ph -update <data> <model> <target> <states> <iterations> [-verbose] • This command updates a model with data contained in <data>. <target> is the hypothesis variable of the model stored in <model>. • Variables in the data not represented in the original model will be added to the model as children of the hypothesis variable (no structure between information variables is added). The data file should contain measures on all variables (old and new). • All new variables will have a maximum of <states> states. • As many as <iterations> iterations of the EM algorithm will be performed. • The updated model is saved in file "update.net", which can be loaded into Hugin Graphical User Interface for inspection. • Example: ph -update update.dat model.net MDWASHDC 2 1
Parameter Estimation • Parameter learning is identification of the CPTs of the Bayesian network from • theoretical considerations, a database of cases, subjective estimates. • The CPTs are constructed based on a database of cases D = {c1, …, cN}. • There may be missing values in some of the cases, indicated by N/A. • The CPTs are learned by maximum likelihood estimation: P(X = x | pa(X) = y) = n(X = x, pa(X) = y) / n(pa(X) = y) • where n(Y = y) is the (expected) number of cases for which Y = y.
Parameter Estimation • Prior (domain expert) knowledge can be exploited. • Experience is the number of times a parent configuration pa(Xi) = j has been observed. • An experience count is a positive number (> 0). • Experience counts are also used to turn learning on/off. • Prior knowledge is used both to speed up learning and to guide the search towards a global optimum • Expected counts are used when values are missing, • including for parameters not appearing in the data. • The EM algorithm is an iterative procedure using the current estimate of the parameters as the true values; in the first iteration the initial table contents are used, as outlined below.
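Schematically, each EM iteration alternates two steps until the iteration limit is reached (or the likelihood converges):

E-step: compute expected counts E[n(X = x, pa(X) = y) | D, θ(t)] under the current parameter estimate θ(t), filling in missing values by inference
M-step: set θ(t+1) to the maximum likelihood estimate computed from these expected counts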
Inference ph -inference <model> <target> <case> [-verbose] • This command performs inference in <model>, which has <target> as hypothesis variable. The posterior distribution of <target> is displayed for the case stored in the file <case>. • Example: ph -inference nbm.net MDWASHDC case.hcs
VOI in Bayesian Networks • How do we perform value of information analysis without specifying utilities? • The reason for acquiring more information is to decrease the uncertainty about the hypothesis. • The entropy is a measure of how much probability mass is scattered around on the states (the degree of chaos). • Thus, for a target T, H(T) = −∑t P(T = t) log P(T = t) • Entropy is a measure of randomness. The more random a variable is, the higher entropy its distribution will have.
Value of Information • If the entropy is to be used as a value function, then • we want to minimize the entropy of the target
Variables Value of Information • What is the expected most informative observation? • A measure: the reduction of the entropy of T given X, i.e. the information gain I(T; X) = H(T) − H(T | X) • The conditional entropy is H(T | X) = ∑x P(X = x) H(T | X = x) = −∑x P(x) ∑t P(t | x) log P(t | x) • Let T be the target; now select the X with maximum information gain, as sketched below
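A minimal C++ sketch of these quantities for discrete distributions (an illustration of the definitions, not the tool's internals):

```cpp
#include <cmath>
#include <vector>

// Entropy H of a discrete distribution p (zero terms contribute nothing).
double entropy(const std::vector<double>& p) {
    double h = 0.0;
    for (double q : p)
        if (q > 0.0) h -= q * std::log(q);
    return h;
}

// Information gain I(T;X) = H(T) - H(T|X).
// pX[x] = P(X = x); pTgivenX[x] = the distribution P(T | X = x).
double informationGain(const std::vector<double>& pT,
                       const std::vector<double>& pX,
                       const std::vector<std::vector<double>>& pTgivenX) {
    double condH = 0.0;
    for (std::size_t x = 0; x < pX.size(); ++x)
        condH += pX[x] * entropy(pTgivenX[x]);   // H(T|X) = sum_x P(x) H(T|x)
    return entropy(pT) - condH;
}
```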
Variables Value of Information • Assume we are interested in B, i.e. B is the target: • We are interested in observing the variable Y with most information on B, i.e. the Y with the largest information gain I(B; Y) • For each candidate Y we compute H(B | Y) = ∑y P(y) H(B | y) • Thus, we select Y* = arg maxY I(B; Y) = arg minY H(B | Y)
Variables VOI Command Line ph -voivariables <model> <target> <case> [-verbose] • This command performs a value-of-information analysis on each non-observed variable given the observations in <case> relative to <target>. That is, for each unobserved variable in <case>, a measure of how well the variable predicts <target> is displayed. • Example: ph -voivariables nbm.net MDWASHDC case_2.hcs
Case Value of Information • Assume T is the target of interest and assume we have a database of cases D = {c1, …, cN} • The uncertainty in T can be measured as H(T): • A high value of H(T) indicates high uncertainty • A low value of H(T) indicates low uncertainty • In the binary case the entropy is H(p) = −p log p − (1 − p) log(1 − p), maximal at p = 1/2 • We compute H(T | c) for all cases c. • The case c producing the lowest value of H(T | c) is considered most informative wrt. T.
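In symbols, the selected case is

c* = arg min c∈D H(T | c)

i.e. the case whose evidence leaves the least residual uncertainty about the target.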
Case VOI Command Line ph -voicase <model> <target> <case> [-verbose] • This command performs a value-of-information analysis on the case stored in <case> relative to <target>. That is, a measure of how well the case predicts <target> is displayed. • Example: ph -voicase tan.net MDWASHDC case_2.dat
Evidence Sensitivity Analysis • Let ε = {ε1, …, εn} be a set of observations and assume a single hypothesis h is of interest. • What if the observation εi had not been made, but instead the alternative εi'? • Involves computing P(h | (ε \ {εi}) ∪ {εi'}) and comparing the results. • This kind of analysis will help you determine if a subset of the evidence acts for or against a hypothesis.
What-If Analysis • What happens to the temperature in Washington, DC if the temperature in Austin, TX changes? • Assume evidence ε = {ε1, …, εn} and let εi be the measured temperature in Austin, TX • We compute P(T = t | (ε \ {εi}) ∪ {εi'}) for all alternative observations εi' • Myopic what-if analysis: change the finding on one information variable and monitor the change in the probability of T
What-If Analysis: Cases ph -whatif <model> <target> <case> • This command performs a what-if analysis on each instantiated variable in the case file <case> relative to <target>. That is, the posterior distribution of each hypothesis (each state of the target variable) is displayed for each possible value of the observed variables. • Example: ph -whatif model.net MDWASHDC case.hcs
What-If Analysis: Variables ph -whatif <model> <target> <case> <variable> • This command performs a what-if analysis on <variable> relative to <target>. That is, the posterior distribution of each hypothesis (each state of the target variable) is displayed for each possible value of the indicated variable. • Example: ph -whatif model.net MDWASHDC case.hcs TXAUSTIN
Help ph -help • This command displays a simple help message.