This group presentation explores the concept of fractals and their possible application to Bayesian network inference. It covers the basics of fractals, their self-similarity property, and how this property might be applied to improve inference in Bayesian networks. The presentation also includes real-world examples of fractals and their relevance in various fields.
KDD Group Presentation Fractal and Bayesian Networks Inference Saturday, Oct. 12, 2001 Haipeng Guo KDD Research Group Department of Computing and Information Sciences Kansas State University
Presentation Outline • Simple Tutorial on Fractals • Bayesian Networks Inference Review • Joint Probability Space’s Fractal Property and its Possible Application to BBN Inference • Summary
Part I Introduction to Fractals
Fractal – “broken, fragmented, irregular” “I coined fractal from the Latin adjective fractus. The corresponding Latin verb frangere means "to break": to create irregular fragments. It is therefore sensible - and how appropriate for our needs! - that, in addition to "fragmented" (as in fraction or refraction), fractus should also mean "irregular", both meanings being preserved in fragment.” B. Mandelbrot: The Fractal Geometry of Nature, 1982
Definition: Self-similarity • A fractal is a geometric shape that has the property of self-similarity: each part of the shape is a smaller version of the whole shape. Examples:
Mathematical fractal: Koch Snowflake • Step One. Start with a large equilateral triangle. • Step Two. Make a star: 1. Divide one side of the triangle into three parts and remove the middle section. 2. Replace it with two lines the same length as the section you removed. 3. Do this to all three sides of the triangle. • Step Three. Repeat this process infinitely. • The snowflake has a finite area bounded by a perimeter of infinite length! (A numerical check follows.)
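A quick numerical check of that last claim; a minimal sketch (Python is assumed here and below, since the slides contain no code) tracking the perimeter and area after each construction step:

```python
import math

# Track the Koch snowflake's perimeter and area per construction step,
# starting from an equilateral triangle with unit sides.
side = 1.0               # current segment length
segments = 3             # current number of segments
area = math.sqrt(3) / 4  # area of the unit equilateral triangle

for step in range(6):
    print(f"step {step}: perimeter = {segments * side:.4f}, area = {area:.4f}")
    # each existing segment spawns a new small triangle of side length side/3
    area += segments * math.sqrt(3) / 4 * (side / 3) ** 2
    segments *= 4        # every segment is replaced by 4 shorter ones
    side /= 3            # each new segment is 1/3 the old length
```

The perimeter grows by a factor of 4/3 at every step and diverges, while the area converges to 8/5 of the original triangle's area.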
Real world fractals A cloud, a mountain, a flower, a tree or a coastline… The coastline of Britain
Fractal geometry: the language of nature • Euclidean geometry: cold and dry • Nature: complex, irregular, fragmented “Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.” (Mandelbrot)
Euclidean dimension • In Euclidean geometry, dimensions of objects are defined by integers. • 0 - a point • 1 - a curve or line • 2 - triangles, circles, or surfaces • 3 - spheres, cubes, and other solids
Fractal dimension • A fractal dimension can be a non-integer. • Intuitively, the fractal dimension measures how much space the fractal occupies. • If a self-similar shape can be divided into n parts, where the whole is s times the size of each part, the fractal dimension is: d = log n / log s
Example: Koch snowflake • After the first step, each side is replaced by four segments, each one-third the original length. • So the fractal dimension is: d = log 4 / log 3 = 1.2618... • The curve takes up more space than a one-dimensional line segment, but less than a filled two-dimensional square.
Another example: Cantor Set • The oldest, simplest, most famous fractal. 1. Begin with the closed interval [0,1]. 2. Remove the open middle interval (1/3, 2/3), leaving two closed intervals behind. 3. Repeat the procedure, removing the "open middle third" of each remaining interval. 4. Continue infinitely. • Fractal dimension: d = log 2 / log 3 = 0.63… • Uncountably many points, yet total length zero
Cantor square • Fractal dimension: d = log 4 / log 3 = 1.26
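The similarity-dimension formula d = log n / log s from the previous slides is a one-liner to compute; a minimal sketch covering all three examples:

```python
import math

def similarity_dimension(n: int, s: float) -> float:
    """d = log n / log s for a shape made of n copies, each scaled down by s."""
    return math.log(n) / math.log(s)

print(similarity_dimension(4, 3))  # Koch curve:    1.2618...
print(similarity_dimension(2, 3))  # Cantor set:    0.6309...
print(similarity_dimension(4, 3))  # Cantor square: 1.2618...
```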
The Mandelbrot Set • The Mandelbrot set is a connected set of points in the complex plane. • Calculate: Z1 = Z0^2 + Z0, Z2 = Z1^2 + Z0, Z3 = Z2^2 + Z0, … • If the sequence Z0, Z1, Z2, Z3, ... remains within distance 2 of the origin forever, then the point Z0 is said to be in the Mandelbrot set. • If the sequence diverges from the origin, then the point is not in the set.
Colored Mandelbrot Set • Colors are added to the points that are not inside the set, based on how quickly they escape. Then we just zoom in on it.
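A sketch of the escape-time computation behind such pictures (the iteration cap of 100 and the sample points are arbitrary choices for illustration):

```python
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z^2 + c starting from z = c; return the step at which
    |z| first exceeds 2, or max_iter if it never does within the cap."""
    z = c
    for i in range(max_iter):
        if abs(z) > 2:
            return i      # escaped: outside the set; i drives the color
        z = z * z + c
    return max_iter       # taken to be inside the Mandelbrot set

print(escape_time(0))     # 100: 0 stays at the origin, in the set
print(escape_time(1))     # 2:   1 escapes almost immediately
print(escape_time(-1))    # 100: -1 cycles between -1 and 0, in the set
```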
Applications of fractals • Astronomy: the structure of our universe • Superclusters - clusters - galaxies - star systems (Solar system) - planets - moons • Every level of detail of the universe shows the same clustering patterns. • It can be modeled by a random Cantor square. • The fractal dimension of our universe: 1.23
Applications of fractals • Rings of Saturn • Originally, it was believed that the ring was a single one. • After some time, a break in the middle was discovered, and scientists considered it to be 2 rings. • However, when Voyager I approached Saturn, it discovered that the two rings were also broken in the middle, and the 4 smaller rings were broken as well. • Eventually, it identified a very large number of breaks, continually dividing even small rings into smaller pieces. • The overall structure is amazingly similar to... the Cantor Set
Application of Fractals • The Human Body THE LUNGS: • Formed by splitting lines • Fractal canopies THE BRAIN: • The surface of the brain contains a large number of folds. • Humans, the most intelligent animals, have the most folded brain surfaces. • Geometrically, an increase in folding means an increase in dimension. • In humans it is the highest, as large as 2.73 - 2.79
Plants • A tree branch looks similar to the entire tree. • A fern leaf looks almost identical to the entire fern. • One classic way of creating fractal plants is by means of L-systems (Lindenmayer systems), as sketched below.
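A minimal sketch of L-system string rewriting; the axiom and rule below are one classic bracketed-plant grammar chosen for illustration, not taken from the slides:

```python
def lsystem(axiom: str, rules: dict, iterations: int) -> str:
    """Rewrite every symbol in parallel, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# F = draw a segment, +/- = turn, [ ] = push/pop the drawing state
plant = lsystem("F", {"F": "F[+F]F[-F]F"}, 2)
print(plant)  # feeding this string to a turtle renderer draws a small plant
```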
Bacteria Cultures • A bacteria culture is all the bacteria that originated from a single ancestor and live in the same place. • When a culture grows, it spreads outwards in different directions from the spot where the original organism was placed. • The spreading of bacteria can be modeled by fractals such as diffusion-limited aggregation (DLA) fractals.
Data Compression • A color, full-screen GIF image of the Mandelbrot Set occupies about 35 kilobytes. • The formula z = z^2 + c takes only about 7 bytes (a 99.98% reduction!). • This could work for other photos as well: the goal is to find functions, each of which produces some part of the image. • IFS (Iterated Function Systems) are the key (see the sketch below).
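To make the IFS idea concrete, here is a sketch of the "chaos game" for Barnsley's fern: four affine maps (28 numbers in total) that reproduce a detailed fern image when iterated:

```python
import random

# Barnsley's classic fern IFS: (a, b, c, d, e, f, probability) per map
maps = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # successively smaller leaflets
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # left leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # right leaflet
]

x, y = 0.0, 0.0
points = []
for _ in range(10000):
    a, b, c, d, e, f, _p = random.choices(maps, weights=[m[6] for m in maps])[0]
    x, y = a * x + b * y + e, c * x + d * y + f  # apply the chosen affine map
    points.append((x, y))
# plotting `points` reproduces the fern from just these 28 numbers
```

Fractal image compression runs this idea in reverse: find an IFS whose attractor approximates a given image.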
Weather • Weather behaves very unpredictably. • Sometimes it changes very smoothly; other times it changes very rapidly. • Edward Lorenz came up with three formulas that could model the changes of the weather. • Used to create a 3D strange attractor, these formulas form the famous Lorenz Attractor, which is a fractal pattern.
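A rough sketch of integrating Lorenz's three equations with simple Euler steps (the parameters are the classic sigma = 10, rho = 28, beta = 8/3; the step size and starting point are arbitrary choices):

```python
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
x, y, z = 1.0, 1.0, 1.0
dt = 0.01

trajectory = []
for _ in range(5000):
    dx = sigma * (y - x)          # the three Lorenz equations
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    trajectory.append((x, y, z))
# plotting `trajectory` in 3D traces the butterfly-shaped strange attractor
```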
Fractal Antenna • Practical shrinkage of 2-4 times is realizable with acceptable performance. • Smaller, but with even better performance.
Electronic Transmission Error • During electronic transmissions, electronic noise sometimes interferes with the transmitted data. • Although making the signal more powerful would drown out some of this harmful noise, some of it persisted, creating errors during transmission. • Errors occur in clusters; a period of no errors is followed by a period with many errors. • On any scale of magnification (month, day, hour, 20 minutes, …), the proportion of error-free transmission to error-ridden transmission stays constant. • Mandelbrot studied the mathematical process that enables us to create random Cantor dust, which describes remarkably well the fractal structure of the batches of errors on computer lines.
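A toy illustration of random Cantor dust (this particular recursive construction is an assumption for the sketch, not Mandelbrot's exact model): keep two random thirds of each interval, so the surviving "error bursts" cluster at every scale:

```python
import random

def random_cantor(lo: float, hi: float, depth: int, bursts: list) -> None:
    """Recursively keep 2 of 3 random sub-intervals; leaves are error bursts."""
    if depth == 0:
        bursts.append((lo, hi))
        return
    third = (hi - lo) / 3
    for i in random.sample([0, 1, 2], 2):  # keep two thirds, chosen at random
        random_cantor(lo + i * third, lo + (i + 1) * third, depth - 1, bursts)

bursts = []
random_cantor(0.0, 1.0, 5, bursts)
print(len(bursts))  # 2^5 = 32 bursts, clustered self-similarly in [0, 1]
```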
Network Traffic Model Packet delays as a function of time in a WAN environment: • top diagram - absolute values of the RTT parameter in a virtual channel; • bottom diagram - fractal structure of the packet flow that exceeded a 600 ms threshold.
Fractal Summary • Fractals are self-similar or self-affine structures. • A fractal object has a fractal dimension. • Fractals model many natural objects and processes. • They are nature's language. • They have very broad applications.
Part II Bayesian Networks
Bayesian Networks Review • Bayesian Networks • Examples • Belief Updating and Belief Revision • The Joint Probability Space and Brute-Force Inference
Bayesian Networks • Bayesian Networks, also called Bayesian belief networks, causal networks, or probabilistic networks, are a network-based framework for representing and analyzing causal models involving uncertainty. • A BBN is a directed acyclic graph (DAG) with conditional probabilities for each node. • Nodes represent random variables in a problem domain. • Arcs represent conditional dependence relationships among these variables. • Each node has a CPT (Conditional Probability Table) that gives the probabilities of the node taking specific values given the values of its parent nodes.
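As a sketch of how a BBN factorizes the joint distribution, here is a tiny two-child network in plain Python dictionaries (the structure follows the family-out example on the next slide; the probability values are made up for illustration):

```python
# P(F): family out;  P(L|F): light on;  P(D|F): dog out
prior_F = {True: 0.15, False: 0.85}
cpt_L = {True: 0.60, False: 0.05}  # P(L = true | F)
cpt_D = {True: 0.90, False: 0.30}  # P(D = true | F)

def joint(f: bool, l: bool, d: bool) -> float:
    """P(F, L, D) factorizes as P(F) * P(L | F) * P(D | F)."""
    p_l = cpt_L[f] if l else 1 - cpt_L[f]
    p_d = cpt_D[f] if d else 1 - cpt_D[f]
    return prior_F[f] * p_l * p_d

print(joint(True, True, True))  # one cell of the 2^3-entry joint distribution
```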
Family-Out Example [Figure: network with nodes Bowel-Problem, Family-Out, Light-On, Dog-Out, Hear-Bark] • "Suppose when I go home at night, I want to know if my family is home before I try the doors. (Perhaps the most convenient door to enter is double locked when nobody is home.) Now, often when my wife leaves the house, she turns on an outdoor light. However, she sometimes turns on the light if she is expecting a guest. Also, we have a dog. When nobody is home, the dog is put in the back yard. The same is true if the dog has bowel problems. Finally, if the dog is in the back yard, I will probably hear her barking (or what I think is her barking), but sometimes I can be confused by other dogs."
Why are BBNs important? • They offer a compact, intuitive, and efficient graphical representation of dependence relations between entities of a problem domain (modeling the world more naturally than rule-based systems and neural networks). • They handle uncertain knowledge in a mathematically rigorous yet efficient and simple way. • They provide a computational architecture for computing the impact of evidence nodes on the beliefs (probabilities) of query nodes of interest. • Growing numbers of creative applications.
Alarm Example: the power of BBN • [Figure: the ALARM monitoring network, with nodes including MINVOLSET, KINKEDTUBE, PULMEMBOLUS, INTUBATION, VENTMACH, DISCONNECT, ..., HRBP, BP] • The Alarm network: 37 variables, 509 parameters (instead of the 2^37 entries of the full joint distribution)
Applications • Medical diagnostic systems • Real-time weapons scheduling • Jet-engine fault diagnosis • Intel processor fault diagnosis (Intel) • Generator monitoring expert system (General Electric) • Software troubleshooting (Microsoft Office Assistant, Win98 print troubleshooting) • Space shuttle engine monitoring (Vista project) • Biological sequence analysis and classification • ……
Bayesian Networks Inference • Given observed evidence, compute answers to queries. • Evidence e is an assignment of values to a set of variables E in the domain, E = { Xk+1, …, Xn }. • For example, E = e: { Visit Asia = True, Smoke = True }. • Queries: • The posterior belief: compute the conditional probability of a variable given the evidence, e.g. P(Lung Cancer | Visit Asia = True, Smoke = True) = ? • This kind of inference task is called Belief Updating. • MPE: compute the Most Probable Explanation given the evidence. • An explanation of the evidence is a complete assignment { X1 = x1, …, Xn = xn } consistent with the evidence. Computing the MPE means finding an explanation such that no other explanation has higher probability. • This kind of inference task is called Belief Revision.
Belief Updating • The problem is to compute P(X = x | E = e): the probability of query nodes X given the observed values of evidence nodes E = e. For example: suppose a patient arrives and it is known for certain that he has recently visited Asia and has dyspnea. - What impact does this evidence have on the probabilities of the other variables in the network? P(Lung Cancer | evidence) = ? [Figure: the Asia network - nodes Visit to Asia, Smoking, Tuberculosis, Lung Cancer, Tub. or Lung Cancer, Bronchitis, X-Ray, Dyspnea]
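A brute-force sketch of belief updating on a toy three-node network (the family-out fragment from Part II, with invented probabilities): sum the joint over the unobserved variables, then normalize by the probability of the evidence:

```python
from itertools import product

prior_F = {True: 0.15, False: 0.85}    # P(family out)
cpt_L = {True: 0.60, False: 0.05}      # P(light on | F)
cpt_D = {True: 0.90, False: 0.30}      # P(dog out  | F)

def joint(f, l, d):
    p_l = cpt_L[f] if l else 1 - cpt_L[f]
    p_d = cpt_D[f] if d else 1 - cpt_D[f]
    return prior_F[f] * p_l * p_d

# P(F = true | L = true): marginalize out D, then normalize
num = sum(joint(True, True, d) for d in (True, False))
den = sum(joint(f, True, d) for f, d in product((True, False), repeat=2))
print(num / den)  # ~0.679 with these made-up numbers
```

This enumeration touches every cell of the joint distribution, which is exactly why it breaks down beyond small networks.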
Belief Revision • Let W be the set of all nodes in our given Bayesian network. • Let the evidence e be the observation that the roses are okay. • Our goal is to determine the assignment w to all nodes that maximizes P(w | e). We only need to consider assignments where the node Roses is set to okay and maximize P(w), i.e. find the most likely "state of the world" given the evidence that the roses are okay in "this world". • The best solution then becomes: P(sprinklers = F, rain = T, street = wet, lawn = wet, soil = wet, roses = okay) = 0.2646
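A matching sketch of belief revision on the same toy network as above: enumerate every complete assignment consistent with the evidence and keep the one with the highest joint probability (again feasible only for tiny networks):

```python
from itertools import product

prior_F = {True: 0.15, False: 0.85}
cpt_L = {True: 0.60, False: 0.05}
cpt_D = {True: 0.90, False: 0.30}

def joint(f, l, d):
    p_l = cpt_L[f] if l else 1 - cpt_L[f]
    p_d = cpt_D[f] if d else 1 - cpt_D[f]
    return prior_F[f] * p_l * p_d

evidence_l = True  # observed: the light is on
best = max(
    ((f, evidence_l, d) for f, d in product((True, False), repeat=2)),
    key=lambda w: joint(*w),
)
print(best, joint(*best))  # the MPE: the most probable complete assignment
```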
Complexity of BBN Inference • Probabilistic inference using belief networks is NP-hard. [Cooper 1990] • Approximating probabilistic inference in Bayesian belief networks is NP-hard. [Dagum 1993] • Hardness does not mean we cannot solve inference. It implies that: • We cannot find a general procedure that works efficiently for all networks. • However, for particular families of networks, we can have provably efficient algorithms, either exact or approximate. • Instead of a general exact algorithm, we look for special-case, average-case, and approximate algorithms. • A variety of approximate, heuristic, hybrid, and special-case algorithms should be taken into consideration.
BBN Inference Algorithms • Exact algorithms • Pearl's message propagation algorithm (for singly connected networks only) • Variable elimination • Cutset conditioning • Clique tree clustering • SPI (Symbolic Probabilistic Inference) • Approximate algorithms • Partial evaluation methods: perform exact inference partially • Variational approaches: exploit averaging phenomena in dense networks (law of large numbers) • Search-based algorithms: convert the inference problem to an optimization problem, then use heuristic search to solve it • Stochastic sampling, also called Monte Carlo algorithms
Approximate Algorithms • Exact inference for large-scale networks is generally infeasible. • Real-life networks can have up to thousands of nodes. For example, QMR (Quick Medical Reference) combines statistical and expert knowledge for approximately 600 significant diseases and 4000 findings. The median size of the maximal clique of the moralized graph is 151.5 nodes, which is intractable for all exact inference algorithms. • Approximate algorithms fall into the four categories listed on the previous slide: partial evaluation, variational approaches, search-based algorithms, and stochastic sampling (sketched below).
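As a sketch of the simplest stochastic sampling scheme, logic sampling with rejection on the toy network from Part II: draw complete samples in topological order, discard those that contradict the evidence, and count:

```python
import random

prior_F = {True: 0.15, False: 0.85}
cpt_L = {True: 0.60, False: 0.05}
cpt_D = {True: 0.90, False: 0.30}

def sample():
    """Forward-sample one complete assignment in topological order."""
    f = random.random() < prior_F[True]
    l = random.random() < cpt_L[f]
    d = random.random() < cpt_D[f]
    return f, l, d

# estimate P(F = true | L = true) by rejecting samples with L = false
accepted = [f for f, l, d in (sample() for _ in range(100_000)) if l]
print(sum(accepted) / len(accepted))  # ~0.679, matching exact enumeration
```

Rejection wastes samples when the evidence is unlikely, which is what motivates likelihood weighting and other smarter Monte Carlo schemes.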
Inference Algorithm Conclusions • The general problem of exact inference is NP-hard. • The general problem of approximate inference is NP-hard. • Exact inference works for small, sparse networks only. • There is no single champion among exact or approximate inference algorithms. • The goal of research should be to identify effective approximate techniques that work well on large classes of problems. • Another direction is the integration of various kinds of approximate and exact algorithms, exploiting the best characteristics of each.
Part III BBN Inference Using Fractals? • Can the fractal property of the joint probability space be exploited for BBN inference?
The End • Any questions?