Neural Network and Earthquake Prediction

Presentation Transcript


  1. CS157B Lecture 10 Neural Network and Earthquake Prediction Professor Sin-Min Lee

  2. What is Data Mining? • The process of automatically finding relationships and patterns in, and extracting meaning from, enormous amounts of data • Also called “knowledge discovery”

  3. Objective • Extracting hidden, or not easily recognizable, knowledge from large data … know the past • Predicting what is likely to happen if a particular type of event occurs … predict the future

  4. Application • Marketing example • Sending direct mail to randomly chosen people • A database of recipients’ attribute data (e.g. gender, marital status, number of children, etc.) is available • How can this company increase the response rate of its direct mail?

  5. Application (Cont’d) • Figure out the patterns and relationships of attributes that those who responded have in common • Helps in deciding what kind of group of people the company should target

  6. Data mining helps analyze large amounts of data and make decisions … but how exactly does it work? • One commonly used method is the decision tree

  7. Decision Tree • One of many methods used to perform data mining - particularly classification • Divides the dataset into multiple groups by evaluating attributes • A decision tree can be explained as a series of nested if-then-else statements (see the sketch after slide 8)

  8. Decision Tree (Cont’d) • Each non-leaf node has an associated predicate that tests an attribute of the data • Each leaf node represents a class, or category • To classify a data item, start from the root node and traverse down the tree by testing predicates and taking branches
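To make slides 7-8 concrete, here is a minimal Python sketch of a decision tree as nested if-then-else statements; the attributes, predicates, and classes are hypothetical, not from the slides:

```python
# A minimal decision-tree sketch: each non-leaf node tests a predicate
# on one attribute; each leaf names a class. Classification is a walk
# from the root, taking the branch whose predicate holds.
# The attributes ("outlook", "humidity") and classes are hypothetical.

def classify(record):
    # Root node: test the "outlook" attribute.
    if record["outlook"] == "sunny":
        # Non-leaf node: test the "humidity" attribute.
        if record["humidity"] > 70:
            return "no"      # leaf: class "no"
        return "yes"         # leaf: class "yes"
    elif record["outlook"] == "rain":
        return "no"          # leaf
    return "yes"             # leaf

print(classify({"outlook": "sunny", "humidity": 85}))  # -> "no"
```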

  9. Example of Decision Tree

  10. Advantages of Decision Tree • Simple to understand and interpret • Requires little data preparation • Able to handle nominal and categorical data • Performs well with large data in a short time • The classification conditions are easily explained by Boolean logic

  11. Advantages of Decision Tree (Cont’d) • Easy to visualize the process of classification • Can easily tell why a data item is classified in a particular category - just trace the path to the leaf and it explains the reason • Simple, fast processing • Once the tree is made, just traverse down the tree to classify the data

  12. Decision Tree is for… • Classifying datasets in which • The predicates return discrete values • No attribute has the same value for all data items

  13. CMT catalog: Shallow earthquakes, 1976-2005

  14. INDIAN PLATE MOVES NORTH COLLIDING WITH EURASIA Gordon & Stein, 1992

  15. COMPLEX PLATE BOUNDARY ZONE IN SOUTHEAST ASIA • Northward motion of India deforms all of the region • Many small plates (microplates) and blocks • Molnar & Tapponnier, 1977

  16. India subducts beneath the Burma microplate at about 50 mm/yr • Earthquakes occur at the plate interface along the Sumatra arc (Sunda trench) • These are spectacular & destructive results of many years of accumulated motion
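A back-of-the-envelope check on “many years of accumulated motion”; the 200-year locking interval is a hypothetical assumption, only the 50 mm/yr rate comes from the slide:

```python
# Slip accumulated at the plate interface, assuming the slide's 50 mm/yr
# convergence is locked and later released in a single earthquake.
rate_mm_per_yr = 50
years_locked = 200                                # hypothetical interval
slip_m = rate_mm_per_yr * years_locked / 1000
print(f"{slip_m:.0f} m of slip")                  # -> 10 m: great-earthquake scale
```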

  17. NOAA

  18. IN THE DEEP OCEAN a tsunami has a long wavelength, travels fast, and has small amplitude - it doesn’t affect ships. AS IT APPROACHES SHORE, it slows. Since energy is conserved, amplitude builds up - very damaging
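A minimal sketch of the physics in slide 18, using the shallow-water wave speed c = √(g·h) and Green’s law (amplitude grows as h^(-1/4) when energy flux is conserved); the depths are illustrative assumptions:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def wave_speed(depth_m):
    """Shallow-water wave speed c = sqrt(g * h)."""
    return math.sqrt(g * depth_m)

deep, shallow = 4000.0, 10.0            # illustrative depths, m
print(wave_speed(deep))                 # ~198 m/s (~713 km/h) in the open ocean
print(wave_speed(shallow))              # ~10 m/s near shore: the wave slows

# Green's law: conserving energy flux, amplitude scales as depth^(-1/4).
amplification = (deep / shallow) ** 0.25
print(amplification)                    # ~4.5x taller at the shore
```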

  19. TSUNAMI WARNING Because seismic waves travel much faster (km/s) than tsunamis, rapid analysis of seismograms can identify earthquakes likely to cause major tsunamis and predict when waves will arrive Deep ocean buoys can measure wave heights, verify tsunami and reduce false alarms
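To see why rapid seismogram analysis buys warning time, a rough arithmetic sketch; the distance and the speeds are illustrative assumptions:

```python
import math

distance_km = 1000                      # hypothetical epicenter-to-coast distance
seismic_speed_km_s = 8.0                # seismic P waves travel at several km/s
tsunami_speed_km_s = math.sqrt(9.81 * 4000) / 1000   # ~0.2 km/s in 4 km of water

seismic_minutes = distance_km / seismic_speed_km_s / 60   # ~2 minutes
tsunami_minutes = distance_km / tsunami_speed_km_s / 60   # ~84 minutes
print(f"warning window: ~{tsunami_minutes - seismic_minutes:.0f} minutes")
```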

  20. HOWEVER, EARTHQUAKES ARE HARD TO PREDICT - recurrence is highly variable (Sieh et al., 1989) • Extend the earthquake history with geologic records - paleoseismology • M>7: mean 132 yr, σ 105 yr • Estimated probability in 30 yrs: 7-51%
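One way to turn the slide’s statistics (mean 132 yr, σ 105 yr) into a 30-year probability is the conditional probability of rupture under an assumed recurrence distribution; the normal model and the elapsed time are assumptions for illustration:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mean, sigma = 132.0, 105.0   # recurrence statistics from the slide
elapsed = 100.0              # hypothetical years since the last M>7 event

# P(event in the next 30 yr | no event in the first `elapsed` years)
p = (normal_cdf(elapsed + 30, mean, sigma) - normal_cdf(elapsed, mean, sigma)) \
    / (1 - normal_cdf(elapsed, mean, sigma))
print(f"{p:.0%}")  # the large sigma is why estimates span ranges like 7-51%
```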

  21. EARTHQUAKE RECURRENCE AT SUBDUCTION ZONES IS COMPLICATED • In many subduction zones, thrust earthquakes have patterns in space and time • Large earthquakes have occurred in the Nankai trough area of Japan approximately every 125 years since 1498, with similar fault areas • In some cases the entire region seems to have slipped at once; in others slip was divided into several events over a few years • Repeatability suggests that a segment that has not slipped for some time is a “gap” due for an earthquake, but it’s hard to use this concept well because of variability (figure labels: GAP? NOTHING YET; Ando, 1975)

  22. 1985 MEXICO EARTHQUAKE • SEPTEMBER 19, 1985, M8.1 - A SUBDUCTION ZONE QUAKE • ALTHOUGH LARGER THAN USUAL, THE EARTHQUAKE WAS NOT A “SURPRISE” • A GOOD, MODERN BUILDING CODE HAD BEEN ADOPTED AND IMPLEMENTED

  23. 1985 MEXICO EARTHQUAKE • EPICENTER LOCATED 240 KM FROM MEXICO CITY • 400 BUILDINGS COLLAPSED IN THE OLD LAKE BED ZONE OF MEXICO CITY • SOIL-STRUCTURE RESONANCE IN THE OLD LAKE BED ZONE WAS A MAJOR FACTOR

  24. 1985 MEXICO EARTHQUAKE: ESSENTIAL STRUCTURES--SCHOOLS

  25. 1985 MEXICO EARTHQUAKE: STEEL FRAME BUILDING

  26. 1985 MEXICO EARTHQUAKE: POUNDING

  27. 1985 MEXICO EARTHQUAKE: NUEVA LEON APARTMENT BUILDINGS

  28. 1985 MEXICO EARTHQUAKE: SEARCH AND RESCUE

  29. Definition • Characteristics • Project: California Earthquake Prediction

  30. Neural Networks • AIMA - Chapter 19 • Fundamentals of Neural Networks: Architectures, Algorithms and Applications. Fausett, L., 1994 • An Introduction to Neural Networks (2nd Ed). Morton, I.M., 1995

  31. Neural Networks • McCulloch & Pitts (1943) are generally recognised as the designers of the first neural network • Many of their ideas still used today (e.g. many simple units combine to give increased computational power and the idea of a threshold)

  32. Neural Networks • Hebb (1949) developed the first learning rule (on the premise that if two neurons were active at the same time the strength between them should be increased)

  33. Neural Networks • During the 50’s and 60’s many researchers worked on the perceptron amidst great excitement • 1969 saw the death of neural network research for about 15 years - Minsky & Papert • Only in the mid 80’s (Parker and LeCun) was interest revived (in fact Werbos had discovered the algorithm in 1974)

  34. Neural Networks

  35. Neural Networks • We are born with about 100 billion neurons • A neuron may connect to as many as 100,000 other neurons

  36. Neural Networks • Signals “move” via electrochemical impulses • The synapses release a chemical transmitter - the sum of which can cause a threshold to be reached - causing the neuron to “fire” • Synapses can be inhibitory or excitatory

  37. The First Neural Networks McCulloch and Pitts produced the first neural network in 1943. Many of the principles can still be seen in neural networks of today

  38. What is a neural network? • Def 1: Imitate the brain, and surpass the brain, to manage both pattern-processing problems and symbolic problems. • Example: learning and self-organization

  39. What is a neural network? (cont.) Def 2: Complex-valued neural networks are networks that deal with complex-valued information by using complex-valued parameters and variables. Example: • A good dish: color, smell, taste • Prediction: seismic history, ground water, abnormal behavior, nearby seismic activities

  40. What is a neural network? (cont.) Def 3: brain → artificial brain → artificial intelligence → neural network Example: information processing in the real world should be flexible enough to deal with unexpectedly (geo figure) and dynamically (fore/main/after-shock) changing environments.

  41. A new sort of computer • What are (everyday) computer systems good at... and not so good at?

  42. Neural networks to the rescue • Neural network: an information processing paradigm inspired by biological nervous systems, such as our brain • Structure: a large number of highly interconnected processing elements (neurons) working together • Like people, they learn from experience (by example)

  43. Neural networks to the rescue • Neural networks are configured for a specific application, such as pattern recognition or data classification, through a learning process • In a biological system, learning involves adjustments to the synaptic connections between neurons → the same is true for artificial neural networks (ANNs)

  44. Where can neural network systems help? • When we can’t formulate an algorithmic solution • When we can get lots of examples of the behavior we require (‘learning from experience’) • When we need to pick out the structure from existing data

  45. Inspiration from Neurobiology • A neuron: a many-inputs / one-output unit • Output can be excited or not excited • Incoming signals from other neurons determine whether the neuron shall excite (“fire”) • Output is subject to attenuation in the synapses, which are the junction parts of the neuron

  46. Synapse concept • The synapse’s resistance to the incoming signal can be changed during a “learning” process • [1949] Hebb’s Rule: if an input of a neuron is repeatedly and persistently causing the neuron to fire, a metabolic change happens in the synapse of that particular input to reduce its resistance
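Hebb’s rule is often written as a weight change proportional to the product of pre- and post-synaptic activity. A minimal sketch, assuming a simple rate-based form; the learning rate and activity values are illustrative:

```python
# Hebbian update: if input x_i and output y are active together,
# strengthen weight w_i (i.e. lower that synapse's "resistance").
def hebb_update(weights, inputs, output, rate=0.1):
    return [w + rate * x * output for w, x in zip(weights, inputs)]

w = [0.0, 0.0, 0.0]
w = hebb_update(w, inputs=[1, 0, 1], output=1)
print(w)  # -> [0.1, 0.0, 0.1]: only co-active connections strengthened
```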

  47. Mathematical representation The neuron calculates a weighted sum of its inputs and compares it to a threshold: if the sum is higher than the threshold, the output is set to 1, otherwise to -1. This thresholding is the non-linearity.
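A direct sketch of slide 47’s unit in Python; the weights, inputs, and threshold values are illustrative:

```python
def threshold_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: weighted sum compared to a threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > threshold else -1   # the non-linearity

print(threshold_neuron([1, 0, 1], [0.5, 0.9, 0.4], threshold=0.8))  # -> 1
```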

  48. A simple perceptron • It’s a single-unit network • Change each weight by an amount proportional to the difference between the desired output D and the actual output Y: ΔWi = η · (D − Y) · Ii (the Perceptron Learning Rule, with learning rate η and inputs Ii)
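A runnable sketch of the rule ΔWi = η(D − Y)·Ii from the slide, trained on logical AND; the dataset, learning rate, and the added bias term (equivalent to a weight on a constant input of 1) are assumptions for illustration, not from the slide:

```python
def predict(weights, bias, inputs):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else -1

# Train on logical AND, with targets in {-1, +1}.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
weights, bias, eta = [0.0, 0.0], 0.0, 0.5   # eta is the learning rate

for _ in range(20):                          # a few epochs suffice here
    for inputs, desired in data:
        actual = predict(weights, bias, inputs)
        # Perceptron rule: delta_w_i = eta * (D - Y) * I_i
        for i, x in enumerate(inputs):
            weights[i] += eta * (desired - actual) * x
        bias += eta * (desired - actual)

print([predict(weights, bias, x) for x, _ in data])  # -> [-1, -1, -1, 1]
```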
