Programming Neural Networks and Fuzzy Systems in FOREX Trading
Presentation 1
Balázs Kovács (Terminator 2), PhD Student, Faculty of Economics, University of Pécs, E-mail: kovacs.balazs.ktk@gmail.com
Dr. Gabor Pauler, Associate Professor, Department of Information Technology, Faculty of Science, University of Pécs, E-mail: pauler@t-online.hu
Content of the Presentation
• Artificial Neural Networks (ANN):
  • Function of Neurons
    • Long-term memory
    • Short-term memory, Membrane value aggregation
    • Activation, Firing
  • Types of Synapses
  • Topology
    • Neuron fields
      • Input
      • Hidden
      • Output
    • Set of Synapses
      • Intra-field
        • Complementary
        • Competitive
      • Inter-field
        • Direct
        • Full
        • Kohonen
        • Sanger
• Graphical representations of ANN
  • Topology diagram
  • I/O diagram
  • Control function diagram
• References
Artificial Neural Network (ANN): Function of Neurons 1
[Figure: a neuron with incoming synapses wji, membrane value xit, and signal function Si(xit) with bounds li, ui, firing threshold ai and slope bi]
• An Artificial Neural Network (Mesterséges Neurális Hálózat) is a distribution-free estimator with machine learning, based on an interconnected system of Neurons/Cells/Nodes (Idegsejt): separate units which have the following functions:
• Non-volatile Memory (Permanens memória): the synapses connecting the j=1..m neurons with the i=1..n neurons of the network during the t=0..T time periods transmit the sjt∈R signals of the jth neuron in the tth period with changing wjit Intensity/Weight (Súly). Teaching/Training (Tanítás) the net means changing the initially random wji0∈R weights. All learnt information is stored as synaptic weights
• Volatile Memory (Rövid távú memória): a neuron aggregates the wjit×sjt weighted signals of its incoming synapses into an xit Membrane value (Membrán érték) in the Activation Process (Aktivációs folyamat). Additionally, the membrane value Passively decays (Passzív lecsengés) by the (1-di) Decay Rate (Lecsengési ráta), which keeps it within the [li, ui] Lower/Upper bounds (Alsó/Felső korlát) and Smooths (Simít) its changes in time. There are 2 methods of membrane value aggregation:
• Additive (Additív): xit = di×(Σj(wjit×sjt)/Σj(wjit)) + (1-di)×xit-1, i=1..n, j=1..m, t=1..T (1.1)
• Multiplicative (Multiplikatív): xit = di×(Πj(sjt^wjit))^(1/Σj(wjit)) + (1-di)×xit-1, i=1..n, j=1..m, t=1..T (1.2)
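The two aggregation rules (1.1) and (1.2) can be sketched in plain Python; the function and variable names below are illustrative, not from the slides:

```python
import math

def additive_aggregation(weights, signals, x_prev, d):
    # Eq. (1.1): x_t = d * (sum_j w_j*s_j / sum_j w_j) + (1 - d) * x_{t-1}
    weighted_mean = sum(w * s for w, s in zip(weights, signals)) / sum(weights)
    return d * weighted_mean + (1 - d) * x_prev

def multiplicative_aggregation(weights, signals, x_prev, d):
    # Eq. (1.2): x_t = d * (prod_j s_j^w_j)^(1/sum_j w_j) + (1 - d) * x_{t-1}
    weighted_geo_mean = math.prod(s ** w for w, s in zip(weights, signals)) \
                        ** (1 / sum(weights))
    return d * weighted_geo_mean + (1 - d) * x_prev
```

With d = 1 the neuron keeps no memory of xit-1; with d close to 0 the membrane value changes slowly, which is exactly the smoothing role of the decay rate described above.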
Artificial Neural Network (ANN): Function of Neurons 2
[Figure: three neurons showing linear, sectionwise linear and sigmoid signal functions Si(xit) with bounds li, ui, threshold ai and slope bi]
• Firing (Tüzelés): in the t=1..T time periods, when the xit aggregated membrane values of the i=1..n neurons are large enough, they emit sit signals through their Si(xit) Signal Function (Jelzési függvény): these are monotonically increasing functions, bounded by the [li, ui] lower/upper bounds. Their Inflection point (Inflexiós pont) is called the ai Firing Threshold (Tüzelési határérték), and they have bi Slope (Meredekség) at that point:
• The signal function can be Linear (Lineáris): sit+1 = (xit - ai)/bi + (ui - li)/2, i=1..n, t=1..T (1.3). It is usually used only in neurons receiving input
• Or Sectionwise linear (Szakaszonként lineáris): sit+1 = Max(li, Min(ui, (xit - ai)/bi + (ui - li)/2)), i=1..n, t=1..T (1.4). It is faster to compute, with no floating-point code, but gives a less exact approximation
• Or Sigmoid/Logistic (S-görbe): sit+1 = 1/(1+exp(-(xit - ai)/bi))×(ui - li) + li, i=1..n, t=1..T (1.5). This is the most general (and biologically correct) form, giving a more exact approximation, but its floating-point code takes more time to run
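A quick sketch of the three signal functions (1.3)-(1.5) in Python (the names are illustrative, not from the slides):

```python
import math

def linear_signal(x, a, b, lo, hi):
    # Eq. (1.3): unbounded linear response around the firing threshold a
    return (x - a) / b + (hi - lo) / 2

def sectionwise_linear_signal(x, a, b, lo, hi):
    # Eq. (1.4): the linear response clipped to the [lo, hi] signal bounds
    return max(lo, min(hi, (x - a) / b + (hi - lo) / 2))

def sigmoid_signal(x, a, b, lo, hi):
    # Eq. (1.5): logistic curve scaled to [lo, hi], inflection point at x = a
    return 1 / (1 + math.exp(-(x - a) / b)) * (hi - lo) + lo
```

All three emit the midpoint of [lo, hi] at x = a; they differ in how they behave far from the threshold: the linear form is unbounded, the clipped form saturates abruptly, and the sigmoid saturates smoothly.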
Artificial Neural Network (ANN): Types of Synapses 1
[Figure: two neurons i and k connected by synapses wik and wki]
• The sit signals of the i=1..n neurons, emitted at firing in the t=1..T time periods, are propagated through Synapses/Edges (Szinapszis): a connection between neurons i and k with a real-valued weight wikt∈R, which can change over time t and is used for weighting the transmitted signals: (sit × wikt). Types of synapses:
• By Strength (Erősség) they can be:
  • Exciting (Izgató): positive weight, wikt > 0
  • Blocking (Gátló): negative weight, wikt < 0
• By Signal Encoding (Jelkódolás) they can be:
  • Amplitude Modulation, AM (Amplitúdó-modulált): info is carried by the amplitude of the signal impulses. This is simpler, but enables less exact control and is prone to random noise. Therefore in biology it is used for simple controls (e.g. bowel movement), but in math it is preferred for its simplicity, and there is no unwanted noise in a math model
  • Frequency Modulation, FM (Frekvencia modulált): info is carried by the frequency of impulses with the same amplitude. This enables more exact control and is less prone to noise, but is much more complicated. Therefore it is used only in biology, for more exact sensory controls (e.g. eyes)
• By Direction (Irányultság) they can be:
  • Feedforward (Előrecsatoló): only transmits signal from input towards output; the reverse is blocked: wikt∈R, wkit = 0
  • Recurrent (Visszacsatoló): only transmits signal from output to input as feedback; the reverse is blocked: wikt = 0, wkit∈R
Artificial Neural Network (ANN): Types of Synapses 2
[Figure: two neurons with a bidirectional synapse wik and a self-connection wii]
• Bidirectional (Kétirányú): transmits signal in both directions: wijt = wjit∈R
• Recursive/Recurrent/Self-connected (Önmagára visszacsatoló): the starting and ending neuron is the same: wiit∈R. It is quite seldom in math models, but in biology it has an important role in FM control as a frequency generator
• By Delay (Késleltetés) synapses can be:
  • Prompt (Azonnali): effective immediately in the next t period; this is the most frequent
  • Delayed (Késleltetett): the connection is effective at the t+τ period, where τ is the delay. It is used to load Time windows (Időbeli ablak) from a Temporal Data Series (Idősor) as input into the network. This is equivalent to a Finite Impulse Response, FIR (Véges impulzusszámú válasz) filter
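A minimal sketch of how delayed synapses present a time window of a temporal series to the input field (the helper name is hypothetical):

```python
def time_windows(series, width):
    # Delays tau = 0..width-1 mean that at each step t the network sees
    # the last `width` values of the series at once (FIR-filter style).
    return [series[t - width + 1 : t + 1] for t in range(width - 1, len(series))]
```

For example, time_windows(prices, 5) would feed 5-period windows of a currency price series into a 5-neuron input field, one window per time step.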
Artificial Neural Network (ANN): Topology: Neuron Fields
[Figure: a three-field topology: linear input neurons connected by wih synapses to hidden neurons, connected by who synapses to output neurons]
• Artificial Neural Network, ANN Topology (Mesterséges neurális hálózat topológiája):
  • The set of neurons and the system of synapses connecting them
  • It determines how difficult information the network can learn
  • It is described with a Directed Graph (Irányított gráf), where the Nodes (Csomópontok) are neurons and the Edges (Élek) are synapses
• Neuron Field (Neuronmező): a set of i=1..n neurons with the same membrane aggregation (Σj), Si signal function, ai firing threshold, bi signal function slope, and ui, li signal bounds. They are located in the same place, ordered into 1- or more-Dimensional (Dimenziós) Arrays (Tömb), and they relate to their environment in the same way. Field types by relation:
  • i=1..n Input Field (Input mező): input neurons get their membrane value from Receptors (Érzékelők) or from a sample database and emit it through a linear signal function towards a hidden or output field
  • h=1..H Hidden Field (Rejtett mező): hidden neurons aggregate their membrane values from the signals of the input field or the previous hidden field and emit them through a (usually sigmoid) signal function towards the output field or the next hidden field. Usually this field has more neurons than the input or output fields. More neurons in general result in a more exact approximation but higher computational requirements. But this is not true endlessly: there is an optimal number, above which performance decreases. There is no exact formula to determine the optimal number, only experimenting!
  • o=1..O Output Field (Output mező): output neurons aggregate their membrane value from the signals of the input or hidden fields and emit it through a (usually sigmoid) signal function towards the controlled or estimated system
• Width (Szélesség) of a neuron field: the number of neurons in the field
• Depth (Mélység) of the topology: the number of fields in the topology
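The input, hidden and output fields described above can be sketched as one forward sweep through a fully connected topology (a minimal illustration, assuming a linear input field and sigmoid hidden/output fields with a=0, b=1 and bounds [0, 1]; all names are illustrative):

```python
import math

def sigmoid(x):
    # Eq. (1.5) with a=0, b=1, bounds [0, 1]
    return 1 / (1 + math.exp(-x))

def forward_pass(inputs, w_input_hidden, w_hidden_output):
    # Each row of a weight matrix holds the incoming weights of one neuron
    # in the next field; input neurons just pass their values on (linear).
    hidden = [sigmoid(sum(w * s for w, s in zip(row, inputs)))
              for row in w_input_hidden]
    output = [sigmoid(sum(w * s for w, s in zip(row, hidden)))
              for row in w_hidden_output]
    return output
```

Here the width of the hidden field is the number of rows of w_input_hidden, and the depth of the topology is the number of such sweeps chained together.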
Artificial Neural Network (ANN): Topology: Set of Synapses
[Figure: intra-field and inter-field synaptic sets (wih, who) between an input field, a competitively interconnected hidden field and an output field]
• Types of Intra-Field (Mezőn belüli) synaptic sets:
  • No connections: this is the most common case
  • Complementary (Komplementer): positive weights, very seldom used
  • Competitive (Kompetitív): negative weights; the neurons Mutually (Kölcsönösen) block each other. It has a very important role in Winner Take All, WTA (Győztes mindent visz) situations, e.g. when the network has to select the best-fitting or most effective output from many possible ones
• Types of Inter-Field (Mezőközi) synaptic sets:
  • Direct Connection (Direkt csatolás): 1:1 connection of the neurons of 2 fields; used very seldom
  • Full Connection (Teljes csatolás): N:M Cartesian product (Descartes-szorzat), an "everything by everything"-type connection of the neurons of 2 fields; used most frequently. It has 2 special subcases:
    • Kohonen-type Self-Organizing Map, SOM Connection (Kohonen alakfeltérképező csatolás): a small linear input field (where each neuron represents an input variable) connects fully feedforward to a competitively interconnected sigmoid output field. There the output is the discrete winner neuron (e.g. "Sell" or "Buy") depending on the input situation (e.g. currency pair prices). It is the equivalent of K-means clustering (K-közép klaszterezés) in statistics
    • Sanger Connection (Sanger-féle csatolás): a large linear input field (e.g. many input variables with scattered info, even sex scandals of IMF presidents) connects fully feedforward to a smaller sigmoid hidden field to compress the input info into fewer, but more meaningful variables. It is the equivalent of Factor Analysis (Faktoranalízis)
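The Winner-Take-All behaviour of a competitive field, which the Kohonen SOM output relies on, reduces at its fixed point to selecting the strongest neuron (a sketch under that simplification; the names are illustrative):

```python
def winner_take_all(signals):
    # Mutual negative weights eventually silence every neuron except the
    # one with the largest signal; we jump straight to that fixed point.
    winner = max(range(len(signals)), key=lambda i: signals[i])
    return [1.0 if i == winner else 0.0 for i in range(len(signals))]
```

Applied to the output field's signals, this picks the single decision neuron, e.g. "Buy" versus "Sell".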
Graphical representation of ANN: Topology diagram
[Figure: topology diagram of a three-field network with wih and who synapse edges]
• A Topology Diagram (Topológiai diagram) is a Directed Graph (Irányított gráf), where:
• Nodes (Csomópont) are neurons:
  • They contain a 2-dimensional chart where the horizontal axis shows the xj membrane value and its aggregation method (Σj or Πj)
  • The vertical axis shows the sj signal emitted by the Sj signal function with ai firing threshold, bi slope, and ui, li signal bounds
• Edges (Él) are the wij weighted synapses:
  • The direction of the edge (forward, backward, bidirectional) shows the signal transfer
  • Its color marks a positive or negative weight
  • Its thickness shows the intensity of the weight
• Nodes are ordered into neuron fields
• Within fields, nodes can be ordered in a 1- or 2-dimensional grid/array
• The input field is usually at the bottom or left, while the output is on top or right
• A topology diagram can show the structure and parameterization of even large networks pretty well
• But it does not show anything about how the network transforms inputs into output. Therefore we use other diagrams:
Graphical representation of ANN: I/O diagram
[Figure: I/O diagram of a neuron: incoming weighted signals wji×Sj(xj), aggregation into xi, signal function Si(xi), outgoing weighted signals wik×Si(xi)]
• The I/O Diagram (I/O diagram) is the most detailed graphical representation of the functional transformations of neural activation, but it can handle only 2 incoming signals per neuron:
  • The 2 incoming weighted signals appear on a 1-dimensional coordinate axis. Then an additive or multiplicative "aggregation grid" symbolizing the dendrites aggregates the 2 signals
  • The signal function in the node transforms the aggregated membrane value into the outgoing signal
  • Synaptic weights appear as linear functions transforming the travelling signal at the "curves" of the road-like synapses
Graphical representation of ANN: Control Function Diagram
• It works only with 2 input variables (x, y); therefore there are 2 linear input neurons at the bottom
• It shows the emitted signal values of the neurons in the coordinate system of the (x, y) input variables, for a multi-layered, fully feedforward network (e.g. how a scanner's character recognizer models characters in the 4th layer from the incoming x, y ink-dot coordinates)
References
• General intro to neural theory (HUN): http://www.cs.ubbcluj.ro/~csatol/mestint/pdfs/neur_halo_alap.pdf
• Basic textbook (HUN): Borgulya István: Neurális Hálózatok és Fuzzy Rendszerek, Dialog Campus, 1999
• Internet textbook (ENG): http://www.dlsi.ua.es/~mlf/nnafmc/pbook/pbook.html
• Bibliography (ENG): http://liinwww.ira.uka.de/bibliography/Neural/art.html