“Statistical Approach Using Neural Nets”: Nuclear Masses and Half-lives
E. Mavrommatis, S. Athanassopoulos, A. Dakos (University of Athens)
K. A. Gernoth (UMIST), J. W. Clark (Washington University, Saint Louis)
NUPECC Town Meeting, GSI, 2003
Contents • Introduction • ANNs for global modeling of nuclear properties • Nuclear masses • Half-lives of β⁻-decaying nuclides • Conclusions - Prospects
Global models (compared by number of parameters and input)
• Hamiltonian-based
• Masses: Möller et al. (FRDM); Pearson et al. (HFBCS-1)
• Half-lives: Möller et al. (FRDM); Klapdor et al.
• Statistical
• Neural networks
• …
Artificial Neural Networks (ANNs)
• Systems of neuron-like units that are arranged in some architecture and connected with each other through weights
• Different applications of ANNs, including scientific problems: J. W. Clark, T. Lindenau & F. Ristig, Scientific Applications of NNs (Springer, 1999)
• We focus on the task of approximating a fundamental mapping from one set of physical quantities to another: the ANN is trained on a subset of the existing database to create a statistical model for subsequent use in prediction.
Neural network models
We use multi-layered feed-forward supervised neural networks.
1. Architecture and dynamics: [I-H1-H2-…-HL-O], activation function
2. Training algorithms: standard back-propagation (SB), modified back-propagation (MB1)
3. Data sets: learning, validation, test
4. Coding at input and output interfaces
5. Performance measures
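The elements above can be sketched in a few lines of code. This is a minimal illustration, not the authors' actual program: the [4-10-10-10-1] architecture matches the mass nets described later in the talk, but the sigmoid activation, Gaussian weight initialization, and learning rate are assumptions for illustration.

```python
import numpy as np

# Sketch of a multi-layered feed-forward net trained by back-propagation.
# Architecture [4-10-10-10-1]; initialization and learning rate are
# illustrative assumptions, not the authors' exact setup.

rng = np.random.default_rng(0)
sizes = [4, 10, 10, 10, 1]
W = [rng.normal(0.0, 0.5, (m, n)) for n, m in zip(sizes, sizes[1:])]
b = [np.zeros(m) for m in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Return the activations of every layer for input vector x."""
    a = [np.asarray(x, dtype=float)]
    for Wl, bl in zip(W, b):
        a.append(sigmoid(Wl @ a[-1] + bl))
    return a

def backprop_step(x, t, eta=0.1):
    """One on-line gradient step on the squared error (t - o)^2 / 2."""
    a = forward(x)
    delta = (a[-1] - t) * a[-1] * (1.0 - a[-1])       # output-layer error
    for l in range(len(W) - 1, -1, -1):
        grad = np.outer(delta, a[l])
        if l > 0:                                     # error for previous layer
            delta_next = (W[l].T @ delta) * a[l] * (1.0 - a[l])
        W[l] -= eta * grad                            # adjust weights
        b[l] -= eta * delta
        if l > 0:
            delta = delta_next                        # propagate error back

# fit a single (hypothetical) pattern to its target
x, t = np.array([0.3, 0.7, 1.0, 0.0]), np.array([0.8])
for _ in range(2000):
    backprop_step(x, t)
```

After training, `forward(x)[-1]` gives the net's output for the pattern; repeating this over a whole learning set gives the global model described in the slides.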
Neural network elements
• We use multi-layered feed-forward supervised neural networks.
1. Independent variables of a known pattern are presented at the input units.
2. The information proceeds forward towards the output unit.
3. The output value o is compared with the target value t.
4. The connection weights are changed so that the cost function, proportional to (t − o)², is reduced.
5. The procedure repeats for the next pattern.
6. Learning continues until the error criterion is satisfied.
[Figure: a small feed-forward net with input units x1, x2, an output unit o, and connection weights such as w13 and w36]
Supervised learning (on-line updating)
[Flow diagram: an input pattern from the training data (input patterns and target outputs) is fed through the feed-forward neural network; the error between the network output and the target output is calculated; if the error criterion is not satisfied, the weights are adjusted to reduce the error and the next pattern is presented; if it is satisfied, training stops.]
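The on-line cycle in the flow diagram can be written out directly. In this sketch a single linear unit stands in for the full network, and the data, learning rate, and tolerance are all made-up illustrative values; the point is the pattern-by-pattern loop, not the model.

```python
import numpy as np

# Sketch of the on-line supervised-learning cycle: present a pattern,
# compute the error against the target, adjust the weights, get the
# next pattern, and stop once the error criterion is satisfied.
# A single linear unit with the LMS update stands in for the full net.

rng = np.random.default_rng(1)
patterns = rng.uniform(-1.0, 1.0, (20, 3))    # training inputs (made up)
true_w = np.array([0.5, -0.3, 0.8])
targets = patterns @ true_w                    # target outputs

w = np.zeros(3)
eta, tol = 0.1, 1e-6
for epoch in range(1000):
    sse = 0.0
    for x, t in zip(patterns, targets):        # get next pattern
        o = w @ x                              # feed forward
        err = t - o                            # compare output with target
        w += eta * err * x                     # adjust weights to reduce error
        sse += err ** 2
    if sse < tol:                              # error criterion satisfied?
        break                                  # stop
```

Because the targets here are exactly linear in the inputs, the loop recovers the underlying weights; with a real network the same cycle minimizes the cost without such an exact solution.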
References
(calculations of ΔΜ with artificial neural networks)
• S. Gazula, J. W. Clark and H. Bohr, Nucl. Phys. A540 (1992) 1
• K. A. Gernoth, J. W. Clark, J. S. Prater and H. Bohr, Phys. Lett. B300 (1993) 1
• K. A. Gernoth and J. W. Clark, Comp. Phys. Commun. 88 (1995) 1
• E. Mavrommatis, S. Athanassopoulos, K. A. Gernoth and J. W. Clark, Condensed Matter Theories, Vol. 15, G. S. Anagnostakos et al., eds. (Nova Science Publishers, N.Y. 2000) p. 207
• J. W. Clark, E. Mavrommatis, S. Athanassopoulos, A. Dakos and K. A. Gernoth, in Proceedings of the Conference on “Fission Dynamics of Atomic Clusters and Nuclei”, D. M. Brink et al., eds. (World Scientific, Singapore 2002) p. 76
• S. Athanassopoulos, E. Mavrommatis, K. A. Gernoth and J. W. Clark, submitted to Phys. Rev. C
(calculations of T1/2 with artificial neural networks)
• E. Mavrommatis, A. Dakos, K. A. Gernoth and J. W. Clark, Condensed Matter Theories, Vol. 13, J. Da Providencia and F. B. Malik, eds. (Nova Science Publishers, Commack, NY, 1998) p. 423
• E. Mavrommatis, S. Athanassopoulos, A. Dakos, K. A. Gernoth and J. W. Clark, in Proceedings of the International Conference on “Structure of the Nucleus at the Dawn of the 21st Century”, Bonsignori et al., eds. (World Scientific, Singapore 2001) p. 303
• A. Dakos, E. Mavrommatis, K. A. Gernoth and J. W. Clark, to be submitted for publication
Nuclear Masses with ANNs
Mass excess ΔΜ [binding energies, separation energies, Q-values]
Experimental values from NUBASE (G. Audi et al., Nucl. Phys. A624 (1997) 1)
Net: [4-10-10-10-1]* [281 parameters]
• Data sets: learning: 1323 (O), validation: 351 (N), from MN 1981; prediction: 158 from NUBASE (NB)
• Training: modified back-propagation algorithm; modification of learning and momentum parameters
• Coding: 4 input units: Z, N in analog plus Z, N parities; 1 output unit: ΔΜ in analog (S3 scaling)
• Performance measure: σRMS
Net: [4-10-10-10-1]** [281 parameters]
• Data sets: learning: 1303, validation: 351, from mixed MN (FRDM); prediction: 158 from NUBASE (NB)
• Training and coding: as above
• Performance measure: σRMS
Nuclear Half-lives of β⁻-decaying Nuclides with ANNs
Half-life T1/2 (ln T1/2) [ground state, β⁻ mode, branching ratio = 1]
Experimental values from NUBASE (G. Audi et al., Nucl. Phys. A624 (1997) 1)
Best net: [17-10-1]* [191 parameters]
• Data sets: T1/2 ≤ 10⁶ s [learning: 518, prediction: 174] (base B)
• Training: standard back-propagation (with momentum term)
• Coding: 16 input units: Z, N in binary; 1 input unit: Q in analog; 1 output unit: ln T1/2 in analog
• Performance measures: σRMS, compared with Klapdor et al. and Möller et al.
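For the half-life nets the coding is different: Z and N go in as binary digits. The 8-bit width per number (8 + 8 + 1 = 17 inputs) and the Q scale below are assumptions consistent with the 17-unit input layer, not details stated in the slides; the σRMS measure is the usual root-mean-square deviation.

```python
import math

# Sketch of the half-life net's coding: Z and N in 8-bit binary
# (16 input units), Q in analog (1 unit), ln(T1/2) as the analog
# output, plus the sigma_RMS performance measure.  The 8-bit width
# and the Q scale are illustrative assumptions.

def binary_units(n, bits=8):
    """Most-significant-bit-first binary coding of an integer."""
    return [(n >> i) & 1 for i in range(bits - 1, -1, -1)]

def encode(Z, N, Q_MeV, q_scale=20.0):
    """17 inputs: 8 bits of Z, 8 bits of N, one analog Q unit."""
    return binary_units(Z) + binary_units(N) + [Q_MeV / q_scale]

def sigma_rms(predicted, target):
    """Root-mean-square deviation used as the performance measure."""
    n = len(predicted)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predicted, target)) / n)
```

Binary coding gives each digit of Z and N its own unit, so neighboring nuclides can produce very different input vectors, which is one way to help a net resolve the strong local variations in β-decay half-lives.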
Conclusions
• Global models based on ANNs for nuclear masses are approaching the accuracy of models based on Hamiltonian theory.
• Global models based on ANNs for half-lives of β⁻ decay are promising.
Prospects
• Further development of global models based on ANNs for nuclear masses, half-lives, etc. (optimization techniques, pruning, construction, etc.)
• Further investigation of models of the mass differences DM = ΔMexp − ΔMFRDM
• Further insight into the statistical interpretation and modeling with ANNs
• The inverse problem
Neural network modeling, as well as other statistical strategies based on new algorithms of artificial intelligence, may prove to be a useful asset in the further exploration of nuclear phenomena far from β-stability.