Baseline architecture of ITER control system Anders Wallander, Franck Di Maio, Jean-Yves Journeaux, Wolf-Dieter Klotz, Petri Makijarvi, Izuru Yonekawa ITER Organization (IO) 13067 St. Paul lez Durance, France
Basics
Goal: Demonstrate feasibility of fusion as an energy source (Q=10 means output power equals 10 times input power)
Schedule: 10 years construction phase; first plasma 2019, first D-T plasma 2027
Collaboration: CN, EU, IN, JA, KO, RF, US
This is the ITER Agreement: the project is carved into some 140 "slices", delivered in kind by the members
ITER will only work if all these links work. And this will only work if there are Standards and an Architecture.
Finite set of “Lego blocks”, which can be selected and connected as required
Plant System I&C is a deliverable by an ITER member state: a set of standard components selected from the catalogue, with one and only one plant system host.
An ITER Subsystem is a set of related plant system I&Cs.
The Plant Operation Network is the workhorse: a general-purpose flat network built on industrial managed switches and mainstream IT technology.
The Plant System Host (PSH) is an IO-furnished hardware and software component installed in a Plant System I&C cubicle. There is one and only one PSH in a Plant System I&C. The PSH runs RHEL (Red Hat Enterprise Linux) and an EPICS (Experimental Physics and Industrial Control System) soft IOC (Input Output Controller). It provides standard functions such as maintaining (monitoring and controlling) the Common Operation State (COS) of the Plant System. The PSH is fully data driven, i.e. it is customized for a particular Plant System I&C by configuration; there is no plant-specific code in a PSH. The PSH has no I/O.
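To make the PV-based interface concrete, here is a minimal sketch of a Channel Access client using the Python pyepics library; the PV name PS-EXAMPLE:COS is a hypothetical example, not an ITER naming convention, and the real PSH functions are implemented as EPICS soft IOC records rather than Python.

```python
# Minimal Channel Access client sketch; the PV name is hypothetical.
import time
import epics  # pyepics Channel Access client


def on_cos_change(pvname=None, value=None, **kw):
    # Called by pyepics whenever the Common Operation State PV updates
    print(f"{pvname} changed to {value}")


pv = epics.PV('PS-EXAMPLE:COS', callback=on_cos_change)

# Read the current state once, then watch for updates for a while
print("initial COS:", pv.get())
time.sleep(10.0)
```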
A Fast Controller is a dedicated industrial controller implemented in a PCI-family form factor with PCIe and Ethernet communication fabric. There may be zero, one or many Fast Controllers in a Plant System I&C. A Fast Controller runs RHEL and an EPICS IOC; it acts as a Channel Access server and exposes process variables (PVs) to the PON. A Fast Controller normally has I/O, and the IO supports a set of standard I/O modules with associated EPICS drivers. A Fast Controller may interface to the High Performance Networks (HPN), i.e. SDN for plasma control and TCN for absolute time and programmed triggers and clocks. Fast Controllers involved in critical real-time tasks run an RT-enabled (TBD) version of Linux on a separate core or CPU. A Fast Controller can contain plant-specific logic. A Fast Controller can act as supervisor for other Fast Controllers and/or Slow Controllers; the supervisor maintains the Plant System Operating State.
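The "Channel Access server exposing PVs" role can be sketched with the Python pcaspy library; this is an illustration only, with a hypothetical prefix and state enumeration, whereas real Fast Controllers run C/C++ EPICS IOCs.

```python
# Sketch of a soft Channel Access server exposing one PV, using pcaspy.
# The prefix and PV definition are hypothetical, not ITER conventions.
from pcaspy import SimpleServer, Driver

prefix = 'FC-EXAMPLE:'
pvdb = {
    'STATE': {'type': 'enum', 'enums': ['OFF', 'READY', 'OPERATING']},
}


class FastControllerDriver(Driver):
    """Serves the controller's operating state to clients on the PON."""
    def __init__(self):
        super().__init__()
        self.setParam('STATE', 1)  # start in READY


server = SimpleServer()
server.createPV(prefix, pvdb)
driver = FastControllerDriver()

while True:
    server.process(0.1)  # handle Channel Access requests, 0.1 s tick
```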
High Performance Computers are dedicated computers (multi-core, GPU) running plasma control algorithms.
High Performance Networks are physically dedicated networks implementing functions not achievable with the conventional Plant Operation Network: distributed real-time feedback control, high-accuracy time synchronization and bulk video distribution.
Estimate of system size: ~1000 computers connected to the PON
Timing System
Main requirement: 50 ns RMS absolute time synchronization (off-line correlation of diagnostics)
• It is common practice for large experimental facilities to invent their own home-made timing systems, and we want to avoid that
• We believe that IEEE 1588-2008 (PTP v2) provides a COTS alternative fulfilling the ITER requirements
• IEEE 1588-2008 provides 50 ns RMS synchronization accuracy of absolute time over Ethernet, with the possibility to program triggers and clocks synchronized to this absolute time using COTS hardware
• This standard is being endorsed by more and more suppliers, and we will see many new COTS products in the future
• It also provides an evolution path to White Rabbit, being developed by CERN
• Therefore we have baselined IEEE 1588-2008 for the TCN and will confirm this decision by further evaluations in the 2nd half of 2010
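As a reminder of how PTP arrives at a common absolute time, the slave computes its clock offset and the path delay from four hardware timestamps (t1: Sync sent by master, t2: Sync received by slave, t3: Delay_Req sent by slave, t4: Delay_Req received by master). A worked sketch with made-up timestamp values:

```python
# Standard IEEE 1588 offset/delay calculation, assuming a symmetric path.
# Timestamps in nanoseconds; the values are made up for illustration.
t1 = 1_000_000_000      # master: Sync message sent
t2 = 1_000_000_350      # slave:  Sync message received
t3 = 1_000_001_000      # slave:  Delay_Req sent
t4 = 1_000_001_250      # master: Delay_Req received

delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay -> 300 ns
offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock offset -> 50 ns

print(f"path delay {delay:.0f} ns, clock offset {offset:.0f} ns")
```

The slave then steers its clock by the computed offset; with hardware timestamping in the switches and NICs this is what makes the 50 ns RMS figure reachable over Ethernet.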
Distributed real-time plasma control
Main requirements: control cycles Hz-kHz, peak bandwidth 25 MB/s, 50-100 participating nodes
• ITER distributed plasma control main characteristics:
  • decoupling and separation of concerns
  • data driven
  • multiple input multiple output (MIMO)
  • non-intrusive probing
  • flexibility
  • scalability
  • simulation support
  • minimized latency and jitter
• Two schools of thought for the real-time network:
  • reflective memory
  • Ethernet based (e.g. UDP, RTnet) (see the UDP sketch below)
• Decision on technology delayed while watching the market
• Further test beds and evaluations in 2011
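To illustrate the Ethernet-based school of thought, here is a minimal sketch of a UDP multicast publisher such as an SDN node might use; the group address, port, payload layout and 1 kHz cycle are illustrative assumptions, not ITER design choices.

```python
# Sketch of an Ethernet/UDP multicast publisher for a real-time data
# network. Group address, port and payload format are hypothetical.
import socket
import struct
import time

GROUP, PORT = '239.0.0.1', 5005  # assumed multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

seq = 0
while seq < 10:
    # Payload: sequence number + send timestamp + one measurement value,
    # packed in network byte order so every node decodes it identically.
    payload = struct.pack('!Qdd', seq, time.time(), 42.0)
    sock.sendto(payload, (GROUP, PORT))
    seq += 1
    time.sleep(0.001)  # 1 kHz control cycle, at the fast end of Hz-kHz
```

Multicast over UDP gives the one-to-many, data-driven distribution the list above calls for; the trade-off against reflective memory is COTS flexibility versus hardware-guaranteed latency, which is why the decision was deferred pending test beds.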
Conclusions
• The non-technical peculiarities of the ITER project have been addressed
• The components making up the ITER control system have been defined and a baseline architecture outlined
• Flexibility in combining these standard components has been emphasized
• Having a set of standard components and a sound architecture will ease integration
• Issues on timing and feedback control have been touched upon
• We intend to continue working with our partners all over the world to make the ITER control system contribute to ITER's success
https://www.iter.org/org/team/chd/cid/codac/Pages/