ITER Control System Technology Study
Klemen Žagar, klemen.zagar@cosylab.com
EPICS Collaboration Meeting, Vancouver, April 2009
Overview
• About ITER
• ITER Control and Data Acquisition System (CODAC) architecture
• Communication technologies for the Plant Operation Network
• Use cases/requirements
• Performance benchmark
A Note!
• The information about ITER and the CODAC architecture presented herein is a summary of ITER Organization presentations.
• Cosylab prepared the studies on communication technologies for ITER.
About ITER (International Thermonuclear Experimental Reactor)
About ITER
[Figure: cut-away view of the ITER tokamak, approx. 29 m high × 28 m in diameter. Labeled components: toroidal field coils (Nb3Sn, 18, wedged); central solenoid (Nb3Sn, 6 modules); poloidal field coils (Nb-Ti, 6); cryostat (24 m high × 28 m dia.); torus cryopumps (8); port plugs (heating/current drive, test blankets, limiters/RH diagnostics); blanket (440 modules); vacuum vessel (9 sectors); divertor (54 cassettes)]
• Major plasma radius: 6.2 m
• Plasma volume: 840 m³
• Plasma current: 15 MA
• Typical density: 10²⁰ m⁻³
• Typical temperature: 20 keV
• Fusion power: 500 MW
• Machine mass: 23,350 t (cryostat + VV + magnets)
  • Shielding, divertor and manifolds: 7,945 t + 1,060 t port plugs
  • Magnet systems: 10,150 t; cryostat: 820 t
CODAC Architecture
Plant Operation Network (PON)
• Command Invocation
• Data Streaming
• Event Handling
• Monitoring
• Bulk Data Transfer
• PON self-diagnostics
  • Diagnosing problems in the PON
  • Monitoring the load of the PON network
• Process Control
  • Reacting to events in the control system by issuing commands or transmitting other events (a minimal sketch follows this list)
• Alarm Handling
  • Transmission of notifications of anomalous behavior
  • Management of currently active alarm states
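To make the Process Control use case concrete, here is a minimal sketch that reacts to a monitor update by issuing a command. It assumes the pyepics Channel Access client; the PV names and the threshold are hypothetical.

```python
# Minimal sketch of the "Process Control" use case: react to an event
# (a monitor update) by issuing a command. Assumes the pyepics Channel
# Access client; PV names and the threshold are hypothetical.
import epics

def on_change(pvname=None, value=None, **kw):
    # Issue a command when the monitored reading crosses a threshold.
    if value is not None and value > 42.0:          # hypothetical threshold
        epics.caput('PLANT:VALVE:CMD', 0)           # hypothetical command PV

pv = epics.PV('PLANT:TEMP:READBACK', callback=on_change)  # hypothetical PV

while True:
    epics.ca.poll(evt=0.1)  # let the CA client dispatch monitor callbacks
```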
Prototype and Benchmarking
• We measured latency and throughput in a controlled test environment
  • Allows side-by-side comparison
  • Also makes hands-on experience more comparable
• Latency test (a minimal sketch follows this list):
  • Where a central service is involved (OmniNotify, IceStorm or EPICS/CA):
    • Send a message (to the central service)
    • Upon receipt back on the sender node, measure the difference between send and receive times
  • Without a central service (OmniORB, ICE, RTI DDS): round-trip test
    • Send a message (to the receiving node)
    • Respond
    • Upon receipt of the response, measure the difference
• Throughput test:
  • Send messages as fast as possible
  • Measure the differences between receive times
• Statistical analysis to obtain average, jitter, minimum, 95th percentile, etc.
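Below is a minimal, self-contained sketch of the round-trip variant of the latency test, with plain UDP sockets standing in for the middleware under test; one-way latency is approximated as half the round-trip time, and the statistics listed above are computed at the end. Host, port and sample count are arbitrary.

```python
# Round-trip latency test sketch: UDP sockets stand in for the
# middleware under test (omniORB/ICE/DDS in the actual benchmark).
import socket, threading, time, statistics

HOST, PORT, N = '127.0.0.1', 9999, 1000

def echo_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((HOST, PORT))
    for _ in range(N):
        data, addr = srv.recvfrom(1024)
        srv.sendto(data, addr)                      # respond immediately

threading.Thread(target=echo_server, daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
samples = []
for _ in range(N):
    t0 = time.perf_counter()
    cli.sendto(b'ping', (HOST, PORT))               # send a message
    cli.recvfrom(1024)                              # wait for the response
    samples.append((time.perf_counter() - t0) / 2)  # one-way ≈ RTT / 2

samples.sort()
print('avg %.1f us, jitter %.1f us, min %.1f us, 95th pct %.1f us' % (
    statistics.mean(samples) * 1e6,
    statistics.pstdev(samples) * 1e6,
    samples[0] * 1e6,
    samples[int(0.95 * N)] * 1e6))
```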
Applicability to Use Cases
Rating scale, from worst to best:
• not applicable at all
• applicable, but at a significant performance/quality cost compared to the optimal solution; custom design required
• applicable, but at some performance/quality cost compared to the optimal solution; custom design required
• applicable, but at some performance/quality cost compared to the optimal solution; foreseen in the existing design
• applicable, and close to the optimal solution; use case foreseen in the design
In each rating, the first number is performance and the second is the functional applicability of the use case.
PON Latency (small payloads)
Ranking (best to worst):
1. OmniORB (one-way invocations)
2. ICE (one-way invocations)
3. RTI DDS (not tuned for latency)
4. EPICS
5. OmniNotify
6. IceStorm
PON Throughput
Ranking (best to worst):
1. RTI DDS
2. OmniORB (one-way invocations)
3. ICE (one-way invocations)
4. EPICS
5. IceStorm
6. OmniNotify
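For reference, a self-contained sketch of the throughput test described on the benchmarking slide: one thread sends fixed-size messages as fast as it can, the receiver timestamps arrivals and reports the achieved rate. Plain TCP sockets stand in for the middleware under test; host, port, message count and size are arbitrary.

```python
# Throughput test sketch: send fixed-size messages as fast as possible
# and measure the achieved receive rate. TCP sockets stand in for the
# middleware under test.
import socket, threading, time

HOST, PORT, N, SIZE = '127.0.0.1', 9998, 100000, 64

def sender():
    s = socket.create_connection((HOST, PORT))
    msg = b'x' * SIZE
    for _ in range(N):
        s.sendall(msg)                  # send messages as fast as possible
    s.close()

srv = socket.socket()                   # AF_INET / SOCK_STREAM by default
srv.bind((HOST, PORT))
srv.listen(1)
threading.Thread(target=sender, daemon=True).start()
conn, _ = srv.accept()

received, t0 = 0, time.perf_counter()
while received < N * SIZE:
    chunk = conn.recv(65536)
    if not chunk:
        break
    received += len(chunk)
elapsed = time.perf_counter() - t0
print('%.0f msgs/s (%.1f MB/s)' % (N / elapsed, received / elapsed / 1e6))
```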
PON Scalability
• With technologies that do not use IP multicasting/broadcasting, per-subscriber throughput is inversely proportional to the number of subscribers! (source: RTI)
• RTI DDS efficiently leverages IP multicasting (source: RTI)
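The mechanism is standard IP multicast: the network delivers one published datagram to every subscriber that has joined the group, so adding subscribers does not divide the sender's bandwidth. A minimal sketch with raw UDP sockets (the group address and port are illustrative):

```python
# IP multicast sketch: a single send reaches every subscriber that has
# joined the group, which is why multicast-based middleware (e.g. DDS)
# scales with the number of subscribers. Group address/port illustrative.
import socket, struct

GROUP, PORT = '239.1.2.3', 5007

# Subscriber: join the multicast group and wait for data.
sub = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sub.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sub.bind(('', PORT))
mreq = struct.pack('4sl', socket.inet_aton(GROUP), socket.INADDR_ANY)
sub.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Publisher: one sendto() is delivered to all joined subscribers.
pub = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
pub.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
pub.sendto(b'monitor update', (GROUP, PORT))

print(sub.recvfrom(1024)[0])            # b'monitor update'
```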
EPICS
• Ultimately, the ITER Organization has chosen EPICS:
  • Very good performance.
  • Easiest to work with.
  • Very robust.
  • Full-blown control system infrastructure (not just middleware).
  • Likely to be around for a while (widely used by many labs).
• Where could EPICS improve?
  • Use IP multicasting for monitors.
  • A remote procedure call layer (e.g., "abuse" waveforms to transmit data serialized with Google Protocol Buffers, or use PVData in EPICS v4); a sketch follows.
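To make the waveform-RPC idea concrete, here is a client-side sketch only. It assumes the pyepics client, hypothetical request/reply waveform PVs, and a cooperating server that parses the request and fills the reply; JSON stands in for Google Protocol Buffers to keep the example self-contained.

```python
# Sketch of "RPC over waveforms": serialize a request into a char-array
# waveform PV and read the serialized reply from another. pyepics and
# the PV names are assumptions; JSON stands in for Protocol Buffers.
import json
import numpy as np
import epics

def rpc_call(method, **params):
    request = json.dumps({'method': method, 'params': params}).encode()
    # The request travels as an ordinary Channel Access waveform write.
    epics.caput('DEMO:RPC:REQUEST',
                np.frombuffer(request, dtype=np.uint8), wait=True)
    # A cooperating server is expected to publish its reply here;
    # trailing NULs pad the fixed-length waveform and are stripped.
    reply = bytes(epics.caget('DEMO:RPC:REPLY')).rstrip(b'\x00')
    return json.loads(reply.decode())

result = rpc_call('readTemperature', channel=3)   # hypothetical method
```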