Introduction to CODAS Sverker Griph CCFE is the fusion research arm of the United Kingdom Atomic Energy Authority
Overview 1 • Fusion history • The JET experiment • The JET machine • Functional overview • The operator view • HW organisation • Computer networks • Plant interface • JET machine safety • Real time view • JET state machine • JET control software • S/W infrastructure • System admin view • CODAS configuration • Database view • Result access view • (Presentation to be defined)
Overview 2 • Documentation view • JET ROs • JET roles • Security view • CODAS on-duty view • Product view • Personal computing environment • Management view • S/W engineering view • S/W technology view • JET software and the future of software technology • (Presentation to be defined for all of these points)
1905: E = mc² • 1920: F.W. Aston • 1920: A. Eddington • 1933: L. Szilard • 1934: E. Rutherford • 1938: Meitner, Hahn, Strassmann • 1941: Manhattan project • 1945: First fission bombs • 1952: First fusion bomb
Magnetic confinement • Diagram: charged-particle trajectories without a magnetic field and in a magnetic field
1969: T-3 high performance plasma confirmed
The JET Experiment • The principle of fusion • Confinement principles: • gravity, inertia, magnetic • Tokamak: Magnetic confinement in a torus • Toroidal magnetic field • Poloidal magnetic field • Fusion reaction: D+ + T+ → He++ + n
The JET Experiment • European and international collaboration • JET Joint Undertaking (1978 - 1999). • EFDA-JET (2000 – 2014 and beyond?) • JET Operating Contract - JOC: UKAEA-EFDA • JET followed by ITER • Site preparations in France • Design under further development • JET experiments to support design decisions
The JET Machine • Toroidal vacuum vessel • Toroidal field (4.0 T) - toroidal coils • Poloidal field - poloidal coils • Divertor - divertor coils • Additional heating and fuelling systems • Diagnostics
The JET Machine • Power supply systems • National grid: 415 kV - max 575 MW • Two flywheel generators - max 630 MW • Total 1.2 GW • Vacuum systems • Partitions for vacuum isolation • Gas introduction system • Cryogenic systems • Pumping the divertor, beams, diagnostics
The JET Machine • Tritium plant - J25 building • Store tritium in uranium beds • Recover tritium from exhausts • Remote handling system • Interface and control systems
Power supply diagram: national grid 415 kV, max 575 MW; two flywheel generators, max 315 MW each; total max 1.2 GW
Functional Overview • JET works like an ‘electrical transformer’, so it operates in pulses • Preparations • Countdown • Power supply preparations • Checks and initialisations • Pulse • Data collection • Analyses
Functional Overview • Typical pulse: • PF pre-magnetisation • TF ramp up • Gas introduction - PF fast rise - plasma • PF null • X-point formation • Additional heating and fuelling
JET Plasma Pulse - 60-120 s • Executed automatically after a countdown • Synchronised - optical fibre distribution • triggers + 2 MHz clock (phase-locking) • clock usage is recorded • Waveforms are generated • used as plant control references • Real-time feedback loops are executed • Signals are recorded - local memory • analog inputs, pulse counters, cameras
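As a hedged illustration of the waveform-reference and feedback bullets above, the Python sketch below generates a piecewise-linear reference and feeds it into a toy proportional control loop. The breakpoints, gain and one-line "plant model" are all invented; this is not the CODAS waveform or real-time code.

```python
# Hypothetical sketch only: a piecewise-linear reference waveform used as the
# setpoint of a toy proportional feedback loop.

def waveform(breakpoints, t):
    """Reference value at time t (seconds into the pulse), linear between breakpoints."""
    t0, v0 = breakpoints[0]
    for t1, v1 in breakpoints[1:]:
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        t0, v0 = t1, v1
    return breakpoints[-1][1]

# Example reference: ramp up, flat top, ramp down (arbitrary units).
reference = [(0.0, 0.0), (10.0, 2.5), (40.0, 2.5), (50.0, 0.0)]

measured = 0.0
gain = 0.3                          # proportional gain, arbitrary
for step in range(500):             # pretend 0.1 s control cycle, 50 s pulse
    t = step * 0.1
    error = waveform(reference, t) - measured
    demand = gain * error           # value that would go to the power supply
    measured += demand              # trivial stand-in for the plant response
```

In practice the reference values would be streamed to the plant over the real-time network rather than applied to a toy model.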
Countdown - circa 3 minutes • Automatic checks • Pulse number allocated • Pulse archives prepared • Slow control sequences executed • flywheel generators wound up • valves opened or closed • circuit breakers opened or closed • More automatic checks • Trigger pulse timing system
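A minimal sketch, assuming hypothetical subsystem check callables and slow-control actions, of the countdown pattern on this slide: automatic checks, pulse number allocation, slow control actions, a final round of checks, then the trigger. It is not the real dcountd program.

```python
# Hypothetical countdown sequencer sketch (not the real dcountd).

import time

def failed_checks(subsystems):
    """Names of subsystems whose self-check callable returned False."""
    return [name for name, check in subsystems.items() if not check()]

subsystems = {              # invented check callables standing in for real checks
    "PF": lambda: True,
    "TF": lambda: True,
    "VC": lambda: True,
}

slow_actions = [            # invented slow control sequence (name, seconds)
    ("wind up flywheel generators", 2.0),
    ("open gas introduction valves", 0.5),
    ("close pulse power circuit breakers", 1.0),
]

if failed_checks(subsystems):
    raise SystemExit(f"countdown aborted, checks failed: {failed_checks(subsystems)}")

pulse_number = 80001        # allocated centrally in the real system
print(f"pulse {pulse_number}: archives prepared")

for action, duration in slow_actions:
    print(f"executing: {action}")
    time.sleep(duration)    # stand-in for waiting on plant feedback

if failed_checks(subsystems):
    raise SystemExit("countdown aborted at final checks")
print("triggering central timing system")
```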
After Pulse • Recorded signals transferred to archive • Recorded time bases transferred to archive • Associations between signals and time bases are kept • Pulse setup info is archived • Archives transferred to central storage • Automatic chain of analysis is executed • Immediate human evaluation • Continuing evaluation - days, months & years
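The sketch below uses an invented JSON layout to illustrate how each recorded signal can carry a reference to its time base, so the association survives the transfer to central storage; the actual JET pulse file format is different.

```python
# Invented archive layout; values are toy numbers, not real sampled data.

import json

timebases = {
    "TB-A": {"t0": -10.0, "dt": 0.5, "n": 8},
}

signals = {
    "PF-CURRENT": {"timebase": "TB-A", "data": [0.0, 0.1, 0.3, 0.7, 1.2, 1.8, 2.3, 2.5]},
    "TF-CURRENT": {"timebase": "TB-A", "data": [0.0, 1.0, 2.0, 3.0, 3.9, 4.0, 4.0, 4.0]},
}

archive = {"pulse": 80001, "timebases": timebases, "signals": signals}

with open("pulse_80001.json", "w") as f:            # stand-in for the real pulse file
    json.dump(archive, f)

# Later analysis can rebuild the time axis of any signal from its time base:
sig = archive["signals"]["PF-CURRENT"]
tb = archive["timebases"][sig["timebase"]]
times = [tb["t0"] + i * tb["dt"] for i in range(tb["n"])]
```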
JET Experiment Pulse Cycle • JET is pulsed: ~25 pulses per day, ~80,000 since 1983 • Two shifts: 06:30 – 22:30 • Pulse every ~30 minutes • ~30-40 s of plasma • Maximise repetition rate
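A quick back-of-envelope check of the numbers on this slide (pure arithmetic, not operational code):

```python
shift_hours = 22.5 - 6.5         # two shifts covering 06:30 - 22:30
slots = shift_hours * 60 / 30    # one pulse every ~30 minutes
print(slots)                     # ~32 opportunities per day, of which ~25 become pulses
```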
Hierarchical and Modular Architecture • Subsystem per major plant system: Vacuum, Poloidal Field, RF Heating ... • 10 subsystems for ~70 diagnostics • Same structure applicable to machine control and to diagnostics • 21 subsystems in total • Autonomy at subsystem level • Three-level hierarchical and modular control system structure in hardware and software: • Central/Supervisory - level 1 • Subsystem - level 2 • Component - level 3
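A toy data structure showing the three-level idea: a supervisory level, autonomous subsystems below it, and plant components below those. The level-3 component names are invented.

```python
# Toy representation of the three levels; component names are invented.

hierarchy = {
    "supervisory": ["MC", "SS", "SA", "PM"],                          # level 1 (central)
    "subsystems": {                                                   # level 2 (autonomous)
        "PF": ["amplifier-A", "amplifier-B", "divertor-coil-psu"],    # level 3
        "VC": ["torus-pumping", "gas-introduction"],
    },
}

def components_of(subsystem: str):
    """Level-3 components belonging to one autonomous level-2 subsystem."""
    return hierarchy["subsystems"][subsystem]

print(components_of("PF"))
```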
Functional Overview • JET control is organised as subsystems • Central subsystems • Essential subsystems • Additional heating and fuelling subsystems • Other pulse oriented subsystems • Non-pulse subsystems • Test and off-line subsystems
Functional Overview • JET control is organised as subsystems • Central subsystems • MC - machine control • SS - safety and (machine) security • SA - storage & analysis • PM - pulse management
Functional Overview • JET control is organised as subsystems • Essential subsystems • GS - general services • VC - vacuum control • PF - poloidal field control • TF - toroidal field control • DA - magnetic diagnostic (KC1) • DF - plasma density diagnostic (KG1)
Functional Overview • JET control is organised as subsystems • Additional heating and fuelling subsystems • AH - neutral beam heating system • YC - neutral beam heating system • RF - Radio Frequency heating system • LH - Lower Hybrid current drive system • PL - Pellet Launcher system
Functional Overview • JET control is organised as subsystems • Other pulse oriented subsystems • SC - Saddle Coil system • Diagnostic subsystems • DB, DC, DD, DE • DG, DH, DI, DJ
Functional Overview • JET control is organised as subsystems • Non-pulse subsystems • NM - network monitoring
Functional Overview • JET control is organised as subsystems • Test and off-line subsystems • PD - Power supply development system • YD - diagnostic commissioning system • YC - neutral beam off-line testing system • Development systems • YE - driver development system • CC - Codas commissioning system • EL - Electronics commissioning system
Functional Overview • Within each subsystem control is subdivided into subsystem components • Components can be selected for pulse or not • Components can be operational or not • Components can be marked as essential or not • A subsystem component that is selected and essential but not operational stops the JET countdown • its state may be forced by the operator
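A small sketch of the countdown rule just described: a component blocks the countdown only if it is selected for the pulse, marked essential, not operational, and its state has not been forced. The field and component names are invented.

```python
# Invented field and component names; only the rule itself comes from the slide.

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    selected: bool        # included in this pulse?
    essential: bool       # marked as essential?
    operational: bool     # currently healthy?
    forced: bool = False  # operator has forced the state

def blocks_countdown(c: Component) -> bool:
    """True if this component would stop the JET countdown."""
    return c.selected and c.essential and not c.operational and not c.forced

components = [
    Component("PF-amplifier", selected=True, essential=True, operational=True),
    Component("gas-valve-3", selected=True, essential=True, operational=False),
]

blockers = [c.name for c in components if blocks_countdown(c)]
if blockers:
    print("countdown stopped by:", blockers)
```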
The Operator view • Preparations for pulse • Prepare parameters for pulse - level1 etc • Manually check and set up the machine • Mimics - xmimic (active points: commands & links) • EIC (Engineer in Charge) provided with ‘reasons’ • Select systems to be included in pulse • Clear all states that raise alarms - xdalar • Put CISS in normal (Central Interlock and Safety System)
The Operator view • Countdown • Dcountd: coordinates the activities of all subsystems taking part in the pulse - cdutil • Start with automatic checks • Initialisation of H/W and S/W modules • Wait - then trigger pulse • More automatic checks • Automatic execution of the pulse • Data collection: • pulse files: JPF, IPF, QPF, DPF, LPF
The Operator view • Countdown • Hardware and software checks - the pulse may fail • ‘Pulse Aborted’ • H/W - something like CISS detects a hard fault • S/W - an unconditional check fails • Engineer in charge aborts the pulse • ‘Force State’ - a check has failed but… • Alarms - xdalar displays, operator acknowledges
The Operator view • Result analysis of pulse file data • Control room session • PAD analysis - old results automatically removed • XPAD displays pulse file signals • Some analyses outside the control room may be used to adjust subsequent pulses • After session analyses • JET Analysis Cluster (JAC): PC farm - 120+ Linux systems
HW organisation • Computer networks - Ethernet • Off-line office network - JETNET • On-line control networks • Core network • IP gap - no IP routing between on- and off-line networks • Bespoke proxies implement access across the gap • X11 server proxy: used by ‘tunnels’ through the gap • OMS - object monitoring service
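To make the 'IP gap' idea concrete, here is a generic dual-homed TCP relay sketch: a single service is explicitly forwarded between the two networks instead of being routed. This is not the actual CODAS X11 or OMS proxy code, and the addresses are documentation placeholders.

```python
# Generic dual-homed TCP relay: clients on one network connect to the proxy,
# which opens its own connection on the other network and copies bytes both ways.

import socket
import threading

def pipe(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def relay(listen_addr, target_addr):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(listen_addr)
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection(target_addr)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# relay(("192.0.2.10", 6000), ("198.51.100.20", 6000))   # off-line side -> on-line host
```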
HW organisation • Computer networks - on-line control: • Many, many sub-nets: • Datanet - control and data acquisition between control subsystems and subordinate host computers • JPFnet - Pulse file data transfer from control subsystems to central file storage • One sub-net per subsystem fileserver cluster
HW organisation • Computer networks • Subsystem fileserver cluster subnet • One SUN file server • Related control subsystems each running on one SUN V40 computer • No data discs on subsystem computers • Subsystem internal disc used only for swapping and tmpfs • srv-control: MC, SA, GS, PF, TF, VC • srv-heat: RF, AH, YC, LH, PL
HW organisation • Computer networks - What for? • NFS • TCP/IP • X11 client-server • CODAS message protocols • CSL5 - ported from Nord/Sintran • CSL8 - extended client-server event based API • HTTP ‘Black Box’ protocol • UDP/IP • echo, CSL6 ‘real-time’ broadcast
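As an illustration of the UDP 'real-time' broadcast style of traffic mentioned above (not the CSL6 protocol itself; the port number and message format are invented):

```python
# Generic UDP broadcast send/receive sketch.

import socket

PORT = 9999                              # placeholder port

def broadcast(message: bytes):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(message, ("255.255.255.255", PORT))
    s.close()

def listen_once():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    data, sender = s.recvfrom(1024)      # blocks until a datagram arrives
    s.close()
    return sender, data

# broadcast(b"PF STATE OPERATIONAL")     # example datagram, invented format
```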
HW organisation • Computer networks - ATM (not Ethernet) • Star configuration - central ATM switch • High bandwidth • Real time compatible • Dedicated channels with guaranteed bandwidth • Supports both point-to-point and broadcasts • TCP/IP in parallel on dedicated channels • Usage: Real time signal server input and signal distribution
HW organisation • Computer networks • NM - network monitoring subsystem • Regularly polls all known hosts using whatever method is available • Raises alarm if a host is no longer available • What hosts are there? • ypcat -k hosts | less
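A sketch of the polling idea behind NM, assuming a Linux-style ping and a hosts file as the source of host names (the real NM uses the NIS map shown above and its own probing methods):

```python
# Read a host list, probe each host once, and flag the ones that do not answer.

import subprocess

def known_hosts(path="/etc/hosts"):
    """Crude host-list parser (second field of each non-comment line)."""
    hosts = []
    with open(path) as f:
        for line in f:
            fields = line.split("#")[0].split()
            if len(fields) >= 2:
                hosts.append(fields[1])
    return hosts

def alive(host):
    """One ICMP echo request with a one-second timeout (Linux ping options)."""
    return subprocess.call(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

for host in known_hosts():
    if not alive(host):
        print(f"ALARM: {host} not responding")
```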
HW organisation • Plant interface • CAMAC buses with CAMAC modules • VME buses with VME modules • TCP/IP to plant PCs • National Instruments modules etc • PCI buses for a few module types • PC hardware, especially memory, is cheap • BAD2 • FPGA front ends, no bus • UXD7 via Ethernet • Plant interfaces are ‘hosted’ on subsystems
HW organisation • Plant interface - CAMAC • Old ‘slow’ bus standard - but still going strong • CAMAC modules in CAMAC crates • Standard modules for crate control, crate LAMs and crate supervision • Specialised CAMAC modules • FPGA based module refurbishment program • Serial communication with host via loop • Fibre optics (U-port interface with batteries) • Main loop and backup loop
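CAMAC modules are addressed by crate, station number (N), subaddress (A) and function code (F). The wrapper below only illustrates that addressing pattern; its body is a print placeholder, not the real CODAS serial-loop driver, and the station/register numbers are invented.

```python
# Illustration of CAMAC-style addressing only.

def camac(crate: int, n: int, a: int, f: int, data: int = 0) -> int:
    """Pretend to execute one CAMAC cycle (crate, station N, subaddress A, function F)."""
    print(f"C={crate} N={n} A={a} F={f} data={data:#06x}")
    return 0

# Typical usage pattern (F0-F7 are read functions, F16-F23 write, others control):
value = camac(crate=3, n=5, a=0, f=0)          # read register 0 of station 5
camac(crate=3, n=5, a=0, f=16, data=0x1234)    # write a setpoint back
camac(crate=3, n=5, a=0, f=9)                  # example control-class function (clear)
```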
HW organisation • Plant interface - VME • VME crate with 1 VPLS module (CODAS design) • CTTS interface • Crate monitoring • LSD digital I/O • VME crate controller with Ethernet interface • PPC or 68K processor (VxWorks operating system) • Specialised VME modules