Thanks to Benjamin Todd & Markus Zerlauth.
Outline
• LHC Machine Interlocks overview
• Powering Interlock systems
• Beam Interlock System
  • Characteristics & Layout
  • Performance
• Monitoring & Operational checks
• Summary
Protection Functions

Beam Protection: Beam Energy (360 MJ) → Beam Dump
• 100x the beam energy of the TEVATRON
• 0.000005% of the beam lost into a magnet = quench
• 0.005% of the beam lost into a magnet = damage
• A failure in protection can mean complete loss of the LHC

Powering Protection: Magnet Energy (10 GJ) → Emergency Discharge
• 10-20x the energy per magnet of the TEVATRON
• magnet quenched = hours of downtime; many magnets quenched = days of downtime
• magnet damaged = $1 million, months of downtime; many magnets damaged = many millions, many months of downtime (few spares)

The LHC is to a large extent a superconducting machine: 1232 main dipoles, ~400 main quadrupoles and more than 8000 correctors.
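A back-of-the-envelope check makes the quoted loss fractions concrete. This is a minimal sketch, assuming the percentages apply to the full 360 MJ of stored beam energy; the variable names are illustrative, not from the slides.

```python
# Hedged sanity check of the quoted loss fractions, assuming they
# apply to the full 360 MJ stored beam energy quoted on the slide.
stored_energy_j = 360e6                # 360 MJ per beam, in joules

quench_fraction = 0.000005 / 100       # 0.000005 % lost into a magnet
damage_fraction = 0.005 / 100          # 0.005 % lost into a magnet

quench_energy_j = stored_energy_j * quench_fraction
damage_energy_j = stored_energy_j * damage_fraction

print(f"quench threshold ~ {quench_energy_j:.0f} J")    # ~18 J
print(f"damage threshold ~ {damage_energy_j/1e3:.0f} kJ")  # ~18 kJ
```

A few tens of joules deposited in a superconducting coil is enough to quench it, which is why the protection chain must act on losses a hundred thousand times smaller than the damage level.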
What are the "Machine Interlocks"?

For beam operation:
• BIS: Beam Interlock System (VME based)
• SMP: Safe Machine Parameters system (VME based)

For protecting the equipment:
• WIC: Warm magnet Interlock Controllers (PLC based), for normal conducting magnets
• PIC: Powering Interlock Controllers (PLC based), for superconducting magnets
• FMCM: Fast Magnet Current change Monitor
LHC Machine Interlocks Hierarchy (diagram showing the EXPERIMENTS and the Machine Interlock systems, the latter in red)
Key facts for the Powering Interlock Systems

Superconducting circuit protection (PIC):
• ~10'000 superconducting magnets, powered in 1600 electrical circuits
• 36 controllers
• Reaction time: ~1 ms

Normal conducting circuit protection (WIC):
• 140 normal conducting magnets, powered in 44 electrical circuits (in the LHC), plus more than 1000 magnets in the injector chain
• 8 controllers
• Reaction time: 100 ms

Both powering interlock systems:
• use industrial electronics (SIEMENS PLCs with remote I/O modules)
• are distributed systems matching the LHC machine sectorization
• transmit all critical signals over hardware links (fail-safe signal transmission, built-in redundancy)
• apply the logic: all circuit-related systems OK => Power Permit; otherwise dump the beams and activate the energy extraction (if any)
SCADA application: monitoring views (magnet status, power converter status, SPS transfer lines, Permit A / Permit B). SCADA: Supervisory Control and Data Acquisition. Courtesy of F. Bernard (CERN)
Beam Interlock System Function

The BIS collects Beam 'Permit' signals from the connected user systems and acts on a target system: the dumping system, an extraction kicker, a beam stopper, the beam source, ...
• Σ(User Permit = "TRUE") => beam operation is allowed
• IF one User Permit = "FALSE" => beam operation is stopped
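The permit rule above is a plain logical AND over all connected systems. A minimal sketch (the system names are illustrative, not from the slides):

```python
def beam_permit(user_permits):
    """Beam operation is allowed only while EVERY connected user
    system asserts its permit; a single FALSE stops the beam."""
    return all(user_permits.values())

# Illustrative user systems (hypothetical names)
permits = {"RF": True, "Cryogenics": True, "Beam Dump": True}
assert beam_permit(permits)            # all TRUE  -> beam allowed

permits["Cryogenics"] = False
assert not beam_permit(permits)        # one FALSE -> beam stopped
```

In the real system this AND is of course implemented in redundant hardware, not software, so that no single component failure can produce a spurious TRUE.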
Beam Interlock System: quick overview
• Remote User Interfaces (installed in the users' racks) safely transmit Permit signals from the connected systems to the Controller, over copper cables or fibre-optic links
• The Beam Interlock Controller (a VME chassis) acts as a concentrator: it collects up to 14 User System Permits (#1 ... #14) and generates the local Beam Permit (copper and optical outputs)
• Controllers can be daisy-chained (tree architecture) or can share Beam Permit loops (ring architecture)
• Configuration comes from a configuration DB over the Technical Network; a JAVA application and front-end software provide supervision
LHC Beam Permit Loops
• 17 Beam Interlock Controllers per beam (2 per Insertion Region (IR) + 1 near the Control Room)
• 4 fibre-optic channels: 1 clockwise & 1 anticlockwise for each beam
• A square wave is generated at IR6; the signal can be cut, and is monitored, by any Controller
• When any of the four signals is absent at IR6: BEAM DUMP!
• The Beam-1 / Beam-2 loops are independent but can be linked (or unlinked)
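The loop concept is dynamic rather than static: a permit is signalled by the *presence of a toggling carrier*, so a cut fibre, a dead transmitter or a stuck line all read as "no permit". A minimal sketch of such a presence check (the window size and transition threshold are illustrative assumptions; the slides do not give the carrier frequency):

```python
def carrier_present(samples, min_transitions=2):
    """Crude presence check for the square-wave carrier: within an
    observation window the signal must keep toggling.  A stuck-low
    (cut loop) or stuck-high channel counts as absent, which is the
    fail-safe interpretation."""
    transitions = sum(a != b for a, b in zip(samples, samples[1:]))
    return transitions >= min_transitions

assert carrier_present([0, 1, 0, 1, 0, 1])       # healthy loop
assert not carrier_present([0, 0, 0, 0, 0, 0])   # loop cut upstream
assert not carrier_present([1, 1, 1, 1, 1, 1])   # stuck high: also absent
```

Any Controller along the ring can open the loop, and the dump trigger at IR6 only has to detect the missing carrier, which keeps the critical decision local and simple.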
Beam Interlock Systems currently in operation
• LHC ring (since 2007): 34 controllers
• LHC injection regions: 4 controllers
• SPS ring (since 2006): 14 controllers
• SPS-to-LHC transfer lines: 6 controllers
In total: 50 controllers, ~370 connected systems (VME power supply & VME-bus controller not taken into account)
BIS Performance (1/3)
• Fail-safe concept: must go to a fail-safe state whatever the failure
• Safe (Safety Integrity Level 3 was used as a guideline): must react with a probability of unsafe failure of less than 10^-7 per hour
• Reliable (whole design studied using military and failure-mode handbooks): beam abort in less than 1% of missions due to internal failure (2 to 4 failures per year)
• Results from the LHC analysis:
  • P(false beam dump) per hour = 9.1 x 10^-4
  • P(missed beam dump) per hour = 3.3 x 10^-9
• Available: uninterruptible powering (UPS); redundant power supply for the Controller (i.e. the VME crate) and for the Remote User Interface
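The per-hour figures can be translated into per-year expectations. This is a hedged sketch: the ~4000 operational hours per year is my assumption, not a number from the slides.

```python
# Figures quoted on the slide
p_false_per_hour = 9.1e-4    # P(false beam dump) per hour
p_missed_per_hour = 3.3e-9   # P(missed beam dump) per hour

# Assumption (not from the slide): ~4000 operational hours per year
hours_per_year = 4000

false_dumps_per_year = p_false_per_hour * hours_per_year
print(f"expected false dumps per year ~ {false_dumps_per_year:.1f}")  # ~3.6

# Consistent with the "2 to 4 failures per year" budget, and the
# missed-dump rate sits far below the SIL3 guideline of 1e-7 per hour
assert 2 <= false_dumps_per_year <= 4
assert p_missed_per_hour < 1e-7
```

Note the asymmetry by design: false dumps cost availability and are tolerated at a few per year, while missed dumps cost the machine and are pushed below one in ~10^8 hours.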
BIS Performance (2/3)
• Critical process in hardware: the functionality is implemented in 2 redundant matrices, with the VHDL code written by different engineers following the same specification
• Critical / non-critical separation: critical functionality is always separated from non-critical; the monitoring elements are fully independent of the two redundant safety channels
• Manager board: CPLD chip (Matrix A) + CPLD chip (Matrix B) + FPGA chip (monitoring part)
  • Used CPLD: 288 macro-cells & 6'400 equivalent gates
  • Used FPGA: 30'000 macro-cells & 1 million gates, plus all the built-in RAM, etc.
• CPLD: Complex Programmable Logic Device; FPGA: Field Programmable Gate Array
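The two-redundant-matrices idea can be sketched in a few lines. In reality the two matrices are VHDL designs in separate CPLDs, written by different engineers; the Python below is only an illustration of the fail-safe combination, with deliberately different-looking implementations of the same specification:

```python
def matrix_a(permits):
    """First implementation of the specification."""
    return all(permits)

def matrix_b(permits):
    """Redundant implementation, written 'independently': same
    specification, different formulation."""
    return not any(not p for p in permits)

def local_beam_permit(permits):
    """Fail-safe combination: the beam permit is given only when
    BOTH redundant matrices agree it should be given, so a single
    faulty matrix can remove the permit but never grant it alone."""
    return matrix_a(permits) and matrix_b(permits)

assert local_beam_permit([True, True, True])
assert not local_beam_permit([True, False, True])
```

Having different engineers code the same specification guards against common-mode design errors; combining the outputs with AND guards against single hardware faults.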
BIS Performance (3/3)
• 100% online test coverage: can easily be tested end-to-end in a safe manner => recovered "as good as new"
• Modular
• Fast: ~20 µs reaction time from detection of a User Permit change to the corresponding local Beam Permit change
BIS Feature: "Flexible", thanks to input masking
• Within a fixed partition, half of the User Permit signals can be remotely masked
• Masking depends on an external condition, the Setup Beam Flag:
  • generated by a separate & dedicated system (Safe Machine Parameters)
  • distributed by the Timing system
• Masking is automatically removed when the Setup Beam Flag = FALSE
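The masking rule composes three conditions: the input must be one of the maskable ones, a mask must be requested, and the Setup Beam Flag must be TRUE. A minimal sketch of that logic (function and parameter names are illustrative):

```python
def effective_permit(user_permit, maskable, masked, setup_beam_flag):
    """A maskable input that is masked counts as TRUE, but only
    while the Setup Beam Flag is TRUE; when the flag goes FALSE
    the mask is automatically without effect (removed)."""
    if maskable and masked and setup_beam_flag:
        return True
    return user_permit

# Masked input ignored while the Setup Beam Flag is TRUE:
assert effective_permit(False, maskable=True, masked=True, setup_beam_flag=True)
# Flag drops (e.g. beam no longer a low-intensity setup beam): mask removed
assert not effective_permit(False, maskable=True, masked=True, setup_beam_flag=False)
# Non-maskable inputs can never be overridden:
assert not effective_permit(False, maskable=False, masked=True, setup_beam_flag=True)
```

Tying the mask to the Setup Beam Flag means an operator can only bypass an interlock while the beam is safe by construction; the bypass cannot survive into high-intensity operation.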
History Buffer (timeline figure)
BIS Application: Timing Diagram. Courtesy of J. Wenninger (CERN)
Operational Checks

To ensure that safety is not compromised, verification is carried out in three stages:
• Pre-operation checks (launched by the Beam Sequencer): configuration verification and integrity check
• During operation (DiaMon application): fault diagnosis and monitoring
• Post-operation checks (included in the Post Mortem analysis): response analysis
Operational experience

Beam Interlock System
• Originally designed for the LHC and first installed in its pre-injector for validation
• Fully operational since 2006 for the SPS ring and its transfer lines
• Since the restart in Nov. 09, the LHC-ring BIS has been extensively exercised, with more than 1000 emergency dumps
• Promising overall availability (only a few failures, with the redundant VME power supplies and with VME processor boards)
• Very high availability of the in-house part (99.996%), with only one stop due to a failure

Powering Interlock Systems
• Very good experience for both Powering Interlock Systems: already > 4 years of operation (starting with the initial LHC hardware commissioning)
• Highly dependable (only two failures in more than 4 years)
• Concerning the remote User Interfaces: as foreseen, some PSUs failed; thanks to redundancy, this has not led to any beam operation disruption
LHC 2010 run: downtime distribution (percentage of total downtime; chart)
• Warm Magnet Interlock System: 0%
• Powering Interlock System and Beam Interlock System: small shares, shown in the original chart
Summary (1/2)
• Core of the LHC machine protection
• Fail-safe concept
• Fast and modular
• Fully redundant, with the critical process separated from monitoring
• Redundant power supplies + UPS
• Online testable end-to-end => recovered "as good as new"
• Automated tools to perform regular and quick validation:
  • internal to the Beam Interlock System
  • external, involving the connected systems
Summary (2/2)
• Embedded features for monitoring and testing the internal interlock process
• Together with a powerful GUI application:
  • it provides clear and useful information to the operation crew
  • it minimizes machine downtime
• 3-stage verification:
  • validation prior to beam operation (pre-operational checks)
  • online diagnostics during beam operation
  • post-operation checks
• Reliable systems: in operation for several years, with very few beam operations aborted due to internal failure
Protection of NC magnets (NC = Normal Conducting)
• WIC solution = PLC crate + remote I/O crates, based on a Safety PLC
• Collects input signals from thermo-switches (several per magnet, set at 60°C), flow meters, red (emergency) buttons, ...
• Gives the Power Permit for the corresponding power converter, and a Beam Permit via the BIS interface
• Architecture: PVSS operator console and configuration DB over Ethernet; the PLC reaches the remote I/Os at the magnets over a Profibus-Safe link
WIC: remote test feature
• Facilitates "as good as new" testing
• Performed thanks to test relays installed in the magnet interlock boxes: a PLC output drives the test relay, and a PLC input reads the thermo-switch line over an NE4 or NE8 cable
• Simulates the opening of the thermo-switches or of the flow sensors
• Guarantees system integrity, in particular after an intervention on the magnet sensors or after a modification of the configuration file
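The test-relay idea can be sketched as follows: the relay interrupts the sensor line exactly where a real fault would, so a passing test proves the whole chain, sensor wiring included, still removes the permit. This is a minimal illustration with hypothetical names, not the actual PLC program:

```python
class MagnetInterlockBox:
    """Illustrative model of one interlock box: a thermo-switch in
    series with a test relay on the same line to the PLC input."""
    def __init__(self):
        self.thermo_switch_closed = True   # magnet below 60 C
        self.test_relay_energised = False  # relay driven by a PLC output

    def plc_input(self):
        # The PLC input sees the line open when EITHER the real
        # switch opens or the test relay interrupts the line.
        return self.thermo_switch_closed and not self.test_relay_energised

def power_permit(box):
    """Permit follows the sensed line state (fail-safe: open = no permit)."""
    return box.plc_input()

box = MagnetInterlockBox()
assert power_permit(box)            # healthy magnet: permit given

box.test_relay_energised = True     # remote test launched
assert not power_permit(box)        # permit withdrawn: chain verified

box.test_relay_energised = False    # test ends, permit restored
assert power_permit(box)
```

Because the relay sits inside the magnet interlock box, the test exercises the cabling and input electronics, not just the PLC logic, which is what makes the "as good as new" claim credible after an intervention.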
Protection of SC magnets / circuits (SC = Super Conducting). Courtesy of M. Zerlauth (CERN)

The Powering Interlock Controller combines the Power Permit and the Beam Permit (via the BIS interface) with signals covering:
• internal failures / ground faults
• cooling failures (CRYO_OK)
• AUG, UPS and mains failures
• the power converter and its normal conducting cables
• quench signals from the QPS + nQPS, protecting the quench heaters, superconducting diode, energy extraction, HTS current leads, sc busbar and DFB of the magnet chain
Fast Magnet Current Change Monitors
• FMCMs are (strictly speaking) not interlocking powering equipment
• Installed on normal conducting magnets with a very short natural time constant τ (injection/extraction septa, D1 magnets in IR1/IR5, ...) and a large impact on the beam in case of powering failures
• A DESY invention, ported with great success to the LHC and the SPS-LHC transfer lines
• The monitor observes U_circuit across the magnets and sends a beam dump request to the BIS
Courtesy of M. Zerlauth (CERN)
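The principle can be sketched as a derivative watchdog on the circuit voltage: in a fast (low-τ) circuit a powering failure shows up as a rapid change of U_circuit long before the beam orbit has drifted far. This is a crude software illustration of the idea only; the real FMCM is an analogue/digital instrument and its thresholds, sampling rate and signal processing are not given on the slide.

```python
def fmcm_trip(voltage_samples, dt, threshold_v_per_s):
    """Crude sketch of the FMCM principle: request a beam dump when
    the monitored circuit voltage changes faster than a threshold,
    catching a powering failure while the current change is still
    tiny.  All parameter values below are illustrative assumptions."""
    for u0, u1 in zip(voltage_samples, voltage_samples[1:]):
        if abs(u1 - u0) / dt > threshold_v_per_s:
            return True            # dump request to the BIS
    return False

# Steady powering: slow drift stays below the (assumed) threshold
assert not fmcm_trip([10.0, 10.0, 10.1], dt=1e-3, threshold_v_per_s=500)
# Converter failure: voltage collapses within one sample -> trip
assert fmcm_trip([10.0, 10.0, 2.0], dt=1e-3, threshold_v_per_s=500)
```

Acting on the rate of change rather than on an absolute current window is what makes the reaction fast enough for septa and other magnets whose field follows the current almost instantly.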
BIS Hardware

More than 2000 boards produced (~85% in operation): CIBU, CIBS, CIBT, CIBM, CIBG, CIBX, CIBI, CIBTD & CIBMD, CIBFu & CIBFc, CIBD, CIBE, CIBO, CIBP