SEU Mitigation Tests in Xilinx Virtex-6 for New CSC Electronics
Jason Gilmore, Texas A&M University
FNAL, 6 Dec 2012
TMB Mezzanine as a Test Bed
• Virtex-6 FPGA + PROMs
• QPLL
• Finisar fiber transceiver - only for testing
• Snap12 fiber transmitter - only for testing
• Snap12 fiber receiver - fibers from 7 CFEBs
• PCB dimensions: 7.5” wide by 5.9” high
• 11 mm clearance from TMB main board
• I/O voltage-level shifters, 3.3 V to 2.5 V
SEU Studies for New CSC Boards
• Tests were designed to study SEU effects in Virtex-6 and investigate mitigation methods for the CMS Endcap
• FPGA sensitive elements include GTX primitives, Block RAMs, and CLBs
  • Use of these elements may vary across firmware versions
  • Measuring the SEU cross section of each element type allows rescaling to other firmware versions (see the relation below)
• Expected 20 MeV neutron fluence in ME1/1 at HL-LHC: 2.7 × 10¹¹ n/cm² over 10 years
• Initial radiation testing done in 2011
  • Tests with 55 MeV protons, performed at the Texas A&M Cyclotron
  • Raw SEU sensitivity with *no mitigation*
  • Found Block RAMs & CLBs were sensitive
  • Block RAM errors were single bit flips
  • GTX errors were rare
• Additional tests completed this summer at UC Davis
  • 64 MeV protons with higher flux
  • Tests in Block RAMs and CLBs *with mitigation*
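As a reference for the rescaling mentioned above, the usual first-order estimate ties the expected number of upsets to the measured cross section and the integrated fluence. This is a minimal sketch, not taken from the talk; the slides do not spell out whether the cross sections are per bit or per chip at a given usage fraction, so f below simply denotes the fraction of a given element type instantiated by a firmware build:

N_\text{SEU} \approx \sigma \, \Phi , \qquad \sigma_\text{new} \approx \sigma_\text{meas} \cdot \frac{f_\text{new}}{f_\text{meas}}

with \Phi the integrated particle fluence (here 2.7 × 10¹¹ n/cm² over 10 years for ME1/1).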
Testing SEU Effects in Virtex-6 FPGAs
• 2011 tests with 2 Virtex-6 FPGAs: xc6vlx195t-2ffg1156ces
• No SEU mitigation in firmware for this study
  • Goal is to measure the cross section of individual FPGA elements and determine where mitigation is necessary
• Similar results from each test board: results merged for the SEU cross sections (a back-of-the-envelope rate sketch follows this slide)
• GTX transceivers (55% are used in the FPGA)
  • PRBS data transfers @ 3.2 Gbps
  • σ = (7.6 ± 0.8) × 10⁻¹⁰ cm²
  • HL-LHC: expect ~3 SEU/year/link
• Block RAMs (74% are used)
  • 4 kB BRAM “ROMs” read out to a PC
  • σ = (5.7 ± 0.6) × 10⁻⁸ cm²
  • HL-LHC: expect ~9 SEU/day/chip
• CLBs (38% are used)
  • 4 kB CLB “ROMs” read out to a PC
  • σ = (3.7 ± 0.5) × 10⁻⁸ cm²
  • HL-LHC: expect ~5.5 SEU/day/chip
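To make the per-day numbers above concrete, here is a back-of-the-envelope Python sketch of the same arithmetic (rate = cross section × flux). The average flux is an assumption obtained by spreading the quoted 10-year fluence uniformly over calendar time; the rates quoted on the slide also fold in operating-schedule and safety-factor assumptions not restated here, so the sketch is illustrative rather than an exact reproduction.

# Back-of-the-envelope SEU-rate arithmetic: rate = cross_section * flux.
# ASSUMPTION: the 10-year ME1/1 fluence is spread uniformly over calendar time;
# the slide's quoted rates include further factors not restated here.

FLUENCE_10YR_CM2 = 2.7e11                    # n/cm^2 over 10 years (from the slides)
SECONDS_10YR = 10 * 365 * 24 * 3600
AVG_FLUX = FLUENCE_10YR_CM2 / SECONDS_10YR   # n/cm^2/s, naive calendar average

def seu_per_day(cross_section_cm2, flux=AVG_FLUX):
    """Expected upsets per day for a given per-chip SEU cross section."""
    return cross_section_cm2 * flux * 86400

# 2011 per-chip cross sections (no mitigation), from the slide above:
for name, sigma in [("Block RAM", 5.7e-8), ("CLB", 3.7e-8)]:
    print(f"{name}: ~{seu_per_day(sigma):.1f} SEU/day/chip (naive calendar average)")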
Recent 2012 SEU Study
• Testing at the UC Davis Cyclotron
  • 64 MeV proton beam, flux up to ~1 × 10⁹ cm⁻² s⁻¹
• FPGA tests included firmware mitigation this time
• Block RAMs: enabled the native ECC feature
  • BRAM test used reads and writes under software control
  • Software designed to distinguish potential failure modes
• CLB testing based on a triple-voting system (a small software model of the idea follows this slide)
  • CLBs were implemented as a system of long shift registers
  • Driven by common inputs and checked against each other
  • Error counts recorded in registers and monitored by software
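The CLB mitigation described above is a triple-modular-redundancy style scheme. The actual firmware is not shown in the talk, so the following is only a small Python software model of the idea: three identical shift-register chains fed the same input bit, a 2-of-3 majority vote on the outputs, and a disagreement counter standing in for the error-count registers. All names and the chain length are illustrative.

# Toy software model of the triple-voted shift-register test described above.
# Not the actual firmware: it only illustrates majority voting and error counting.
import random

CHAIN_LEN = 64                                       # illustrative shift-register length

class VotedShiftRegisters:
    def __init__(self, length=CHAIN_LEN):
        self.chains = [[0] * length for _ in range(3)]   # three redundant copies
        self.disagreements = 0                           # stands in for the error-count register

    def clock(self, bit_in):
        """Shift the same input bit into all three chains and vote the outputs."""
        outs = []
        for chain in self.chains:
            outs.append(chain.pop())                 # bit shifted out of this copy
            chain.insert(0, bit_in)
        if len(set(outs)) > 1:                       # any copy disagrees -> count it
            self.disagreements += 1
        return 1 if sum(outs) >= 2 else 0            # 2-of-3 majority vote

    def inject_seu(self):
        """Flip one random bit in one copy, mimicking a single-event upset."""
        chain = random.choice(self.chains)
        chain[random.randrange(len(chain))] ^= 1

# Example: stream random data through, inject one upset, and watch the voter mask it.
regs = VotedShiftRegisters()
regs.inject_seu()
voted_out = [regs.clock(random.getrandbits(1)) for _ in range(200)]
print("disagreements counted:", regs.disagreements)  # upset is detected but outvoted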
2012 SEU Test Results
• GTX transceivers (55% are used in the FPGA)
  • Random PRBS data patterns @ 3.2 Gbps on each of eight links
  • These SEUs only caused transient bit errors in the data
  • 2012 GTX SEU cross section: σ = (10 ± 0.8) × 10⁻¹⁰ cm²
  • Similar to the 2011 result (~30% larger), consistent with the additional active links
  • HL-LHC: still expect ~3 SEU/year/link
• Block RAMs (74% are used)
  • Built-in Xilinx ECC feature used to protect data integrity
  • Software controlled the writes and reads for the BRAM memory tests
  • No errors were detected in the BRAM contents: mitigation at work
  • 2012 BRAM SEU cross section: σ(90% CL) < 8.2 × 10⁻¹⁰ cm² (see the note after this slide)
• CLBs (43% are used)
  • Most of the logic was a shift-register system with voting
  • Some unvoted logic for control and monitoring reduced the overall “mitigation” effect of the voting
  • 2012 CLB SEU cross section: σ = (6.0 ± 0.5) × 10⁻⁹ cm²
  • A factor of ~6 smaller than the 2011 result: mitigation at work
  • At HL-LHC this corresponds to ~1 CLB SEU per FPGA per day
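A note on the BRAM number above: because no upsets were observed, the quoted value is a one-sided Poisson upper limit rather than a measurement. A minimal sketch of the standard argument, with \Phi_\text{test} the proton fluence delivered during the BRAM test (not restated on this slide):

e^{-\mu} = 0.10 \;\Rightarrow\; \mu_{90\%} \approx 2.30 , \qquad \sigma_{90\%} < \frac{\mu_{90\%}}{\Phi_\text{test}}

i.e. the largest mean number of upsets consistent at 90% CL with observing none, divided by the integrated test fluence.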