ATLAS Upgrade for SLHC ESE Seminar 25 September 2008 Philippe Farthouat, CERN
Content: • LHC upgrade plans • Detector upgrade (except tracker) • Electronics upgrade • Tracker upgrade • Upgrade organisation • CERN participation
LHC Evolution – Phase 1 • LHC is complete apart from the full collimation system • Limited to 40% of nominal for machine protection until the collimators are installed • Collimators to be completed in the 2010/11 shutdown, allowing a rise to ~nominal luminosity of 10^34 cm^-2 s^-1 • Best current estimate is that one nominal year will deliver 60 fb^-1 • Phase-1: 2013-2016 • Linac-4 • Approved and work has started; higher brightness • Allows higher LHC current, up to "Ultimate", which is 2.3 times nominal • Ready to run in 2013 • New inner triplet focusing magnets • Use spare super-conductor from the LHC magnets • Larger aperture, allows β* of 0.25 m instead of 0.55 m • Install in the 2012/13 shutdown • In principle also gives a factor 2 on nominal • Expectation is that these two improvements will allow a ramp-up to 3 × nominal; conditions: ~70 minimum bias events per BC (see the sketch below); ~700 fb^-1 before phase 2
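The "70 minimum bias events per BC" figure can be cross-checked with a back-of-envelope pile-up calculation. A minimal sketch, assuming σ_inel ≈ 80 mb at 14 TeV and the nominal 25 ns filling scheme (~2808 colliding bunches); both are assumptions for illustration, not numbers from the talk:

```python
SIGMA_INEL_MB = 80.0      # inelastic pp cross-section [mb] (assumed)
N_BUNCHES     = 2808      # colliding bunches, nominal 25 ns scheme (assumed)
F_REV_HZ      = 11245.0   # LHC revolution frequency [Hz]

def pileup(lumi_cm2s: float) -> float:
    """Mean number of minimum-bias events per bunch crossing."""
    sigma_cm2 = SIGMA_INEL_MB * 1e-27      # 1 mb = 1e-27 cm^2
    return lumi_cm2s * sigma_cm2 / (N_BUNCHES * F_REV_HZ)

print(f"nominal (1e34) : mu = {pileup(1e34):.0f}")   # ~25
print(f"3 x nominal    : mu = {pileup(3e34):.0f}")   # ~76, cf. the ~70 quoted
```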
LHC Evolution – Phase 2 • Several ideas being explored to find the best way to achieve 10 × nominal in 2017 • Injector improvements – higher current, higher reliability, shorter fill time • New machine elements and ideas: • Magnets inside the experiments for "early separation" schemes • Crab cavities • Wire correctors • Luminosity levelling
LHC Evolution – Injectors • Details in the presentation by Roland Garoby at the LHCC meeting of 1 July 2008 • http://indico.cern.ch/conferenceDisplay.py?confId=36149
SLHC Luminosity • The 50 ns scheme always gives more events per crossing – high pile-up • 25 ns is also high: • a low machine fill means a short spill, but the fill time stays the same • needs a very high peak luminosity (15 × nominal) for the same integrated luminosity as 50 ns • Luminosity levelling • Investigate (de-)tuning β*, varying the crabbing, or the bunch length to keep the luminosity constant throughout the spill • Win-win: it turns out these schemes can allow a higher machine fill • Higher integrated luminosity • Very interesting for the experiments (see the toy comparison below)
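To illustrate why levelling is attractive, a toy comparison of a decaying versus a levelled spill; every number here (peak, lifetime, spill length, levelled value) is an invented illustration, not a machine parameter from the talk:

```python
import math

L0     = 15e34       # un-levelled peak luminosity [cm^-2 s^-1] (illustrative)
TAU    = 5 * 3600    # effective luminosity lifetime [s] (illustrative)
T_FILL = 10 * 3600   # length of the physics spill [s] (illustrative)

def integrated_decay(l0, tau, t):
    """Integral of l0*exp(-t/tau) from 0 to t, in cm^-2."""
    return l0 * tau * (1.0 - math.exp(-t / tau))

def integrated_level(l_lev, t):
    """Levelled running: constant l_lev for the whole spill."""
    return l_lev * t

FB = 1e-39  # 1 cm^-2 = 1e-39 fb^-1
print(f"decaying : {integrated_decay(L0, TAU, T_FILL)*FB:.2f} fb^-1 "
      f"with peak pile-up set by {L0:.1e}")
print(f"levelled : {integrated_level(5e34, T_FILL)*FB:.2f} fb^-1 "
      f"with pile-up capped by {5e34:.1e}")
# Comparable integrated luminosity at one third of the peak pile-up, and a
# levelled machine can in addition be filled with more protons.
```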
Anticipated Peak and Integrated Luminosity • LHC, ATLAS and CMS agreed to use this as the basis for all planning • Approved at the LHCC meeting of 1 July 2008 • Sets the conditions and timescale • Phase 1 starts with a 6-8 month shutdown at the end of 2012 • Peak luminosity 3 × 10^34 cm^-2 s^-1 at the end of phase 1 • Phase 2 will start with an 18 month shutdown at the end of 2016 • Peak 10 × 10^34 cm^-2 s^-1 in phase 2 • Minimum of 3000 fb^-1 integrated luminosity over the detector lifetime in phase 2
What are the conditions at SLHC? (figure: simulated CMS event display) • 300-400 pile-up events at the start of the spill (unless luminosity levelling is used) • Want to survive at least 3000 fb^-1 of data taking • B-layer at 37 mm: • ~30 tracks per cm² per bunch crossing • >10^16 1 MeV neutron-equivalent non-ionising fluence • A few tens of MGy
Phase 1 • Limited time for installation – 6 to 8 months in the 2012/13 shutdown • Small increase in peak rate above previous estimates (2 → 3 × 10^34) • Total integrated luminosity similar to previous expectations, ~700 fb^-1 • Limited changes needed • Pixel B-layer to be replaced • In fact a full replacement would take more than a year • A new B-layer will instead be inserted inside the current detector, along with a new smaller-diameter beam pipe • TDAQ • TDAQ will be continuously upgraded to cope with the rates and take advantage of new processing power • The Level-1 trigger might need some improvement, such as the introduction of topological triggers • The Central Trigger Processor would be affected (CERN involvement) • Fast track finding might be introduced to speed up Level-2
Phase 2 – SLHC (figure: detector layout with callouts) • Complete inner detector • Parts of the muon system • Forward calorimeter • Interface to the machine: • Beam pipe • Shielding • Possible machine magnets inside
Beam Pipe & Shielding • Currently the central part of the ATLAS beam pipe is Be, the rest is stainless steel (SS) • SS gives large backgrounds, especially to the muon system • SS gets activated → access problems for maintenance • Change to Al in ~2009 • Change to all-Be for SLHC • Be is expensive compared to SS, but cheap compared to muon chambers! • It gives a big reduction in background in critical areas of the muon system • A factor 2 or better • Shielding • Already highly optimised, only small improvements possible
Muon Spectrometer • Hit rate might be too high in some places • Large uncertainty on the background rate • Need to wait for first beam before knowing what the real rate is • Current detector designed with some safety factor on the simulated hit rate • If at the low end, and with a complete Be beam pipe in place, the precision chambers can remain • Replacement or adaptation of the current readout electronics might be needed (see later) • Ongoing R&D • Micromegas for both tracking and trigger (CERN involvement) • Thin MDT • TGC for both tracking and trigger
Calorimeters (figures: HEC electronics, FCAL) • EM calorimeter should work even with high pile-up • Adaptation of the digital filters in the back-end electronics • Front-end electronics might require some changes • Detecting elements of the Tile calorimeter do not need replacement • However the electronics does (see later) • HEC front-end electronics sit in the cryostat; if a replacement is needed (because of the radiation level) it will imply major work • Forward calorimeters need some adaptation because the liquid argon might boil…
Main topics • General concern for the electronics in the cavern • Level-1 trigger • Muon readout electronics • Calorimeters readout electronics • New readout electronics for the tracker
General concerns • The electronics installed in the cavern will be affected by the higher level of radiation • ATLAS rules defined safety factors for the radiation tolerance of the electronics • We don't know yet whether we are on the safe side • Major upgrades could be required if not: • Power supplies • FE electronics of the calorimeters and muon chambers • Control devices (ELMBs) • Wait for beam for the final assessment • A substantial fraction of the front-end electronics will be obsolete at the time of SLHC • Could need replacement in case of a high level of failures • R&D starting now, to be ready in case
Level-1 Trigger • More complex algorithms needed • For instance, the Level-1 calorimeter trigger currently only performs object counting (see the sketch below) • So far no need for tracking at Level-1, although this might change • Can both the latency and the L1A rate be maintained at their current values (2.5 µs and 75-100 kHz)? • Latency very likely to increase • Very much linked to what will happen to the readout systems of the calorimeters and muon chambers • Intermediate upgrade for Phase-1
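As an illustration of the step from object counting to a topological Level-1 decision, a toy sketch; the event content, thresholds and ΔR cut are all invented for the example:

```python
import math

def delta_r(a, b):
    """Angular distance between two trigger objects given as (eta, phi, Et)."""
    deta = a[0] - b[0]
    dphi = abs(a[1] - b[1])
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(deta, dphi)

def counting_trigger(objs, et_cut=20.0, n_min=2):
    """Current style: accept if at least n_min objects pass the Et cut."""
    return sum(1 for o in objs if o[2] > et_cut) >= n_min

def topo_trigger(objs, et_cut=20.0, dr_max=2.0):
    """Topological style: additionally require two nearby objects."""
    hot = [o for o in objs if o[2] > et_cut]
    return any(delta_r(a, b) < dr_max
               for i, a in enumerate(hot) for b in hot[i + 1:])

jets = [(0.5, 0.1, 45.0), (2.8, 3.0, 30.0)]   # (eta, phi, Et [GeV])
print(counting_trigger(jets))  # True : two objects above 20 GeV
print(topo_trigger(jets))      # False: back-to-back, DeltaR ~ 3.7
```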
Muon chambers readout • For high rates, either: • Upgrade the TDC, or • Selective readout based on Level-1 (CSM upgrade) • Possible new chambers (Micromegas, TGC, …) will require specific electronics
Calorimeters readout • Additional radiation tests of the front-end electronics needed once the LHC background is known • Both LAr and Tile calorimeters are looking at a new front-end system reading out everything at the BC rate • Large dynamic range (14-16 bit) ADC @ 40 MHz • High data throughput • LAr: 1600 FE boards, each of them delivering ~100 Gbit/s (see the estimate below) • Back-end electronics for readout and Level-1 trigger • High bandwidth backplane (ATCA, …) • A bit brute force, but allows very high flexibility for the trigger
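The ~100 Gbit/s per LAr front-end board can be sanity-checked as follows; the 128 channels per board and the 20% protocol overhead are assumptions for illustration only:

```python
CHANNELS_PER_BOARD = 128    # assumed channel count per FE board
BITS_PER_SAMPLE    = 16     # upper end of the 14-16 bit dynamic range
BC_RATE_HZ         = 40e6   # every bunch crossing is read out

payload = CHANNELS_PER_BOARD * BITS_PER_SAMPLE * BC_RATE_HZ
print(f"raw payload        : {payload / 1e9:.1f} Gbit/s")        # ~82
print(f"with ~20% overhead : {payload * 1.2 / 1e9:.1f} Gbit/s")  # ~98
```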
Content • Detector layout • Readout • Detector control • Power • Services • Schedule • On-going developments • On-going work in the collaboration, still far from being finalised • Mainly a presentation of the strips detector • A lot of what is presented here is very likely to be inaccurate or even wrong
Strawman Layout • All-silicon detector to replace the current pixel, SCT and TRT: • pixels • short strips (2.5 cm) • long strips (10 cm)
Environmental parameters (figure: radiation levels for 3000 fb^-1 – curves for NIEL, TID and the hadron rate for SEE, spanning ~10^14 to 10^16) • Running up to 3000 fb^-1 • Design for 6000 fb^-1 • Should take about 6 years (?) • Detector temperature ~ -30 °C • Magnetic field ~2 T
Developments for Pixels • Sensors • 3D Si (Parker, Da Via et al.) • Thin silicon + 3D interconnects (Nisius et al.) • Gas over thin pixel (GOSSIP) (van der Graaf et al.) • Diamond pixels (Kagan et al.) • Front-end electronics IC • Aim for chips 4 times larger than the current one and a pixel size 2 times smaller (50 × 250 µm) • ~22000 channels/chip • Development in 130 nm CMOS technology for the B-layer replacement
Stave concept for strips (figure: stave cross-section, ~10 cm wide and 1 to 2 m long, with labels – silicon sensors, carbon honeycomb or foam, carbon-fibre facing, bus cable, coolant tube structure, hybrids, readout ICs) • Modules on fully integrated staves • Double-sided • 1 to 2 m long • LS modules 10 × 10 cm²; 1280 strips (10 cm long, 80 µm pitch) • SS modules 10 × 10 cm²; 5120 strips (2.5 cm long, 80 µm pitch)
Strips Detector in numbers (figure: long strip cylinders, 4 m length; short strip cylinders, 2 m length) • Current SCT detector: • 4088 modules • 49k 128-channel FEICs • 6.3M channels
Working Assumptions • Binary readout as in the current detector • 1 hit = 1 bit • L1A rate unchanged (?) and L1 latency increased • Read-out architecture as identical as possible for the strips and the pixels • Avoid extra design diversity • Share as much as possible of the design effort and cost • From the front-end electronics up to the off-detector electronics • Material budget is a key element for the upgraded tracker • Solutions minimising the amount of material always preferred • Extremely harsh radiation environment for the front-end electronics • High level of single event upsets expected • Read-out architecture as simple as possible; complex tasks such as partial event building, data integrity checks, etc. to be avoided • Amount of services connected to the tracker to be kept as low as possible • To maintain an overall low material budget • Available volume for services routing is severely limited
Readout Organisation (figure: generic readout model) • In the current detector the readout unit is the module • Large number of low-speed readout links • Large number of power supply lines • Not affordable for the upgrade • Hierarchical readout scheme: • FEIC • Module controller (MC) • Stave controller (SC, GBT) • Low number of high-speed links
Readout Organisation (Barrel Strips) (figure: short strip single-sided half stave – modules #1 to #10, FEICs on hybrids, module controllers on a service bus, stave controller with opto fibres for TTC/data and a DCS link, hybrid PS cable, DCS environment inputs and interlock, cooling in/out) • Half single-sided stave as the readout unit • Readout hybrids as sub-elements • 2 hybrids with 20 FEICs per module for SS • 1 hybrid with 10 FEICs per module for LS
Quantity of Data • Simulation for the worst-case scenario: 10^35 cm^-2 s^-1 luminosity, 50 ns BC period (400 overlapping events per BC), short strips (A. Weidberg et al.) • Number of hits per FEIC: 0 hits 41%, 1 cluster 34%, >1 cluster 21% • Event size for a short-strip module (40 128-channel FEICs), with the current ATLAS SCT detector coding scheme: mean size ~1600 bits (see the toy estimate below) (A. Weidberg et al.)
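The ~1600-bit mean can be reproduced with a toy model; the bit costs per case are guesses chosen to be SCT-like, and the quoted fractions leave ~4% of cases unaccounted for (ignored here):

```python
FRACTIONS = {"empty": 0.41, "one_cluster": 0.34, "multi_cluster": 0.21}
BITS      = {"empty": 8, "one_cluster": 40, "multi_cluster": 110}  # assumed

mean_per_feic = sum(FRACTIONS[k] * BITS[k] for k in FRACTIONS)
print(f"mean per FEIC            : {mean_per_feic:.0f} bits")
print(f"mean per SS module (x40) : {40 * mean_per_feic:.0f} bits")  # ~1600
```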
Data Rates (figure: same half-stave diagram, annotated with link rates – 800 and 1600 bits/event at a 100 kHz L1 rate giving 80 and 160 Mbit/s module links; 1.6 Gbit/s (20 × 80 Mbit/s) and 3.2 Gbit/s at the stave level) • Long strips and short strips are well balanced • 400 versus 380 FEICs • Numbers without much safety margin: • Detector layout very likely to change • L1A rate could increase • Luminosity could increase • Data format might change • Pixels will require more bandwidth • Better to design for ×2 more (see the roll-up below)
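A minimal roll-up of the bandwidth through the hierarchy, using the slide's numbers (1600 and 800 bits/event, 100 kHz L1A, 10 modules per half single-sided stave); the mapping of the figure's annotations onto hierarchy levels is partly my reading of the diagram:

```python
L1A_RATE_HZ = 100e3

def link_rate(bits_per_event, l1a_rate=L1A_RATE_HZ):
    """Average readout bandwidth of one unit [bit/s]."""
    return bits_per_event * l1a_rate

ss_module  = link_rate(1600)     # 160 Mbit/s per short-strip module
ls_module  = link_rate(800)      #  80 Mbit/s per long-strip module
half_stave = 10 * ss_module      # 1.6 Gbit/s, 10 SS modules per half stave
full_stave = 2 * half_stave      # 3.2 Gbit/s, double sided

for name, r in [("SS module", ss_module), ("LS module", ls_module),
                ("SS half stave", half_stave), ("SS full stave", full_stave)]:
    print(f"{name:13s}: {r / 1e9:.2f} Gbit/s")
```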
Data Links • FEICs to MC • Electrical, 160 Mbit/s • MC to SC (GBT) • Electrical, 160 Mbit/s • Up to 20 links per half single-sided SS stave • DC-balanced code mandatory if serial powering is used, desirable in all cases • On-going work to assess what is achievable at the different places • Optical links at >3.2 Gbit/s • Rely on the Versatile Link project
TTC Links • The TTC links are used to transmit to the front-end: • A clock synchronised with the beam (either the LHC clock or a multiple of it) • The L1A • Synchronous commands such as the bunch counter reset (BCR) or the event counter reset (ECR) • Control data to be stored in the FEICs, MCs and SCs (e.g. thresholds, masks, …) • Unidirectional links to minimise the number of lines • To read a register, the command is transmitted on the TTC link and the data come back on the read-out data link • TTC link bandwidth dictated by: • The clock frequency to be transmitted • Might be better to transmit a clock at a higher frequency than the BC, to be used directly by the readout logic (e.g. 160 MHz if reading out at 160 Mbit/s; avoids some PLLs) • The necessity to transmit the L1A and commands (e.g. bunch counter reset) simultaneously • The need for forward error correction to fight SEUs (see the sketch below) • The need for DC-balanced codes and self-clock-recovery protocols • Bandwidth greater than or equal to 80 Mbit/s
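A sketch of what the FEC requirement means in practice, using a Hamming(7,4) code so that any single upset per code word is corrected. The idea of FEC-protected TTC words is from the slide; the nibble-based framing is invented for illustration:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits (int 0..15) into a 7-bit code word."""
    d = [(nibble >> i) & 1 for i in range(4)]          # d0..d3
    p0 = d[0] ^ d[1] ^ d[3]
    p1 = d[0] ^ d[2] ^ d[3]
    p2 = d[1] ^ d[2] ^ d[3]
    # bit positions 1..7: p0 p1 d0 p2 d1 d2 d3
    bits = [p0, p1, d[0], p2, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(code):
    """Correct a single flipped bit and return the 4 data bits."""
    bits = [(code >> i) & 1 for i in range(7)]
    s0 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s1 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s2 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s0 | (s1 << 1) | (s2 << 2)   # = position of the flipped bit
    if syndrome:
        bits[syndrome - 1] ^= 1
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

word = 0b1011                        # e.g. a command nibble
code = hamming74_encode(word)
hit  = code ^ (1 << 3)               # simulate one SEU on the link
assert hamming74_decode(hit) == word
print(f"sent {word:04b}, recovered {hamming74_decode(hit):04b} after 1 upset")
```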
Electrical Links • Protocols not defined: • Clock and data separate versus a single encoded link • Several low-speed links versus a single high-speed one • Multi-drop or point-to-point • A few pictures of some promising tests: • 320 Mbit/s on a 60-cm Kapton tape with 4 loads (V. Fadeyev) • 80 Mbit/s on a hybrid with 10 and 20 FEICs (A. Greenall)
Need for Redundancy? (figure: possible redundancy schemes to cope with the loss of a FEIC or of a module controller) • Redundancy has a lot of impact on the readout architecture • Possible schemes for redundancy on the hybrids shown • Full redundancy required for the optical links (?) • Difficult to implement redundancy without increasing the number of ASICs or the amount of services • Some work necessary to assess the needs (see the toy estimate below): • Impact of losing a FEIC, a readout hybrid, a half single-sided stave • Define the maximum losses allowed within the lifetime • Define the minimum reliability level needed to stay below that over the lifetime of the experiment
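A first-order sketch of the kind of study asked for here, mapping a per-FEIC failure probability to lost channels with and without a bypass scheme; the failure probability is an invented figure, and multiple failures per hybrid are neglected:

```python
N = 20        # FEICs per readout hybrid
P = 0.01      # per-FEIC lifetime failure probability (invented figure)

exp_failed  = N * P        # expected dead FEICs per hybrid (~0.2)
frac_daisy  = P * (N / 2)  # plain daisy chain: a failure cuts off ~N/2 chips
frac_bypass = P            # bypass redundancy: only the dead chip is lost

print(f"expected dead FEICs per hybrid           : {exp_failed:.2f}")
print(f"lost channel fraction, plain daisy chain : {frac_daisy:.1%}")   # 10%
print(f"lost channel fraction, with bypass       : {frac_bypass:.1%}")  # 1%
```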
Data Format • The data format used in the current detector is highly optimised in size • Necessary to look at every single bit to know what it is and what follows, and requires synchronisation between FEICs • "On the fly" event building and decoding • Might be a problem when a large number of SEUs is expected • Could be better to consider the system as a network and to push packets of data from the FEIC up to the off-detector electronics (see the sketch below) • Pros and cons: • Enough to protect only the headers against errors • Data unprotected • No synchronisation expected in the data transmission in the FE • System cannot hang • More complex task in the off-detector electronics • A lot of resources available in big FPGAs • Extra data volume • To be simulated
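A minimal sketch of the "push packets, protect only the header" idea: a fixed header carrying its own CRC lets the off-detector FPGA drop a corrupted packet and resynchronise on the next one, while the hit payload travels unprotected. The field layout is invented for illustration:

```python
import struct
import zlib

HEADER = struct.Struct(">BHHH")   # feic_id, l1_id, bc_id, payload length

def build_packet(feic_id, l1_id, bc_id, payload: bytes) -> bytes:
    header = HEADER.pack(feic_id, l1_id, bc_id, len(payload))
    return header + struct.pack(">I", zlib.crc32(header)) + payload

def parse_packet(buf: bytes):
    header = buf[:HEADER.size]
    (hcrc,) = struct.unpack_from(">I", buf, HEADER.size)
    if hcrc != zlib.crc32(header):
        raise ValueError("header corrupted: drop packet, resync on the next")
    feic_id, l1_id, bc_id, n = HEADER.unpack(header)
    return feic_id, l1_id, bc_id, buf[HEADER.size + 4:HEADER.size + 4 + n]

pkt = build_packet(feic_id=7, l1_id=1234, bc_id=2870, payload=b"\x01\x9f")
print(parse_packet(pkt))   # (7, 1234, 2870, b'\x01\x9f')
```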
DCS: integrated with the readout or separate? • In the current detector, many sensors are directly connected • Not applicable for the upgrade • A lot of discussion concerning the need for a fully separated DCS system: • Separately powered and separate communications • Separate ASICs • Additional services… • Still possible to run the detector safely even with the DCS integrated in the readout
Mode of Operation (figure: same half-stave diagram, with a humidity sensor and temperature sensors on the cooling pipes and the stave controller) • A few sensors directly connected • Also used for the interlock • Power the different components sequentially, only when it is safe to do so • Easy to do with DC-DC converters, but may be less so with serial powering • DCS functionality in the module controllers and the stave controllers • Separation of the DCS data from the readout data in the off-detector electronics • Does not require the DAQ to be working
Total Power for the Strips • Assumptions: • Pessimistic 1.5 mW [optimistic 1 mW] per channel for the strip FEIC, and 1.3 V Vdd • 150 mA [100 mA] per 128-channel FEIC • Total current (for the barrel and both end-caps): 48.5 kA [33 kA] • 80% efficiency of the front-end power devices would lead to 78.5 kW [52 kW] dissipated in the tracker volume • 70% efficiency would lead to 90 kW [60 kW] • The current SCT and TRT detectors are fed with about 12 kA • Assuming the amount of services cannot be increased, the powering scheme must limit the current to be fed to that level • That is about 1/6th (1/5th) of the current needed by the front-end electronics; hence either a factor 5 to 6 (at least) of DC-DC conversion or a serial powering scheme of at least 5 to 6 modules has to be used (see the arithmetic below)
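The slide's arithmetic, reproduced for the pessimistic case; the only inputs are the numbers quoted above:

```python
VDD        = 1.3        # front-end supply voltage [V]
I_FE       = 48.5e3     # total FE current, pessimistic 1.5 mW/channel case [A]
I_SERVICES = 12e3       # what the existing SCT+TRT cables deliver today [A]

for eff in (0.80, 0.70):
    p_tracker = I_FE * VDD / eff            # dissipated inside the tracker
    n_min     = I_FE / (eff * I_SERVICES)   # min. DC-DC ratio / chips in series
    print(f"eff={eff:.0%}: {p_tracker / 1e3:.1f} kW in the tracker volume, "
          f"factor >= {n_min:.1f} needed to stay within 12 kA")
# eff=80%: 78.8 kW, factor >= 5.1;  eff=70%: 90.1 kW, factor >= 5.8
```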
DC-DC or Serial Power • Developments are on-going on both serial powering and DC-DC conversion • Cf. the last ESE seminar (Federico) and the presentations during TWEPP08 • Reducing the current to be fed by a minimum factor of 5 to 10 is reachable with both solutions • DC-DC converters offer some interesting flexibility: • Can separate the different supplies easily • Separate analog and digital supplies → saving in overall power • Stave controller, module controller and FEICs capable of controlling the operation • Radiation hardness not yet solved • The serial powering scheme has some system issues which are being tackled • Options to be kept open for a while • Real estate is an important issue for both solutions
Space available for the power devices • Readout hybrid with 20 FEICs (ABCn 0.25 devices) • Not so much space left for the power devices and the module controller…
Rough Estimate • Services for the barrel strips at the entrance of the tracker volume • 1 mW/ch • DC-DC or serial power introduces a factor 10 saving on the current • 2 V drop max • Packing factor of 2 • LV cables are still the dominant part by far (see the estimate below)
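An order-of-magnitude version of this estimate for the copper cross-section of the LV cables; the one-way run length is a guess for illustration, the other inputs are the bullets above:

```python
RHO_CU  = 1.7e-8        # copper resistivity [ohm m]
LENGTH  = 10.0          # assumed one-way LV cable run [m] -- a guess
I_TOTAL = 48.5e3 / 10   # current after the factor-10 saving [A]
V_DROP  = 2.0           # maximum allowed drop on the cable [V]

# supply + return conductors double the resistive path
area = RHO_CU * (2 * LENGTH) * I_TOTAL / V_DROP
print(f"copper cross-section ~ {area * 1e4:.0f} cm^2 "
      f"(~{2 * area * 1e4:.0f} cm^2 of cable with a packing factor of 2)")
```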
Total Fantasy? • LoI, TP and TDR in 2009, 2010 and 2011 • LHC stop: October 2016 • SLHC start: Spring 2018 • Tracker installation: January 2017 (?) • Stave assembly start: January 2013 (?) • Very little time left for fully specifying the components and designing them • Choice of the technology to be used (130 nm, 90 nm or below) as late as possible • Most of the work done in 130 nm so far • Some early work with 90 nm (or below) necessary to be able to make the decision in due time • Analogue performance • Radiation hardness
Total Fantasy? (cont) • Architecture definition complete with options Nov-08 • Decision on options (powering, electrical links, opto) Dec-09 • Component specifications complete Feb-10 • Prototype sensitive blocks (design, MPW fab, test) Sep-08 to Dec-09 • Prototype complete component design, fab and test Feb-10 to Aug-11 • Stave assembly & test (system test of electronics) Aug-11 to Feb-13 • Pre-production component design, fab and test Feb-12 to Feb-13 • Pre-production stave assembly & system test Feb-13 to Jul-13 • Production Readiness Review Aug-13 • Component production: • First production wafer batch (fab and test) Aug-13 to Feb-14 • Deliver first production batch to module assembly sites Feb-14 • Start component series fabrication Sep-13 (assumes no design change from pre-production) • Start of component series delivery to assembly sites Apr-14 • No contractual value!
ASICs developments and specifications • Working document on the architecture available for about a year • Reviewed and presented to the collaboration • Two working groups in place to define more precisely the specifications of the different components: one for the pixels and one for the strips • Inputs to the "common projects" teams (e.g. GBT) • ~350k FEICs but only ~20k MCs and ~5k SCs (GBT) • ABCn 0.25 chip as a test vehicle for sensor studies • Also contains some features for testing different power schemes and readout speeds • Preliminary study of the front-end part (preamplifier-shaper-discriminator) in 0.13 µm • Very good power performance: <200 µW per channel • See J. Kaplon's presentation during the TWEPP workshop • Evaluation of SiGe
Fixed Barrel Length • SS longer → more modules → more data (+20%) • LS shorter → fewer modules and less data
The readout architecture of the upgraded ATLAS tracker has to be different from the current one • Detector organised in staves; hierarchical readout following this segmentation • Fewer but higher-speed links • Some elements of the readout will not be produced in very high quantities • Points towards common solutions with CMS and others • Power distribution requires special efforts to maintain reasonable services • Saving factor of 5-10 on the current • Schedule looks tight • Not much time for the electronics development • Decision on the technology to be used for the FE electronics in 2012 at the latest
Two major coordination bodies • Develop a realistic and coherent upgrade plan • Steer R&D activities • Cover engineering aspects from the beginning • Upgrade Steering Group (USG) • Representatives from the systems, software, physics and technical coordination • Project Office • Part of the technical coordination • The Project Office should technically guide the upgrade activities: • Conceptual design and R&D • Prototyping • Pre-series and construction • Installation and commissioning • Upgrade addressed during each ATLAS overview week and during dedicated workshops • Several devoted to the tracker • Ref: https://edms.cern.ch/file/690177/1/Upgrade_Org_PO.doc • Indico pages: http://indico.cern.ch/categoryDisplay.py?categId=350
R&D Activities • Proposal made to the USG • Established a lightweight review procedure for approval: • Circulation to the collaboration board • More groups joining? • Second discussion in the USG • Sufficient resources? • Decision about recommendation by the USG • Review procedure during the lifetime of the project • Review office • A number of proposals issued so far • https://edms.cern.ch/cedar/plsql/navigation.tree?cookie=7849675&p_top_id=1310970533&p_top_type=P • ATLAS welcomes joint R&D activities with other experiments (CMS): • Versatile Link and GBT • 130 nm or lower processes • Power distribution
Technical coordination • PO engineering • Tracker engineering • Electronics coordination • Interface to the machine (beam pipe and shielding) • Level-1 • Central Trigger Processor upgrade for phase-1 and SLHC • Si strips • ABCnext design • Micro-electronics design and project management • Tests and qualification of devices, modules, etc. • Could make a major contribution to the end-cap • Would also make sense to have a strong participation in the FE system • Micromegas development and evaluation • Will need electronics • Some interest from individuals here and there • e.g. in the back-end electronics of the calorimeters • There is today no clear picture of where the CERN ATLAS team will contribute • No MoU or similar yet defining the upgrade activity • Still little specific funding available so far • Rely on common projects: • Versatile Link • GBT • Support for µelectronics • TTC upgrade • Power