Phase 2 pixel electronics
Jorgen Christiansen, CERN
This meeting
• Serial powering: Fernando, Stella, Giacomo
• Material estimate and improvements: Stella
• Thermal estimates (shunt LDO): Yadira
• Pixel cable services: Charles
• CHIPIX65 demonstrator: Lino
• RD53A: Flavio
• General electronics stuff: Jorgen
  • Prepare for RD53A
  • Chip size, module size
  • Readout
  • Organization issues
• (Awaiting feedback from phase 2 tracker review)
• https://indico.cern.ch/event/535547/
Prepare for RD53A
• We need to prepare what to do when RD53A materializes (mid-2017):
  • Single naked chip testing
  • Chip testing at wafer level
  • Sensor and bump-bonding
  • Test of assemblies
  • Module design and test
  • Serial powering testing
  • Readout testing
  • Radiation testing
  • Test beams
• Test/DAQ system with hardware, firmware, software and related support
  • Will be used across several CMS groups
  • Will be used across CMS and ATLAS groups
  • >10 systems will be actively used in 2017
  • Better to have one well-working test system than 5 different partial test systems
  • Appropriate for the next generation chip (CMS specific or common with ATLAS)
  • Needs to be organized now
• Initial organization of this in the RD53 context, but groups outside RD53 can/should contribute
• Who is interested in contributing to such an effort, and on what? (e.g. reuse of software from current CMS/ATLAS test systems)
Pixel chip and module size
• Pixel chip size critical for optimized layout of inner layers
  • Generic RD53 size: 20 x 20 mm
  • RD53A (to allow reticle sharing with MPA): 20 x 12 mm
  • CMS pixel chip size: 20 x 15 mm?
• Pixel module size:
  • Small modules: 1x2, 1x4, 2x2 favoured for layout
  • Modularity of 4 pixel chips per module favoured for serial powering failure modes (1 out of 4 chips failing)
  • Two 1x2 pixel modules can, if needed, be put in parallel (looking like a 1x4 power module)
  • To be studied in more detail
ATLAS/CMS compatibility
• ATLAS is in the process of revising their trigger strategy
  • RD53 assumed a generic trigger rate of ~1 MHz and ~10 us latency
  • Full pixel readout at L0 rate
• Multiple new options being considered
  • 1 or 2 trigger levels for the pixel detector
  • Pixel readout rate: 1 MHz or 4 MHz
    • 4 MHz full readout would give a major problem with readout bandwidth for inner layers (>10 Gbits/s per chip!)
  • Latency: possibly up to 25 us
    • No room for the latency buffer in the pixels (see sketch below)
  • Partial readout (region of interest) at L0 and full readout at L1
    • Requires additional functionality and additional buffers
• ATLAS will converge within the next few months for their TDRs
  • Pixel chip size?
  • Common final chip?
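To make the latency-buffer concern concrete, here is a back-of-envelope sketch (purely illustrative): it multiplies the ~3 GHz/cm^2 inner-layer hit rate quoted later in this talk by the chip area and the candidate latencies, assuming the 24-bit raw hit word from the data-rate slide.

```python
# Rough latency-buffer sizing sketch (illustrative numbers, not a spec).
HIT_RATE_PER_CM2 = 3e9      # hits/s/cm^2 at PU=200, r ~ 3 cm (from the data-rate slide)
CHIP_AREA_CM2    = 4.0      # 2 x 2 cm generic RD53 chip
BITS_PER_HIT     = 24       # raw hit word assumed on the data-rate slide

for latency_us in (10, 25):
    hits_stored = HIT_RATE_PER_CM2 * CHIP_AREA_CM2 * latency_us * 1e-6
    print(f"{latency_us:>2} us latency: ~{hits_stored/1e3:.0f}k hits "
          f"(~{hits_stored*BITS_PER_HIT/1e6:.1f} Mbit) buffered per chip")
# ~120k hits at 10 us vs ~300k hits (~7 Mbit) at 25 us: hence the concern that
# a 25 us latency buffer does not fit inside the pixel array.
```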
Readout: E-links, LPGBT, opto
• Modest rate (1.28 Gbits/s) E-links to LPGBT
  • LPGBT (10 Gbits/s): 7 input links
    • Would have preferred 8 input links and an option of 2.56 Gbits/s
    • Speed will be affected by radiation damage
  • Low mass cables critical
  • High rate regions: max 4 E-links per chip
  • Low rate regions: shared E-link between 2 or 4 pixel chips
  • One control link per module @ 160 Mbits/s
  • ~5500 readout + ~2000 control E-links, <1 m
    • Alu-Kapton flex or twisted pair
  • Inner barrel: 4% of pixel surface, 20% of links
• 1k 10 Gbits/s optical links: ~1 TByte/s (see sketch below)
  • LPGBT / VCSEL located on service cylinder
  • Standardized optical link/chip set from CERN: LPGBT, laser driver, pre-amp, opto, DC/DC
  • 100 Mrad, 10^15 neu/cm^2
  • In forward acceptance, so mass also critical
• Readout rates under verification
  • Monte Carlo hit data at PU=200
  • Data formatting, clustering, data compression
• Possible alternatives:
  • High rate (~5 Gbits/s) electrical to remote laser (ATLAS)
  • Opto conversion on pixel module
    • Outer modules: lower radiation, more space
  • Silicon photonics
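A rough sanity check of the link counting above; the E-link and LPGBT numbers are the round figures from this slide, control links are ignored, and the result is only meant to show how the ~1k optical links / ~1 TByte/s figures hang together.

```python
# Back-of-envelope link counting (round numbers from the slide, illustrative only).
ELINK_RATE     = 1.28e9    # bits/s per readout E-link
READOUT_ELINKS = 5500      # approximate readout E-link count
LPGBT_INPUTS   = 7         # input E-links per LPGBT (8 would have been preferred)
OPTICAL_RATE   = 10e9      # bits/s per LPGBT optical link

lpgbts = -(-READOUT_ELINKS // LPGBT_INPUTS)      # ceiling division
aggregate = lpgbts * OPTICAL_RATE
print(f"~{lpgbts} LPGBTs / optical links needed for readout")
print(f"aggregate optical bandwidth ~{aggregate/8/1e12:.1f} TByte/s")
# Control E-links (~2000 at 160 Mb/s) and spare capacity push the total
# towards the ~1k optical links quoted on the slide.
```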
Service cylinder
[Figure: phase 1 service cylinder with opto conversion modules (LPGBT + VCSEL) at positions A and B]
• Forward: opto conversion modules can be placed close to/on the forward disks
• Barrel: electrical links to opto conversion to be optimized for low mass
  • (20% of pixel cables come from the inner barrel)
Cable simulations and tests
• Alu-kapton flex and twisted pair: 0.1 – 1 (2) m
  • Minimize mass for acceptable cable losses
• S-parameter models
  • Verification of link: eye diagrams, etc.
  • Cable driver optimization: pre-emphasis, etc.
• Extraction of cable models
  • Q3D simulation models
  • TDR measurements
  • VNA measurements
  • FPGA/pre-emphasis measurements
• Cross coupling between cables: to come
  • Shielded/unshielded twisted pairs
  • Stacking of multiple flex cables
• Prototyping: to come
  • Connectors or soldered?
• Collaboration with ATLAS
• Currently no activity on this as Luismi is focused on DRAD testing (+ next 3 months)
[Figures: Alu flex measurement, Alu flex simulation]
Data rate/formatting
• Hit rate @ 3 GHz/cm^2 (r=3 cm, PU=200) for a 2x2 cm chip
  • Hits per event: 3 GHz/cm^2 * 4 cm^2 * 25 ns = 300
  • Could be more (4 GHz/cm^2) if not using the optimized pixel aspect ratio (25x100)
• Assuming on-chip event assembly
  • A bit more buffering required (to be determined how much)
• Basic raw data estimate (reproduced in the sketch below):
  • Hits: 9 + 9 address, 5-bit TOT, 1 flag: 24 bits (75% address, 25% TOT)
  • Event headers/trailers: 32 bits
  • Rate: 750 kHz * (2x32 + 300*24) = 5.5 Gbits/s
• 2x2 pixel regions:
  • Average number of hits per 2x2 region from Monte-Carlo data
    • Mid barrel: 1.5
    • End barrel: 1.8 (25x100)
  • 2x2 data: 8+8 address, 4x4 TOT: 32 bits (50% address, 50% TOT, but ~1/2 have no TOT)
  • Rate: 750 kHz * (2x32 + 300*32/1.8) = 4.0 Gbits/s (for a 1.5x2 cm CMS chip: 3.0 Gbits/s)
• Statistical variations and rate margins
  • 75-80% link utilization (otherwise large derandomizers required)
  • Define short/middle term trigger rate constraints (a proposal for this has just been made to CMS: backup slide)
• 3-4 1.28 Gbits/s E-links per chip for the inner layer
• Monte-Carlo simulations to confirm data rates and required de-randomization buffers
  • MC available from CMS; ATLAS?
[Plots: middle barrel, end barrel]
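The rate arithmetic above can be reproduced with a short script; this is only a restatement of the slide's own numbers, not an independent estimate.

```python
# Reproduction of the raw data-rate estimate on this slide (illustrative sketch).
TRIGGER_RATE = 750e3          # Hz
HEADER_BITS  = 2 * 32         # event header + trailer
HIT_RATE     = 3e9            # hits/s/cm^2
CHIP_AREA    = 4.0            # cm^2 (2 x 2 cm chip)
BX           = 25e-9          # s

hits_per_event = HIT_RATE * CHIP_AREA * BX                        # ~300
raw   = TRIGGER_RATE * (HEADER_BITS + hits_per_event * 24)        # 24 bits per hit
pr2x2 = TRIGGER_RATE * (HEADER_BITS + hits_per_event * 32 / 1.8)  # 32 bits per 2x2 PR,
                                                                   # 1.8 hits/PR (end barrel)
print(f"hits/event       : {hits_per_event:.0f}")
print(f"single-hit format: {raw/1e9:.1f} Gb/s")
print(f"2x2 PR format    : {pr2x2/1e9:.1f} Gb/s")
# -> ~300 hits/event, ~5.4 Gb/s raw (quoted as 5.5 above after rounding),
#    ~4.0 Gb/s with 2x2 pixel regions.
```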
Clustering / compression
• Clustering: put together all hit data belonging to the same particle
  • Barrel: "square" clusters, 1-4 2x2 pixel regions (PR) hit per cluster/particle
  • End of barrel: strongly elongated clusters from incidence angle and sensor thickness, 1-8 (16) 2x2 pixel regions hit per cluster/particle
• Data "reduction" from efficient data grouping and formatting (see sketch below)
  • Address: 1 global cluster address (16 or 18 bits) + local hit addresses or hit map (predefined max cluster size: 4x4 = 16-bit map mid barrel, 4x16 = 64-bit map end barrel)
  • TOT: only non-zero 4-bit TOT
• Implementation: identify/assemble neighbour 2x2 PR data on same/neighbour double columns
  • Simple logic in EOC
  • Small increase of required buffer sizes (MC to show how much)
  • Keep cluster data size word aligned (32/64 bit), if not giving too high overhead
• Monte-Carlo simulations required to determine effective data reduction
  • Guesstimate: 25-30% reduction compared to 2x2 PR readout
• Additional data compression: worthwhile?
  • TOT/charge information (>50% after clustering): ~25% reduction would be good
    • Huffman encoding (4-5 bit lookup table)
  • Relative/delta addresses (<50% after clustering): most likely not worthwhile
  • Variable bits in "data units"
    • Full event should be word aligned (32/64 bit)
• Not a must in RD53A but would be nice
  • But the study is critical to determine the system (E-links, LPGBTs)
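As a purely illustrative comparison of the two formats described above (the cluster shape in the example is invented; the real average reduction has to come from Monte-Carlo cluster-shape distributions):

```python
# Toy word-size comparison: 2x2 pixel-region readout vs. cluster formatting.
# Field sizes are the ones quoted on this slide; the example cluster is made up.
def pr_format_bits(n_regions):
    return 32 * n_regions                      # 8+8 address + 4x4 TOT per 2x2 PR

def cluster_format_bits(n_hits, hitmap_bits=16, addr_bits=18):
    # one global cluster address + fixed-size hit map + only non-zero 4-bit TOTs
    return addr_bits + hitmap_bits + 4 * n_hits

# e.g. a mid-barrel cluster spanning 2 pixel regions with 3 hit pixels:
print(pr_format_bits(2), "bits as 2x2 PRs vs",
      cluster_format_bits(3), "bits as a cluster")
# Whether this yields the guesstimated 25-30% average reduction depends on the
# cluster-shape mix and cannot be read off a single example.
```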
Bandwidth distribution
• Distributing bandwidth over multiple (1, 2, 4) links
• A. Fixed allocation of double columns
  • Pro: simple implementation
  • Con: not possible to by-pass a broken link; less statistical de-randomization
• B. Round robin at event level (my preference for the final chip; see sketch below)
  • Pro: possible to by-pass a broken link
  • Con: full chip event building must be made; additional buffering?; increased readout latency (for CMS not critical)
• C. Synchronized data lanes on multiple links
  • Pro: ? (defined in Aurora protocol)
  • Con: complicated (power, SEU); not "appropriate" with LPGBT in the middle
• Not needed in RD53A
  • But we should demonstrate high hit rate capability
  • Option A should be simple to implement
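A minimal sketch of option B, event-level round robin with broken-link bypass; the link count and the broken-link mask are arbitrary examples, not a chip specification.

```python
# Event-level round-robin assignment of events to output E-links,
# skipping links flagged as broken (illustrative sketch only).
def assign_links(n_events, n_links=4, broken=frozenset()):
    """Yield (event, link) pairs, rotating over the links that still work."""
    active = [link for link in range(n_links) if link not in broken]
    for ev in range(n_events):
        yield ev, active[ev % len(active)]

# With link 2 broken, events simply rotate over the remaining three links:
print(list(assign_links(6, n_links=4, broken={2})))
# [(0, 0), (1, 1), (2, 3), (3, 0), (4, 1), (5, 3)]
```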
Merging data from multiple chips
• 1-3 inputs from neighbour chips + the chip itself
  • Outer pixel layers
  • Allows a significant reduction in required E-links for the pixel detector
• Simple serial interface between chips on the same module:
  • AC coupling not required (assuming chips powered in parallel)
  • Short distance: few cm
  • Connected to the same clock/control/trigger line, so fully synchronous
• Data merging at frame level
  • If possible at event level
• Data rate: merged data on one 1.28 Gbits/s E-link (see sketch below)
  • 2 chips: 640 Mbits/s each
  • 4 chips: 320 Mbits/s each
  • Could be multiple bits (2, 4) to match the EOC clock rate (160/320 MHz)
• Not needed to be demonstrated in RD53A
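A toy illustration of frame-level merging onto one shared E-link; chip IDs, FIFO contents and the round-robin arbitration here are placeholders, not the actual on-chip protocol.

```python
# Minimal frame-level merger sketch: up to 4 chips on one module share a
# single 1.28 Gb/s E-link (all identifiers and payloads are placeholders).
from collections import deque

def merge_frames(chip_fifos):
    """Round-robin frames from per-chip FIFOs onto one output stream."""
    out = []
    while any(chip_fifos.values()):
        for chip_id, fifo in chip_fifos.items():
            if fifo:
                out.append((chip_id, fifo.popleft()))
    return out

fifos = {0: deque(["f0a", "f0b"]), 1: deque(["f1a"]), 2: deque(), 3: deque(["f3a"])}
print(merge_frames(fifos))

# Each chip then only needs 1.28/N Gb/s of the shared link:
for n in (2, 4):
    print(f"{n} chips sharing one E-link -> {1.28/n:.2f} Gb/s per chip")
```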
Constraining fluctuations
• Fluctuations determine the size of on-chip de-randomizer buffers (see toy simulation below)
  • When buffers get (close to) full, hit data will have to be truncated
  • This must only happen for a small fraction of events/hits: ~0.1% at the absolute highest luminosity (PU=200)
  • When hit data is truncated it must be indicated in the event header/trailer
  • Events should "never" be lost (but this may happen under extreme conditions or caused by SEU)
  • Must never get into a deadlock state (this has been seen in HEP chips/systems and causes major problems)
• Local hit rate fluctuations
  • Localized jets with many particles/hits (part of MC hits)
  • Machine structure (not part of MC, but can be included in our simulations)
  • Large (huge) clusters: low incidence secondary particles (part of MC), highly ionizing particles (sensor "SEU"), background (e.g. CMS monster events)
    • These have posed (and still pose) problems in current trackers
    • The last two cases are not part of the MC hits
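A toy Monte-Carlo of a single de-randomizer under Poisson hit arrivals shows how the truncation fraction depends on buffer depth; all parameters below (1.5 hits per BX, a drain of 2 per BX, the depths scanned) are illustrative, and real numbers must come from the detector MC.

```python
# Toy de-randomizer simulation: Poisson hit input, fixed-rate readout,
# truncation when the buffer is full (all parameters illustrative).
import math
import random

def poisson(lam):
    """Knuth's exact Poisson sampler (avoids a numpy dependency)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def truncation_fraction(mean_per_bx, drain_per_bx, depth, n_bx=200_000, seed=1):
    """Fraction of hits truncated because the de-randomizer was full."""
    random.seed(seed)
    occ = lost = total = 0
    for _ in range(n_bx):
        arrivals = poisson(mean_per_bx)
        total += arrivals
        accepted = min(arrivals, depth - occ)
        lost += arrivals - accepted
        occ = max(occ + accepted - drain_per_bx, 0)
    return lost / max(total, 1)

for depth in (8, 16, 32):
    print(f"depth {depth:2d}: truncation {truncation_fraction(1.5, 2, depth):.4%}")
# The buffer depth is chosen so truncation stays around the ~0.1% level
# targeted above at the highest pile-up.
```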
Trigger fluctuations: nominal 750 kHz (CMS)
• Does the trigger have a tendency to select large/small events? (no)
  • At PU=200 this should not be the case
• Machine structure: no trigger when there is no collision
  • Unless used as a signal baseline calibration/monitoring trigger
• Capability to accept consecutive triggers: yes, but constraining how many (e.g. 2)
  • Not accepting consecutive triggers would imply a trigger dead time of ~750 kHz / 40 MHz = ~1.8%
• Centralized mechanism to prevent problematic trigger "bursts"
  • The nominal trigger rate is an average rate over a given "long" time window (which is often not defined)
  • Trigger bursts will pose problems for most (all) subdetectors, so it is better to get rid of these in the central trigger system than to have a large number of detectors struggling and reacting to this in different ways
  • Define simple rules to prevent/limit trigger bursts: max number of triggers in a given time window
    • Removes the problematic part of the "random" trigger distribution (with low trigger "loss")
• Initial proposal to CMS (checked in the sketch below):
  • Short term: max 2 triggers in any time window of 8 clock periods
  • Mid term: max 16 (8) triggers in any time window of 16*40 = 640 clock periods = 16 us (8 us)
  • Effective trigger dead time (~1%) to be calculated/verified and a global CMS agreement to be found
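The proposed burst rules are easy to express as sliding-window checks; the rule values below are the ones proposed above, while the trigger sequence is an invented example.

```python
# Sliding-window check of the proposed trigger-burst rules (illustrative sketch).
from collections import deque

def violates_burst_rule(trigger_bxs, max_triggers, window_bx):
    """True if more than max_triggers accepts fall inside any window_bx-long window."""
    recent = deque()
    for bx in sorted(trigger_bxs):
        recent.append(bx)
        while recent and bx - recent[0] >= window_bx:
            recent.popleft()
        if len(recent) > max_triggers:
            return True
    return False

triggers = [3, 5, 9, 700, 703, 705, 706]        # made-up accept pattern (BX numbers)
print("short-term rule (<=2 per 8 BX)   violated:",
      violates_burst_rule(triggers, 2, 8))
print("mid-term rule   (<=16 per 640 BX) violated:",
      violates_burst_rule(triggers, 16, 640))
```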
ETH Zurich plans
• ETHZ still has a large commitment to Phase-1 Pixel Upgrade construction. Preparations and planning for Phase-2 activity have started.
• Interested in:
  • Module design, test and production
    • Sensor test
    • ROC qualification (logic test, performance test, irradiation)
    • Module qualification (test beam, irradiation, high rate)
  • Powering
    • Serial powering with focus on system aspects
    • On-module serial powering
    • Shunt-LDO characterization (RD53)
    • Build and test/compare chains of serially powered FE-I4, RD53A and PROC600 chips
    • Build a serially powered double-chip module with FE-I4B and an "IBL-like planar sensor" (ROCs serially powered on-module)
    • Long term: module with AC-coupled planar sensor and on-module serially powered ROCs
Organization issues
• Multiple groups working on different, interdependent issues
  • Sensor
  • Pixel chip (RD53)
    • NDA problem giving access to detailed design meetings
    • A collaboration in itself
    • Assure appropriate flow of information between the two communities
  • Serial power
  • Readout
  • Module – thermal
  • Layout, integration, services, material budget, etc.
• Assuring coordinated progress
  • Tracker weeks
  • Ad hoc dedicated meetings (e.g. serial power)
  • A regular phase 2 pixel meeting now seems necessary
    • Not political/management, as there are other meetings for this
    • Monthly? In between tracker weeks?
    • 1-2 hours, longer when needed
Overall electronics schedule
Schedule driven by availability of the pixel chip
• 2016 – 2017:
  • Finalization of RD53A demonstrator pixel chip with electrical and beam tests with bump-bonded sensors (will most likely not be available for the CMS tracker upgrade TDR)
  • Radiation qualification of pixel chip
  • Studies/verification of serial powering with RD53A
  • Studies/verification of readout cable options
  • Studies of pixel module design with cooling and thermal aspects of serial powering
• 2018:
  • Design/optimization of "final" (CMS) pixel chip
  • Design and optimization of pixel module/readout cables/power cabling
  • Design of opto conversion module
• 2019:
  • Test and verification of final (CMS) pixel chip
  • Submission of final production pixel chip
  • (Design of dedicated serial power supply (industry))
• 2020:
  • Exhaustive tests of final production pixel chip
  • System tests with final pixel modules/readout cables/power cables/opto conversion modules
• 2021 – 2024:
  • Production, system assembly, system tests, preparation for installation
• 2025: Installation
CMS groups in electronics
• Pixel chip development (RD53): INFN, CERN, Fermilab, Seville, RAL (+ ATLAS groups)
• System:
  • Serial powering: CERN, INFN (Florence), Spain
  • Readout cable, optical conversion module: CERN
  • Pixel module: Cornell, Purdue
• The concurrent phase 1 pixel project has so far strongly constrained participation of other CMS pixel groups
  • This will improve/be clarified when the phase 1 pixel detector is installed (end 2016)
RD53A news
• Flavio Loddo, Bari has now formally taken over as project engineer, with Tomasz Hemperek, Bonn as deputy (digital)
• Weekly general design meetings
  • Weekly digital/simulation meetings
  • Power meetings when appropriate
    • Assure information flow with people working on serial powering at system level (NDA access issue)
• 2-day design meeting at CERN
  • General review/discussions/decisions on design issues
  • 4 design blocks
    • Pixel array with analog FEs and digital buffering
    • Analog chip bottom: FE biasing, monitoring, PLL, serializer, power-on reset
    • EOC (digital end of column): data collection from pixel array, buffering, data formatting/(compression), configuration, etc.
    • Pad frame: wire bond pads, drivers/receivers, SLDO for serial powering
  • Analog – digital noise isolation scheme defined: double use of deep N-WELL
  • 1/2-day review of the 4 proposed analog FEs
    • Final decision on which will get into RD53A in October
• Design repositories in place; IP blocks being uploaded and under verification
• Digital design: refinements, verification, power estimation/optimization
• Verification: verification approach being integrated into the VEPIX simulation environment (using specific tools for verification coverage)
  • Functional verification with MC data
  • Verification with extreme cases not seen in MC
  • Dedicated tests for specific design blocks
  • Verification coverage with systematic and random stimuli
  • SEU injection and verification
RD53A
• SLDO:
  • 0.5 A SLDO prototype testing starting
  • First version of 2 A SLDO designed (4x 0.5 A)
  • User-defined voltage offset of the SLDO being integrated (discussed in last CMS tracker week; see sketch below)
• Control protocol defined and RTL design made
  • Use of B-ID, E-ID identification ("classical") or event tags distributed with trigger accepts (new ATLAS scheme)
• Readout
  • Clarified readout schemes: ATLAS – CMS
  • Readout formatting based on Aurora (from Xilinx), assuring easy FPGA integration: encoding, framing
  • 4 serializers at 1.28 (2.56) Gbits/s compatible with LPGBT readout ("CMS scheme")
    • Serializer and cable driver with programmable pre-emphasis currently under test
  • Potentially ~5 Gbits/s serializer and cable driver ("ATLAS scheme")
• Monitoring: extensive monitoring with ADC
  • Temperatures, SLDO input/output voltages/currents, biasing, PLL, references, (radiation), etc.
• Rad-hard digital libraries (pixel array – EOC)
  • DRAD test chip working, test system ready, X-ray irradiation will be done in the next 2 weeks
  • Determines required modifications to TSMC standard cells
• Schedule: design finished Dec., detailed verification Jan-Feb-March, submission April 1st 2017 (agreed with MPA)
• Pending/other issues:
  • High resolution (more than 4 bit) mode for sensor characterization at low hit rate
  • Power-down of pixels for use with large pixels (not critical for RD53A)
  • On-chip data compression (not critical for RD53A)
  • Data merging from several pixel chips into one link (not critical for RD53A)
  • Special test modes: chip testing, bump bonding testing, self-calibration (not for RD53A)
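For orientation only: a minimal serial-powering arithmetic sketch, assuming the usual shunt-LDO V-I behaviour (input voltage roughly a programmable offset plus an effective slope resistance times the chain current); none of the numbers below come from the actual RD53A SLDO design.

```python
# Illustrative shunt-LDO / serial-powering arithmetic (not the actual SLDO design).
# Assumes V_in ~ V_offset + R_eff * I_in, with the programmable offset mentioned
# above; all values are made up for illustration.
V_OFFSET  = 1.0     # V, user-defined SLDO offset
R_EFF     = 0.25    # ohm, effective input slope resistance
I_SUPPLY  = 2.0     # A, constant current driven through the serial chain
N_MODULES = 10      # modules powered in series

v_module = V_OFFSET + R_EFF * I_SUPPLY
print(f"module input voltage : {v_module:.2f} V")
print(f"chain supply voltage : {N_MODULES * v_module:.1f} V at {I_SUPPLY:.1f} A")
print(f"chain power          : {N_MODULES * v_module * I_SUPPLY:.0f} W")
```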
Phase 2 pixel detector
• Layout: similar to the CMS phase 1 pixel upgrade with extended forward coverage (under revision)
  • 4 barrel layers: r = 3.0; 6.8; 10.2; 16.0 cm
  • 10 forward disks on each side (7 additional disks for forward coverage)
  • Forward layout under review to enable replacement with the beam pipe in place
  • Service cylinder(s) for services
• CO2 cooling (can handle high power density with low mass)
• Hybrid pixel size:
  • Inner layers: 25x100 um^2 and 50x50 um^2 (100-150 um thick)
  • Outer layers: 50 or 100 x 100 um^2
• Pixel sensor: planar and possibly 3D
• Radiation: 1 Grad, 2x10^16 neu/cm^2, inner layer, 10 years
  • 1/r^2 dependency (see sketch below)
• Layout currently being updated/improved
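The quoted 1/r^2 dependency can be used to scale the inner-layer numbers to the other barrel radii; this is illustrative only, and real fluence/dose maps come from dedicated simulation.

```python
# Illustrative 1/r^2 scaling of the inner-layer radiation numbers to the
# other barrel radii quoted above.
R_LAYERS   = [3.0, 6.8, 10.2, 16.0]   # cm, barrel layer radii from this slide
DOSE_INNER = 1.0e9                    # rad  (1 Grad at r = 3.0 cm, 10 years)
FLUENCE_IN = 2.0e16                   # neu/cm^2 at r = 3.0 cm, 10 years

for r in R_LAYERS:
    scale = (R_LAYERS[0] / r) ** 2
    print(f"r = {r:4.1f} cm : ~{DOSE_INNER*scale/1e6:6.0f} Mrad, "
          f"~{FLUENCE_IN*scale:.1e} neu/cm^2")
```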
Modules and Modularity
• Modular building blocks
  • Chip size to be adapted to the final layout
  • Smaller chip (15 mm x 20 mm) preferred/required for the inner layer (this can be different for ATLAS, as min r = 4 cm)
• Minimize the number of different module types
  • 1x2, 1x4, 2x2 (possibly 2x4 for outer)
  • Module size must be appropriate for serial powering (failure scenarios)
• Module production with bump-bonding will be the cost driver
• Will be adapted to:
  • Bump bonding: module size and yield
  • Pixel sizes and sensor types
  • Mechanical and cooling constraints
  • Powering structure and granularity
  • Readout rates and granularity
System summary with 1x4 and 2x2 modules
Layout, chip size and modularity under optimization
LPGBT
• Updated "specs" slides: https://espace.cern.ch/GBT-Project/LpGBT/Specifications/LpGbtxSpecifications.pdf
Comment: 7 input links is really awkward for the pixel system (8 would be ideal)
Agreement on this was reached very recently. (SLVS is a really bad "standard", only appropriate for short on-board connections)
LPGBT encoding
• The optical link (5/10G) has its own specific encoding with multi-bit error correction capability: FEC5/FEC12
  • The user does not need to know the details of this
  • IP blocks for handling this in the off-detector FPGA
• E-links have no inherent encoding, so this has to be encoded/decoded by the FE chip and the off-detector DAQ interface card (FPGA)
  • 100% user defined, but it cuts directly into the available bandwidth (see sketch below)
  • Only need to put appropriate encoding on the local E-links
  • AC coupling needed because of serial powering: 8b/10b, 64b/66b
  • Frame synchronization
  • No need for clock encoding as the same clock reference is used (LPGBT will have phase alignment features)
  • E-link bit rate a multiple of 40 MHz: 1.28 Gbits/s, 640 Mbits/s, ...
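The bandwidth cost of the candidate encodings is simple to quantify (standard 8b/10b and 64b/66b overhead ratios; protocol framing not included):

```python
# Payload bandwidth left on a 1.28 Gb/s E-link after line encoding.
ELINK = 1.28e9
for name, efficiency in (("8b/10b", 8 / 10), ("64b/66b", 64 / 66)):
    print(f"{name:7s}: {ELINK * efficiency / 1e9:.3f} Gb/s payload "
          f"({(1 - efficiency) * 100:.1f}% overhead)")
# 8b/10b : 1.024 Gb/s payload (20.0% overhead)
# 64b/66b: 1.241 Gb/s payload (3.0% overhead)
```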
OK for physics?
• Material budget of tracker/pixel estimated
  • Pixel sensor – bump-bonding – pixel chip
  • Readout cables: twisted pair cable (as used in phase 1)
  • Realistic serial power cables
  • HV cables
  • Local decoupling capacitors (conservative)
  • High Density Interconnect
  • Cooling, mechanics
• Tracking performance evaluated and looks acceptable
• Detailed effects on physics channels to be studied
Material estimates
Dominating material contributions to be optimized!