
XFEL-PIXEL Readout and control


Presentation Transcript


  1. XFEL-PIXEL Readout and control
     - Detector requirements/proposal
     - Basic concepts for experiments
     - Control electronics
     - Selecting/rejecting bunches
     - Backend-systems
     - Time schedule
     - Conclusion
     Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05

  2. Detector requirements
     Call for experiments:
     - large pixel detectors: > 1 Mega-Pixel
     - large number of frames: XFEL: 30000 bunches/sec
     - large dynamic range: 10^4-10^6 photons/pixel/frame
     - distinguish 0 and 1 photon, rest at the Poisson limit
     With 2 Bytes/pixel and 1 MPixel: 60 GByte/sec
     - but the experiments will not handle the full frame rate
     - data reduction by "compression" is not effective: statements range from nearly nothing to a factor of 2-4
     Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05
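
A quick cross-check of the 60 GByte/sec figure above, written out as a small calculation (the parameter values are taken from the slide; the script itself is only illustrative):

```python
# Raw data rate for a 1 Mpixel detector at the full XFEL bunch rate.
# Parameter values as quoted on the slide; the script is illustrative only.
pixels_per_frame = 1_000_000     # > 1 Mega-Pixel
bytes_per_pixel = 2              # 2 Bytes/pixel
bunches_per_second = 30_000      # XFEL: 30000 bunches/sec

raw_rate = pixels_per_frame * bytes_per_pixel * bunches_per_second
print(f"raw rate: {raw_rate / 1e9:.0f} GByte/sec")   # -> raw rate: 60 GByte/sec
```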

  3. Three Proposals
     AP-HPAD: Analogue Pipe-line Hybrid Array Pixel Detector, Si-pixel:
     - ASIC stores the signals into capacitors during the bunch train
     - digitization/data transfer in the pause between trains
     - 400 frames/train
     LSDD: Linear Silicon Drift Detector:
     - one dimension is coded into drift time: 200 MS/s during the train
     - data transfer in the pause between trains
     - 600 frames/train
     LPD: Large Pixel Detector:
     - analogue pipeline during the train, digitization between trains
     - 512 (256) frames/train
     ⇒ All extendable beyond 1 Mpixel
     ⇒ All store around every 5th-10th bunch
     ⇒ All might run in parallel
     Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05

  4. Idea: Basic setup of all experiments
     [Block diagram: the part specific to each experiment (here AP-HPAD) and the part generic for all experiments]
     Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05

  5. Idea: Concept for control electronics
     General tasks:
     - Boot
     - Collect information
     - Synchronize to XFEL clocks and time
     - Synchronize interface electronics, backend and experimental area
     - Generate/distribute clocks and data
     - User interface
     - Write monitoring/status to the backend
     - Usage at other laboratories
     Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05
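
Purely as an illustration of how these tasks could be sequenced in software (every class and method name here is an assumption made for the sketch, not part of the proposal):

```python
# Illustrative run sequence for the control electronics; all names are assumptions.
class ControlElectronics:
    def __init__(self, interface_modules, backend):
        self.interface_modules = interface_modules  # interface electronics in the experimental area
        self.backend = backend                      # backend system receiving monitoring/status

    def start_run(self):
        self.boot()                 # load firmware/configuration
        self.collect_information()  # gather run parameters
        self.sync_to_xfel()         # synchronize to XFEL clocks and time
        self.distribute_clocks()    # generate/distribute clocks and data to the
                                    # interface electronics, backend and experimental area
        self.backend.write_status(self.status())  # write monitoring/status to the backend

    # Individual steps are left as stubs in this sketch.
    def boot(self): ...
    def collect_information(self): ...
    def sync_to_xfel(self): ...
    def distribute_clocks(self): ...
    def status(self): return {"state": "running"}
```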

  6. Selecting/rejecting bunches
     Aim:
     - Best use of the limited pipeline in the detector-head
     - Limit resources for the backend
     Staged concept:
     - Store predefined bunches into the pipeline of the experiment
     - Fast reject (before the next bunch): hardware detector, ...
     - Slow reject: after the bunch train; handle slower information, information from the XFEL
     - Transfer to the backend-system; more fancy selection in the CPU-farm
     Rates: hard to guess, input from science needed
     - gas stream through the X-ray beam
     - solid targets in the beam (just the first bunch)
     - and that is not all: who needs what, and how much?
     Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05
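
A minimal sketch of the staged concept above; the veto predicates and the data layout are assumptions made only for illustration:

```python
# Staged bunch selection/rejection for one train (illustrative; predicates are assumed).
def select_bunches(train, predefined, pipeline_depth, fast_veto, slow_veto):
    """Return the bunches of one train that survive all rejection stages."""
    pipeline = []
    for index, bunch in enumerate(train):
        if index not in predefined:              # only predefined bunches go into the pipeline
            continue
        if fast_veto(bunch):                     # fast reject: decided before the next bunch
            continue
        if len(pipeline) < pipeline_depth:       # limited pipeline in the detector-head
            pipeline.append(bunch)

    # Slow reject: after the bunch train, using slower information (e.g. from the XFEL).
    survivors = [b for b in pipeline if not slow_veto(b)]
    return survivors                             # transferred to the backend-system, where the
                                                 # CPU-farm can apply fancier selections
```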

  7. Idea: Data stream to and concept of the backend-system
     Aim:
     - Collect complete frames into one CPU
     - Data in mass storage sorted for "offline" analysis
     Allowance:
     - Calculation on frames before sending to mass storage
     - Decisions on which frames to store/reject inside the backend system
     - Rejecting frames on more fancy information from external signals
     - Write data frame-wise to mass storage - easier offline
     Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05
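
In software terms, the allowance above amounts to a per-frame loop of roughly this shape; the helper callables and the storage interface are placeholders, not a defined API:

```python
# Per-frame processing in the backend (illustrative; all helpers are placeholders).
def process_frames(frames, compute_on_frame, keep_frame, external_signals, storage):
    """Complete frames arrive in one CPU; compute, decide store/reject, write frame-wise."""
    for frame in frames:
        result = compute_on_frame(frame)                 # calculation before mass storage
        if not keep_frame(frame, result, external_signals):
            continue                                     # reject inside the backend system
        storage.write(frame)                             # frame-wise write -> easier offline analysis
```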

  8. Interface: Experiments to backend-system
     Roughly: data from the pixels is written to RAM and read out in a different sequence.
     By that, one group of frames is transferred over one link to the backend.
     Still, 8 or more of such modules transfer to the same backend-system.
     Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05
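
The regrouping can be pictured with a small sketch: the frames stored during one train are split into groups, and each group is sent over one link to the backend (the numbers in the example are assumptions, not part of the proposal):

```python
# Illustration only: split the frames of one train into groups, one group per link.
def frames_per_link(n_frames, n_links):
    """Assign frame indices to links so that each link carries one contiguous group."""
    group_size = (n_frames + n_links - 1) // n_links     # ceiling division
    groups = [[] for _ in range(n_links)]
    for frame in range(n_frames):
        groups[frame // group_size].append(frame)
    return groups

# Example (assumed numbers): 512 stored frames over 8 links -> 64 frames per link.
print([len(group) for group in frames_per_link(512, 8)])   # -> [64, 64, 64, 64, 64, 64, 64, 64]
```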

  9. Need: a controlled time sequence, so that each CPU gets data from all interface-modules without time conflicts
     Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05
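
One standard way to obtain such a conflict-free sequence is a round-robin schedule, sketched here as an illustration of the idea rather than the chosen implementation:

```python
# Round-robin ("barrel shifter") schedule: in every time slot each interface-module
# sends to a different CPU, so no CPU receives from two modules at once (illustrative).
def schedule(n_modules, n_cpus):
    """Yield (time_slot, module, cpu) triples; assumes n_modules <= n_cpus."""
    for slot in range(n_cpus):
        for module in range(n_modules):
            yield slot, module, (module + slot) % n_cpus

# Example with 4 interface-modules and 4 CPUs:
for slot, module, cpu in schedule(4, 4):
    print(f"slot {slot}: module {module} -> CPU {cpu}")
```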

  10. Idea: Concept of the backend-system
      [Block diagram with: interface electronics, stage 1 of ≈100 links at 1 Gbit/s, switch, CPU-farm, mass storage, monitoring data to the control electronics, feedback to the XFEL accelerator?]
      Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05

  11. Rates
      - Not well defined, since it depends on what each experiment does
      - AP-HPAD: 2 Bytes/pixel/frame is reasonable
      - LSDD: starts with oversampling; where to reduce to what has to be a common discussion
      - LPD: rate expected to be similar to AP-HPAD, but I haven't seen their proposal
      - First tests of compression were not effective; a recent statement for a single case was a factor of 2-4
      Estimate towards storage: 1 Mpixel * 500 bunches/train * 2 Bytes/pixel * 10 trains/s = 10 GBytes/sec
      Discussion with inputs from backend/science needed to settle rates/uptime/technical effort/offline effort
      Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05
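
The storage estimate above, repeated as a small calculation (values taken from the slide; the script is only illustrative):

```python
# Estimate of the rate towards mass storage, with the values quoted on the slide.
pixels = 1_000_000        # 1 Mpixel
frames_per_train = 500    # 500 bunches/train stored
bytes_per_pixel = 2       # 2 Bytes/pixel
trains_per_second = 10    # 10 trains/s

storage_rate = pixels * frames_per_train * bytes_per_pixel * trains_per_second
print(f"{storage_rate / 1e9:.0f} GBytes/sec")   # -> 10 GBytes/sec
```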

  12. Time schedule
      2007: Experiments write proposals, until 15 April 2007
      2007-2013: Designing, constructing, but also laboratory tests and usage at other beam lines
      2009(?): DAQ needed for tests - no large CPU-farm; LCLS has 120 Hz and not 30000 bunches/sec
      2013: 1 Mpixel detectors at XFEL, 3 * (10-20 GB)/sec
      20xx: Upgrades beyond 1 MPixel (e.g. 4 Mpixel)
      Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05

  13. Conclusion
      A concept is there, open to change and to doing things another way
      Two aspects: control electronics and backend-system
      Ideas about the numbers exist, but there are a lot of open questions
      Discussions needed with input on:
      - effort in the backend
      - science requirements
      - offline possibilities
      - feasibility of the interface-electronics
      A common effort for all experiments is a strong wish
      Peter Göttlicher, DESY-FEB, XFEL-DAQ-2007/03/05
