
Readout & Controls Update DAQ: Baseline Architecture DCS: Architecture (first round)


Presentation Transcript


  1. Readout & Controls Update
     • DAQ: Baseline Architecture
     • DCS: Architecture (first round)
     August 23, 2001, Klaus Honscheid, OSU

  2. Data Rates
     • Input rate from the detector: 1.5 TBytes/s
       - 3x the rate from the PTDR
       - Simulation group works on a new event size estimate
       - Noise?
       - Includes 50% extra capacity
       - Expansion options
     • L1 acceptance: 1%
       - ~20 GBytes/s to the L2/L3 farm
       - Event rate ~100 kHz
     • L2/L3 acceptance: 5%
       - ~200 MBytes/s (event size reduction in L3)
       - ~4000 Hz
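A quick way to sanity-check these figures is to derive the implied acceptances and event sizes directly from the quoted rates. A minimal sketch, using only the numbers on this slide:

```python
# Cross-check of the quoted BTeV rates; all inputs are from the slide.

input_rate = 1.5e12    # bytes/s from the detector
l1_rate    = 20e9      # bytes/s into the L2/L3 farm after L1
l1_events  = 100e3     # Hz, L1-accepted event rate
out_rate   = 200e6     # bytes/s to storage
out_events = 4000.0    # Hz written out by L2/L3

print(f"L1 acceptance (bandwidth): {l1_rate / input_rate:.1%}")    # ~1.3%
print(f"L2/L3 acceptance (rate):   {out_events / l1_events:.1%}")  # 4%, slide quotes 5%
print(f"event size into L2/L3:     {l1_rate / l1_events / 1e3:.0f} kB")    # ~200 kB
print(f"event size to storage:     {out_rate / out_events / 1e3:.0f} kB")  # ~50 kB, after L3 reduction
```

The implied ~200 kB event into the farm shrinking to ~50 kB at the output is consistent with the "event size reduction in L3" noted above.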

  3. DAQ Architecture I
     • Data flow: Front-end – DCB – L1 buffer
     • Non-trigger systems: the DCB can distribute data from one crossing to many L1Bs, OR the DCB can send all data from one crossing to a single L1B (both options are sketched below)
     • Trigger systems: the DCB sends all data from one crossing to a single L1B
     • Conclusion: before the L1B, the question of highways is only relevant for the implementation.
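To make the two routing options concrete, here is a minimal Python sketch. The modulo-based mapping and the function names are illustrative assumptions, not the actual DCB logic:

```python
# Minimal sketch of the two DCB-to-L1B routing options.

N_L1B = 252  # number of L1 buffers in the example system later in this talk

def to_single_l1b(crossing: int) -> list:
    # Trigger systems (and optionally non-trigger systems): all data
    # from one crossing ends up in exactly one L1B.
    return [crossing % N_L1B]

def to_many_l1bs(crossing: int, n_fragments: int) -> list:
    # Non-trigger systems may instead spread one crossing's data over
    # several L1Bs, one fragment per buffer.
    return [(crossing + i) % N_L1B for i in range(n_fragments)]

print(to_single_l1b(1000))      # [244]
print(to_many_l1bs(1000, 4))    # [244, 245, 246, 247]
```

Either rule delivers the same total data to the L1 buffers, which is the slide's point: up to the L1B, highways are purely an implementation choice.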

  4. DAQ Implementation: DCB -> L1B
     An implementation based on highways offers:
     • Significant advantages/simplifications for L1B processing
     • Lower event rate and larger packets => easier implementation (see the sketch below)
     • (Pseudo) random distribution of crossings possible
     • No net cost difference (DCB +$200K, L1B -$200K)
     Proposal: the DCB–L1B connections will be structured as 8 (*) highways.
     (*) Strong preference for a fixed number of highways; 4, 6, 8, or 12 are considered. For the baseline cost estimate, 8 highways are assumed.
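One way to read the "lower event rate and larger packets" point: with the stream split over 8 highways, each L1B handles only 1/8 of the crossings, but receives the complete packet for each crossing it does handle. A minimal sketch, using the ~100 kHz rate from the Data Rates slide and the ~200 kB event size derived there:

```python
# Per-highway rate with a fixed 8-highway split (baseline assumption).

N_HIGHWAYS = 8           # baseline; 4, 6, 8 or 12 under consideration
l1_accept_rate = 100e3   # Hz of L1-accepted events (Data Rates slide)
event_size = 200e3       # bytes/event, implied by 20 GB/s / 100 kHz

per_highway_rate = l1_accept_rate / N_HIGHWAYS
print(f"{per_highway_rate / 1e3:.1f} kHz per highway, "
      f"{event_size / 1e3:.0f} kB packets")   # 12.5 kHz, 200 kB
```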

  5. DAQ Architecture II
     • How much flexibility is needed in sending specific events to specific L2/L3 processors?
     • Provide support to split the DAQ into multiple logical partitions that can be operated independently.
     • There is interest in getting the same event delivered to more than one L2/L3 node.
     • Deliver data from consecutive crossings? Yes
     • Support partial event readout (for L2) and complete event readout for L2/L3.
     • A partition (description) includes a list of L2/L3 nodes, trigger condition(s), an output stream (?), and a required rate or kind of service (e.g. sampling); a sketch of such a record follows below.
     • Clearly there would be rate/bandwidth concerns that need to be supervised.
     • Highways need to be connected; mostly a Global L1 and Event Manager issue. (At rates < 50 MBytes/s there is no difference between single and multiple highways.)
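The partition description itemized above maps naturally onto a small record type. A minimal Python sketch; the field names and the example partition are assumptions, not a BTeV-defined format:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Partition:
    """One logical DAQ partition, operable independently of the others."""
    name: str
    l23_nodes: list[int]           # L2/L3 nodes assigned to this partition
    trigger_conditions: list[str]  # trigger condition(s) feeding it
    output_stream: str | None = None
    required_rate_hz: float | None = None
    service: str = "exclusive"     # kind of service, e.g. "sampling"

# Hypothetical example: a pixel commissioning partition sampling at 100 Hz.
pixel_test = Partition(
    name="pixel_commissioning",
    l23_nodes=list(range(24)),
    trigger_conditions=["pixel_standalone"],
    service="sampling",
    required_rate_hz=100.0,
)
print(pixel_test)
```

Supervising the rate/bandwidth concerns mentioned above would amount to validating the sum of `required_rate_hz` over all active partitions against the available bandwidth.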

  6. DAQ Implementation: L1B -> L2/L3 Farm
     An implementation based on highways offers:
     • Some cost savings ($200-300K) from smaller switches
     • Reduced control traffic (per highway) to signal L1 accepts etc.
     • Not difficult to change: multiple highways can be combined by adding a second stage of identical switches; a single highway can be split by removing a stage of switches or reprogramming a larger switch.
     Proposal: the L1B – L2/L3 connections will be structured as 8 (*) highways.
     (*) Strong preference for a fixed number of highways; 4, 6, 8, or 12 are considered. For the baseline cost estimate, 8 highways are assumed.

  7. System Overview

  8. Example 1.5 TBytes/s System
     • DCB
       - 48 serial input links @ 1 Gbps (average 300 Mbps per link)
       - 12 serial output links @ 2 Gbps
       - option to double output links if input rate doubles
       - 6, 8 (*), or 12 highways
       - 504 DCBs (includes Pixel and Cal)
       - up to 24,192 FEBs or FEB equivalents
     • Optical Links
       - 504 cables (1 per DCB, expandable to 2)
       - 12 fibers/cable (6048 total)
       - bandwidth = 1.5 TBytes/s (expandable to 3.0)
     • L1B
       - 24 serial input links @ 2 Gbps
       - fiber-to-copper conversion external to L1B (for compatibility with trigger)
       - 1 serial output link @ 1 Gbps (Gigabit Ethernet)
       - 252 L1Bs
     • Event Builder
       - 12 switches x 48 ports/switch
     • L2/3 Farm
       - 252 serial input links @ 1 Gbps
       - up to 12 processors per link (with a standard Gigabit-to-Fast-Ethernet switch), 3024 total
       - 252 serial return links @ 1 Gbps
       - same switch used for L2/3 input and output
     • Storage
       - 12 serial links @ 1 Gbps from the Event Builder
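The component counts hang together arithmetically. A minimal sketch cross-checking them, using only the numbers quoted on this slide:

```python
# Consistency check of the example-system component counts.

dcbs = 504
dcb_in_links = 48           # serial inputs per DCB @ 1 Gbps
dcb_out_links = 12          # serial outputs per DCB @ 2 Gbps
dcb_out_gbps = 2.0
l1bs, l1b_in_links = 252, 24
farm_links, procs_per_link = 252, 12

assert dcbs * dcb_in_links == 24_192             # FEBs or FEB equivalents
assert dcbs * dcb_out_links == 6_048             # fibers: 504 cables x 12 fibers
assert dcbs * dcb_out_links == l1bs * l1b_in_links  # every fiber lands on an L1B
assert farm_links * procs_per_link == 3_024      # L2/3 processors

# Aggregate DCB->L1B bandwidth: 6048 links x 2 Gbps = ~1.5 TBytes/s
total_bytes = dcbs * dcb_out_links * dcb_out_gbps * 1e9 / 8
print(f"DCB->L1B bandwidth: {total_bytes / 1e12:.2f} TB/s")   # 1.51 TB/s
```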


  10. BTeV Controls?
     A lot of things are hiding behind this topic:
     • Configuration / run state transitions
     • Data Quality Monitor
     • Fast interlock, fire alarm
     • “Classical” slow control
     • Calibration
     Run Control: DAQ group provides skeleton software and hardware (Detector Manager)
     “Consumer”: DAQ group provides skeleton software, hardware (?)
     Detector Control (DCS): DAQ group provides skeleton software and hardware (Detector Manager)

  11. Typical Control System (CMS)

  12. “Classical” Control System (MINOS)
     [Diagram: remote workstations (OPI terminals, laptop) reach the system over a WAN; local workstations, a SCADA server, an Oracle DB server, and the FNAL safety server sit on a bridged Ethernet LAN; I/O servers distributed in the experimental area speak OPC upward and connect downward via GPIB-ENET, GPIB, RS232, PLCs, and fieldbuses (CAN, MIL/STD-1553B) to analog/digital channels, a beam server, GPS, a LeCroy 1440, beam-line SWICs and BPMs, and the experiment's sub-detectors and equipment (magnets, scalers).]

  13. Supervisory Control and Data Acquisition
     • Commercial systems, typically used in industrial production plants.
     • Examples include:
       - LabView/BridgeView from National Instruments
       - iFIX from Intellution (CDF, MINOS)
       - EPICS (BaBar, D0)
       - PVSS II (CERN)
     OLE for Process Control (OPC)
     • Defines a standard for interfacing programs (SCADA) to hardware devices in a control system.
     • Based on Microsoft's COM/DCOM object model.
     • Provides multi-vendor interoperability.
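To illustrate what OPC buys: the SCADA package codes against a single vendor-neutral interface, and each hardware vendor supplies a server behind it. A minimal Python sketch of the pattern; the class and item names are assumptions, and the real OPC interface is COM/DCOM-based rather than Python:

```python
from abc import ABC, abstractmethod

class DeviceServer(ABC):
    """The OPC idea: one vendor-neutral contract between any SCADA
    package and any vendor's hardware server."""

    @abstractmethod
    def read(self, item: str) -> float: ...

    @abstractmethod
    def write(self, item: str, value: float) -> None: ...

class HVCrateServer(DeviceServer):
    # Hypothetical server for a high-voltage crate; a real OPC server
    # exposes equivalent read/write operations over COM/DCOM.
    def read(self, item: str) -> float:
        return 1500.0  # pretend read-back, e.g. of "hv.crate0.ch3.vmon"

    def write(self, item: str, value: float) -> None:
        print(f"set {item} = {value}")

server: DeviceServer = HVCrateServer()
server.write("hv.crate0.ch3.vset", 1500.0)
print(server.read("hv.crate0.ch3.vmon"))
```

Multi-vendor interoperability follows because the supervisory layer only ever sees `DeviceServer`, never the vendor-specific protocol underneath.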

  14. SCADA Architecture
     • HMI
     • Logging & archiving
     • Handles distributed systems
     • Reports
     • Access control
     • Alarms
     • Trending
     • ...
     [Diagram: user processes and utilities (C, C++, VBA, wizards) sit on top of the SCADA engine (process interface, alarm handling, event/alarm logging, historical trending, networking), which reaches the hardware through device servers.]
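Two of the engine services named above, alarm handling and historical trending, reduce to a small processing loop over incoming readings. A minimal Python sketch; the tag names and limits are illustrative assumptions:

```python
import time

ALARM_LIMITS = {"hv.crate0.ch3.vmon": (1400.0, 1600.0)}  # (low, high), assumed
history = {}  # tag -> list of (timestamp, value): historical trending archive

def process_reading(tag, value):
    # Historical trending: archive every reading with a timestamp.
    history.setdefault(tag, []).append((time.time(), value))
    # Alarm handling: flag readings outside the configured limits.
    lo, hi = ALARM_LIMITS.get(tag, (float("-inf"), float("inf")))
    if not lo <= value <= hi:
        print(f"ALARM: {tag} = {value} outside [{lo}, {hi}]")

process_reading("hv.crate0.ch3.vmon", 1500.0)  # archived, no alarm
process_reading("hv.crate0.ch3.vmon", 1750.0)  # archived, raises an alarm
```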

  15. Control Example
     [Diagram: an example control chain showing the split of responsibilities between the DAQ group and the detector group.]

  16. BTeV Control System (DCS)
     • Solicit feedback from detector groups
     • Treat infrastructure in a similar fashion
       - Rack monitoring
       - Magnet
       - Detector hall
     • Evaluate SCADA software
     • Develop/set up a DCS test lab
     • Develop sample solutions (HV?)
     • Define the DAQ-DCS connection
       - relevant for HV, Pixel “motor”
       - Calibration
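As one possible shape for the HV sample solution and the DAQ-DCS connection, here is a minimal Python sketch in which DAQ run-state transitions trigger DCS actions. The states, target voltage, and function names are all assumptions, not a defined BTeV interface:

```python
def ramp_hv(target_v: float, step_v: float = 50.0) -> None:
    # Placeholder: a real version would step each channel through the
    # SCADA/OPC layer and verify the read-back voltage at every step.
    print(f"ramping HV to {target_v} V in {step_v} V steps")

def on_daq_transition(old_state: str, new_state: str) -> None:
    # Hypothetical DCS hook invoked on DAQ run-state transitions.
    if (old_state, new_state) == ("configured", "running"):
        ramp_hv(1500.0)   # bring detectors to operating voltage
    elif new_state == "stopped":
        ramp_hv(0.0)      # ramp down at end of run

on_daq_transition("configured", "running")
```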
