
L2 CPUs and DAQ Interface: Progress and Timeline

Explore the latest advancements and future plans for the L2 CPU and DAQ interface. Learn about system requirements, data flow, CPU rules, and run control processes.


Presentation Transcript


  1. L2 CPUs and DAQ Interface: Progress and Timeline
  Kristian Hahn, Paul Keener, Joe Kroll, Chris Neu, Fritz Stabenau, Rick Van Berg, Daniel Whiteson, Peter Wittich (UPenn)

  2. Requirements
  • Data Flow
    • Keep the data processing path simple; make rejection very fast
    • Add no slow controls to event processing
  • Control
    • Configuration (load trigger exe, prescales, etc.)
    • System Reset (HRR)
  • Monitoring
    • Trigger rates, algorithm times, resource usage, etc.
    • Vital for commissioning of the system
  • Commissioning
    • Phase I is Trigger Evaluation Director + single box
    • Phase II is Trigger Evaluation Director + 4 boxes

  3. System & DAQ Architecture
  [Diagram: overall system and DAQ architecture, centered on the Trigger Evaluation Director]

  4. L2 Nodes
  • Nodes can be mostly ignorant of the state of the rest of the system. Keep it as simple as possible.
  • Need only two states:
    • "Thirsty": wait for events to process
    • "Drunk": disregard events
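The two states map naturally onto a single check in the node's event loop. Below is a minimal C++ sketch under that assumption; NodeState, Event, waitForEvent, and processEvent are hypothetical names, not the actual L2 node code.

```cpp
// Minimal sketch of the two node states described above ("Thirsty"/"Drunk").
// All names and hooks here are illustrative assumptions, not the real node code.
#include <optional>

enum class NodeState { Thirsty, Drunk };

struct Event { int l1aNumber = 0; };                        // stand-in for a real event

std::optional<Event> waitForEvent() { return Event{}; }     // stub: pretend an event arrived
void processEvent(const Event&) { /* run trigger algorithms, emit the decision */ }

// One pass of the node's main loop: only a "thirsty" node touches event data.
void nodeStep(NodeState state) {
    if (state != NodeState::Thirsty) return;                // "drunk": disregard events
    if (auto ev = waitForEvent()) processEvent(*ev);
}

int main() {
    nodeStep(NodeState::Thirsty);   // processes the (stub) event
    nodeStep(NodeState::Drunk);     // ignores events entirely
}
```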

  5. L2 Buffers
  CPU Rules:
  • Events sit in an L2 buffer between L1A and the L2 decision
  • Decisions are made in order
  • If the buffers are full on L1A → deadtime!
  [Diagram: the four L2 buffers feeding the CPU(s); repeated on the next three slides]

  6. L2 Buffers
  CPU Rules as on the previous slide.
  Parallel by Event:
  • Map buffers to CPUs
  • Each CPU does a whole event
  • Phase I: 1 box does each event, FIFO

  7. L2 Buffers
  CPU Rules and Parallel-by-Event mapping as on the previous slides.
  • Phase I: 1 box does each event, FIFO
  • Phase II: easy to extend to 1 box per buffer; process events in parallel; minimize deadtime!

  8. L2 Buffers
  CPU Rules and Parallel-by-Event mapping as on the previous slides.
  • Future? Consider splitting events up to process them in parallel. If triggers are partitionable, this may reduce tails.
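The CPU rules on slides 5-8 amount to simple bookkeeping: four buffer slots, decisions released only in L1A order, and deadtime asserted when every slot is occupied. The toy C++ sketch below illustrates that logic; BufferManager and its methods are illustrative assumptions, not the real Pulsar or node software.

```cpp
// Toy sketch of the CPU rules above: four L2 buffers, decisions reported in
// L1A order, deadtime when every buffer is occupied. All names are assumptions.
#include <array>
#include <cstdio>
#include <optional>

constexpr int kNumBuffers = 4;

struct BufferManager {
    std::array<std::optional<int>, kNumBuffers> occupant{}; // L1A number held by each buffer
    int nextDecision = 0;                                   // enforces in-order decisions

    // Accept an L1A; returns false (deadtime) if every buffer is already full.
    bool acceptL1A(int l1aNumber) {
        for (auto& slot : occupant)
            if (!slot) { slot = l1aNumber; return true; }
        return false;                                       // full on L1A -> deadtime
    }

    // Release an L2 decision; only the oldest outstanding event may be decided.
    bool reportDecision(int l1aNumber) {
        if (l1aNumber != nextDecision) return false;        // out of order: refuse
        for (auto& slot : occupant)
            if (slot && *slot == l1aNumber) { slot.reset(); ++nextDecision; return true; }
        return false;
    }
};

int main() {
    BufferManager mgr;
    for (int i = 0; i < 5; ++i)
        std::printf("L1A %d accepted: %d\n", i, mgr.acceptL1A(i)); // 5th L1A -> deadtime
    std::printf("decide 1 first: %d\n", mgr.reportDecision(1));    // refused: out of order
    std::printf("decide 0:       %d\n", mgr.reportDecision(0));    // ok
}
```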

  9. TL2D Creation
  • Current L2 System: TL2D bank built by the Alpha
  • Pulsar + Nodes:
    • Bank created by the nodes, sent to L2toTS for completion
    • Bundle TL2D with the L2 decision
    • Data size is small, overhead is large

  10. Prescales
  • Current L2 System
    • All events pass through a single Alpha
    • Standard prescales are trivial
    • Rate-limited prescales require one counter each
  • Phase I: Pulsars + TED + 1 node
    • Single CPU handles all events
    • Prescales done in L2TS (may be done in the node as a temporary measure for Phase I, but not useful in Phase II)
  • Phase II: Pulsars + TED + 4 nodes
    • Standard prescales: easy to do in L2toTS, one counter per trigger
    • Rate-limited prescales: easy to do in L2toTS, one clock per trigger; must be done in L2TS
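The two prescale flavors can be illustrated with per-trigger state, roughly as they might live in L2toTS: one counter per trigger for standard prescales, one clock per trigger for rate-limited prescales. This is a hedged sketch only; the struct names and the use of std::chrono as the time source are assumptions.

```cpp
// Hedged sketch of standard and rate-limited prescales, one instance per trigger.
#include <chrono>
#include <cstdio>

struct StandardPrescale {
    unsigned factor;        // accept 1 out of every `factor` passing events
    unsigned counter = 0;   // one counter per trigger
    bool accept() { return ++counter % factor == 0; }
};

struct RateLimitedPrescale {
    std::chrono::microseconds minSpacing;          // enforces a maximum accept rate
    std::chrono::steady_clock::time_point last{};  // one clock per trigger
    bool accept() {
        auto now = std::chrono::steady_clock::now();
        if (now - last < minSpacing) return false;
        last = now;
        return true;
    }
};

int main() {
    StandardPrescale ps{4};                                   // keep every 4th event
    for (int i = 0; i < 8; ++i) std::printf("%d", ps.accept());
    std::printf("\n");
    RateLimitedPrescale rl{std::chrono::microseconds(100)};   // at most ~10 kHz
    std::printf("%d %d\n", rl.accept(), rl.accept());         // second call comes too soon
}
```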

  11. Scalar Monitoring
  • Current L2 System
    • Scalars record: number of events into L2, number of events which pass, number of events out of L2 (after prescales)
    • Data is sent in the TL2D bank, and to ScalarMon over ethernet from the crate controller
  • Pulsar + Nodes
    • Gather information in one place: incoming event rate, CPU decision, prescale influence
    • Phase I: L2TS has all information (may be done on the node as a temporary measure for Phase I)
    • Phase II: L2TS has all information; cannot be done on the node
  "In the Alpha, scalar incrementation takes 1-3 ms" - Greg Feild
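A minimal sketch of the per-trigger scalars named above (events into L2, events passing, events out after prescales); the layout is illustrative and does not reproduce the real ScalarMon or TL2D format.

```cpp
// Hedged sketch of per-trigger scalars; names and structure are assumptions.
#include <cstdint>
#include <vector>

struct TriggerScalars {
    uint64_t eventsIn = 0;      // events presented to L2 for this trigger
    uint64_t eventsPassed = 0;  // events passing the L2 algorithm
    uint64_t eventsOut = 0;     // events accepted after the prescale
};

// One entry per L2 trigger, incremented where the decision and prescale are applied.
void record(std::vector<TriggerScalars>& scalars, unsigned trigger,
            bool passedAlgo, bool passedPrescale) {
    auto& s = scalars.at(trigger);
    ++s.eventsIn;
    if (passedAlgo) ++s.eventsPassed;
    if (passedAlgo && passedPrescale) ++s.eventsOut;
}

int main() {
    std::vector<TriggerScalars> scalars(8);
    record(scalars, 3, /*passedAlgo=*/true, /*passedPrescale=*/false);
}
```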

  12. SVT Information
  • Current L2 System: wait for SVT before processing
  • Pulsar + Nodes:
    • If the SVT L1 bits are zero: ignore SVT information, process the event. Leave SVT data out of TL2D.
    • If the SVT L1 bits are nonzero: wait for SVT information, process the event. Include SVT data in TL2D.
  • Note: we must wait for SVT info on rate-limited triggers. This may be often!
  [Diagram: SVT data arriving via the merger]
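The SVT rule reduces to a single predicate: wait for SVT only when the SVT L1 bits are set or the trigger is rate-limited. A small sketch under that assumption (all names hypothetical):

```cpp
// Hedged sketch of the SVT handling above; names and fields are assumptions.
#include <cstdint>

struct EventHeader {
    uint32_t svtL1Bits;        // L1 bits that involve SVT
    bool rateLimitedTrigger;   // rate-limited triggers always need SVT
};

bool needSvtData(const EventHeader& h) {
    return h.svtL1Bits != 0 || h.rateLimitedTrigger;
}

// The caller would block on the SVT/merger path only when needSvtData() is true,
// and include SVT data in TL2D only in that case.
int main() {
    EventHeader h{0, false};
    return needSvtData(h) ? 1 : 0;  // 0 here: process immediately, no SVT in TL2D
}
```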

  13. Run Control
  [Diagram: Run Control issues commands (Partition, Configure, Activate, HRR, End) to the Trigger Supervisor and the front-end crates; L1A and event data flow into the L2 paths & processors, which return trigger decisions]

  14. Run Control
  Signal → Action:
  • Partition: TED stores the partition number
  • Configure: TED sends the trigger table/exe; TED starts monitoring; nodes go to "thirsty" mode
  • Activate: (none)
  • L1A: process, generate decision
  • Halt: go to "drunk". Flush?
  • Recover: go to "thirsty"
  • Run: (none)
  • End: dump the trigger table; close stats; go to "drunk" mode; retain the partition number
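This table maps directly onto a transition handler. The sketch below encodes the same signal-to-action mapping; the enum and struct names are assumptions, and the real TED/node code may organize it differently.

```cpp
// Hedged sketch of the run-control signal handling listed above.
enum class RcSignal { Partition, Configure, Activate, L1A, Halt, Recover, Run, End };
enum class NodeState { Thirsty, Drunk };

struct TedState {
    int partitionNumber = -1;
    NodeState nodes = NodeState::Drunk;
};

void handle(TedState& ted, RcSignal sig, int partition = -1) {
    switch (sig) {
        case RcSignal::Partition: ted.partitionNumber = partition; break;
        case RcSignal::Configure: /* send trigger table/exe, start monitoring */
                                  ted.nodes = NodeState::Thirsty; break;
        case RcSignal::L1A:       /* process event, generate decision */ break;
        case RcSignal::Halt:      ted.nodes = NodeState::Drunk; break;   // flush?
        case RcSignal::Recover:   ted.nodes = NodeState::Thirsty; break;
        case RcSignal::End:       /* dump trigger table, close stats */
                                  ted.nodes = NodeState::Drunk; break;   // keep partition number
        case RcSignal::Activate:
        case RcSignal::Run:       break;                                 // no action
    }
}

int main() { TedState ted; handle(ted, RcSignal::Configure); }
```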

  15. T.E.D. Control
  [Diagram: the TED contains the TED Control, a Run Control client (talking to Run Control), an L2 Node Controller and L2 Node Monitor per node, and the L2 Monitor Server driving the L2 Mon GUI]

  16. Run Control Interface
  • Communication: Run Control talks to clients over ethernet, with SmartSockets via publish/subscribe
  • Progress: a prototype client is built and talks to RunControl
  • Tests: joined a partition, received configuration data, received RC transitions, ACKed back to RC
  [Diagram: Run Control ↔ Run Control client inside the TED]

  17. Monitoring
  • L2 Monitor Server
    • Asynchronous operation: nodes push data regularly, at low priority
    • Nodes are never blocked by monitoring
  • L2 Mon GUI
    • Web-based interface; pulls monitoring information from the nodes
    • Access from anywhere, anytime. Security issues?
  • Statistics
    • CPU/memory usage by node, by trigger, etc.
    • L1A rates, trigger rates, event sizes, processing times by trigger, ...
  [Diagram: nodes → L2 Monitor Server → L2 Mon GUI]
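One plausible way to realize "nodes push data regularly, at low priority" is a background thread that periodically snapshots the node's statistics and ships them off, leaving the event path free of blocking calls. The sketch below assumes a threads-plus-atomics design with a stubbed sendToMonitorServer; neither is confirmed by the slides.

```cpp
// Hedged sketch of a non-blocking monitoring push from a node.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <thread>

struct NodeStats {
    std::atomic<uint64_t> eventsProcessed{0};
    std::atomic<uint64_t> eventsAccepted{0};
};

void sendToMonitorServer(uint64_t processed, uint64_t accepted) {
    // stub: would serialize and push the numbers to the L2 Monitor Server
    (void)processed; (void)accepted;
}

void monitoringThread(const NodeStats& stats, const std::atomic<bool>& running) {
    while (running.load()) {
        sendToMonitorServer(stats.eventsProcessed.load(), stats.eventsAccepted.load());
        std::this_thread::sleep_for(std::chrono::seconds(1));  // push regularly
    }
}

int main() {
    NodeStats stats;
    std::atomic<bool> running{true};
    std::thread mon(monitoringThread, std::cref(stats), std::cref(running));
    stats.eventsProcessed++;   // the event path only touches atomics and never blocks
    running = false;
    mon.join();
}
```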

  18. TED <==> Nodes Communication
  • Message passing: need a flexible protocol for sending commands, configuration data, and monitoring statistics
  • Serialization: convert any message into a serial buffer
  • Communication: send/receive serial buffers; considering TCP/IP
  • Interface design is independent of the implementation
  [Diagram: Serializer, Deserializer, and Communicator on both the TED side and the node side (L2 Node Controller / L2 Node Monitor)]
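The split described here (serialize any message into a flat buffer, then hand the buffer to a transport-agnostic communicator, possibly TCP/IP) could look roughly like the interfaces below. The class and method names (Message, Communicator, ConfigureCommand) are illustrative assumptions, not the actual TED/node code.

```cpp
// Hedged sketch: serialization kept separate from the transport.
#include <cstdint>
#include <vector>

using SerialBuffer = std::vector<uint8_t>;

// Any message (command, configuration, monitoring stats) knows how to flatten itself.
struct Message {
    virtual ~Message() = default;
    virtual SerialBuffer serialize() const = 0;
};

// Transport abstraction: TCP/IP would be one implementation behind this interface.
struct Communicator {
    virtual ~Communicator() = default;
    virtual void send(const SerialBuffer& buf) = 0;
    virtual SerialBuffer receive() = 0;
};

struct ConfigureCommand : Message {
    uint32_t partitionNumber = 0;
    SerialBuffer serialize() const override {
        SerialBuffer buf(4);
        for (int i = 0; i < 4; ++i) buf[i] = (partitionNumber >> (8 * i)) & 0xFF;
        return buf;
    }
};

int main() {
    ConfigureCommand cmd;
    cmd.partitionNumber = 7;
    return cmd.serialize().size() == 4 ? 0 : 1;
}
```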

  19. Example: Configuration
  [Diagram: configuration flow - the trigger & hardware databases supply the trigger table to Run Control; via the Run Control client and TED Control, each node's L2 Node Controller receives its node ID, prescales, exe location, and monitoring rate, with an Ack returned at every step]

  20. T.E.D.: 2 CPUs, 1 Box
  • Kristian and Fritz have shown that the OS and interrupts can be isolated on one CPU, freeing the second CPU to do data I/O & algorithms.
  • Timing: software tails are reduced when all interrupts are sent to one CPU.
  [Diagram: CPU 1 handles the OS, interrupts, monitoring, and the TED interface over ethernet; CPU 2 handles data I/O and the algorithm, producing the decision + TL2D]
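On Linux, pinning the data I/O and algorithm work to the second CPU is done with CPU affinity; steering interrupts to the first CPU is a separate system-level configuration (e.g. interrupt affinity masks) not shown here. A minimal sketch using sched_setaffinity:

```cpp
// Hedged sketch of pinning the algorithm work to "CPU 2" (index 1) on Linux,
// in the spirit of isolating the OS and interrupts on the other CPU.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1
#endif
#include <sched.h>
#include <cstdio>

bool pinToCpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    // 0 = calling process; restricts scheduling to the chosen CPU only
    return sched_setaffinity(0, sizeof(set), &set) == 0;
}

int main() {
    if (!pinToCpu(1)) {               // CPU index 1 corresponds to "CPU 2" on the slide
        std::perror("sched_setaffinity");
        return 1;
    }
    // ... run data I/O and the trigger algorithm here, undisturbed by interrupts ...
    std::printf("pinned to CPU 2\n");
    return 0;
}
```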

  21. Monitoring Details
  Sharing data between CPUs:
  • The algorithm CPU writes monitoring data to shared memory
  • The manager CPU looks for monitoring data
  • Must ensure that the data is locked
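A minimal sketch of the locking requirement: the algorithm side updates a shared monitoring structure under a lock, and the manager side reads it under the same lock. It is shown here with two threads and std::mutex; a true cross-process shared-memory version would use POSIX shared memory with a process-shared lock instead. All names are assumptions.

```cpp
// Hedged sketch of locked monitoring data shared between the algorithm and manager sides.
#include <cstdint>
#include <mutex>
#include <thread>

struct MonitoringData {
    std::mutex lock;                 // must be held for every read or write
    uint64_t eventsProcessed = 0;
    double lastAlgoTimeMs = 0.0;
};

void algorithmSide(MonitoringData& mon) {        // runs on the algorithm CPU
    std::lock_guard<std::mutex> g(mon.lock);
    ++mon.eventsProcessed;
    mon.lastAlgoTimeMs = 3.2;
}

uint64_t managerSide(MonitoringData& mon) {      // runs on the manager CPU
    std::lock_guard<std::mutex> g(mon.lock);
    return mon.eventsProcessed;                  // consistent snapshot
}

int main() {
    MonitoringData mon;
    std::thread algo(algorithmSide, std::ref(mon));
    algo.join();
    return managerSide(mon) == 1 ? 0 : 1;
}
```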

  22. Big Picture: Commissioning Schedule
  • Phase I
    • Begin testing in parasitic mode with a single node in June
  • Phase II
    • Extend to 4 nodes
    • Design such that minimal work is required:
      • Send each event to a specific box rather than a single box [simple mask in the merger]
      • Add nodes to TED [simple in the current design]
      • L2TS receives events from multiple nodes
    • Would like to avoid reworking the prescales/scalar system

  23. Phase I Tasks
  [Timeline chart running from today through April, May, June 1, and June 21, with bars for: Node <-> Pulsar; Specify Data & Decision Format; SVT => PC; Merger => PC; PC => L2TS; TL2D Formation; Prescaling/Scalars (Node/L2TS?); RC-controlled configuration and event processing; Integrate Control & Configure; CPU Control; TED's Brain; TED <-> RunControl; Specify Config Data; TED <=> Node; Control and Interface Design; Monitor GUI has node data; TED <=> L2Mon; CPU Data Sharing; Parasitic Running & Decisions]

  24. Phase I Task Schedule (Date / Task to complete / Names / Needs)
  • March 12th
    • Pulsar <=> PC testing - Kristian+Fritz
    • Control & Interface design - All
    • TED <=> RunControl - Daniel
    • Specify Data & Decision Format - All
  • March 26th
    • TED's Brain - Daniel
    • Prescaling/Scalars in Nodes? - Kristian/Cheng Ju?
    • Specify Config Data - Daniel
    • TED <=> L2Mon - Daniel
  • April 8th
    • Begin: SVT => PC - Kristian+Daniel (needs SVT Pulsar)
    • Begin: Merger => PC - Kristian+Daniel (needs Merger Pulsar)
    • TED <=> Node - Fritz
    • CPU Control - Kristian
    • CPU Data Sharing - Kristian
  • April 15th
    • TL2D Formation - Daniel+Kristian
    • Begin: PC => L2TS - Kristian+Daniel (needs L2TS Pulsar)
  • May 1st
    • Monitoring GUI has node data - Daniel
    • Begin: integrate control pieces - Daniel+Kristian
  • June 1st
    • Begin: RC-controlled testing - Daniel+Kristian
  • June 21st
    • Begin: parasitic running - All
