BTeV Availability Issues
“We don’t have problems any more, we have issues...”
21 Aug 2001
M. Haney, University of Illinois
Early on, high energy physicists found too many particles...
[slide shows a cloud of particle symbols: π, K, K*, ρ, ω, φ, η, η′, Λ, Λc, Σ, Σc, Ω, a0, a1, a2, f0, f1, f2, D, D*, D1, D2, B, B*, Bs, Bc, Ds, K1, K2, ...]
“the secret is to bang the rocks together…” (Douglas Adams)
...which led to the Standard Model: quarks & leptons

quarks   charge +2/3: up, charm, top (truth)
         charge −1/3: down, strange, bottom (beauty)
leptons  charge −1:   e, μ, τ
         charge 0:    νe, νμ, ντ
quarks & anti-quarks

quarks       charge +2/3: u c t    charge −1/3: d s b
anti-quarks  charge −2/3: ū c̄ t̄    charge +1/3: d̄ s̄ b̄

Rule: Quarks cannot exist alone, but only in 2’s (mesons) and 3’s (baryons)
Now we can easily build any of the particles we have discovered (and predict some we haven’t)
[slide shows quark combinations: proton, neutron, anti-proton (baryons); π+, D0, Bs (mesons)]
How to build a Particle Accelerator

Start with some particles (usually electrons or protons):
• Electrons are easy: just heat up a filament
• Protons are also quite easy: ionize hydrogen
[slide shows cartoons: a battery-heated filament boiling off electrons, and hydrogen being ionized into protons and electrons]
Linear Accelerator (concept)
• if you match the frequency correctly, the particle will “surf” the RF wave...
[slide shows a charge riding the alternating + / − fields between successive electrodes]
Linear Accelerator (practice)
• LINAC at Fermilab (0.4 GeV)
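As a quick check of what 0.4 GeV buys, here is the arithmetic for a proton leaving the linac (a minimal sketch; the 0.4 GeV figure is from the slide, the relativistic formulas are textbook):

```c
/* How fast do protons leave the Fermilab linac? Illustrative only;
 * standard relativistic kinematics, energies in GeV, c = 1. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double m_p = 0.938;              /* proton rest energy, GeV  */
    const double T   = 0.4;                /* linac kinetic energy     */
    double gamma = 1.0 + T / m_p;          /* total energy / rest mass */
    double beta  = sqrt(1.0 - 1.0 / (gamma * gamma));
    printf("beta = %.3f of light speed\n", beta);   /* ~0.71c */
    return 0;
}
```

So the protons leave the linac at roughly 0.71c, and the synchrotrons take it from there.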
Synchrotrons
[slide shows a ring diagram labeled: bending magnets, focussing magnets, accelerating section, vacuum tube (beam pipe)]
Fermi National Accelerator Lab
• The ring is about 4 miles around...
Two for the price of one
• accelerate protons and antiprotons
• same ring
• opposite charge, opposite direction
• bang them together...
[slide shows the two 1000 GeV beams circulating in opposite directions and colliding at 2000 GeV]
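A worked comparison shows why head-on beams are worth the trouble (standard collider kinematics, not from the slides):

```c
/* Head-on vs fixed-target: how much energy is available to make
 * new particles? Standard kinematics, energies in GeV, c = 1. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double E  = 1000.0;   /* each beam's energy */
    const double mp = 0.938;    /* proton rest mass   */

    /* colliding beams: all the energy is available */
    printf("collider:     sqrt(s) = %.0f GeV\n", 2.0 * E);

    /* one beam on a stationary proton: most of the energy is
     * wasted carrying the collision products forward */
    printf("fixed target: sqrt(s) = %.1f GeV\n",
           sqrt(2.0 * E * mp + 2.0 * mp * mp));   /* ~43 GeV */
    return 0;
}
```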
4π Detector
• The typical arrangement is to wrap the detector around the interaction point: vertex (later…)
[photo: CDF at Fermilab]
BTeV is rather different...
[slide shows the BTeV spectrometer along the beamline, protons entering from one end and antiprotons from the other]
Too much physics!
• Let’s talk about computational requirements:
• every 132 ns, two “clouds of quarks” cross paths...
• on the average, there will be 2 “events” (collisions)
• 200 KBytes of data are produced per event
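These figures can be sanity-checked with a few lines of arithmetic (a minimal sketch; it treats the ~200 KB as the per-crossing total, which is the reading that reproduces the 1.5 TBytes/sec quoted on the next slide):

```c
/* Back-of-the-envelope check of the numbers on this slide. */
#include <stdio.h>

int main(void)
{
    const double period_s = 132e-9;           /* one crossing every 132 ns */
    const double rate_hz  = 1.0 / period_s;   /* ~7.58 MHz crossing rate   */
    const double bytes    = 200e3;            /* ~200 KB of data each time */

    printf("crossings per second: %.2f million\n", rate_hz / 1e6);
    printf("raw data rate       : %.2f TB/s\n", rate_hz * bytes / 1e12);
    return 0;
}
```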
BTeV Data Path
[slide shows the data-path diagram]
About 1 TByte of (distributed) buffer memory...
Why have a Trigger?
• Compression is not enough
• To get from 1.5 TBytes/sec out of the detector down to 200 MBytes/sec to tape, you need to:
  • identify/reject “boring” events
  • identify/accept/save-to-tape “interesting” events
  • keep extremely good records on every action!
• All of the analysis is statistical
  • Bad statistics = mission failure
  • Losing an event is survivable
  • Not knowing that you lost it, or why, is death
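The size of the job, in one number (simple division on the figures above):

```c
/* The trigger's job: input bandwidth vs output bandwidth. */
#include <stdio.h>

int main(void)
{
    const double from_detector = 1.5e12;   /* bytes/sec off the detector */
    const double to_tape       = 200e6;    /* bytes/sec written to tape  */

    /* only about 1 part in 7500 can survive to tape */
    printf("required rejection: %.0f to 1\n", from_detector / to_tape);
    return 0;
}
```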
The BTeV Pixel Trigger
• (there is also a Muon Trigger; there may be others...)
• 2 KBytes/crossing (from Pixel Detector)
  • every 132 ns...
• Fast, pipelined FPGA(s) to sort, match up pairs and triplets between planes
• DSP(s) to find tracks, compute vertex(es)
• time available
  • <1 ms, from crossing to trigger “opinion”
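To make the FPGA stage concrete, here is a toy version of the pair/triplet matching between planes (the hit format, window size, and equal plane spacing are all assumptions for illustration; the real firmware is pipelined hardware working on sorted hit streams, not software on arrays):

```c
/* Toy sketch of FPGA-style triplet finding across three pixel planes. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y; } Hit;   /* hit position on one plane */

/* With equally spaced planes, a straight track puts the middle hit
 * midway between the outer two, so the residual should be small. */
static int is_triplet(Hit a, Hit b, Hit c, float window)
{
    return fabsf(2.0f * b.x - a.x - c.x) < window &&
           fabsf(2.0f * b.y - a.y - c.y) < window;
}

int main(void)
{
    Hit p0 = {1.0f, 2.0f}, p1 = {1.5f, 2.5f}, p2 = {2.0f, 3.0f};
    printf("triplet? %s\n", is_triplet(p0, p1, p2, 0.1f) ? "yes" : "no");
    return 0;
}
```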
[slide: BTeV trigger and data-path block diagram. Detector front ends (DCB) feed L1 buffers (raw, cooked) across one highway; the Pixel Preprocessor (FPGAs) and Segment Preprocessors (FPGAs) feed the Pixel DSP Farm (joint track+vertex) and the Muon DSP Farm via a crossing switch; DSP “opinions” go to GL1, whose accept/reject decisions flow through the Resource Manager, which hands requested/assigned crossing data to the L2/L3 Linux farm; PTSM, MTSM, and GLSM provide regional control and monitoring under BTeV Run Control]
Trigger Path Issues
• direct (optical) link, front-end to FPGA-part
  • geographic specificity
• Many-to-many interconnect, from FPGAs to DSPs
  • work assignments handed out, one crossing to each DSP
  • all FPGAs contribute to the “one crossing”
  • hence many-to-(one-of)-many
  • the “(one-of)-many” is ~2500 DSPs (!)
• no inter-DSP connection (needed…)
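A sketch of how “many-to-(one-of)-many” can work with no inter-DSP coordination at all (the modulo schedule is an assumption for illustration; the farm size matches the slide):

```c
/* Every FPGA routes its slice of crossing N to the same DSP, picked
 * by a rule all sources share: the crossing number itself decides,
 * so no FPGA or DSP ever has to ask anyone else. */
#include <stdint.h>
#include <stdio.h>

#define N_DSPS 2500   /* size of the pixel DSP farm */

static uint32_t dsp_for_crossing(uint64_t crossing)
{
    return (uint32_t)(crossing % N_DSPS);
}

int main(void)
{
    for (uint64_t c = 0; c < 3; c++)
        printf("crossing %llu -> DSP %u\n",
               (unsigned long long)c, dsp_for_crossing(c));
    return 0;
}
```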
Trigger Path Issues (page 2)
• DSPs to GL1
  • each DSP offers one “opinion” per crossing
  • multiple triggers = multiple opinions
  • every 132 ns
• GL1 decides
  • accept/reject
  • based on the opinions received for that crossing
  • 7.6 million decisions per second
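A sketch of the GL1 combination step (whether GL1 really ORs the opinions or applies a programmable rule is not stated here; the OR is an assumption for illustration):

```c
/* GL1-style decision: combine one opinion per trigger per crossing. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { OPINION_REJECT = 0, OPINION_ACCEPT = 1 } Opinion;

/* accept the crossing if any trigger found it interesting */
static bool gl1_decide(const Opinion *opinions, int n_triggers)
{
    for (int i = 0; i < n_triggers; i++)
        if (opinions[i] == OPINION_ACCEPT)
            return true;    /* keep it */
    return false;           /* reject; this runs ~7.6 million times/sec */
}

int main(void)
{
    Opinion per_crossing[2] = { OPINION_REJECT, OPINION_ACCEPT }; /* pixel, muon */
    printf("%s\n", gl1_decide(per_crossing, 2) ? "accept" : "reject");
    return 0;
}
```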
Trigger Path Issues (page 3)
• Resource Manager
  • conveys GL1 decisions to L1 Buffers
  • “push” (as opposed to “pull”) architecture
• L1 Buffers deliver data to appointed L2 processor
  • Pentium-class Linux machine(s)
  • again, many-to-(one-of)-many
  • the “(one-of)-many” is ~2000 Linux machines
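A sketch of what “push” means in practice (message layout and names invented for illustration):

```c
/* The accept arrives at every L1 buffer already carrying the
 * assigned L2 node, so buffers ship data without being asked:
 * no request/reply round trip. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t crossing;   /* which crossing GL1 accepted            */
    uint16_t l2_node;    /* which of the ~2000 Linux boxes gets it */
} AcceptMsg;

static void l1buf_on_accept(const AcceptMsg *m)
{
    /* the destination came bundled with the decision */
    printf("shipping crossing %llu to L2 node %u\n",
           (unsigned long long)m->crossing, m->l2_node);
}

int main(void)
{
    AcceptMsg m = { 12345, 678 };   /* made-up values */
    l1buf_on_accept(&m);
    return 0;
}
```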
Trigger Path Issues (page last)
• L2 Processor is L3 Processor
  • sufficiently interesting events receive additional (L3) processing
• events that satisfy L3 are recorded to “tape”
  • or saved on disk, or burned to CD, or ...
[the same trigger and data-path block diagram again, shown as context for the supervisor/monitor close-ups that follow]
Pixel (Muon) Trigger Supervisor and Monitor: (P/M)TSM
• Control Functions:
  • Initialization
    • sets the maximum bandwidth necessary
    • can not take all day...
  • Command Parsing and Distribution
    • to subordinates, from Run Control
  • Error Response
    • autonomous, “local” (regional)
    • not to be confused with higher-level-driven Error Handling, which appears as “Commands”
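Command Parsing and Distribution might look roughly like this (a sketch only; the command set and handler names are invented, not taken from the BTeV design):

```c
/* TSM-style fan-out: decode a command from Run Control once,
 * then distribute it to every subordinate local TSM. */
#include <stdio.h>

typedef enum { CMD_INIT, CMD_START, CMD_STOP, CMD_HANDLE_ERROR } Cmd;

static void forward_to_local_tsm(int region, Cmd cmd)
{
    printf("region %d: command %d\n", region, cmd);  /* stand-in */
}

static void tsm_dispatch(Cmd cmd, int n_regions)
{
    for (int r = 0; r < n_regions; r++)
        forward_to_local_tsm(r, cmd);
}

int main(void)
{
    tsm_dispatch(CMD_INIT, 4);
    return 0;
}
```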
(P/M)TSM (continued)
• Monitor Functions
  • Error Message Collection and Reporting
    • organized, formatted, sent higher up
  • Hardware and Software Status
    • not unlike Error Messages...
  • Status and Data Histogramming
    • utilizes remaining (P/M)TSM system bandwidth
Close-up: FPGA Trigger Supervisor and Monitor
[slide: block diagram. BTeV Run Control and database(s) connect (EtherNet, for example) to _TSM Regional Control and Monitoring, which keeps a regional copy of config data and history; that connects (ARCNet, for example) to _TSM Local Control and Monitoring, with local config in Flash, JTAG for programming and debug, and control influence over the preprocessor FPGAs; also shown: monitoring awareness, standalone operational capability, power and cooling monitoring and control, fire detect, and the annotation “/c/spot run”]
Close-up: DSP Trigger Supervisor and Monitor
[slide: the same regional/local control structure, now ending at a DSP: crossing data in by DMA through the Host Port Interface, opinions out by BSP; JTAG for programming, debug, and monitoring; local config in Flash; monitoring awareness, standalone operational capability, power and cooling monitoring and control, fire detect, and the annotation “run(spot); run;”]
Factoids
• TMS320C67x DSP
  • ~70 Kbyte internal RAM, ~1200 MIPS
  • fixed/floating point
• TMS320C64x DSP
  • ~1 Mbyte internal RAM, ~3x faster…
  • fixed point only
• times 2500 devices (!)
  • Sony Playstation 2.5K ?
More Factoids
• Host Port Interface (TI DSP only)
  • almost-direct access into the DSP
    • peek, poke
  • uses DMA(-like) resources…
  • (concept not unique to TI)
• DMA
  • crossing data in
• Buffered Serial Port(s)
  • opinions out; dual, ~75 Mbps (C6x)
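Conceptually, peek/poke through an HPI-style port looks like this (register names and addresses are invented for illustration; on a real board they come from the hardware manual):

```c
/* Host-side sketch: the host reaches into DSP memory through an
 * address/data register pair, without running any code on the DSP. */
#include <stdint.h>

#define HPIA ((volatile uint32_t *)0x40000000u)  /* address register (hypothetical) */
#define HPID ((volatile uint32_t *)0x40000004u)  /* data register (hypothetical)    */

static uint32_t hpi_peek(uint32_t dsp_addr)
{
    *HPIA = dsp_addr;    /* aim the HPI at a DSP memory location */
    return *HPID;        /* the DMA-like engine fetches the word */
}

static void hpi_poke(uint32_t dsp_addr, uint32_t value)
{
    *HPIA = dsp_addr;
    *HPID = value;       /* and here it stores one */
}
```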
DSP/BIOS (Texas Instruments)
• based on SPOX
  • “the oldest DSP RTOS…”
• scalable real-time kernel
  • small footprint (< 2K words)
  • preemptive multithreading
    • scheduling (tasks, soft/hard interrupts)
    • synchronization (between tasks, interrupts)
  • hardware abstraction
• extensible
DSP/BIOS (continued)
• real-time analysis
  • real-time instrumentation
    • explicit API calls
    • (controllably) implicit, at context changes
  • host-target communications
  • statistics gathering
    • same as above
  • host data channels
    • binds kernel I/O objects to host files
RTDX - Real Time Data Exchange
• utilizes JTAG chain (and emulation bits)
• target (kernel) component
  • moves messages to/from DSP/BIOS queues, from/to a JTAG-accessible buffer
  • “real time” target to host (?)
• host component
  • data visualization and analysis tools
  • data to target “not” real-time… (?)
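The target-side idea, in miniature (everything below is invented for illustration; the real mechanism is TI’s RTDX library, and the key property is that logging never blocks the real-time path):

```c
/* Conceptual sketch: log records queue in a small on-chip buffer
 * that the JTAG emulator drains, so the DSP never waits on the host. */
#include <stdint.h>
#include <string.h>

#define LOG_SLOTS  16
#define SLOT_BYTES 64

static volatile uint8_t  log_buf[LOG_SLOTS][SLOT_BYTES]; /* JTAG-visible  */
static volatile uint32_t head, tail;                     /* emulator owns tail */

/* non-blocking: if the emulator has fallen behind, drop the record
 * rather than stall the 132 ns real-time path */
static int log_write(const void *msg, uint32_t len)
{
    if (len > SLOT_BYTES || head - tail >= LOG_SLOTS)
        return 0;                        /* full: caller counts the loss */
    memcpy((void *)log_buf[head % LOG_SLOTS], msg, len);
    head++;
    return 1;
}
```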
RTDX (continued)
• fundamental (TI) model
  • one PC running CCS (or an app with CCS API calls)
  • small number (4’ish) of DSPs on one JTAG chain
  • JTAG/emulator “pod” connects PC to chain
    • (2 “extra” emulation bits, in addition to TDI/TDO/TMS/TCLK)
• BTeV challenge
  • not buying 500 PCs to support 2000 DSPs…
BTeV Conclusion(s)
• fierce data rates
• massive parallelism
• heterogeneous control-in-depth
• limited loss-of-data (or performance): tolerable
• any loss of understanding: unacceptable
BTeV TimeTable
• detailed technical design report
  • required Feb 2002
• R&D for TDR, now
• to be operational in 2006
• http://www-btev.fnal.gov/btev.html