
Functional requirements for BGV demonstrator (Run 2)








  1. Functional requirements for BGV demonstrator (Run 2)
  Performance requirements are not discussed here. This is ongoing work; it will soon be available as a proper written document.

  2. Generalities
  • The main purpose of the BGV is to deliver transverse beam size values (one H and one V) per bunch at the highest possible rate
  • aim for a per-bunch statistical accuracy of ~5% in 3 min for 10^11 p
  • The beam size values are extracted from transverse 2D (H x V) distributions, which can also be made available to the operators
  • In addition, it can also measure
  • a beam trajectory (=> position, slopes)
  • relative bunch amplitudes for any bunch slot (=> ghost charge!)
  • The BGV will be able to measure beam sizes at any energy and any beam intensity
  • of course, the data rate will scale with the bunch intensity and depend on energy and beam species
  NB: here, bunch slot = BCID
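The beam-size extraction described above can be sketched numerically: beam-gas vertices scatter around the beam axis with the beam's transverse width, and the statistical precision of a width estimate scales as 1/sqrt(2N), which is how a ~5% per-bunch target maps to a required vertex count. All numbers in this toy (true widths, vertex count) are illustrative assumptions, not BGV parameters:

```python
import numpy as np

# Toy extraction of per-bunch transverse beam sizes from reconstructed
# beam-gas vertex positions. Widths and vertex count are assumptions.
rng = np.random.default_rng(0)
sigma_x_true, sigma_y_true = 0.20, 0.15   # mm, assumed true beam sizes

# Simulated vertex positions for one bunch (vertex resolution neglected;
# the real analysis unfolds the resolution from the raw width).
x = rng.normal(0.0, sigma_x_true, 5000)
y = rng.normal(0.0, sigma_y_true, 5000)

# Simplest estimator: the RMS of the vertex distribution per plane.
sigma_x = x.std(ddof=1)
sigma_y = y.std(ddof=1)

# Relative statistical error of an RMS estimate: 1 / sqrt(2N).
rel_err = 1.0 / np.sqrt(2 * len(x))
print(f"sigma_x ~ {sigma_x:.3f} mm, sigma_y ~ {sigma_y:.3f} mm, "
      f"stat. precision ~ {100 * rel_err:.1f}%")
```

With 5000 vertices the relative precision is 1%; inverting the same formula, a 5% target needs only ~200 vertices per bunch.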

  3. Some basic facts
  • The BGV demonstrator is for LHC Ring 2, but the idea is to later (after Run 2) develop one BGV per ring. These will be two independent systems, like the LDM of each ring.
  • In a first phase, the BGV demonstrator will have to be commissioned by experts only
  • In a second phase, the BGV will be used by CCC operators
  • but experts will continue to debug/improve/calibrate
  => reflected in the control/monitoring architecture
  [Diagram: control/monitoring architecture. BGV ECS (PVSS) for experts; interface (to be defined) to LHC SCADA / LHC CMW for LHC operation, and possibly LHC Vac; gas target, detector FE, readout boards and DAQ, each with slow-control data storage, plus event data storage]

  4. What "event data" does the BGV produce, and how?
  • Upon a "Level-0 trigger", the detector FE delivers analog signals of all fibres to the readout board (TELL1)
  • The TELL1 board digitizes (ADC) the signals and performs (in FPGAs) a zero-suppression (subtract pedestals, subtract common mode, find candidate "hits" and construct a "cluster" from that). It creates a data bank for this triggered event (BCID, ...) and puts together a fixed number of events in a multiple event package (MEP) before sending it via Gbit Ethernet to a specific node of the DAQ farm (as decided by the ODIN readout supervisor)
  • On the DAQ node, an event-builder process waits until it receives a MEP from all sources (TELL1s + ODIN), builds the events and passes them to an HLT process. The HLT process performs the pattern recognition and fitting (tracks + vertex). The event data are then passed to a disk-writer process.
  [Diagram: from the raw signal of fibre channel 0 through ADC and clustering (cluster charge, cluster position) to tracks and vertex]
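The zero-suppression chain described above (pedestal subtraction, common-mode subtraction, hit finding, clustering) can be sketched as follows for a single toy 128-channel link. The threshold value, the common-mode estimator (a median) and the test pulse are illustrative assumptions; the real algorithm runs in the TELL1 FPGAs with different parameters:

```python
import numpy as np

# Sketch of TELL1-style zero-suppression on one 128-channel link.
# Threshold and pedestal values are illustrative, not FPGA settings.
N_CHANNELS = 128

def zero_suppress(adc, pedestals, hit_threshold=10.0):
    """Return a list of (cluster_position, cluster_charge) for one link."""
    sig = adc - pedestals                 # per-channel pedestal subtraction
    sig -= np.median(sig)                 # crude common-mode estimate
    hits = sig > hit_threshold            # candidate hit channels
    clusters = []
    ch = 0
    while ch < N_CHANNELS:
        if hits[ch]:
            start = ch
            while ch < N_CHANNELS and hits[ch]:
                ch += 1                   # group adjacent hits into a cluster
            q = sig[start:ch]
            charge = q.sum()
            # charge-weighted mean channel as the cluster position
            pos = np.dot(np.arange(start, ch), q) / charge
            clusters.append((pos, charge))
        else:
            ch += 1
    return clusters

# Toy event: flat pedestal of 512 ADC counts, one deposit on channels 40-42
adc = np.full(N_CHANNELS, 512.0)
adc[40:43] += [30.0, 80.0, 25.0]
print(zero_suppress(adc, np.full(N_CHANNELS, 512.0)))
```

The toy event yields one cluster of charge 135 centred near channel 41, i.e. a few bytes of output instead of 128 raw ADC values, which is the point of doing the suppression in the front-end firmware.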

  5. DAQ flow

  6. Basic requirements from CCC operation
  Monitoring:
  • beam size X and Y vs time
  • same for a selectable BCID (or set of BCIDs)
  • ghost charge vs time
  • beam positions (H, V) and slopes (H, V) vs time
  • same for a selectable BCID (or set of BCIDs)
  • histogram of event rate vs BCID
  Control:
  • detector
  • "turn ON/OFF" (to be defined what it means)
  • "reset BGV" (to be defined what it means)
  • load a filling scheme and edit which slots should be considered for data acquisition
  • gas target
  • control gas injection

  7. Basic requirements from experts
  Control:
  • restart a run
  • load a filling scheme and edit the prescale factor per BCID
  • change the proportions of vertex : cluster : NZS trigger rates
  • send test-pulse triggers
  • change L0 trigger thresholds
  • change HLT trigger cuts
  Differences to LHCb:
  • add beam energy and beam modes to the ODIN bank (could be used to adjust HLT cuts per event?)
  • event time ordering must be respected to ~1 s
  • the BGV DAQ acts ~like a "monitoring farm" in the sense that it must produce online results (trends) based on selected events
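The per-BCID prescale factors and the vertex : cluster : NZS trigger proportions mentioned above could be modelled as below. All factors and category shares are invented for illustration; they are not BGV settings:

```python
# Sketch of per-BCID prescaling and a trigger-category mix.
# Prescale factors and shares are illustrative assumptions.
prescale = {100: 1, 101: 10, 102: 0}      # BCID -> keep 1 out of N (0 = slot off)
counters = {bcid: 0 for bcid in prescale}

def accept(bcid):
    """Deterministic prescale: keep every N-th trigger of a given BCID."""
    n = prescale.get(bcid, 0)             # unknown BCIDs are rejected
    if n == 0:
        return False
    counters[bcid] += 1
    return counters[bcid] % n == 0

def pick_category(r, shares=(("vertex", 0.95), ("cluster", 0.045), ("nzs", 0.005))):
    """Map a uniform random number r in [0, 1) to a trigger category."""
    acc = 0.0
    for name, frac in shares:
        acc += frac
        if r < acc:
            return name
    return shares[-1][0]
```

A deterministic counter-based prescale (rather than a random one) keeps the accepted rate exact per BCID, which matters when the per-slot statistics feed a per-bunch measurement.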

  8. Event data sizes out of TELL1s
  Zero-suppressed data:
  • TELL1 data will depend on the number Ncl of clusters per BGV event and the data size scl per cluster (as for the VELO, fixed)
  • Assume here Ncl = 300 and scl = 4 B
  • Thus 1.2 kB per triggered event
  • As a comparison, the VELO (84 TELL1s) has for pp at 8 TeV an event size of about 9 kB
  • To the TELL1 data one should add the transport overhead and the ODIN data
  • Assume about 1.5 kB per event
  Non-zero-suppressed data: of the order of 128 x 144 x 1 B = ~20 kB per triggered event (128 channels + 16 header channels per link, 8-bit ADC values)
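The size estimates above can be reproduced with back-of-envelope arithmetic. Reading the 144 factor as 128 channels + 16 header channels per link, and the 128 factor as a link count, is an assumption taken from the slide's annotation:

```python
# Back-of-envelope check of the event-size figures above.
N_CLUSTERS = 300         # assumed clusters per BGV event
CLUSTER_SIZE_B = 4       # bytes per cluster (fixed, as for the VELO)
zs_payload_b = N_CLUSTERS * CLUSTER_SIZE_B   # 1200 B = 1.2 kB
zs_event_b = 1_500       # + transport overhead and ODIN data -> ~1.5 kB

# Non-zero-suppressed: 128 x 144 x 1 B. Interpreting 144 as
# 128 channels + 16 header channels per link (1-byte ADC values)
# and 128 as the number of links is an assumption.
nzs_event_b = 128 * 144 * 1                  # 18432 B, quoted as ~20 kB
print(zs_payload_b, zs_event_b, nzs_event_b)
```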

  9. Event data rates, TELL1s to DAQ
  Zero-suppressed data from TELL1s to DAQ:
  • Absolute maximum = 1.5 kB/trigger x 1 Mtrigger/s = ~1.5 GB/s, distributed over 8 TELL1 sources (~200 MB/s per source)
  • will probably be less, because we set an L0 trigger threshold which keeps the L0 rate below 1 MHz (a few 100 kHz)
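The absolute-maximum bandwidth quoted above follows directly from the event size and the trigger rate:

```python
# Bandwidth check for the absolute-maximum figure above.
EVENT_SIZE_B = 1_500          # ~1.5 kB per triggered event (previous slide)
MAX_L0_RATE_HZ = 1_000_000    # 1 MHz absolute-maximum L0 rate
N_SOURCES = 8                 # TELL1 sources

total_bps = EVENT_SIZE_B * MAX_L0_RATE_HZ      # 1.5e9 B/s = ~1.5 GB/s
per_source_bps = total_bps / N_SOURCES         # 187.5 MB/s, quoted as ~200 MB/s
print(f"{total_bps/1e9:.1f} GB/s total, {per_source_bps/1e6:.0f} MB/s per source")
```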

  10. DAQ output event data categories
  • vertex data: these are the data used online to make the beam profiles. They are produced in the online reconstruction.
  • these are "final" results, i.e. they already include the vertex resolution corrections
  • storing split vertices also allows one to extract the resolution (but without changing the alignment)
  • cluster data: needed for online monitoring of the pattern recognition and for offline verification of results. They allow one to re-do the tracks and vertices, perform alignment, parametrise the vertex resolution corrections, etc. These data are appended to some events at a reduced rate for monitoring, but it should be possible to increase the rate for dedicated "calibration" runs.
  • NZS (non-zero-suppressed) data: these "raw" data are needed to monitor pedestals and noise, and to check the overall behaviour of the front-end part. They are appended to some events, but at a very reduced rate.
  • Additionally: histograms of vertex data, for example.

  11. DAQ output event data: vertex data
  These are the data used online to make the beam profiles. They are produced in the online reconstruction.
  • these are "final" results, i.e. they already include the vertex resolution corrections
  Estimated data size; the minimum data per vertex are*:
  • vertex track multiplicity: 1 short = 2 B
  • vertex 3D position: 3 floats = 12 B
  • vertex position covariance matrix: 9 floats = 36 B
  • header data (ODIN): 50 B
  Rate of about 3 Hz per bunch (max. 2800 bunches)
  Max rate: 3 Hz x 2800 x 100 B = ~800 kB/s
  This should be the dominant data rate
  *Alternative: split vertices 1 & 2, multiplicity & 3D position => 2 x (2 B + 12 B) = 28 B (no covariance needed)
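The minimum vertex record can be laid out with Python's struct module as a size sanity check: a 16-bit multiplicity, three 32-bit floats for the position, nine 32-bit floats for the covariance matrix (36 B, which makes the total come out at the ~100 B used in the rate estimate), plus an opaque 50-byte ODIN header. The format string and field order are illustrative, not the real bank layout:

```python
import struct

# Hypothetical packing of the minimum vertex record. '<' means
# little-endian with no padding; the ODIN header is modelled only
# as an opaque 50-byte blob, not its real content.
VERTEX_FMT = "<h3f9f"    # mult (2 B) + 3D position (12 B) + covariance (36 B)

def pack_vertex(mult, pos, cov, odin=b"\x00" * 50):
    assert len(pos) == 3 and len(cov) == 9 and len(odin) == 50
    return struct.pack(VERTEX_FMT, mult, *pos, *cov) + odin

record = pack_vertex(7, (0.1, -0.2, 12.5), (0.0,) * 9)
print(len(record))       # 2 + 12 + 36 + 50 = 100 bytes per vertex
```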

  12. DAQ output event data: cluster data
  Needed for online monitoring of the pattern recognition and for offline verification of results. They allow one to re-do the tracks and vertices, perform alignment, parametrise the vertex resolution corrections, etc. These data are appended to some events at a reduced rate for monitoring, but it should be possible to increase the rate for dedicated "calibration" runs.
  Estimated data size: the same as the TELL1 data input to the DAQ => 1.5 kB/trigger
  Rate: to keep it reasonable compared to the vertex data, limit to about 80 kB/s, which means about 50 Hz maximum
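The rate limits for the auxiliary streams (here and on the NZS slide that follows) are just the bandwidth budget divided by the event size:

```python
# Rate-budget check for the cluster-data and NZS streams, using the
# event sizes quoted on the previous slides.
BUDGET_BPS = 80_000          # ~80 kB/s allowed per auxiliary stream
CLUSTER_EVENT_B = 1_500      # zero-suppressed event, as at the TELL1 output
NZS_EVENT_B = 20_000         # non-zero-suppressed event

max_cluster_rate = BUDGET_BPS / CLUSTER_EVENT_B   # ~53 Hz, quoted as ~50 Hz
max_nzs_rate = BUDGET_BPS / NZS_EVENT_B           # 4 Hz
print(f"cluster <= {max_cluster_rate:.0f} Hz, NZS <= {max_nzs_rate:.0f} Hz")
```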

  13. DAQ output event data: NZS data
  These "raw" data are needed to monitor pedestals and noise, and to check the overall behaviour of the front-end part. They are appended to some events, but at a very reduced rate.
  Estimated data size: of the order of 20 kB
  Rate: to keep it reasonable compared to the vertex data, limit to about 80 kB/s, which means about 4 Hz maximum

  14. Slow control data
  Front end:
  • 128 SiPM temperature values
  • 32 SiPM bias voltages and currents?
  • xxx LV states (or voltage and current values?)
  Readout: ...
  DAQ: ...
  Vacuum: ...
