FGT (FEE and) Readout/DAQ Status and issues from operations perspective Gerard Visser
General status is ‘ok’ – but deficiencies exist
• Most runs run smoothly, under control of the shift crew
• Deadtime is now pretty low, e.g. ~2% @ 250 Hz
• We ran 7 timebins, non-zero-suppressed, for the whole of run 12
• Event size of 208 kB is a lot (TPX ~1 MB)
• We had better use 7 timebins in analysis – we may not get this next year!
• 1100 token limit†
• Can control all FGT-relevant APV parameters and all timing parameters
• Can read FEE temperatures, but only when not in a run; currently done only at configuration time
• Version control used for ARM firmware (svn), ARC firmware (?), DAQ/control software (CVS)
• No known hardware failures in crate, HVPS, ARM, or ARC in run 12 (maybe something in the test stand)
• Modularity was exercised on several occasions with module and cable swaps in short accesses
• Remote firmware updates applied successfully
• ARC patches and the buffer size limit point to a need to revise the ARC for run 13+
†The 1100 token limit is purely my fault, for badly underestimating event size 2 years ago.
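The event-size and token-limit numbers above fit together in a quick back-of-envelope sketch (the 208 kB/event, 250 Hz, and 1100-token figures are from this slide; the rest is simple arithmetic):

```python
# Back-of-envelope check of the FGT readout numbers quoted above:
# 208 kB/event (non-ZS, 7 timebins) at 250 Hz, and an 1100-token buffer.

EVENT_SIZE_KB = 208
TRIGGER_RATE_HZ = 250
TOKEN_LIMIT = 1100

# Sustained data rate out of the FGT at the quoted trigger rate.
data_rate_MB_s = EVENT_SIZE_KB * TRIGGER_RATE_HZ / 1024.0

# Buffer memory implied by holding the full 1100 tokens' worth of events.
buffer_depth_MB = EVENT_SIZE_KB * TOKEN_LIMIT / 1024.0

print(f"data rate    ~ {data_rate_MB_s:.1f} MB/s")
print(f"buffer depth ~ {buffer_depth_MB:.0f} MB at the 1100-token limit")
```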
(Probable) FEE issues
• We now need to look at and discuss http://drupal.star.bnl.gov/STAR/system/files/STAR_Cosmic_Analysis.pdf. The issue looks far from closed to me...
• The 3AB group is generally not working, due to an I2C hardware problem.
• Since 3/31/2012, the 4AB group is not working; suspect a voltage regulator failure – pedestals have gone crazy. The trouble started with Gerrit’s high-rate running, but that is probably just a coincidence.
• Many flaky APV chips, e.g. 7 in run 13105023. I don’t think this is understood. It doesn’t look like a failure to configure the APV, since register readback is generally ok.
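Since register readback is the evidence that APV configuration succeeded, the check involved looks something like the following. This is only a minimal sketch: the register names (taken from the APV25 register set) and the dictionaries are illustrative placeholders, not the actual FGT control software API.

```python
# Sketch of a written-vs-readback comparison for APV configuration registers.
# Register names and values below are ILLUSTRATIVE, not the real FGT software.

def readback_mismatches(written, readback):
    """Return {register: (written, read_back)} for every disagreement."""
    return {reg: (val, readback.get(reg))
            for reg, val in written.items()
            if readback.get(reg) != val}

# Toy example: one register disagrees, as a flaky chip might show.
written  = {"VPSP": 40, "ISHA": 34, "MUXGAIN": 2}
readback = {"VPSP": 40, "ISHA": 34, "MUXGAIN": 0}

bad = readback_mismatches(written, readback)
print(bad)   # {'MUXGAIN': (2, 0)}
```

A chip whose readback is clean but whose data is still flaky (as on this slide) would pass this check, which is exactly why a configuration failure looks unlikely.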
13105023 RTS Start Sat Apr 14 09:18:46 EDT
• lots of variation in saturation level?
• some ADC values “above saturation”? do some timebins saturate before others?
• all that stuff below pedestal definitely concerns me
• is all of the above “normal behaviour of APV chips”, or is there something wrong in our application of them?
• The IST test stand will provide a point of reference. Can we run Jplots on that data?
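The two symptoms flagged above (samples at or above saturation, samples well below pedestal) are mechanical to scan for per timebin. A minimal sketch, assuming a 12-bit ADC and with an entirely hypothetical pedestal, margin, and waveform:

```python
# Per-timebin sanity check on a single APV channel waveform: flag samples
# at/above an assumed 12-bit saturation value, and samples well below
# pedestal. Thresholds and waveform data here are ILLUSTRATIVE only.

ADC_SAT = 4095      # assumed 12-bit full scale
PED = 900           # hypothetical pedestal for this channel
PED_TOL = 50        # margin for "definitely below pedestal"

waveform = [905, 1800, 4095, 4095, 2100, 830, 895]   # 7 timebins (toy data)

saturated = [tb for tb, adc in enumerate(waveform) if adc >= ADC_SAT]
below_ped = [tb for tb, adc in enumerate(waveform) if adc < PED - PED_TOL]

print("saturated timebins:     ", saturated)   # [2, 3]
print("below-pedestal timebins:", below_ped)   # [5]
```

Run over many channels, the per-timebin saturation list would answer directly whether some timebins saturate before others.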
13105023 RTS Start Sat Apr 14 09:18:46 EDT
• The whole APV waveform is low amplitude – could it be a cable/connection issue? (Did we ever swap the ARM on this?)
• Incidentally, it looks like the gain is simply not balanced – 2A has much higher gain than 2B?
13105023 RTS Start Sat Apr 14 09:18:46 EDT
• I2C line hardware problem on the cable/FEE; it looks like the line is shorted low.
13105023 RTS Start Sat Apr 14 09:18:46 EDT the van Nieuwenhuizen effect?
13105023 RTS Start Sat Apr 14 09:18:46 EDT
• “Ouch! Who stepped on my dynamic range?”
• It might be interesting/useful to compare waveforms between, say, 5AB[640:767] and 5AB[0:127].
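The comparison suggested above can be sketched as an average waveform per strip range, then a peak-amplitude ratio as a crude gain-balance indicator. The data below is toy data; the real comparison would use the 5AB[0:127] and 5AB[640:767] strips from run 13105023.

```python
# Sketch: compare the mean 7-timebin waveform of two strip ranges from the
# same APV group. Waveform values below are TOY data, not run 13105023.

def mean_waveform(waveforms):
    """Element-wise mean over a list of equal-length waveforms."""
    n = len(waveforms)
    return [sum(samples) / n for samples in zip(*waveforms)]

group_a = [[10, 40, 90, 70, 40, 20, 10],      # e.g. strips [0:127]
           [12, 42, 88, 72, 38, 22, 12]]
group_b = [[ 5, 18, 45, 35, 20, 10,  5],      # e.g. strips [640:767]
           [ 7, 22, 45, 35, 20, 10,  7]]

mean_a = mean_waveform(group_a)
mean_b = mean_waveform(group_b)
ratio = max(mean_a) / max(mean_b)
print(f"peak amplitude ratio A/B ~ {ratio:.2f}")
```

A ratio far from 1 between ranges that should see similar signals would point at a dynamic-range or gain problem rather than physics.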
13105023 RTS Start Sat Apr 14 09:18:46 EDT What’s this?
Waveforms in FGT – discuss
• The above is from http://drupal.star.bnl.gov/STAR/blog/leun/2012/feb/08/fgt-timing-scan-02082012, run 13039071 – it’s the last I’ve heard about waveform data in FGT
• a lot of variation in pulse shape (and I don’t show any pathological cases here)
• a lot of these are probably saturated, though – any more recent plots we can look at?
• speaking of saturation, is it destroying our R-φ charge correlation? It could be...
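The saturation question in the last bullet is testable: compute the R-φ charge correlation with and without clusters whose charge is clipped. A minimal sketch with invented numbers (the charge pairs and the clipping threshold are illustrative, not FGT data):

```python
# Sketch: correlation of matched R and phi cluster charges, with and
# without saturated clusters. Data and threshold are ILLUSTRATIVE.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

SAT = 4000   # assumed "charge is clipped" threshold

# Toy matched (R, phi) cluster charges; the last two pairs are clipped in R.
pairs = [(500, 520), (900, 870), (1500, 1480), (2500, 2600),
         (4000, 3100), (4000, 2500)]

r_all, p_all = zip(*pairs)
clean = [(r, p) for r, p in pairs if r < SAT and p < SAT]
r_ok, p_ok = zip(*clean)

print(f"correlation, all pairs:        {pearson(r_all, p_all):.3f}")
print(f"correlation, unsaturated only: {pearson(r_ok, p_ok):.3f}")
```

In this toy example the clipped pairs visibly degrade the correlation; if the same pattern shows up in real FGT data, saturation is indeed eating into the R-φ charge matching.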
Readout system issues
• Failure on run start (silence from one, or sometimes both, RDOs). This problem has always been with us.
• Failure after running (usually 5–20 min)†, sometimes one RDO, often both. This has been a real nuisance.
• Occasional avoidable extra trouble and confusion for the shift crew stemming from the 1100 token limit. This has been seen due to runaway calorimeter trigger rates (due to SEU) and due to operator error (e.g. including the FGT in a normal pedAsPhys run).
• Hopefully, heightened awareness by the crew – assisted by more thorough highlighting of errors and more automated configuration and recovery – will cut back on the beamtime wasted by the various FGT readout problems. But we need to do much better next year; this has been a disappointment and an embarrassment, and has had nonzero costs to run 12.
• There are also numerous troubles specific to the test stand setup, which is a bit of a hack and has diverged (in firmware) from the actual STAR setup. I need to refocus effort on synchronizing it with the STAR setup, resolving these issues as well, of course. Tomorrow’s main task...
†There is some confusion (at least on my part) about whether this started after the busy changes or was present from day one. Also, given the location of the readout crate, I am skeptical that it is an SEU issue. Not impossible, though.
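The runaway-trigger failure mode can be made concrete with a small sketch: tokens are freed as events are read out, so the buffer fills at the difference of the two rates. The 1100-token limit is real; both rates below are hypothetical round numbers.

```python
# Sketch of how fast a runaway trigger rate exhausts the 1100-token buffer.
# The buffer fills at (trigger rate - readout rate). Rates are HYPOTHETICAL.

TOKEN_LIMIT = 1100

def seconds_to_exhaust(trigger_hz, readout_hz, tokens=TOKEN_LIMIT):
    """Time until the token buffer is full, or None if readout keeps up."""
    if trigger_hz <= readout_hz:
        return None
    return tokens / (trigger_hz - readout_hz)

# Normal running: readout keeps up, the buffer never fills.
print(seconds_to_exhaust(250, 400))

# Runaway calorimeter trigger (e.g. after an SEU): the buffer fills
# far too fast for the shift crew to react.
print(f"{seconds_to_exhaust(5000, 400):.2f} s")
```

This is why automated detection and recovery, rather than crew vigilance alone, is the realistic fix.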