

  1. Run Coordination Summary – C. Gemme (INFN Genova), on behalf of Run Coordination – August 16th, 2016

  2. LHC Cardiogram
  [Plot of the week's fills: 2172b, 2076b, 2173b.] Slow dump of the ATLAS toroid due to an electrical glitch during work on the network; 7 h recovery in the interfill. Annotated on the plot: 1. dipole A31L2 investigations and mitigations; 2. PS vacuum leak.

  3. LHC Cardiogram
  [Plot of fills: 2172b, a 3x3 fill, 2220b.]
  • 2220b should be the maximum this year.

  4. A31L2
  The problem:
  • Two "quenches" while ramping down RB.A12 from 6 kA at -10 A/s:
  • 10 June 2016 @ 547 A
  • 3 Aug 2016 @ 295 A → the second event triggered detailed investigations.
  • Could be explained by an inter-turn short in dipole A31L2.
  Follow-up (Wednesday):
  • Additional instrumentation.
  • Measurement campaign.
  • Various types of cycles in the evening and overnight.
  Thursday: analysis of measurements.
  • No sign of changes in the short.
  • High-current quenches and Fast Power Aborts must be avoided: they would destroy the magnet.
  Mitigations:
  • Remove the global protection mechanism: implemented on Thursday and validated.
  • Reduce BLM thresholds: changed on Thursday/Friday.
  • Increase QPS thresholds on A31L2: new board installed on Thursday and successfully validated (see sketch below).
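
  As an aside on the last mitigation: a quench protection check of this kind essentially watches the resistive voltage across the magnet and fires only if it stays above a threshold for a validation time. A minimal sketch of that idea in Python; the function name, threshold, and validation length are illustrative assumptions, not the actual QPS board logic or the new A31L2 settings.

```python
def quench_detected(u_res_samples, threshold_v=0.1, n_validation=10):
    """Flag a quench if the resistive voltage stays above `threshold_v`
    for `n_validation` consecutive samples (all numbers illustrative)."""
    consecutive = 0
    for u in u_res_samples:
        consecutive = consecutive + 1 if abs(u) > threshold_v else 0
        if consecutive >= n_validation:
            return True   # sustained resistive voltage -> trigger protection
    return False

# A brief spike is ignored, a sustained rise is not:
assert not quench_detected([0.0, 0.3, 0.0] * 5)
assert quench_detected([0.2] * 12)
```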

  5. LHC plans
  Plan:
  • Continue physics with 2220 bunches.
  • Slowly increase bunch intensity up to 1.2e11 protons per bunch.
  • Targeting a restricted range for bunch flattening for LHCb (from the current fill: 0.95-1.15 ns → 0.95-1.05 ns).
  In discussion:
  • Decrease the crossing angle from 375 to 300 urad → ~10% more luminosity; affects the z-length of the luminous region and pile-up (see the sketch below).
  • Special fill request by CMS: low-mu running.
  • Remove week 43 from pp running to have one more week for training two sectors to 7 TeV.
  • Luminosity levelling test already this year.
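
  The quoted ~10% gain from the smaller crossing angle can be checked with the standard geometric reduction factor F = 1/sqrt(1 + phi^2), where phi = theta_c * sigma_z / (2 * sigma*) is the Piwinski angle. A rough sketch, assuming 2016-era beam parameters (sigma_z ~ 8 cm, sigma* ~ 12 um at beta* = 40 cm; both values are assumptions, not the actual machine settings):

```python
import math

def geometric_factor(theta_c, sigma_z, sigma_star):
    """Luminosity reduction factor for full crossing angle theta_c [rad],
    bunch length sigma_z [m], transverse IP beam size sigma_star [m]."""
    phi = theta_c * sigma_z / (2.0 * sigma_star)  # Piwinski angle
    return 1.0 / math.sqrt(1.0 + phi * phi)

# Assumed 2016-era parameters (illustrative only):
sigma_z = 0.08       # ~8 cm bunch length
sigma_star = 12e-6   # ~12 um IP beam size (beta* = 40 cm)

f_375 = geometric_factor(375e-6, sigma_z, sigma_star)
f_300 = geometric_factor(300e-6, sigma_z, sigma_star)
print(f"luminosity gain: {f_300 / f_375 - 1:+.0%}")
# ~ +13% with these assumed inputs, of the order quoted on the slide
```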

  6. ATLAS Week
  [Timeline plot of the week's fills: 5181 (2076b), 5183 (2173b), 5187 (2172b), 5194 (3b), 5197 (2172b), 5198 (2172b), 5199 (2220b); cosmics data taking; slow dump of the ATLAS toroid.]

  7. TDAQ (Silvia Fressard-Batraneanu, Jeremy Love)
  • Smooth running.
  • Patch for HLTSV available to fix occasional resource starvation.
  • Patch pending installation; needs at least one run with high rates before putting it in production.
  • Problematic Tile ROL fibres.
  • In collaboration with Tile, organizing the installation of new spares.
  Weekly physics efficiency: 94.03%. [Chart: trigger held, by system.]

  8. Pixel (Marcello Bindi)
  • IBL timing
  • IBL timing not optimal since the TIM replacement after MD; since then, running with 2 BC.
  • Special fill 5183 on Monday with individual bunches; a timing scan of -10 ns/+10 ns was performed (see the sketch below).
  • Timing constants in the TIM have been applied: we recovered most of the hits, but we still seem to have a fraction of clusters on tracks from the neighbouring bins. 1 BC on-time hit efficiency > 99% → 1 BC in the next fill.
  • IBL calibration
  • IBL calibration was finally recovered after the sw upgrade during MD.
  • Re-tuning needed to compensate for TID effects.
  • New version of fw for IBL and Layer 2 with debugging capabilities to analyze the TTC-TIM stream.
  • Best week of the year for Pixel in terms of data-taking efficiency → dead time = 0.41%.
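
  For context, picking a timing constant from a delay scan like the one in fill 5183 amounts to finding the delay that maximises the on-time (single-BC) hit fraction. A toy illustration with made-up scan points, not real IBL data:

```python
# Made-up scan points (delay in ns -> fraction of on-time hits); illustrative only.
scan = {-10: 0.62, -5: 0.85, 0: 0.97, 5: 0.99, 10: 0.91}

best_delay = max(scan, key=scan.get)  # delay maximising the on-time fraction
print(f"apply delay {best_delay:+d} ns "
      f"(on-time hit fraction {scan[best_delay]:.0%})")
```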

  9. SCT/TRT (Chiara Debenedetti, A. Bocci, D. Derendarz, A. Romaniouk)
  SCT:
  • In general, very quiet running.
  • In fill 5183, one problematic link was not shown as removable to the shifter and was not automatically recoverable.
  • Set the ROD formatter overflow limit to 1600 (> max number of hits per link, to protect against non-physical patterns; see the sketch below).
  • Deployed on a few RODs in stable beam; OK.
  TRT:
  • Stable data taking.
  • DAQ: also needs replacement of one TTC board that works correctly during data taking but fails in test pulses.
  • FastOR: observed lower rate than expected. Caused by the change of the readout mode for high-threshold bits (from March 2016), from three-bit readout to a single middle bit. May be reverted in a future cosmics run.
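
  The overflow limit acts as a sanity cut: any link reporting more hits than is physically possible is treated as a non-physical pattern and truncated, protecting the downstream readout from oversized fragments. A minimal sketch of that logic; the limit of 1600 is from the slide, while the function and its interface are hypothetical:

```python
OVERFLOW_LIMIT = 1600  # > max physically possible hits per link (from the slide)

def check_link(n_hits):
    """Return (hits_to_read_out, overflowed) for one link's hit count."""
    if n_hits > OVERFLOW_LIMIT:
        # Non-physical pattern: truncate and flag it.
        return OVERFLOW_LIMIT, True
    return n_hits, False

print(check_link(1200))    # (1200, False) - normal event
print(check_link(50000))   # (1600, True)  - non-physical pattern, truncated
```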

  10. L1Calo/L1Topo (Kate Whalen)
  L1Calo:
  • Generally a quiet week for L1Calo!
  • Monitoring improvements.
  L1Topo:
  • Test of complex deadtime with random triggers: bucket 3 changed from 7/260 → 14/260 → 15/260 (not deployed yet, needs more testing; see the sketch below).
  • Muon items enabled since Saturday morning, fill 5197.
  • Total rate ~15 kHz (L1), 60 Hz (HLT).
  • Timing checks: all algorithms are well-timed.
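
  For readers unfamiliar with the "X/Y" bucket notation: one common reading is a leaky-bucket veto that allows at most X L1 accepts within any window of Y bunch crossings. The sketch below simulates that reading; it is an assumption for illustration, not the actual CTP implementation.

```python
import random
from collections import deque

def complex_deadtime(trigger_bcs, depth=15, window=260):
    """Accept at most `depth` triggers in any sliding window of `window`
    bunch crossings; everything else is vetoed (complex deadtime)."""
    accepted, recent = [], deque()
    for bc in trigger_bcs:               # sorted BC indices of L1 candidates
        while recent and bc - recent[0] >= window:
            recent.popleft()             # old accepts "leak" out of the bucket
        if len(recent) < depth:
            recent.append(bc)            # bucket not full: accept
            accepted.append(bc)
        # else: vetoed
    return accepted

# Compare the old and new bucket-3 settings on the same random triggers:
random.seed(1)
bcs = sorted(random.sample(range(100_000), 5_000))
print(len(complex_deadtime(bcs, 7, 260)), len(complex_deadtime(bcs, 15, 260)))
```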

  11. LAr (Jose Benitez)
  HW:
  • An HV module (EMEC A) was exchanged on Sunday due to 4 problematic channels. To be followed up whether its readings were correct.
  • M167.C3 (HEC C) HV decreased due to a new short.
  • HEC A cell 0x3b1a5200 (tower 0x4110400) is disabled at each run; consider whether to permanently disable this shaper switch.
  • Reprocessing to take this into account.
  DQ/Monitoring:
  • Under test: online flagging of Mini Noise Bursts (previously offline).
  • Added DQMD checks for PU removal: turning yellow if 1-2 PUs are disabled, red if >2, to complement the information in the Shifter Assistant (see the sketch below).
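
  The DQMD colour logic described above is simple enough to state as code. A sketch using the thresholds from the slide; the function name and interface are illustrative, not actual DQMD code:

```python
def pu_removal_flag(n_disabled_pus):
    """Map the number of disabled PUs to a DQMD status colour."""
    if n_disabled_pus == 0:
        return "green"
    if n_disabled_pus <= 2:
        return "yellow"   # 1-2 PUs disabled
    return "red"          # more than 2 PUs disabled

assert pu_removal_flag(0) == "green"
assert pu_removal_flag(2) == "yellow"
assert pu_removal_flag(3) == "red"
```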

  12. TILE link failure (Silvia Fracchia)
  • Repeated stopless removals of ROL ROD5 EBC33-36 (4 neighbouring modules, an intolerable defect for DQ).
  • Starting on Sunday at 21:18 during stable beam, with consequent HLT problems.
  • Caused >3 hours of interruption in data taking.
  • Several tests and attempts to recover it (power cycles, TTC restarts, turning off the affected modules).
  • Finally turned out to be a problem with the ROD-ROS link, similar to what occurred on 13th July in LBA.
  • The fibre was eventually replaced with the working one of two spares; the substitute fibre has low optical power because the fibre end is misaligned in the connector.
  • Reflectometry measurements on Monday spotted a problem in the same location for the downstream links.
  • Emergency plan: restore a spare fibre from two bad ones.
  • Short-term plan: install a few additional spare fibres (2 per ROD crate).

  13. TILE link failure (Rafał Bielski)
  • With the Tile ROL disabled, any chain trying to read data from Tile was sending events to the debug stream at L1 rate.
  • 21:25: switched to standby keys to mitigate the HLT backpressure for 40
  • 22:00: disabled all jet/MET chains.
  • Recovery completed at 01:00.

  14. Muons (Claudio Luci)
  CSC:
  • CSC latency is now set using the ATLAS latency (instead of setting it manually and comparing it to the ATLAS latency).
  MDT:
  • The RCD crash has been fixed. This was also the cause of some failed recoveries.
  • The fake chamber drop reported to CHIP by RCD is under study.
  RPC:
  • Access to the cavern to disconnect a module from an HV channel.
  • A few other cases under close monitoring and investigation.
  TGC:
  • New Sector Logic firmware was deployed to reduce the L1_MU4 rate.

  15. Other subsystems
  Trigger:
  • New sw release deployed on Tuesday → fine.
  • Multiple keys deployed during the week, following LHC programs, including the overnight cosmics run.
  • Decrease in TRT rate wrt earlier cosmics runs due to the change in readout (March), which can be reverted for the next cosmics run.
  Data preparation:
  • Everything is working fine on the data-processing side.
  • GM infrastructure is working smoothly too.
  • Main point now is to update references.
  Lucid:
  • Running smoothly. The only pending issue is the automatic recovery from all SEU occurrences; ongoing.
  BCM/BLM/DBM:
  • BCM/BLM: some minor hw interventions done or in progress.
  • DBM: debug and commissioning when there was no LHC running.

  16. Conclusions
  • In general, smooth data taking with increasing efficiency.
  • The only serious problem was the Tile link on Sunday evening.
  • Next week: MD2 and 2.5 km commissioning.
  • 6-7 weeks of pp running left.
