
Production and Quality Assurance Issues for 2004 Data

This document outlines the Production and Quality Assurance (QA) issues discussed at the 2004 STAR Collaboration Meeting at Cal Tech. It covers the purpose and goals of QA, the evolution of the QA infrastructure, the expanded role of Offline QA in Run 4, the quantities examined by QA, the detectors covered, the documentation available to the QA shift crew, examples of detected problems, QA system performance, and production policy. The QA system displays histograms for different data streams and trigger types, enabling efficient data validation and problem identification. Despite challenges in task prioritization, the QA team has been instrumental in ensuring the integrity of the experiment's data and software. The document also notes the key personnel involved and the history of QA system development within the STAR Collaboration.


Presentation Transcript


  1. Production and Quality Assurance Issues for 2004 Data
  STAR Collaboration Meeting, Cal Tech, February 16-20, 2004
  Lanny Ray, University of Texas at Austin
  For the STAR QA Team

  2. Purpose and Goals of QA
  • Validate data and software through the DST level.
  • Identify gross problems with the detector, calibrations, software, and/or production, i.e. is everything on that should be?
  • Determine if the reconstructed events look reasonable given the detector configuration and trigger conditions.
  • Rapid reporting to minimize wasted beam time or production CPU cycles.
  • Notify experts when problems are suspected and follow up on problem resolution.
  • Help the experiment and DST production run efficiently.
  • Help codes and calibrations converge more rapidly.

  3. QA Infrastructure for 2004
  • Original QA web-based system pioneered by Peter Jacobs, Gene van Buren, Bum Choi, Curtis Lansdell, Ben Norman (circa 2000).
  • Peter handed off to LR, and the students all graduated.
  • Herb Ward joined the effort in 2002 and rewrote the infrastructure in the same web-based framework he has developed over the past 10 years for an undergraduate homework and testing service, currently in use at hundreds of universities.
  • The QA team now consists of LR, Gene van Buren, Herb Ward and Jerome Lauret.

  4. QA Infrastructure for 2004
  • The QA system displays histograms for:
    1. Fast offline
    2. Real data production
    3. Nightly Monte Carlo tests of dev library
  • The user may select from three histogram sets:
    1. Regular QA histogram set (25 pages, 150 plots)
    2. Full QA histogram set (almost 100 pages!)
    3. TPC sectors – one sector per page
  • For the multi-trigger runs just started, the QA system will automatically present multiple histograms for:
    1. General run/sequence summary information
    2. Minbias trigger
    3. Central trigger
    4. High (EMC) tower trigger
  • The QA reports are automatically linked to the Online Runlog page for the specific experiment run number and are listed in shiftreport-hn
  • Additional QA discussion and information available in starqa-hn
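
The per-stream, per-trigger organization just described can be pictured as a lookup from (data stream, trigger class) to a named set of histogram pages. The following self-contained C++ sketch shows only that bookkeeping; the stream and trigger names come from the slide, while the page names are illustrative assumptions, not the actual QA system's contents.

```cpp
// Minimal sketch of organizing QA histogram pages by data stream and
// trigger class. Illustrative only -- not the STAR QA web system's code.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main() {
    // (stream, trigger) -> list of histogram page names
    std::map<std::pair<std::string, std::string>,
             std::vector<std::string>> qaPages;

    const std::vector<std::string> streams = {
        "fast offline", "real data production", "nightly MC (dev)"};
    const std::vector<std::string> triggers = {
        "run summary", "minbias", "central", "high tower"};

    // Book the same (assumed) regular page set for every stream/trigger pair.
    for (const auto& s : streams)
        for (const auto& t : triggers)
            qaPages[{s, t}] = {"vertices", "global tracks", "primary tracks",
                               "TPC sectors", "BEMC", "FTPC"};

    // Example lookup: pages a shift member would open for the minbias
    // trigger of a fast-offline run.
    for (const auto& page : qaPages[{"fast offline", "minbias"}])
        std::cout << page << '\n';
    return 0;
}
```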

  5. Expanded Role for Offline QA in Run 4
  • This year there are no shift crew members assigned specifically to fast offline QA as we had in Runs 1, 2 and 3. Of course the shift crew can still check the data using the QA system as before.
  • As a result this task now falls mainly upon the offline QA shift crew – which has only one person per day.
  • This presents a challenge for us to keep up with the data and provide quick feedback to shift leaders & crew.
  • When first-pass DST production starts, the QA shift crew will have to prioritize their work; we cannot cover all QA tasks for both fast-offline and offline DST production.
  • Due to time and personnel limitations QA can only identify gross problems – subtle problems must either be detected by online monitoring in the control room or in later physics analysis.

  6. Quantities examined by QA
  • Distribution of space points, energy clusters
  • Distribution of global tracks, primary tracks and track quality parameters
  • Location of primary vertices
  • Distribution of secondary vertices
  • Number of Xi’s and kink candidates
  • Distributions are in 3D, either as 1D histograms or 2D scatter plots.
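
As a concrete picture of how quantities like these end up in 1D and 2D QA histograms, here is a short C++/ROOT sketch. The toy track structure, histogram names, and binning are illustrative assumptions for this note, not the actual STAR QA code.

```cpp
// Sketch: booking and filling a few QA-style quantities as ROOT histograms.
// Names and binning are illustrative assumptions, not STAR QA definitions.
#include "TH1F.h"
#include "TH2F.h"

struct QaTrack {        // toy stand-in for a DST global/primary track
    float nFitPoints;   // space points used in the track fit
    float eta;          // pseudorapidity
    float phi;          // azimuth (radians)
};

void bookAndFill(const QaTrack& trk, float vertexZ) {
    // 1D histograms of track quality and vertex location,
    // plus a 2D eta-phi scatter of track directions.
    static TH1F hFitPts("hFitPts", "fit points per global track", 50, 0., 50.);
    static TH1F hVtxZ  ("hVtxZ",   "primary vertex z (cm)",      100, -200., 200.);
    static TH2F hEtaPhi("hEtaPhi", "track #eta vs #phi",
                        60, -1.5, 1.5, 60, -3.2, 3.2);

    hFitPts.Fill(trk.nFitPoints);
    hVtxZ.Fill(vertexZ);
    hEtaPhi.Fill(trk.eta, trk.phi);
}
```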

  7. Detectors in QA
  What's in: TPC, SVT, FTPC East and West, BEMC, BSMD η and φ, BBC East & West, Large & Small
  What's not in: RICH, EEMC, ESMD, FPD (plots included but are not filled for Au+Au), PMD, TOFr, TOFp

  8. Documentation available for QA shift crew
  • QA Overview
  • Instructions for QA shift duties
  • Quick-Start step-by-step instructions for use of the web-based QA pages
  • QA daily shift report form and instructions
  • Example histograms and explanation (needs updating, I know)
  • Contacts

  9. Examples of problems detected
  • ADC noise at the ends of TPC time sequences
  • Excess fraction of broken primary tracks (due to RDO-20 outage)
  [Plot annotations: starting space points; long tracks, but few space points; Z (cm)]
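
The "excess broken tracks" symptom can be quantified, by eye or automatically, as the fraction of tracks whose fitted space points fall well short of what the track geometry allows. Below is a minimal C++ sketch under that assumption; the 50% fit-point cut and the 20% alarm level are illustrative choices, not STAR QA thresholds.

```cpp
// Sketch of a "broken track" check: flag tracks whose number of fit
// points is small compared to the points the geometry makes possible.
// The 0.5 cut and 20% alarm level are illustrative assumptions.
#include <iostream>
#include <vector>

struct Track { int nFitPoints; int nPossiblePoints; };

double brokenFraction(const std::vector<Track>& tracks) {
    if (tracks.empty()) return 0.0;
    int broken = 0;
    for (const auto& t : tracks)
        if (t.nPossiblePoints > 0 &&
            t.nFitPoints < 0.5 * t.nPossiblePoints) ++broken;
    return static_cast<double>(broken) / tracks.size();
}

int main() {
    // toy tracks: two of the four have far fewer fit points than possible
    std::vector<Track> tracks = {{40, 45}, {15, 44}, {12, 45}, {38, 40}};
    double frac = brokenFraction(tracks);
    std::cout << "broken-track fraction: " << frac << '\n';
    if (frac > 0.2)
        std::cout << "QA alert: possible TPC RDO/FEE outage\n";
    return 0;
}
```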

  10. TPC drift speed, t0 calibration error
  [Plot annotations: z of first TPC point on globtrk; tanl; West TPC tracks; East TPC tracks; z-coordinate of global track extrapolation to beam axis; each band corresponds to one collision vertex; misalignment of upper/lower set of bands]
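
The misaligned bands follow from how the TPC turns drift time into a z position: hits in the West half are measured from the +z endcap and hits in the East half from the -z endcap, so an error in the drift velocity or t0 pushes the two halves' reconstructed z in opposite directions, splitting the bands that should meet at the same vertex. The small self-contained C++ sketch below works through that arithmetic; the drift length, velocities, and t0 values are illustrative numbers, not STAR calibrations.

```cpp
// Sketch: why a drift-velocity or t0 error splits the East/West vertex
// bands. A hit at true z is reconstructed from its measured drift time;
// the West half measures time from the +z endcap, the East half from
// the -z endcap. Numbers below are illustrative, not STAR calibrations.
#include <iostream>

const double kDriftLength = 210.0;   // cm, approximate TPC half length

// drift time measured for a hit at trueZ (trueT0 = electronics offset)
double driftTime(double trueZ, double trueV, double trueT0, bool west) {
    double dist = west ? (kDriftLength - trueZ) : (kDriftLength + trueZ);
    return dist / trueV + trueT0;
}

// z reconstructed with the calibration values (recoV, recoT0)
double recoZ(double t, double recoV, double recoT0, bool west) {
    double dist = recoV * (t - recoT0);
    return west ? (kDriftLength - dist) : (dist - kDriftLength);
}

int main() {
    const double trueV = 5.45, recoV = 5.50;   // cm/us, ~1% miscalibration
    const double trueT0 = 0.2, recoT0 = 0.2;   // us
    const double vertexZ = 10.0;               // cm, one collision vertex

    double zWest = recoZ(driftTime(vertexZ, trueV, trueT0, true),
                         recoV, recoT0, true);
    double zEast = recoZ(driftTime(vertexZ, trueV, trueT0, false),
                         recoV, recoT0, false);

    // West and East tracks extrapolate to different z for the same vertex,
    // which appears as misaligned bands in the QA histogram.
    std::cout << "West tracks: z = " << zWest << " cm\n"
              << "East tracks: z = " << zEast << " cm\n";
    return 0;
}
```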

  11. Examples, continued
  • Relative number of reconstructed events: magnetic field DB error, TPC drift speed or t0 gross error
  • FTPC build-up of noise & dead channels
  • SVT intermittent noise
  • Background contamination tracks in the d+Au run
  • Intermittent TPC RDO/FEE outages and noise
  • Intermittent BEMC & BSMD hot towers/wires & dead channels
  • Primary vertex z-distribution with respect to ZDC timing selection
  • dE/dx calibration errors
  • Primary vertex transverse position errors: vertex finder coding bug
  • V0 azimuthal distribution anomaly: TPC distortion corrections
  • Absence of V0, Xi finders in bfc – at least the QA crew could have caught this had they been in place for post-experiment production.
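
Several items in this list (FTPC noise build-up, BEMC & BSMD hot towers and dead channels) show up as outliers in per-channel occupancy plots, and the same test is simple to automate. The sketch below flags channels whose counts are far above the run median or exactly zero; the factor-of-ten and zero-count criteria are illustrative assumptions, not the STAR QA cuts.

```cpp
// Sketch: flag hot and dead channels from a per-channel occupancy list.
// Thresholds (10x the median = hot, zero counts = dead) are illustrative.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // toy occupancy counts for a handful of towers/channels in one run
    std::vector<long> counts = {120, 135, 0, 118, 5200, 127, 122, 0, 131};

    // median is robust against the very outliers we are hunting for
    std::vector<long> sorted = counts;
    std::sort(sorted.begin(), sorted.end());
    double median = static_cast<double>(sorted[sorted.size() / 2]);

    for (std::size_t ch = 0; ch < counts.size(); ++ch) {
        if (counts[ch] == 0)
            std::cout << "channel " << ch << ": dead\n";
        else if (counts[ch] > 10.0 * median)
            std::cout << "channel " << ch << ": hot (" << counts[ch]
                      << " counts vs median " << median << ")\n";
    }
    return 0;
}
```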

  12. QA System Performance
  • Last year QA was down periodically due to the temporary method used to check the database for new jobs – the problem was corrected prior to Run 4 using a stable Unix daemon.
  • This year disk space limitations often stop fast offline production runs, which kills QA.
  • In general QA is vulnerable to problems in AFS, NFS and RCF – which STAR cannot control.
  • Nevertheless, QA is up and running the majority of the time, but it could be better; something seems to cause trouble every week.

  13. Production Policy - QA
  • New offline production policy document: http://www.star.bnl.gov/STAR/comp/qa/Procedure/QA-propsal-Y4.html
  • The qualitative increase in the STAR event rate, coupled with RCF's finite capacity, means multiple complete production passes are no longer possible.
  • Careful QA, early in the process, is essential to success.
  • DST production is likely to be done after the experiment run and after QA shifts end. What to do about it?
  • See the above proposal – a QA team with "volunteer" members from each detector group and PWG will be expected to check preliminary production data during the two weeks prior to full-scale production, and must either complain or sign off during this time or kiss their re-run options good-bye.
