
Presentation Transcript


  1. Identifying anomalous strips. David Stuart, Noah Rubinstein, University of California, Santa Barbara. June 18, 2008

  2. Goals • Systematically understand effects revealed by the data, e.g., disconnected or noisy strips, bad runs, unstable modules. • We do not want to develop a framework. • The deliverable is understanding, not software. • This understanding could be used to cross-check and perhaps improve the standard bad channel ID and data validation algorithms.

  3. Method. Our approach is to measure each effect and remove it, one at a time. Once a large effect has been removed, smaller effects become visible. Ideally, the source of the effect would be understood, but even if not, it can at least be tracked vs. time for later understanding. The sudden disappearance of a non-understood effect is as worthy of investigation as the appearance of one. This requires a detailed, and repeated, look at raw data.
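
This measure-and-remove loop can be sketched in a few lines of Python. The estimator names, the median-based common-mode estimate, and the stopping threshold below are illustrative assumptions, not the authors' actual code:

```python
import numpy as np

def common_mode(adc):
    """Per-event common-mode offset, estimated as the median over strips."""
    return np.median(adc, axis=1, keepdims=True)

def peel_off_effects(adc, estimators, threshold=0.5):
    """Measure each candidate effect, subtract the largest, and repeat until
    the remaining corrections fall below `threshold` ADC counts.
    adc: (n_events, n_strips); estimators: {name: function returning a model}."""
    corrected = adc.astype(float)
    removed = []
    for _ in range(len(estimators)):
        models = {name: est(corrected) for name, est in estimators.items()}
        sizes = {name: float(np.abs(m).mean()) for name, m in models.items()}
        name = max(sizes, key=sizes.get)
        if sizes[name] < threshold:
            break  # only small effects remain; they are now visible for study
        corrected = corrected - models[name]
        removed.append((name, sizes[name]))  # record for tracking vs. time
    return corrected, removed
```

For example, `peel_off_effects(adc, {"common mode": common_mode})` would subtract the common mode first and report how large it was, leaving smaller effects exposed in the residuals.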

  4. Examples of such effects: Time (temperature) dependence. [Plot: Run 39475 from TOB checkout data in March, showing pedestal offset vs. event number range for a single channel in each chip of one module.] Chip pairs in a single laser track each other, indicating that this is a temperature-dependent gain variation. This effect is easily removed by a common-mode subtraction, but we still want to understand and monitor it.
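
As a concrete illustration, a common-mode subtraction of this kind might look like the following minimal numpy sketch; the per-chip median estimator and the 128-strip grouping (the APV25 readout granularity) are assumptions about the procedure, not taken from the talk:

```python
import numpy as np

STRIPS_PER_CHIP = 128  # APV25 readout chip granularity

def subtract_common_mode(adc):
    """Remove a per-event, per-chip common-mode offset from
    pedestal-subtracted ADC counts of shape (n_events, n_strips)."""
    out = adc.astype(float)
    for start in range(0, out.shape[1], STRIPS_PER_CHIP):
        chip = out[:, start:start + STRIPS_PER_CHIP]
        # The median over strips is robust against real signal hits.
        chip -= np.median(chip, axis=1, keepdims=True)
    return out
```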

  5. Examples of such effects: Time (temperature) dependence; common mode pickup; non-common mode pickup. [Plot: Raw Noise, CMS Noise, and Linear-CMS Noise for Run 39475 from TOB checkout data in March.]
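
The three noise flavours in the legend can be reproduced roughly as below. This is a sketch under the usual definitions (raw = per-strip RMS; CMS = RMS after a constant per-chip common-mode subtraction; linear-CMS = RMS after removing a straight-line common-mode fit across each chip); the exact definitions used in the talk are assumed:

```python
import numpy as np

STRIPS_PER_CHIP = 128

def noise_flavours(adc):
    """Per-strip raw, CMS, and linear-CMS noise for a (n_events, n_strips)
    array of pedestal-subtracted ADC counts."""
    raw = adc.std(axis=0)

    cms = adc.astype(float)
    lin = adc.astype(float)
    for s in range(0, adc.shape[1], STRIPS_PER_CHIP):
        chip_c = cms[:, s:s + STRIPS_PER_CHIP]
        chip_c -= np.median(chip_c, axis=1, keepdims=True)   # constant CM

        chip_l = lin[:, s:s + STRIPS_PER_CHIP]
        x = np.arange(chip_l.shape[1])
        for ev in range(chip_l.shape[0]):
            slope, offset = np.polyfit(x, chip_l[ev], 1)      # linear CM fit
            chip_l[ev] -= slope * x + offset
    return raw, cms.std(axis=0), lin.std(axis=0)
```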

  6. Examples of such effects: Time (temperature) dependence; common mode pickup; non-common mode pickup; gain; cable mapping errors. [Plot: Raw Noise, CMS Noise, and Linear-CMS Noise for Run 39475 from TOB checkout data in March, with each laser normalized to <noise>=4.]
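
The "normalize each laser to <noise>=4" step could be sketched as follows; the group size of 256 strips per laser (two APV chips per optical link) and the simple mean-based rescaling are assumptions:

```python
import numpy as np

def normalize_per_laser(noise, strips_per_laser=256, target=4.0):
    """Rescale per-strip noise so each laser group averages `target` ADC
    counts, removing gain differences between optical links."""
    out = noise.astype(float)
    for s in range(0, out.size, strips_per_laser):
        group = out[s:s + strips_per_laser]
        group *= target / group.mean()
    return out
```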

  7. Examples of such effects: Time (temperature) dependence; common mode pickup; non-common mode pickup; gain; pedestal and noise slopes across a chip.
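
A per-chip slope can be extracted with a simple straight-line fit, e.g. the following illustrative sketch (again assuming 128 strips per chip; not the authors' code):

```python
import numpy as np

STRIPS_PER_CHIP = 128

def chip_slopes(per_strip_values):
    """Straight-line slope of pedestal (or noise) vs. strip index within
    each 128-strip chip, in ADC counts per strip."""
    slopes = []
    for s in range(0, per_strip_values.size, STRIPS_PER_CHIP):
        chip = np.asarray(per_strip_values[s:s + STRIPS_PER_CHIP], float)
        x = np.arange(chip.size)
        slope, _intercept = np.polyfit(x, chip, 1)
        slopes.append(slope)
    return np.array(slopes)
```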

  8. Effects can be monitored. E.g., pedestal slopes are not a problem, but changes are noteworthy.
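
One way to make "changes are noteworthy" operational is a simple run-to-run jump test; a hedged sketch, where the 5-sigma cut and the robust scatter estimate are illustrative choices:

```python
import numpy as np

def flag_changes(history, n_sigma=5.0):
    """Return the runs where a monitored quantity (e.g. a chip's pedestal
    slope) jumps by more than n_sigma of its typical run-to-run scatter.
    history: {run_number: value}."""
    runs = sorted(history)
    if len(runs) < 3:
        return []
    values = np.array([history[r] for r in runs], dtype=float)
    steps = np.diff(values)
    # Robust run-to-run scatter: MAD scaled to a Gaussian sigma.
    scatter = max(1.4826 * float(np.median(np.abs(steps))), 1e-9)
    return [runs[i + 1] for i, step in enumerate(steps)
            if abs(step) > n_sigma * scatter]
```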

  9. Effects can be monitored. E.g., how does TOB wing noise change with time? [Plot annotation: temperature change.]

  10. Effects can be monitored. Even odd effects are worth monitoring: we observe “W” shapes in the pedestals, not understood but stable.

  11. Automation. Since this requires monitoring many different variables for many different channels, it was worth automating the checks. That was done for the TIF data and awaits new data.
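
Automating such checks is mostly a matter of looping a test like the jump test sketched under slide 8 over every monitored variable and channel. A hypothetical driver, where the nested-dictionary layout of the monitoring histories is an assumption:

```python
def run_all_checks(histories, check):
    """Apply `check` (e.g. the flag_changes sketch above) to every monitored
    variable and channel. histories: {variable: {channel: {run: value}}}.
    Returns {(variable, channel): [flagged runs]} for anything that changed."""
    report = {}
    for variable, channels in histories.items():
        for channel, history in channels.items():
            flagged = check(history)
            if flagged:
                report[(variable, channel)] = flagged
    return report
```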

  12. Bad channel ID. The final effect to remove is statistics. Averaging over many runs clarifies bad channel ID. [Plots: all TIF data with each laser normalized to <noise>=4; Run 39475 from TOB checkout data in March.]
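
Once the other effects are removed, a run-averaged outlier cut is enough to flag candidate bad strips. A sketch, where the 5-sigma threshold and the robust spread estimate are illustrative assumptions:

```python
import numpy as np

def flag_bad_strips(noise_by_run, n_sigma=5.0):
    """Average per-strip noise over many runs, then flag strips far from
    the bulk. noise_by_run: (n_runs, n_strips) gain-normalized noise."""
    mean_noise = noise_by_run.mean(axis=0)   # averaging beats statistics
    median = np.median(mean_noise)
    spread = max(1.4826 * float(np.median(np.abs(mean_noise - median))), 1e-9)
    dead = mean_noise < median - n_sigma * spread    # e.g. disconnected strips
    noisy = mean_noise > median + n_sigma * spread   # noisy strips
    return dead, noisy
```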

  13. Bad channel ID. Averaging over many runs clarifies bad channel ID, but the variation is also useful to understand. E.g., is this a real step?
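
Whether a feature like this is a real step can be quantified by comparing the means before and after the candidate run against their scatter. An illustrative sketch, assuming a run-ordered array of some per-strip quantity and a candidate step index:

```python
import numpy as np

def step_significance(values, i):
    """Significance of a candidate step at index i of a run-ordered series:
    difference of the means before and after, divided by its uncertainty."""
    before = np.asarray(values[:i], dtype=float)
    after = np.asarray(values[i:], dtype=float)
    diff = after.mean() - before.mean()
    err = np.hypot(before.std(ddof=1) / np.sqrt(before.size),
                   after.std(ddof=1) / np.sqrt(after.size))
    return diff / err
```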

  14. Plan • Would like to apply our algorithm to checkout data. • Have studied the TIB & TOB strings tested in March. • Data looks good, but lack of FED cabling info prevents mapping. • Would like to compare checkout data to TIF data and construction data (where available; e.g., we have only what was built at UCSB). • Would like to examine the stability of data through CRUZETn. • What we learn could help validate or improve a bad channel ID algorithm.
