Observatory Priorities
S. Corder, ALMA Deputy Director
Year in review
• Slow-down for stability was successful:
  • Efficiency improvements were retained while stability improved
  • Testing/simulation improvements are now in place; we will see how much they help
• Acceptance for Cycle 2:
  • On time, with the needed capabilities present
  • Time will tell whether the number of residual issues is significant; so far it looks very good
Goals for the next ~12 months
• Transition to observatory-wide, steady-state operations:
  • This will include continual upgrades of hardware and software
  • Stopping science time for engineering and computing will need to be optimized to maximize return for minimum impact
• Begin using subarrays to minimize the downtime due to reintegration of antennas
• Work towards a more realistic 2-antenna correlator at the OSF for testing
• Establish and work towards a realistic model of time usage for explicit software testing and science software testing, accounting for probable development needs
Obsmode Major Decisions
• 75% of the time granted should result in projects that are easily generated and, in principle, pipeline-reducible
• Session linkage will be used sparingly in Cycle 2
• Complicated daytime specialization will wait
(My) Goals for this meeting
• Annual cycles will begin with Cycle 3. Input from computing is needed to devise a definitive calendar:
  • What activities are required on a regular basis that will result in extended downtime (outward- or inward-facing items)?
  • What items have proven difficult to push onto the schedule? How much time between critical preparation milestones is requested/needed?
• Agreement on what the priorities are at each point in the calendar
Operational Metrics and Targets (of relevance for software)
• Online: AE*hr (collected and to-be-delivered)
• Technical downtime: container crashes, archive issues, scheduler issues, etc. (target set so that execution efficiency is >50%; needs refining)
• Efficiency improvements/losses
• Excessive flagging
• Online: # FSR/night (target < 0.25 in the accepted version)
• Mixed: QA2 failure rate (<20%)
• Offline: pipeline uptime
• Commissioning: continue as we have agreed
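The quantitative targets above (execution efficiency >50%, FSR/night < 0.25, QA2 failure rate <20%) lend themselves to an automated nightly check. The sketch below is purely illustrative: the function name and inputs are hypothetical, not part of any ALMA monitoring system; only the threshold values come from the slide.

```python
# Hypothetical nightly metrics check against the slide's targets.
# All names are illustrative assumptions; only the thresholds
# (>0.50, <0.25, <0.20) are taken from the slide itself.

def check_nightly_targets(execution_efficiency, fsr_per_night, qa2_failure_rate):
    """Return a dict mapping each metric to True if its target is met.

    Targets (from the slide):
      - execution efficiency > 50% (technical downtime kept low enough)
      - FSR/night < 0.25 (in the accepted version)
      - QA2 failure rate < 20%
    """
    return {
        "execution_efficiency": execution_efficiency > 0.50,
        "fsr_per_night": fsr_per_night < 0.25,
        "qa2_failure_rate": qa2_failure_rate < 0.20,
    }

# Example: a night with 62% execution efficiency, 0.1 FSRs,
# and a 15% QA2 failure rate meets all three targets.
report = check_nightly_targets(0.62, 0.1, 0.15)
print(report)  # all three metrics map to True
```

A real implementation would pull these numbers from operational logs; the point here is only that each target reduces to a simple threshold comparison that can be tracked night over night.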