The Collaborative Radar Acquisition Field Test (CRAFT): Next Steps
Kelvin K. Droegemeier
University of Oklahoma
2nd Level II Stakeholders Workshop
26-27 September 2002
Norman, Oklahoma
NCDC
The Issues Before Us
• Grant funding for CRAFT communication links and personnel is nearly exhausted (data will stop flowing from CAPS sometime in November)
• The private and academic sectors are finding value in real-time Level II data
• A real-time Level II NWS collection system
  • is likely more than 1 year away
  • may not provide the latencies and reliability needed by the private sector for the short term
  • may be perfectly suited for meeting all needs in the longer term
• What options exist?
• How can we maximize the benefits to all stakeholders: Government, industry, academia?
Options
• A wide range of potential options exists, all of which require Government approval
• Shut CRAFT down and wait for the NWS system
  • Timeline not yet defined
  • Not clear the NWS system will meet non-Government user needs
  • We likely won't know until the system is in place
  • If it does meet all user needs, we're set
  • If it does not, no alternative will exist (might take months to create)
• Continue the present collaborative system (58 radars) or expand to all 120 NWS radars (lots of sub-options)
• Create a stand-alone system that includes all 120 NWS WSR-88D radars, serves as a back-up to whatever the NWS implements, and has 7x24 support, improved reliability, etc.
  • Must consider administration of the system (later in this talk)
• The ideal perhaps is a partnership among all groups, with "partnership" defined many ways
Suppose the NWS Deploys and Manages its Own Level II Distribution System (a very sensible approach)
CRAFT as a Scalable System: The Current Concept Expanded for “Operational” Deployment
Logical Network Topology
[Diagram slides, built up in sequence; the recoverable labels and captions are:]
• At the moment, OU is the only ingest server: single points of failure (the server and the line from each radar)
• End-user sites that already exist: universities, NOAA laboratories, NOAA joint institutes, NCAR/UCAR, MIT/Lincoln Lab, NWS Regional HQ, NCEP Centers, RFCs
• Data reach the LDM servers via phone lines or commodity Internet; hub servers sit on the Abilene backbone (no commercial traffic)
• Each LDM "hub site" carries all 88D data on the Abilene "bus" -- redundancy
• Customers, including private companies, connect to the hubs via dedicated or commodity Internet links
Features of this Concept
• NOAA runs its own operational ingest system but allows connections to the BDDS of each NWS radar
• The CRAFT configuration
  • Is completely scalable to more nodes or radars
  • Is highly redundant (each major hub server contains all of the data)
  • Is highly reliable (loss of a major hub has minimal impact)
  • Leverages existing infrastructure
  • Links easily to other networks (e.g., AWIPS)
  • Has significant capacity for future growth (dual-pol, phased array)
  • Could have dual communication lines from each radar
  • Could serve as a backup system for the NWS
Features of this Concept
• Many variants exist
• May require enhancements to LDM, e.g., multicast
• Must consider support of LDM for the commercial sector
• Key point is to create a national hierarchical distribution system along the lines of the current Unidata IDD (the redundancy idea is sketched below)
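To make the redundancy and fan-out claims concrete, here is a minimal sketch, assuming a handful of hypothetical radar IDs and hub names (illustrative Python only, not the actual LDM/IDD software): because every Abilene hub carries the full set of 88D feeds, a downstream site can fail over to any surviving hub.

```python
# Illustrative sketch of the CRAFT-style hub redundancy idea.
# All names and counts are hypothetical; this is not the actual LDM software.

RADARS = ["KTLX", "KFWS", "KAMA", "KICT"]   # a few example WSR-88D sites
HUBS = {                                    # each hub relays ALL radar feeds
    "hub-ou":   set(RADARS),
    "hub-ucar": set(RADARS),
    "hub-mit":  set(RADARS),
}

def reachable_radars(available_hubs):
    """Radars a downstream site can still receive, given the hubs that are up."""
    feeds = set()
    for hub in available_hubs:
        feeds |= HUBS[hub]
    return feeds

# Loss of any single hub leaves every radar feed reachable.
for failed in HUBS:
    remaining = [h for h in HUBS if h != failed]
    assert reachable_radars(remaining) == set(RADARS)
    print(f"{failed} down: all {len(RADARS)} feeds still available via {remaining}")
```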
Possible Scenarios
• Scenario #1: Maintain the current system of 58 radars with OU as the single ingest node
• Assumptions
  • Line charges paid by the same groups as now, at the same rates
    • 6 Sea Grant sites: $31K/year
    • 6 SRP sites: $72K/year
    • 21 MIT sites: $200K/year
    • 4 Florida sites: $5K/year
    • 10 OU sites: $80K/year
    • 11 other sites (FSL, NASA, GTRI, SLC, RAP, SEA): no cost estimates available
  • Total leveraging is ~$450,000 per year (see the quick check below)
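A quick arithmetic check using only the numbers quoted above: the five itemized groups sum to about $388K/year, so the ~$450K total presumably also reflects the 11 sites for which no estimates are available.

```python
# Sum of the itemized annual line charges quoted above (in $K/year).
itemized = {"Sea Grant": 31, "SRP": 72, "MIT": 200, "Florida": 5, "OU": 80}
total = sum(itemized.values())
print(f"${total}K/year itemized; ~$450K/year including the 11 unpriced sites")
```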
Possible Scenarios
• Scenario #1: Maintain the current system of 58 radars with OU as the single ingest node
• Assumptions
  • Line charges paid by the same groups as now, at the same rates
  • No significant s/w development or 7x24 QOS
  • Maintain current OU staff levels (C. Sinclair at 1.0 FTE and S. Hill at 0.5 FTE)
  • $20K/year for h/w replacement, $10K/year for travel
  • $1K/year for supplies
  • KD, DJ, DE at 1 month each (1.0 FTE) per year
• Yearly cost: $355,000 (could be reduced by shifting some existing lines to cheaper alternatives)
• Advantages
  • No additional h/w costs (above replacement)
  • Continue using a proven, reliable infrastructure
Possible Scenarios
• Disadvantages
  • Not all radars are included
  • Continue with heterogeneous communications infrastructure and latency problems
  • Relies on existing groups to continue paying their local costs
  • Little increase in QOS (i.e., no 7x24)
  • 56K lines will continue to fall behind in weather
  • Single ingest system at OU provides no redundancy
  • Reliance upon a university for private-sector mission-critical needs
  • No clear path to deal with data volume increase; however, this may not be critical if the NWS system is available relatively soon
Possible Scenarios
• Scenario #2: Same as Scenario #1, but add the remaining 64 NWS radars
• Additional assumptions
  • New CAPS technical staff member ($40K/year) for QOS and other work
  • $100K in one-time costs for PCs
  • $200K for one-time line installation costs and routers
  • $50K in travel
  • $5K for supplies
  • $50K in h/w replacement costs and hot spares
  • 30 new lines cost the average of current OU lines; the rest cost $50/month based on DSL/cable modem
• Year-1 cost: $1.3M (could be reduced by shifting some existing lines to cheaper alternatives)
• Beyond Year 1: estimate $900,000/year
Possible Scenarios
• Advantages
  • No additional h/w costs (above replacement)
  • Continue using a proven, reliable infrastructure
  • All 120 NWS radars available
  • Improved QOS via 2nd OU staff person
Possible Scenarios
• Disadvantages
  • Not all radars are included
  • Continue with heterogeneous communications infrastructure and latency problems
  • Relies on existing groups to continue paying their local costs
  • Little increase in QOS (i.e., no 7x24)
  • 56K lines will continue to fall behind in weather
  • Single ingest system at OU provides no redundancy
  • Reliance upon a university for private-sector mission-critical needs
Possible Scenarios
• Scenario #3: Same as Scenario #2, but add UCAR as a second Abilene ingest node
• Additional assumptions
  • $100K in computer hardware at UCAR
  • One new UCAR technical staff member
• Year-1 cost: $1.5M (could be reduced by shifting some existing lines to cheaper alternatives)
• Beyond Year 1: estimate $1.2M/year
• Note: Could possibly add MIT/LL as a third redundant node, but this has not been discussed with them
Possible Scenarios
• Advantages
  • No additional h/w costs (above replacement)
  • Continue using a proven, reliable infrastructure
  • All 120 NWS radars available
  • Improved QOS via 2nd OU staff person
  • Greatly improved redundancy, reliability, and latencies
Possible Scenarios
• Disadvantages
  • Not all radars are included
  • Continue with heterogeneous communications infrastructure and latency problems
  • Relies on existing groups to continue paying their local costs
  • Little increase in QOS (i.e., no 7x24)
  • 56K lines will continue to fall behind in weather
  • Single ingest system at OU provides no redundancy
  • Reliance upon a university for private-sector mission-critical needs (not clear that UCAR can provide the needed QOS)
Scenario Summaries (1-3)
[Summary table comparing Scenarios 1-3]
* Leverages $450K/year paid by other organizations
** Could try and add MIT/LL as third node
Possible Scenarios
• Scenario #4: Same as Scenario #3, but with a national telecommunications carrier providing uniform delivery service to the additional 64 radars only
• Additional assumptions
  • AT&T line costs for a 2-year contract for the 64 additional radars: $850,000/year
    • Mixture of T1 and DSL
    • Note that these costs have not been negotiated and likely could be reduced substantially (might also be able to eliminate T1 lines)
  • Removes need for one-time installation charges and router costs
  • Still have the costs of the 64 new LDM PCs
• Yearly cost: $2.1M (hope this could be brought down to $1.6 or $1.7M with tough negotiation)
Possible Scenarios
• Advantages
  • No additional h/w costs (above replacement)
  • Continue using a proven, reliable infrastructure
  • All 120 NWS radars available
  • Improved QOS via 2nd OU staff person
  • Greatly improved redundancy, reliability, and latencies
  • Uniform networking for 64 radars
  • QOS should be much higher (AT&T rapid response)
Possible Scenarios
• Disadvantages
  • Not all radars are included
  • PARTLY heterogeneous communications infrastructure and latency problems
  • Relies on existing groups to continue paying their local costs
  • Little increase in QOS (i.e., no 7x24)
  • 56K lines will continue to fall behind in weather
  • Single ingest system at OU provides no redundancy
  • Reliance upon a university for private-sector mission-critical needs
Scenario Summaries (1-4)
[Summary table comparing Scenarios 1-4]
* Leverages $450K/year paid by other organizations
** Could try and add MIT/LL as third node
Possible Scenarios
• Scenario #5: Same as Scenario #4, but with a national telecommunications carrier providing uniform delivery service to all radars
• Additional assumptions
  • AT&T line costs for a 2-year contract for all radars: $1.4M/year
    • Mixture of T1 and DSL
    • Note that these costs have not been negotiated and likely could be reduced substantially (might also be able to eliminate T1 lines)
  • Removes need for one-time installation charges and router costs
  • Still have the costs of the 64 new LDM PCs
• Yearly cost: $2.8M (hope this could be brought down to $2.2 or $2.3M with tough negotiation; see the per-radar check below)
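For comparison with the ~$50/month DSL figure assumed in Scenario #2, the quoted (un-negotiated) AT&T totals imply roughly $1,100 per radar per month in Scenario #4 and roughly $970 in Scenario #5; a quick check using only the figures above:

```python
# Per-radar monthly cost implied by the quoted AT&T annual totals (not negotiated).
scenario4 = 850_000 / 64 / 12     # 64 additional radars
scenario5 = 1_400_000 / 120 / 12  # all 120 NWS radars
print(f"Scenario #4: ${scenario4:,.0f}/radar/month")  # ~$1,100
print(f"Scenario #5: ${scenario5:,.0f}/radar/month")  # ~$970
```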
Possible Scenarios
• Advantages
  • No additional h/w costs (above replacement)
  • Continue using a proven, reliable infrastructure
  • All 120 NWS radars available
  • Improved QOS via 2nd OU staff person
  • Greatly improved redundancy, reliability, and latencies
  • Uniform networking for ALL radars
  • QOS should be much higher (AT&T rapid response)
  • Increased bandwidth needs (e.g., dual-pol, new VCPs, 1/4 km by 1/2 degree resolution) could be handled by the telecomm carrier "automatically" (see the scaling sketch below)
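To give a feel for the bandwidth headroom involved: assuming legacy sampling of 1 km gates and 1.0-degree radials (a simplifying assumption; velocity fields already use finer gates), moving to 1/4 km by 1/2 degree multiplies the number of bins per sweep by roughly eight, before any dual-pol moments are added.

```python
# Rough scaling of bins per sweep with finer sampling.
# Illustrative assumption: legacy sampling of 1 km gates x 1.0-degree radials.
legacy_gate_km, legacy_az_deg = 1.0, 1.0
finer_gate_km, finer_az_deg = 0.25, 0.5
scale = (legacy_gate_km / finer_gate_km) * (legacy_az_deg / finer_az_deg)
print(f"~{scale:.0f}x more bins per sweep")  # ~8x
```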
Possible Scenarios
• Disadvantages
  • Not all radars are included
  • PARTLY heterogeneous communications infrastructure and latency problems
  • Relies on existing groups to continue paying their local costs
  • Little increase in QOS (i.e., no 7x24)
  • 56K lines will continue to fall behind in weather
  • Single ingest system at OU provides no redundancy
  • Reliance upon a university for private-sector mission-critical needs
Scenario Summaries (1-5)
[Summary table comparing Scenarios 1-5]
* Leverages $450K/year paid by other organizations
** Could try and add MIT/LL as third node
Other Scenarios
• Scenario #6: Use NWS River Forecast Centers as points of aggregation
  • May make sense only if the NWS wishes to pursue a non-AWIPS collection strategy
  • The general CRAFT concept still could be applied
• Scenario #7: Use the planned NWS distribution system
• Scenario #8: Create a system operated entirely by the private sector (no university or UCAR involvement)
Administrative Structure
• Points of reference (for the sake of argument)
  • Must be able to ensure 7x24 service (high reliability)
  • Latency must be as low as possible
  • Government receives data at no cost but could/should cost-share overall expenses in light of benefits to NCDC (direct ingest for long-term archive), NCEP, FSL, and NWS offices (Level II recorders)
  • Educational institutions receive data at no cost
  • Presumably don't want another "NIDS arrangement"
• Options
  • For-profit private company
  • University-based consortium
  • Not-for-profit 501(c)(3)
  • University-based center (e.g., Wisconsin for satellite data)
  • Others?
Key Items for Discussion
• Sustaining the operation of CRAFT beyond November
• Establishing private-sector requirements
  • Reliability
  • Latency
  • Hardware and software support
• Meeting private (and academic) sector needs in the short, medium, and long term
• Administrative issues (including data access rules)
• Dealing with future data volumes
• Further analysis of system capabilities
  • Impact of weather on data reliability/latency
  • Networking simulation (a rough link-capacity check follows below)
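On the last two bullets, even a crude calculation shows why 56K lines fall behind in active weather. The sketch below is illustrative only: the compressed volume-scan size and the scan interval are assumed round numbers, not measured values.

```python
# Crude check of whether a link keeps up with Level II data in active weather.
# The per-volume size and scan interval are assumed round numbers, not measurements.

def keeps_up(link_kbps, volume_mbytes=3.0, scan_minutes=5.0):
    """True if one compressed volume scan can be sent before the next one starts."""
    send_seconds = volume_mbytes * 8_000 / link_kbps   # MB -> kilobits, then / (kbit/s)
    return send_seconds <= scan_minutes * 60, send_seconds

for name, kbps in [("56K modem", 56), ("DSL (384 kbps up)", 384), ("T1", 1544)]:
    ok, secs = keeps_up(kbps)
    print(f"{name:18s}: {secs/60:5.1f} min per volume -> {'keeps up' if ok else 'falls behind'}")
```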