Joint Polar Satellite System Common Ground System (JPSS CGS) Interface Data Processing Segment (IDPS) Product Generation

David Smith, JPSS CGS Chief Architect
Kerry Grant, JPSS CGS Chief Engineer
Raytheon Intelligence and Information Systems, Aurora, CO

The IDPS segment (Figure 1) combines software and hardware flexibility, expandability, and robustness to meet the stringent performance requirements levied on it by the Joint Polar Satellite System Common Ground System (JPSS CGS). Sensor application packets are passed to IDPS, where the data stream is broken into granules: subsections of the stream covering manageable time intervals. The granules can be processed in parallel by IDPS, ensuring that high-quality products are produced within latency timelines.

[Figure 1 - Interface Data Processing Segment Architecture]

Ingest (ING)

The ING Subsystem creates sensor Raw Data Records (RDRs) from the multiple Sensor Application Packet streams received from C3S, separating the incoming streams into granules by sensor. Ingest also extracts JPSS Auxiliary data from Stored Mission Data (SMD) and creates RDRs for Bus TM and S/C Diary information, and it accepts the external Ancillary data sets that are required for EDR processing.

Processing (PRO)

The PRO Subsystem encapsulates all of the data algorithms that must be executed to turn the RDRs into higher-level products. Processing first creates sensor-specific SDRs or Temperature Data Records (TDRs) from the RDRs; these are corrected, calibrated, and geolocated sensor data. A complex set of processing chains (see Figure 6) is then used to produce the required 46 EDRs: some are produced as individual products, while a single algorithm may yield a group of related EDR products.
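A short sketch can make the granule concept concrete. The following Python fragment is purely illustrative (the names, the Packet structure, and the 32-second granule length are all assumptions, not the CGS design): it buckets a packet stream into per-sensor, fixed-interval granules and fans the independent granules out to parallel workers, which is the property that lets IDPS meet its latency timelines.

from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass

GRANULE_SECONDS = 32.0  # assumed granule length, for illustration only

@dataclass
class Packet:
    sensor: str        # e.g. "VIIRS", "ATMS", "CrIS"
    timestamp: float   # seconds since epoch
    payload: bytes

def granule_key(pkt):
    # A granule is a per-sensor, fixed-length time interval of the stream.
    return (pkt.sensor, int(pkt.timestamp // GRANULE_SECONDS))

def build_rdr(packets):
    # Stand-in for ING: assemble one Raw Data Record from a granule's packets.
    return {"n_packets": len(packets),
            "n_bytes": sum(len(p.payload) for p in packets)}

def process_stream(stream):
    granules = defaultdict(list)
    for pkt in stream:
        granules[granule_key(pkt)].append(pkt)
    # Granules are independent, so they can be processed in parallel.
    with ProcessPoolExecutor() as pool:
        return dict(zip(granules, pool.map(build_rdr, granules.values())))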
Data Management (DMS)

The DMS Subsystem provides internal short-term storage (a 24-hour requirement) of all JPSS data, as well as management of shared memory (cache), a critical component of IDPS in meeting data product latency.

Infrastructure (INF)

The INF Subsystem provides the workflow management functions for IDPS. It has total control of process startup, monitoring, shutdown, and restart upon error conditions. INF also provides common utilities and tools such as logging, debug, timers, performance monitoring, data availability and accounting, and hardware monitoring.

Data Delivery (DDS)

The DDS Subsystem is the single conduit for all data passed between IDPS and the local Central. It converts requested products into Hierarchical Data Format 5 (HDF5) with data and metadata aggregation. HDF5 is a self-describing data format with community-supplied implementation libraries.

Data Quality Monitoring (DQM)

The DQM Subsystem provides the Data Quality Engineer (DQE) with automated and ad hoc processing in support of Data Quality Notifications from the Processing subsystem. The DQE is given a tool kit of Geographic Information System (GIS) based modules that allow IDPS data to be registered to a geographic grid and then analyzed, viewed, and trended. DQM and the DQE support the program's larger Calibration and Validation (Cal/Val) activities as well as the troubleshooting of data anomalies.

The JPSS sensor suites on the JPSS vehicles produce data at rates of up to 20 Mbps, an increase over DMSP of more than 13:1 (20 Mbps versus 1.5 Mbps). The total volume of xDR (RDR, SDR, and EDR) products produced by an instance of IDPS per day is an increase over DMSP of 2000:1. In addition to handling these increased rates, IDPS faces latency requirements (defined as the time from sensing of a phenomenon to production and delivery of products) that decrease by a factor of 4. A graphical representation of the latency requirements imposed on JPSS CGS, together with projected actual performance, is contained in Figure 3.

[Figure 3 - JPSS EDR Latency]

[Figure 2 - JPSS System: the Space Segment (NPP and the JPSS and DWSS satellites in the 1330 and 1730 orbits, with TDRSS, GPS, A-DCS, and SARSAT links); the C3 Segment (Svalbard primary T&C and NPP SMD, White Sands Complex LEO&A backup T&C, the MMC at Suitland with its Flight Operations Team for enterprise management, mission management, satellite operations, and data monitoring and recovery, the Aurora MMC Contingency Operations Team, and 15 globally distributed receptor sites interconnected by commercial fiber); the Field Terminal Segment (HRD and LRD field terminals); the Launch Support Segment; and the Interface Data Processing Segment, one full set of which (Ingest, Processing, Data Management, Infrastructure, DQM, and data handling nodes with RDBMS) resides at each of the four Centrals (NAVO, FNMOC, AFWA, and NESDIS).]
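Because HDF5 has community-supplied libraries, the flavor of the DDS conversion is easy to show. The sketch below writes granules and aggregated metadata into a self-describing HDF5 file using the open-source h5py library; the group, dataset, and attribute names are invented for the example and do not follow the JPSS CGS product profile.

import numpy as np
import h5py

def write_product(path, product_id, granules):
    # Aggregate several granules of data, plus metadata, into one
    # self-describing HDF5 product file (names here are hypothetical).
    with h5py.File(path, "w") as f:
        f.attrs["product_id"] = product_id        # file-level metadata
        f.attrs["granule_count"] = len(granules)
        grp = f.create_group("All_Data")
        for i, g in enumerate(granules):
            ds = grp.create_dataset("granule_%03d" % i, data=g["array"],
                                    compression="gzip")
            ds.attrs["begin_time"] = g["begin_time"]  # per-granule metadata
            ds.attrs["end_time"] = g["end_time"]

write_product("example_edr.h5", "HYPOTHETICAL-EDR",
              [{"array": np.zeros((768, 3200), dtype=np.float32),
                "begin_time": "2013-01-09T00:00:00Z",
                "end_time": "2013-01-09T00:00:32Z"}])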
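In the same spirit, registering swath data to a geographic grid, the core of the DQM analysis described above, can be sketched in a few lines; this illustrates the idea only and is not the actual GIS-based tool kit.

import numpy as np

def grid_swath(lat, lon, values, cell_deg=1.0):
    # Average swath samples onto a regular lat/lon grid (NaN where no data),
    # so the gridded field can be viewed and trended.
    n_lat, n_lon = int(180 / cell_deg), int(360 / cell_deg)
    total = np.zeros((n_lat, n_lon))
    count = np.zeros((n_lat, n_lon))
    rows = np.clip(((np.asarray(lat) + 90.0) / cell_deg).astype(int),
                   0, n_lat - 1)
    cols = np.clip(((np.asarray(lon) + 180.0) / cell_deg).astype(int),
                   0, n_lon - 1)
    np.add.at(total, (rows, cols), values)  # accumulates repeated cells
    np.add.at(count, (rows, cols), 1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(count > 0, total / count, np.nan)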
[Figure 4 - Remote Sensing System Evolution (* current estimates)]

  1960-2010: DMSP (Defense Meteorological Satellite Program) and POES (Polar-orbiting Operational Environmental Satellites). Sensor data rate: 1.5 Mbps; data latency: 100-150 min.; data availability: 98%; ground revisit time: 12 hrs.; 1.7 GigaBytes per day (DMSP), 6.3 GigaBytes per day (POES).

  2000-2010: EOS (Earth Observing System) and NPP (NPOESS Preparatory Project). Sensor data rate: 15 Mbps; data latency: 100-180 min.; 2.6 TeraBytes per day (EOS), 2.4 TeraBytes per day (NPP).

  2010-2020+: NPOESS (National Polar-orbiting Operational Environmental Satellite System), now JPSS (Joint Polar Satellite System). Sensor data rate: 20 Mbps; data latency: 28 min.; data availability: 99.98%; autonomy capability: 60 days; selective encryption/deniability; ground revisit time: 4-6 hrs.; 8.1 TeraBytes per day.

Figure 4 depicts the evolution of Government remote sensing systems over the last 40 years. JPSS, the Joint Polar Satellite System, is the next-generation low-Earth-orbiting environmental remote sensing platform. JPSS will play a pivotal role in our nation's weather forecasting and environmental awareness for the next two decades.

IDPS utilizes a mix of IBM System p servers and Storage Area Network (SAN)/Fibre Channel technology (Figure 5) to meet the program's demanding data product latencies, assure fast and successful delivery of data to Users, provide very high operational availability, and allow for significant expandability to meet any changes in support of JPSS objectives.

[Figure 5 - IDPS Hardware Architecture: redundant PRO1/PRO2, DDS, ING, INF, and DQM server groups with I&T counterparts and a hot spare, interconnected by 4 Gbps Fibre Channel switches and Ethernet switches, plus HMC 1/HMC 2, internal and external web, and admin nodes.]

SUMMARY

The Interface Data Processing Segment is designed to provide high-quality environmental and meteorological data to the JPSS System Users with very low latency. It leverages highly flexible, expandable, and robust IBM hardware to maximize data availability, operational availability, and assured delivery. The IDPS SW architecture provides an efficient solution for NPP and the scalability to meet future JPSS needs.

JPSS products are generated from a complex network of processing algorithms, and a number of interdependencies exist between algorithms in order to provide the required data quality to the end User. Figure 6 illustrates the interdependencies between products and processes within IDPS that are needed to produce the NPP-era SDRs, EDRs, and Gridded Intermediate Products (GIPs). The algorithm interactions for the JPSS era will be even more intricate. The execution of this processing flow for each granule of data is managed within the Infrastructure SI by the Workflow Manager (WFM) component through the use of a configurable Data Processing Guide (DPG), as sketched below.

[Figure 6 - NPP Algorithm Processing Interdependencies: for example, the CrIMSS chain, in which ATMS RDRs and Cal Tables yield ATMS TDRs and SDRs (subsequently remapped), CrIS RDRs and Cal Tables yield CrIS SDRs, and the combined CrIMSS algorithm, drawing on NCEP data, Geoid and DEM, QST GIP, and ephemeris and pointing inputs, produces the Atmospheric Vertical Moisture, Pressure, and Temperature Profile EDRs; parallel chains exist for VIIRS and OMPS.]
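As a toy illustration of a configurable processing guide, the sketch below encodes an abridged CrIMSS slice of Figure 6 as a dependency table and schedules it in topological order with Python's standard-library graphlib; the real WFM and DPG are, of course, far richer than this.

from graphlib import TopologicalSorter

# Task -> prerequisite tasks, abridged from the CrIMSS portion of Figure 6.
DPG = {
    "ATMS TDR":            {"ATMS RDR", "ATMS Cal Tables"},
    "ATMS SDR":            {"ATMS TDR"},
    "ATMS SDR (remapped)": {"ATMS SDR"},
    "CrIS SDR":            {"CrIS RDR", "CrIS Cal Tables"},
    "CrIMSS EDRs":         {"ATMS SDR (remapped)", "CrIS SDR",
                            "NCEP Data", "Geoid & DEM"},
}

ts = TopologicalSorter(DPG)
ts.prepare()
while ts.is_active():
    for task in ts.get_ready():   # every task whose inputs are now available
        print("run", task)        # a real WFM would dispatch these in parallel
        ts.done(task)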