IODE Ocean Data Portal - ODP
• The objective of the IODE Ocean Data Portal (ODP) is to facilitate and promote the exchange and dissemination of marine data and services.
• This is achieved through a network of interconnected National Oceanographic Data Centres (NODCs), which provides seamless access to collections and inventories of marine data from these data centres.
• The Ocean Data Portal covers the full range of processes, including data discovery, access and visualization.
• The ODP supports the data access requirements of all IOC programme areas.
ODP - Access to data
The IODE ODP provides on-line access to distributed ocean data:
• at operational and delayed-mode time scales
• at various processing levels (observation, climate, analysis, forecast)
• across both oceanographic and marine meteorological domains
• from multiple data source formats
ODP - Standards are the key to interoperability
• Data Discovery requires a common metadata standard and standard vocabularies to ensure that the distributed datasets are discoverable and conformable.
• Data Visualization allows interactive presentation of data and products using web services such as WMS (Web Map Service) and WFS (Web Feature Service).
• Data Access provides the ability to download data in common file formats such as NetCDF, XML and ASCII.
• Interoperability with existing systems: IODE is working closely with the Joint WMO-IOC Technical Commission for Oceanography and Marine Meteorology (JCOMM) to ensure the ODP is interoperable with the WMO Information System (WIS).
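As a concrete illustration of the visualization standard mentioned above, the sketch below builds a WMS 1.3.0 GetMap request URL with the Python standard library. The endpoint URL and layer name are placeholders, not actual ODP services:

```python
from urllib.parse import urlencode

def build_wms_getmap_url(base_url, layer, bbox, size=(800, 600),
                         fmt="image/png", crs="EPSG:4326"):
    """Build a WMS 1.3.0 GetMap request URL.

    `base_url` and `layer` are illustrative placeholders; substitute the
    actual WMS endpoint and layer name published by an ODP node.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        # In WMS 1.3.0 with EPSG:4326 the axis order is lat,lon
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": str(size[0]),
        "HEIGHT": str(size[1]),
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

url = build_wms_getmap_url("https://example.org/wms",
                           "sea_surface_temperature",
                           (-60.0, -70.0, -20.0, -40.0))
```

Any WMS-capable client (or a plain HTTP GET) can then fetch the rendered map image from such a URL.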
ODP - End to end (E2E) technology
The goal of the E2E technology is to integrate heterogeneous local data systems into a unified distributed marine data system that provides transparent exchange between these local systems and the end user.
[Architecture diagram: users access the JCOMM/IODE Data Portal (security, registry, discovery, delivery, monitoring); the E2E Integration Server communicates over HTTP web services with E2E Data Provider nodes (request/response messages; NetCDF and Data Object transport data files; connection, mapping and data access layers) that front local data file systems and local database systems at data centres such as MINCyT, CRIBAB, CEADO, DNA, INIDEP and CENPAT.]
ODP - E2E technology architecture
• Based on the "client-server with mediator and wrappers" pattern:
• the wrapper (Data Provider) provides access to the data or metadata of a local data system held in a DBMS or in structured, formatted and object data files (such as images, video files, documents, etc.). As soon as the wrapper is installed on the local data system, the latter becomes a data source for the distributed data system;
• the mediator (Integration Server) integrates data from the various local data systems by interacting with wrappers (Data Providers) and with other mediators (Integration Servers and other portals accumulating descriptive metadata). This makes it possible to construct a complex network as a federation of data sources to meet the needs of various projects and applications.
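The mediator/wrapper pattern above can be sketched in a few lines of Python. The class and station names are illustrative only; the real ODP components exchange SOAP/HTTP messages and transport data files rather than in-process calls:

```python
class DataProvider:
    """Wrapper: exposes a local data store through a uniform query interface."""
    def __init__(self, name, records):
        self.name = name
        self._records = records  # stands in for a local DBMS or data files

    def query(self, variable):
        return [r for r in self._records if r["variable"] == variable]


class IntegrationServer:
    """Mediator: fans a query out to registered wrappers and merges results."""
    def __init__(self):
        self.providers = []

    def register(self, provider):
        self.providers.append(provider)

    def query(self, variable):
        # Each provider answers for its own local system; the mediator
        # assembles the federated result keyed by data source.
        return {p.name: p.query(variable) for p in self.providers}


ceado = DataProvider("CEADO", [{"variable": "SST", "value": 18.2}])
inidep = DataProvider("INIDEP", [{"variable": "SST", "value": 17.9},
                                 {"variable": "salinity", "value": 33.6}])
server = IntegrationServer()
server.register(ceado)
server.register(inidep)
merged = server.query("SST")
```

The point of the design is that adding a new data centre only requires installing a wrapper and registering it; the mediator and existing sources are untouched.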
Scope
• The Data Provider (DP) provides access to the data and metadata of the local data systems. When the wrapper is installed in the local data system, the latter becomes a data source for the distributed data system.
• The DP processes the local data sets and generates the discovery metadata in a semi-automated way.
• These services are based on the OPeNDAP data structures (point, profile and grid) and a specific metadata model based on ISO 19115.
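To make the discovery-metadata idea tangible, here is a minimal sketch that emits a small XML record. The element names are a simplified illustration loosely inspired by ISO 19115; the actual DP software produces a much richer, standard-conformant record:

```python
import xml.etree.ElementTree as ET

def make_discovery_record(title, abstract, bbox):
    """Build a toy discovery-metadata record (west, east, south, north bbox).

    Element names here are illustrative, not the real ISO 19115 schema.
    """
    root = ET.Element("metadata")
    ET.SubElement(root, "title").text = title
    ET.SubElement(root, "abstract").text = abstract
    extent = ET.SubElement(root, "geographicExtent")
    for key, value in zip(("west", "east", "south", "north"), bbox):
        ET.SubElement(extent, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = make_discovery_record(
    "CTD profiles, Argentine shelf",
    "Temperature and salinity profiles collected by research cruises.",
    (-70.0, -50.0, -55.0, -35.0))
```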
How to become a DP
A data centre that agrees to act as a DP should provide:
• the middleware for communications: an application server reachable over HTTP,
• installation of the DP software OR use of the light DP extension,
• registration of the data source and its discovery metadata,
• support of the local data system.
How to become a DP
Light DP extension:
• Allows integration of data from data centres that are unable to install the DP software. In this case the owner of the hosting DP must create a new user with a login and password and provide these credentials to the remote user.
• Data centres can use a remote DP to provide catalogues of data to the ODP distributed system.
Installing DP
It is recommended to use a dedicated computer with at least these minimum characteristics: 1 GHz CPU, 2 GB RAM, 300 MB of free disk space, running Windows or Linux.
[Diagram: the dedicated server runs the JBoss application server hosting the DP web application, an Apache web server with PHP, and the DiGIR database access service, with access to the DBMS and to structured and object data files.]
Network requirements
• The HTTP and SOAP protocols must be available.
• The JBoss AS port must be opened in the firewall settings.
• IP-address verification: the DP accepts requests only from the Integration Server.
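A quick pre-flight check that the application-server port is actually reachable through the firewall can be sketched with the standard library. The host and port are whatever the local JBoss instance uses; nothing here is specific to the DP software:

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout.

    Useful as a first diagnostic when the Integration Server reports that
    a DP node is unreachable.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This only verifies TCP reachability; it does not check that the HTTP/SOAP services behind the port are responding correctly.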
Supported data storage types
• Data in a relational Database Management System (DBMS): Oracle, MS SQL Server, MySQL, PostgreSQL, etc.;
• Structured data files in non-hierarchical data formats (CSV, TSV, …);
• Object data files: documents, images, and data in formats not supported by the ODP technology;
• Links (web site pages, web applications, URLs, web services).
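As an example of the "structured data file" category, the snippet below reads a small CSV file into records with the standard library. The column names and values are invented for illustration, not taken from any real ODP dataset:

```python
import csv
import io

# A structured (non-hierarchical) CSV file as a DP might ingest it.
raw = """station,date,depth_m,temp_c
ST01,2013-05-04,0,12.4
ST01,2013-05-04,10,11.8
"""

# csv.DictReader yields one dict per data row, keyed by the header line.
records = list(csv.DictReader(io.StringIO(raw)))
```

Each record is then ready to be mapped onto the transport data-file structures the DP exchanges with the Integration Server.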
How to provide the data
• Data in a database inside the local network: accessed through the Data Provider software.
• Structured data files: upload them to the Data Provider server or specify a URL to the data files' location (FTP, HTTP).
[Diagram: files and DBMS on the local network of the data centre are reached by the Data Provider, which serves them to the Internet; remote files are fetched via FTP or HTTP.]
Functional requirements The local data administrator should provide: • design of resources; • data source registration; • discovery metadata registration; • provision of data
How to generate metadata
Use the web interface of the Data Provider to generate, update and maintain the discovery metadata.
Coding and decoding issues
• The DP unifies the specific storage structures and data coding systems of the local data systems through a mechanism and tools for translating local formats and codes into the common transport data file and system codes.
• Example code list – ships:

Local (ESIMO) | System (IMO)
23640 | 8806802
23641 | 9347621
… | …

Local (ESIMO) | System (WMO)
684 | 21821
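The translation step above amounts to a lookup table per code system. The sketch below uses the ship-code pairs from the example; in the real DP the tables are maintained as configured code lists rather than hard-coded dicts:

```python
# Translation tables taken from the ship code-list example:
# local (ESIMO) codes mapped to the system-wide IMO and WMO identifiers.
ESIMO_TO_IMO = {"23640": "8806802", "23641": "9347621"}
ESIMO_TO_WMO = {"684": "21821"}

def translate(local_code, table):
    """Translate a local code to the common system code; None if unmapped."""
    return table.get(local_code)

imo = translate("23640", ESIMO_TO_IMO)
```

An unmapped code returning `None` is the signal that the code list needs to be extended before the record can be exported.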
Resource life-cycle
• define the schedule for updating the discovery metadata;
• check the local data availability using the report submitted by the Integration Server;
• take the actions needed to keep the data source current (connection, data file storage availability).
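The scheduling step can be sketched as a simple "is an update due?" check. This is a toy illustration with an interval in days; the real schedule is whatever the data centre registers for its resource:

```python
from datetime import datetime, timedelta

def update_due(last_update, interval_days, now=None):
    """Return True if the discovery metadata is older than its update interval.

    `last_update` and `now` are naive datetimes; the interval is in days.
    """
    if now is None:
        now = datetime.utcnow()
    return now - last_update >= timedelta(days=interval_days)

due = update_due(datetime(2013, 1, 1), 30, now=datetime(2013, 3, 1))
```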