Mike Koranda Release Architect Streams – DataStage Integration InfoSphere Streams Version 3.0
Agenda • What is InfoSphere Information Server and DataStage? • Integration use cases • Architecture of the integration solution • Tooling
Information Integration Vision • Transform Enterprise Business Processes & Applications with Trusted Information • Deliver Trusted Information for Data Warehousing and Business Analytics • Address information integration in the context of a broad and changing environment • Simplify & accelerate: design once and leverage anywhere • Build and Manage a Single View • Secure Enterprise Data & Ensure Compliance • Integrate & Govern Big Data • Consolidate and Retire Applications • Make Enterprise Applications more Efficient
IBM Comprehensive Vision (diagram summary) • Traditional approach (structured, analytical, logical): Data Warehouse over traditional sources such as OLTP system data, ERP data, mainframe data, transaction data, and internal app data; workloads are structured, repeatable, and linear • New approach (creative, holistic thought, intuition): Hadoop and Streams over new sources such as web logs, social data, text & images, sensor data, and RFID; workloads are unstructured, exploratory, and iterative • Information Integration & Governance spans both
IBM InfoSphere DataStage: Industry Leading Data Integration for the Enterprise Simple to design, powerful to deploy. Rich capabilities spanning six critical dimensions: 1. Developer Productivity: rich user interface features that simplify the design process and metadata management requirements 2. Runtime Scalability & Flexibility: performant engine providing unlimited scalability across all tasks in both batch and real-time 3. Transformation Components: extensive set of pre-built objects that act on data to satisfy both simple & complex data integration tasks 4. Connectivity Objects: native access to common industry databases and applications, exploiting key features of each 5. Operational Management: simple management of the operational environment, providing analytics for understanding and investigation 6. Enterprise Class Administration: intuitive and robust features for installation, maintenance, and configuration
Runtime Integration: High Level View • A Streams job and a DataStage job exchange data over TCP/IP • Streams side: DSSource / DSSink operators, which are composite operators that wrap the existing TCPSource/TCPSink operators • DataStage side: the Streams Connector stage
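The pattern above (register an endpoint by name, look it up, then stream records over a raw TCP socket) can be sketched in Python. This is an illustrative stand-in, not the actual SWS/REST protocol: the registry dict plays the role of the SWS nameserver, and newline-delimited text stands in for the TCPSource/TCPSink binary format.

```python
import socket
import threading

# Stand-in for the SWS nameserver: maps "scope/name" -> (host, port).
registry = {}

def start_dssink(scope, name, records):
    """Streams side (hypothetical): serve records on an ephemeral TCP
    port and register the endpoint so a connector can find it by name."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    registry[f"{scope}/{name}"] = srv.getsockname()

    def serve():
        conn, _ = srv.accept()
        for rec in records:
            conn.sendall((rec + "\n").encode())  # newline-delimited stand-in
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()

def streams_connector_read(scope, name):
    """DataStage side (hypothetical): look up the endpoint by name plus
    application scope, connect, and read all records."""
    host, port = registry[f"{scope}/{name}"]
    cli = socket.create_connection((host, port))
    data = b""
    while chunk := cli.recv(4096):
        data += chunk
    cli.close()
    return data.decode().splitlines()

start_dssink("MyDataStage", "SendStrings", ["rec1", "rec2"])
rows = streams_connector_read("MyDataStage", "SendStrings")
```

The real connector performs the lookup over HTTPS/REST and adds handshaking, retry, and a binary record format; this sketch only shows the name-then-socket shape of the design.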
Streams Application (SPL)

use com.ibm.streams.etl.datastage.adapters::*;

composite SendStrings {
  type
    RecordSchema = rstring a, ustring b;
  graph
    stream<RecordSchema> Data = Beacon() {
      param
        iterations : 100u;
        initDelay : 1.0;
      output
        Data : a = "This is single byte chars"r,
               b = "This is unicode"u;
    }
    () as Sink = DSSink(Data) {
      param
        name : "SendStrings";
    }
  config
    applicationScope : "MyDataStage";
}

• When the job starts, the DSSink/DSSource stage registers its name with the SWS nameserver
DataStage Job • The user adds a Streams Connector to the job and configures its properties and columns
DataStage Streams Runtime Connector • Uses nameserver lookup to establish connection (“name” + “application scope”) via HTTPS/REST • Uses TCPSource/TCPSink binary format • Has initial handshaking to verify the metadata • Supports runtime column propagation • Connection retry (both initial & in process) • Supports all Streams types • Collection types (List, Set, Map) are represented as a single XML column • Nested tuples are flattened • Schema reconciliation options (unmatched columns, RCP, etc) • Wave to punctuation mapping on input and output • Null value mapping
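Two of the mappings above (collection types represented as a single XML column, nested tuples flattened) can be illustrated with a small Python sketch. The dotted column-naming and the XML element names here are assumptions for illustration, not the connector's actual wire format.

```python
import xml.etree.ElementTree as ET

def collection_to_xml(name, items):
    """Represent a collection value as a single XML column
    (element names are illustrative, not the connector's format)."""
    root = ET.Element("list", attrib={"name": name})
    for item in sorted(items) if isinstance(items, set) else items:
        ET.SubElement(root, "item").text = str(item)
    return ET.tostring(root, encoding="unicode")

def flatten_tuple(tup, prefix=""):
    """Flatten a nested tuple (modeled as a dict) into dotted column
    names, e.g. {"addr": {"city": ...}} -> {"addr.city": ...}."""
    cols = {}
    for key, value in tup.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            cols.update(flatten_tuple(value, name + "."))
        elif isinstance(value, (list, set)):
            cols[name] = collection_to_xml(name, value)
        else:
            cols[name] = value
    return cols

row = flatten_tuple({"id": 7, "addr": {"city": "Rochester"}, "tags": ["a", "b"]})
```

After flattening, `row` has scalar columns `id` and `addr.city`, while the list `tags` becomes one XML-valued column, mirroring how the connector presents Streams types to a flat DataStage link.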
Tooling Scenarios • User creates both the DataStage job and the Streams application from scratch • Create the DataStage job in IBM InfoSphere DataStage and QualityStage Designer • Create the Streams application in Streams Studio • User wishes to add Streams analysis to existing DataStage jobs • From Streams Studio, create a Streams application from DataStage metadata • User wishes to add DataStage processing to an existing Streams application • From Streams Studio, create an Endpoint Definition File and import it into DataStage
Streams to DataStage Import 1. On the Streams side, the user runs the 'generate-ds-endpoint-defs' command to generate an 'Endpoint Definition File' (EDF) from one or more ADL files 2. The user transfers the file (e.g., via FTP) to the DataStage domain or client machine 3. The user runs the new Streams importer in IMAM to import the EDF into the StreamsEndPoint model in Xmeta 4. The job designer selects the endpoint metadata from the stage; the connection name and columns are populated accordingly (Flow diagram: the Streams command line or Studio menu produces the EDF from ADL; the EDF is transferred and imported through IMAM into Xmeta.)
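The import flow can be pictured with a toy endpoint-definition file. The XML layout below is entirely hypothetical (the deck does not show the real EDF schema); it only illustrates the kind of metadata, such as endpoint name, application scope, and columns, that the importer carries into Xmeta.

```python
import xml.etree.ElementTree as ET

# Hypothetical EDF content; the real schema may differ entirely.
EDF = """
<endpoints>
  <endpoint name="SendStrings" scope="MyDataStage" kind="DSSink">
    <column name="a" type="rstring"/>
    <column name="b" type="ustring"/>
  </endpoint>
</endpoints>
"""

def parse_edf(text):
    """Extract (name, scope, [(column, type), ...]) per endpoint."""
    out = []
    for ep in ET.fromstring(text).iter("endpoint"):
        cols = [(c.get("name"), c.get("type")) for c in ep.iter("column")]
        out.append((ep.get("name"), ep.get("scope"), cols))
    return out

endpoints = parse_edf(EDF)
```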
DataStage to Streams Import 1. On the Streams side, the user runs the 'generate-ds-spl-code' command to generate a template application from a DataStage job definition 2. The command uses a Java API that queries the DataStage jobs in the repository over REST 3. The tool provides commands to identify jobs that use the Streams Connector, and to extract the connection name and column information 4. The template application includes a DSSink or DSSource stage with tuples defined according to the DataStage link definition (Flow diagram: the Streams command line or Studio menu invokes the Java API, which queries Xmeta via the REST API over HTTP and emits SPL.)
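The job-discovery step (finding jobs that use the Streams Connector and pulling out connection names and columns) can be sketched against a mocked-up repository response. The JSON shape, field names, and the "StreamsConnector" stage-type string below are assumptions for illustration; the actual Java/REST API is not documented in this deck.

```python
import json

# Mocked repository response; field names are hypothetical.
RESPONSE = json.dumps({
    "jobs": [
        {"name": "LoadWarehouse", "stages": [{"type": "OracleConnector"}]},
        {"name": "ScoreEvents", "stages": [
            {"type": "StreamsConnector",
             "connectionName": "SendStrings",
             "columns": [{"name": "a", "type": "VarChar"},
                         {"name": "b", "type": "NVarChar"}]}]},
    ]
})

def streams_connector_jobs(payload):
    """Return (job name, connection name, column names) for every
    job containing a Streams Connector stage."""
    found = []
    for job in json.loads(payload)["jobs"]:
        for stage in job["stages"]:
            if stage["type"] == "StreamsConnector":
                cols = [c["name"] for c in stage["columns"]]
                found.append((job["name"], stage["connectionName"], cols))
    return found

jobs = streams_connector_jobs(RESPONSE)
```

This filtered metadata is what the generator would turn into a template SPL application with a matching DSSource or DSSink stage.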
Availability • The Streams Connector is available in InfoSphere Information Server 9.1 • The Streams components are available in InfoSphere Streams Version 3.0, in the IBM InfoSphere DataStage Integration Toolkit