Learn to migrate base app servers to Network Deployment, manage nodes with Deployment Manager, and create cells for centralized administration in this comprehensive guide.
Unit Objectives
After completing this unit, you should be able to:
• Build a functional Network Deployment cell, and incorporate existing base application server nodes into it
Building a Network Deployment Environment
Topic Objectives: Network Deployment
After completing this topic, you should be able to:
• Describe the change in component structure and functionality when a base application server is migrated to Network Deployment mode
• Evaluate the advantages of making this migration
• Design a Network Deployment structure for WebSphere for z/OS
• Use the WebSphere tools supplied to migrate a base application server to a Network Deployment environment
Consider a Two-System Sysplex
Imagine you have two application servers configured on each system of a two-system sysplex. You want to administer all four servers centrally, as one logical whole. How is this accomplished?
[Diagram: SYSA and SYSB, joined by a CF, each with its own HFS; Servers A and B (each a CR plus SR) on SYSA, Servers C and D on SYSB]
First we need to introduce something called the "Deployment Manager" ...
Introducing the Deployment Manager
The Deployment Manager (DM) is a special kind of application server instance. The administrative application runs in the Deployment Manager and is accessed with a browser.
[Diagram: the two-system sysplex as before, with a Deployment Manager (a single CR hosting the administrative application "A") added on SYSA and a browser connected to it]
Before the Deployment Manager can manage application servers, those application servers need to be grouped into something called "nodes" ...
Nodes and Node Agents
"Nodes" are a grouping of 1 to n servers; "Node Agents" are special single-CR servers that the DM communicates with to manage the application servers in the node.
[Diagram: Node 1 on SYSA contains Servers A and B plus a Node Agent; Node 2 on SYSB contains Servers C and D plus a Node Agent; the DM on SYSA talks to both Node Agents]
Nodes must stay within a system or LPAR; they can't span boxes. Each application server node managed by a DM must have a Node Agent server.
The DM Has Its Own Node
The Deployment Manager server has its own node structure. It doesn't need a Node Agent because there are no application servers in the node, just the DM.
[Diagram: the same two-system picture, with the DM shown inside its own node on SYSA]
This is the first hint that multiple nodes are permitted per system or LPAR. More on multiple nodes per box later.
The "Cell" Now Spans Systems
When the DM is managing servers, the "cell" expands to include all the application servers and nodes being managed. Cells may span boxes.
[Diagram: one cell boundary drawn around the DM's node on SYSA plus Node 1 (Servers A, B) on SYSA and Node 2 (Servers C, D) on SYSB]
Now we start to see the purpose of the "cell" structure.
Bringing the Daemon Server into the Picture
The rule of thumb is this: one daemon server per cell per system or LPAR. Here we have one cell, but two systems ... so two daemon servers are required.
[Diagram: the spanning cell as before, now with a single-CR daemon server on SYSA and another on SYSB]
The daemon server is a special single-CR server. Recall that each Base Application Server also required a daemon: "one daemon per cell per system."
Key Point: the Cell Marks the Administrative Domain
The Deployment Manager is managing this whole environment. The cell boundary marks the boundary of the administrative domain.
[Diagram: the full two-system cell — DM, daemons, Node Agents, and Servers A through D — enclosed by the cell boundary]
This suggests that when isolated environments are needed — for example, Test and Production — then separate cells are called for ...
Multiple Cells Per Sysplex? Yes!
[Diagram: a two-system sysplex (SYSA and SYSB, joined by a CF) hosting three cells. Cell A (PROD) spans both systems, with its DM, a daemon on each system, and a node of application servers (A/B and C/D) behind a Node Agent on each side. Cell B (TEST) sits entirely on SYSA and Cell C (QA) entirely on SYSB; each has its own DM, daemon, Node Agent, and application servers (E/F and G/H).]
Single Box, Test and Production? Yes!
• DM cells may span systems in a sysplex, but don't have to
• Cells may enjoy complete separation from one another:
  • Separate address spaces
  • Separate HFS mount points and file systems
  • Separate JNDI naming
  • Separate administrative interfaces; separate administrative domains
• If all you need is a small "sandbox" environment, consider the "Base Application Server" described earlier
• Recall that a Base App Server node is itself a cell
[Diagram: a single system, SYSA, hosting two fully separate cells — "Prod" (daemon, DM, Node Agent, Servers A and B) and "Test" (daemon, DM, Node Agent, Servers G and H)]
Let's Make This "Real"
• The past several charts introduced the following concepts:
  • The Deployment Manager
  • Nodes
  • Cells
• Let's now see how a multi-system cell is constructed, and show "real" things like the HFS and JCL procedures
How Is the Deployment Manager Created?
The process is virtually identical in concept to how the Base Application Server node was created. The ISPF Customization Dialogs walk through four steps:
1. Load security domain
2. System locations
3. System environment customization
4. Server customization
The dialogs generate jobs (CNTL), scripts and parameters (DATA), and instructions: update and copy members, then run the jobs (including BBOCCINS). The result is the WebSphere configuration in the HFS — a directory structure created and populated with XML and properties files — plus JCL start procedures for the daemon and the Deployment Manager (CR hosting the administrative application).
The DM starts out as its own cell and node.
The Deployment Manager HFS
• The HFS structure of the Deployment Manager is very close to that created for the Base App Server
• The Deployment Manager is really a special-purpose application server
/<DM mount point>
  <DMC1>.<DMN1>.<DMS1>          (symbolic link to the DM server's was.env)
  /Daemon
  /DeploymentManager
    /config
      /cells
        /<DM Cell Name>
          cell.xml
          /applications
          /clusters
          /nodes
            /<DM Node Name>
              node.xml
              serverindex.xml
              /servers
                /<DM Server Name>
                  was.env
Starting the Deployment Manager
From the z/OS console:
  S V5DMCR,JOBNAME=DM1,ENV=DMC1.DMN1.DMS1
The JCL:
  //V5DMCR PROC ENV=<CS>.<NS>.<SS>,...
  //       SET  ROOT='/<DM server root>'
  :
  //BBOENV DD PATH='&ROOT/&ENV/was.env'
• Virtually identical to how the Base App Server is started. Only difference: the BBOENV DD points to the DM server's was.env
• You could reuse the Base App Server CR JCL if the DM and the Base App Server are defined in the same HFS
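The BBOENV DD above resolves to <root>/<cell>.<node>.<server>/was.env. As a quick sanity check outside of JCL, a small shell sketch (the mount point and the ENV triplet below are hypothetical examples, not real defaults) can build that path the same way the proc does:

```shell
# Sketch: mimic how the start proc resolves BBOENV from ROOT and ENV.
# "/WebSphereDM" and "DMC1.DMN1.DMS1" are illustrative values only.
resolve_bboenv() {
  local root="$1"   # the PROC's SET ROOT= value (HFS mount point)
  local env="$2"    # the ENV= triplet: <cell>.<node>.<server>
  printf '%s/%s/was.env\n' "$root" "$env"
}

resolve_bboenv "/WebSphereDM" "DMC1.DMN1.DMS1"
# → /WebSphereDM/DMC1.DMN1.DMS1/was.env
```

On a real system the <cell>.<node>.<server> symbolic link in the mount point is what makes this path valid; if that link is missing, the server cannot find its was.env and will not start.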
Can't Create Nodes through the DM
The DM's admin application cannot create the application server nodes themselves. So how are the nodes — with their Node Agents and Servers A through D — created?
[Diagram: the familiar two-system cell, with Node 1 and Node 2 flagged "How are these created?"]
"Federating" a Base App Node into the Cell
Base App Nodes may be "federated" (joined) into a Deployment Manager cell with the "addNode" shell script. Several key things happen:
• A "Node Agent" structure is created in the HFS
• The administrative application in the federated Base App Server is disabled
• The cell name of your Base App Node is changed to match the DM's cell name
• Various XML files are updated to reflect the merging of two cells
[Diagram: before federation, Cell B1 is a base app node with its own daemon and Server D. After running the addNode shell script, the node is absorbed into the DM's Cell C1, a Node Agent is added, and the old cell's daemon is no longer needed.]
The addNode Utility
• addNode.sh <cell host> <cell port> [options]
• Mandatory parameters: Deployment Manager host and port
• Application handling during addNode:
  • By default, applications are not propagated to the cell
  • Therefore they are no longer configured for the newly managed servers
  • With the -includeApps option, addNode attempts to include existing applications from the new node
  • If an application with the same name already exists in the cell, a warning is printed and the application will not be installed in the cell
[Diagram: addNode converts a stand-alone node (app server with its own XML config) into a managed node with a Node Agent, under the Deployment Manager's multi-node XML config]
addNode Utility – Security Considerations
• A newly added node inherits security settings from the cell:
  • Add a secure node to an unprotected cell: the node will be unprotected
  • Add a secure node to a secure cell: use addNode with the -username and -password options; where security configuration settings differ, the cell's settings override the node's
  • Add an unsecured node to a secure cell: use addNode with the -username and -password options; the node will be secure
• Steps to enable security on ND after nodes are added:
  1. Ensure your RACF definitions are correctly set up (z/OS only)
  2. Change the configuration and synchronize the config files with the nodes
  3. Stop the Deployment Manager and restart it (so it starts up in secure mode)
  4. Stop and restart each of the nodes (so they switch to using the new security config)
• Complete documentation is in the InfoCenter
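Putting the syntax and the security options together, here is a hedged sketch that only builds the addNode.sh command line — the host name, port, and credentials are placeholders, and the real script must be run on the node being federated against a live Deployment Manager:

```shell
# Build (but do not run) an addNode.sh invocation.
# dmgr.example.com, 8879, and the credentials are placeholder values.
build_addnode() {
  local host="$1" port="$2" user="$3" pass="$4"
  # -includeApps asks addNode to carry existing applications into the cell
  local cmd="addNode.sh $host $port -includeApps"
  # For a secure cell, append the credentials options:
  if [ -n "$user" ]; then
    cmd="$cmd -username $user -password $pass"
  fi
  printf '%s\n' "$cmd"
}

build_addnode dmgr.example.com 8879 wsadmin secret
# → addNode.sh dmgr.example.com 8879 -includeApps -username wsadmin -password secret
```

Omitting the user ID produces the unsecured form, addNode.sh dmgr.example.com 8879 -includeApps, matching the default (no credentials) case above.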
removeNode: Detach a Node from a Cell
• Execute the command line utility "removeNode.sh" from any node to detach it from the cell
  • Stops all running server processes in the node
  • The backed-up original base configuration is restored
  • You lose all configuration changes made after joining the cell
• Uninstall also makes use of the same command
• Syntax: removeNode.sh [options]
• Alternative, to avoid loss of configuration information:
  • Don't execute removeNode
  • Set the server's standalone property to true
  • The AppServer retains its existing configuration, but no longer participates in the distributed admin network
[Diagram: removeNode returns a managed node (Node Agent plus app server) to a stand-alone node with its own XML config]
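The choice between the two detach options can be sketched as a small decision helper. This is purely illustrative: the "keep-config" flag is hypothetical, and on a real system the standalone property is set through the server's configuration, not through this function.

```shell
# Illustrative sketch of the two detach options described above.
plan_detach() {
  if [ "$1" = "keep-config" ]; then
    # Keep the post-federation configuration: don't run removeNode;
    # instead set the server's standalone property to true. The server
    # keeps its config but leaves the distributed admin network.
    echo "set standalone property to true"
  else
    # Full detach: removeNode.sh stops the node's servers and restores
    # the backed-up pre-federation configuration (later changes are lost).
    echo "removeNode.sh"
  fi
}

plan_detach keep-config
# → set standalone property to true
```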
Changes to the Base App HFS
Message: federating a Base App Server node into a Deployment Manager cell implies changes to the HFS to reflect the DM cell name (plus changes to the XML inside the files):
• The symbolic link for the server is updated to reflect the DM cell's short name: <DMC1>.<N1>.<S1>
• A new symbolic link is created for the Node Agent server: <DMC1>.<N1>.<NA1>
• Under /config/cells, the directory name is changed from the old Base App cell name to the DM cell name
• A new directory and files (including was.env) are created for the new Node Agent server under /servers
DM and Base: Separate or Common HFS?
Both are possible. The pros and cons relate to:
• Size of the HFS
• Common or separate JCL
[Diagram: option 1, a common mount point holding /Daemon, /DeploymentManager, and /AppServer, with symbolic links for both the DM and the Base App server; option 2, separate mount points — one for the DM (/Daemon, /DeploymentManager) and one for the Base App Server (/Daemon, /AppServer)]
How to Construct a Multi-Node Cell
1. Use the ISPF customization dialog to build the DM and the Base Application Servers
2. Federate the Base App Servers into the DM cell with addNode.sh
[Diagram: the ISPF dialogs create the DM on SYSA plus a base app server on each of SYSA and SYSB; addNode.sh is then run on each system to join the base nodes to the DM cell]
Adding Instances After Federation
• Additional servers in a federated node (Server B ... Server n, alongside the original Base App Server A) are created through the Admin Console
• One set of JCL serves all servers; parameters make the JCL specific to the instance
• In the HFS, each new server gets its own /servers/<Server>/was.env under /config/cells/<DM Cell Name>/nodes/<Node Name>, plus a symbolic link under the server root
• Servers may be started and stopped from the DM Admin Console!
Manual Work for Additional Servers
• What the administrative program does for you:
  • Creates the new server's HFS directories
  • Creates the XML files and the was.env file
  • Creates the symbolic link to the server's was.env file
• What you must do manually:
  • Create a RACF STARTED profile (unless you want to use an existing one)
  • Create JCL (unless you use existing JCL)
  • Create a WLM application environment
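The RACF STARTED profile step can be sketched as follows. The proc name, profile qualifier, user ID, and group below are hypothetical examples; the helper only emits the TSO/RACF commands, since they must be issued by a RACF administrator on the z/OS system itself:

```shell
# Emit (not execute) the RACF commands to define a STARTED profile for a
# new server's start proc. All names here are illustrative placeholders.
racf_started_cmds() {
  local proc="$1" qualifier="$2" userid="$3" group="$4"
  printf 'RDEFINE STARTED %s.%s* STDATA(USER(%s) GROUP(%s))\n' \
    "$proc" "$qualifier" "$userid" "$group"
  printf 'SETROPTS RACLIST(STARTED) REFRESH\n'
}

racf_started_cmds V5ACR SRV3 ASSR1 WSCFG1
```

The SETROPTS refresh makes the new STARTED profile active; without it, the started task would not pick up the assigned identity.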
Revisit the JCL Start Procedures
Assume the whole Deployment Manager cell is under the same HFS mount point:
• Key assumption: all servers are under the same "root"
• The PROC SET PATH= value names the mount point; command-line override is not possible
• The PROC ENV= value is the specific pointer to the was.env file for each server
• The PROC JOBNAME= value gives the started job a unique name
• WLM invokes the servant proc with the ENV= parameter; the application environment is named in was.env
Three "generic" procs then cover the whole cell: a daemon proc (V5DMN, PGM=BBODAEMN), a "generic" controller proc (V5ACR, PGM=BBOCTL), and a "generic" servant proc (V5ASR, PGM=BBOSR).
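Because one generic controller proc serves every server, only the JOBNAME= and ENV= parameters change from server to server. A sketch of building the z/OS START command (the proc name V5ACR is from the chart; the job and cell/node/server names are illustrative):

```shell
# Build the START command for a server using one generic controller proc.
# The JOBNAME and the ENV triplet vary per server; the proc does not.
start_server_cmd() {
  local proc="$1" jobname="$2" cell="$3" node="$4" server="$5"
  printf 'S %s,JOBNAME=%s,ENV=%s.%s.%s\n' \
    "$proc" "$jobname" "$cell" "$node" "$server"
}

start_server_cmd V5ACR AZSR01A DMC1 N1 S1
# → S V5ACR,JOBNAME=AZSR01A,ENV=DMC1.N1.S1
```

The same pattern applies to the daemon proc; the servant proc is not started from the console at all, since WLM starts it with the ENV= parameter when work arrives.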
Federate Multiple Nodes on a Box?
• Yes, this is allowed
• You can federate different base application server nodes on the same system or LPAR into a Deployment Manager cell
• Only one shared daemon is needed, because the DM node and both AppServer nodes are all in the same cell on the same image
• This is not a good approach for "Test" and "Production" configurations
[Diagram: one system hosting a single cell containing the DM node plus two application server nodes (Servers A/B and C/D), each with its own Node Agent, all sharing one daemon]
Clustering of Servers in a Cell
• A cluster is a grouping of two or more servers within a cell into a logical entity
• Clusters cannot span cells. They may span systems in a sysplex, but only if the cell also spans those systems
• Applications are then installed into the cluster; WebSphere will automatically deploy the app into all the servers in the cluster
[Diagram: step 1, Servers B (SYSA) and D (SYSB) are clustered; step 2, an application is installed into the cluster and appears in the HFS on both systems]
Different Ways to Cluster Servers
• Servers are clustered through the administrative interface
• Any given server may be a member of only one cluster at a time; you can't have Server C be a member of two different clusters, for example
• A hybrid of vertical and horizontal clustering is permitted
[Diagram: a "vertical" cluster of Servers A, B, and C within one node on one system; a "horizontal" cluster of Servers D, E, and F spanning nodes on two systems]
Failover and Load Balancing
• External balancing via a device like WebSphere Edge Server
• Internal balancing and failover on JNDI lookups: clustered objects are deployed with a cluster JNDI name, and WebSphere automatically resolves the lookup to an available cluster member (with affinity handled by the WebSphere Affinity Plugin)
[Diagram: requests from the network reach the cluster of Servers B (SYSA) and D (SYSB); a JNDI lookup from Server A resolves into the cluster]
Base App Node Cell versus DM Cell

Configuration
• Base App Node Cell: each base app node cell requires the ISPF panels; additional servers within the node via the admin console
• Deployment Manager Cell: initial cell configuration through the ISPF panels; nodes added through federation; additional servers and clustering through the admin console

Address Spaces
• Base App Node Cell: minimum 3 (Daemon, CR, SR); maximum limited only by resources
• Deployment Manager Cell: minimum 6 (Daemon, DM CR, DM SR, Node Agent, CR, SR); maximum limited only by resources

Administrative Isolation
• Base App Node Cell: each standalone server is a separate administrative domain from the others
• Deployment Manager Cell: each DM cell is an isolated domain; nodes and servers within a DM cell are within the same admin domain

Operational Isolation
• Base App Node Cell: servers may be started and stopped independently; no JNDI naming conflicts; no application name conflicts with other servers
• Deployment Manager Cell: servers may be started and stopped independently; some JNDI conflicts possible; application name conflicts possible

Multiple Server Regions
• Base App Node Cell: only if the administrative application is not present
• Deployment Manager Cell: yes

Clustering
• Base App Node Cell: none
• Deployment Manager Cell: yes
Unit Summary
Having completed this unit, you should be able to:
• Describe the change in component structure and functionality when a base application server is migrated to Network Deployment mode
• Evaluate the advantages of making this migration
• Design a Network Deployment structure for WebSphere for z/OS
• Use the WebSphere tools supplied to migrate a base application server to a Network Deployment environment