Learn how the channel subsystem in System Z manages data flow between I/O devices and storage, optimizing processing efficiency. Explore the roles of channels, control units, and subchannels in this key system component.
IO_for System Z Basics: The channel subsystem directs the flow of information between I/O devices and main storage. It relieves CPUs of the task of communicating with I/O devices and permits data processing to proceed concurrently with I/O processing. The subsystem uses one or more channel paths as the communication links that manage the flow of information to or from I/O devices; each channel path is identified by a Channel Path ID (CHPID).
IO_for System Z Basics: Channels are the communication paths from the CSS to the control units and I/O devices
- A Channel Path Identifier (CHPID) is assigned to each path, uniquely identifying that path
- Control Units provide the capabilities to operate and control an I/O device
- Subchannels provide the appearance of a device and contain the information for sustaining an I/O operation
IO_for System Z Basics Within the subsystem are subchannels. One subchannel is provided for, and dedicated to, each I/O device. Each subchannel provides information concerning the associated I/O device and its attachment to the subsystem. The subchannel also provides information concerning I/O operations and other functions involving the associated I/O device. The subchannel is the means by which the subsystem provides information about an associated I/O device to the CPUs.
IO_for System Z Basics I/O devices are attached through control units to the subsystem by means of channel paths. An I/O device may be attached to more than one control unit. Control units may be attached to the subsystem by more than one channel path. An individual I/O device may be accessible to the subsystem, through its subchannel, by as many as eight different channel paths. The total number of channel paths provided by a subsystem depends on the model and the configuration.
IO_for System Z The Subchannel and Channel Program IDAW The subchannel consists of internal storage that contains information in the form of a Channel Program (one or more Channel Command Words (CCWs) chained together). Each 8-byte CCW contains a command code, flags, a byte count, and a data address. Non-contiguous real memory is supported by a construct called an indirect data address word (IDAW) list. The IDAW list allows scattering of data across non-contiguous real pages and permits 64-bit addressing. I/O operations are initiated with a device by the execution of I/O instructions that designate the subchannel associated with the device. The first IDAW designated by a CCW can designate any location. Data is then transferred for read, write, channel status, etc.
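As a rough illustration of these structures, the sketch below lays out a format-1 CCW and an IDAW list in C. The struct and field names are ours, the READ command code is only illustrative, and the flag values are assumptions based on the published CCW format; treat it as a sketch, not a definitive layout.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative layouts only -- field names are ours, widths follow the
 * published z/Architecture formats (format-1 CCW, format-2 IDAW).      */

/* Format-1 Channel Command Word: 8 bytes. */
struct ccw1 {
    uint8_t  cmd;        /* command code, e.g. read, write, control    */
    uint8_t  flags;      /* chaining and addressing flags (see below)  */
    uint16_t count;      /* byte count for the data transfer           */
    uint32_t data_addr;  /* 31-bit data address (or IDAW-list address
                            when the IDA flag is set)                  */
};

/* A few CCW flag bits (assumed values):                               */
#define CCW_FLAG_CD   0x80   /* chain data to the next CCW             */
#define CCW_FLAG_CC   0x40   /* chain command to the next CCW          */
#define CCW_FLAG_SLI  0x20   /* suppress incorrect-length indication   */
#define CCW_FLAG_IDA  0x04   /* data_addr points to an IDAW list       */

/* Format-2 IDAW: one 64-bit absolute address per 2K/4K block of data.
 * The IDAW list lets one CCW move data into non-contiguous real pages. */
typedef uint64_t idaw2;

/* Sketch: build a CCW that reads `count` bytes scattered across the
 * real pages listed in an IDAW list (first IDAW may point mid-page).  */
static struct ccw1 make_scatter_read(uint32_t idaw_list_addr, uint16_t count)
{
    struct ccw1 ccw = {
        .cmd       = 0x02,            /* READ (illustrative command)   */
        .flags     = CCW_FLAG_SLI | CCW_FLAG_IDA,
        .count     = count,
        .data_addr = idaw_list_addr,  /* address of the IDAW list      */
    };
    return ccw;
}

int main(void)
{
    /* Pretend the IDAW list lives at real address 0x2000. */
    struct ccw1 ccw = make_scatter_read(0x2000, 8192);
    printf("cmd=%02X flags=%02X count=%u data=%08X\n",
           ccw.cmd, ccw.flags, ccw.count, ccw.data_addr);
    return 0;
}
```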
IO_for System Z The Modified IDAW (MIDAW) facility is an indirect-address-word facility added to z/Architecture to coexist with the original IDAW facility. The MIDAW facility is a method of gathering or scattering data from and into discontinuous storage locations during an I/O operation. z/Architecture supports this indirect addressing, implementing the Modified Indirect Data Address Word facility for both ESCON and FICON channels. Use of the MIDAW facility by applications that currently use data chaining results in improved channel throughput in FICON environments.
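A minimal sketch of a MIDAW entry follows; the 16-byte width and the idea that each entry carries its own count come from the facility description above, but the field names and flag values are assumptions.

```c
#include <stdint.h>

/* Illustrative Modified IDAW (MIDAW) layout -- 16 bytes per entry.
 * Unlike a plain IDAW, each MIDAW carries its own byte count, so a
 * single CCW can gather/scatter arbitrary-length fragments without
 * CCW data chaining.  Field names are ours; flag values are assumed. */
struct midaw {
    uint8_t  reserved[5];
    uint8_t  flags;      /* e.g. "last MIDAW in list", "skip transfer" */
    uint16_t count;      /* bytes to move to/from this fragment        */
    uint64_t data_addr;  /* 64-bit absolute address of the fragment    */
};

#define MIDAW_FLAG_LAST  0x80   /* last entry in the MIDAW list (assumed) */
#define MIDAW_FLAG_SKIP  0x40   /* suppress data transfer for this entry  */
```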
IO_for System Z The Channel Subsystem (CSS) a.k.a. The Subsystem The CSS comprises all of the hardware and firmware required to implement the channel architecture. Dedicated I/O processors, known as System Assist Processors (SAPs), together with the I/O channel paths, execute the bulk of the I/O instructions as well as I/O interrupt processing. Firmware in the CPs initiates the I/O and participates in the handling of I/O interruptions. The CSS directs the flow of information between I/O devices and main storage. The CSS uses one or more channel paths as the communication links in managing the flow of this information.
IO_for System Z – The CSS The Channel Subsystem (CSS) a.k.a. The Subsystem The CSS also performs:
- channel-path management, e.g. testing path availability,
- selecting an available channel path, and
- initiating execution of I/O operations.
When I/O operations are completed, the CSS
- analyzes the resulting status and
- transmits it back to the program by interrupts and status information.
IO_for System Z -The CSS To provide communication about the I/O configuration, a set of control blocks is allocated in the Hardware System Area (HSA) [storage accessible only to the embedded firmware]. One such control block in the HSA is the subchannel control block (SCB). Each SCB contains much of the information about a device; there is one HSA subchannel for each device associated with an LPAR. It contains the information required to communicate with the associated I/O device. An SCB holds information such as the channel program address, path selection controls, the device address, and subchannel and device status. This is the major control block used to pass information among the elements in the CSS. There are additional control blocks to manage I/O operations with the channels, while others allow the queuing of work or interruptions.
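To make this concrete, here is an illustrative C sketch of the kind of state an SCB carries. The real control block lives in the HSA and its layout is private to the firmware, so every field name below is ours.

```c
#include <stdint.h>

/* Illustrative subchannel control block (SCB) -- the real layout is in
 * the HSA and private to the firmware; this sketch only shows the kind
 * of state the text describes.                                          */
struct scb {
    uint16_t device_number;      /* device address visible to the OS       */
    uint64_t channel_prog_addr;  /* address of the first CCW to execute    */
    uint8_t  path_install_mask;  /* channel paths physically installed     */
    uint8_t  path_avail_mask;    /* paths currently usable for selection   */
    uint8_t  last_path_used;     /* path on which the last I/O ran         */
    uint8_t  subchannel_status;  /* e.g. start pending, status pending     */
    uint8_t  device_status;      /* unit status returned by the device     */
    uint8_t  io_priority;        /* priority used when queuing the request */
};
```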
IO_for System Z Sharing Channels: Channels are shared within a CSS via the Multiple Image Facility (MIF) and across CSSs using Spanned Channels.
- I/O sharing was possible in a pre-MIF ESCON environment, where multiple systems could share control units, devices, and common links through ESCON device features.
- Under this scheme, however, channel assignment to an LPAR was more 'static'.
- Channels could only be defined as reconfigurable, enabling them to be administratively removed from one logical partition (LPAR) and attached to another.
- They were dedicated to one partition at a particular time and could not be shared by other partitions.
IO_for System Z -Multiple Image Facility (MIF) See fig IO.7 With MIF, the server's channel subsystem provides channel path sharing by extending the access capability of the channel architecture to logical partitions. MIF provides the same communication between logical partitions and I/O devices, but using fewer physical channels and therefore fewer ports and, possibly, fewer control unit link interfaces. Also, manual reassignment of channels between logical partitions to handle different workloads is no longer necessary, which improves reliability and availability and lowers cost, since additional channel cards need not be acquired.
IO_for System Z - MIF Using MIF, each LPAR has its own view of a shared channel (a channel path image) and of each control unit connected to the shared channel. The allocation of additional channels when adding new logical partitions, or for availability reasons, is no longer required. MIF eases configuration management tasks such as enabling disaster backup solutions, consolidating applications, and providing migration, test, and other special environments. Moreover, MIF improves configuration flexibility, especially in handling greater numbers of logical partitions, through easier access to control units.
IO_for System Z – Multiple Channel Subsystem Images Multiple channel subsystem considerations. The mainframe can support more than one channel subsystem: the Multiple Channel SubSystem (MCSS) facility. MCSS intentions were to:
1) Minimize the changes necessary to provide greater I/O capacity,
2) Build upon and increase the MIF channel-sharing capabilities,
3) Ensure backward compatibility with previous mainframe computing environments.
Each Logical CSS (LCSS) may have from 1 to 256 channels and may in turn be configured with 1 to 15 logical partitions (LPARs). The LCSS uses virtualization in order to share channels among the CSSs.
IO_for System Z Several z/Architecture constraints were redefined in a manner that minimizes the impact of providing more than 256 channels and associated I/O devices on the z platform operating systems. Specifically, the channel-path identifier (CHPID) had to be maintained. The CHPID value is an 8-bit binary number; therefore, a maximum of 256 channel paths were possible on previous S/370, S/390, and early z/Architecture-class systems. This 8-bit CHPID has been maintained without change because of its pervasive use in the z/OS and z/VM operating systems.
IO_for System Z – The PCHID To accommodate more than 256 channel paths, an additional level of channel addressing indirection was created. This allows more than 256 physical channel paths to be installed without changing the legacy 8-bit CHPID value. This new channel-path identification value, called the physical-channel identifier (PCHID), is a 16-bit binary number ranging from 0 to 65,279, and identifies each installed channel path. With the current mainframe platform, a maximum of 1024 external channel paths and 48 internal channel paths are each assigned a unique PCHID. The PCHID value is transparent to the programs operating in each LPAR.
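The sketch below models this extra level of indirection: each logical CSS keeps its own 8-bit CHPID space and maps every CHPID to a 16-bit PCHID. The table structure and names are illustrative assumptions, not the machine's internal format.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Each LCSS keeps its own 8-bit CHPID space; each CHPID maps to a
 * 16-bit physical channel identifier (PCHID).  Illustrative only.   */
#define CHPIDS_PER_LCSS  256      /* 8-bit CHPID keeps legacy OS interfaces */
#define PCHID_UNASSIGNED 0xFFFF

struct lcss_map {
    uint16_t pchid[CHPIDS_PER_LCSS];   /* PCHID assigned to each CHPID */
};

/* Resolve the physical channel for (LCSS, CHPID); LPAR software only
 * ever sees the 8-bit CHPID, so legacy interfaces are unchanged.      */
static bool resolve_pchid(const struct lcss_map *lcss, uint8_t chpid,
                          uint16_t *pchid_out)
{
    uint16_t pchid = lcss->pchid[chpid];
    if (pchid == PCHID_UNASSIGNED)
        return false;              /* no physical channel behind this CHPID */
    *pchid_out = pchid;
    return true;
}

int main(void)
{
    struct lcss_map css0;
    for (int i = 0; i < CHPIDS_PER_LCSS; i++)
        css0.pchid[i] = PCHID_UNASSIGNED;
    css0.pchid[0x80] = 0x0140;         /* wire CHPID 80 to PCHID 140 */

    uint16_t pchid;
    if (resolve_pchid(&css0, 0x80, &pchid))
        printf("CHPID 80 -> PCHID %03X\n", pchid);
    return 0;
}
```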
IO_for System Z - Spanned Channels Spanned channels Channel paths are called "spanned" when they (and their attached I/O devices) can be dynamically and transparently shared by programs operating in LPARs that are configured to different channel subsystems, that is, the channel paths span multiple subsystem images. Each configured LPAR is assigned to an appropriately defined CSS in order to accommodate the requirements of the operating system and application programs that are executed in the LPAR. See fig IO.8
IO_for System Z - Parallel Access Volumes (PAV) This feature is incorporated into defining MSS. In the past, any single device located amongst the DASD farm used one hardware address. Therefore, only one I/O request at a time could access data on that disk. Other I/O requests targeted for that device were queued, waiting for their turn. This could have a severe performance impact, which could easily back up the overall mainframe workload. That is, when there was an active I/O to a disk volume, its hardware address was flagged "busy". For Parallel Access Volumes, there are multiple addresses associated with the same logical volume, and each such address is associated with a subchannel known as a PAV Alias.
IO_for System Z - Parallel Access Volumes (PAV) Thus, a PAV disk is represented by a base address and possibly one or more aliases. While the mainframe I/O architecture permits a unit address (and its associated subchannel) to handle only a single request at a time, PAV supports multiple concurrent I/O requests from the same system against the same logical volume.
- Using this technology reduces I/O queuing to a device.
- Allowing multiple concurrent requests breaks the serialization to a volume.
Greater throughput is achieved transparently to the executing applications. See Fig IO.9
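A small illustrative model of base and alias exposures follows; all names and the selection logic are our own simplification of the behaviour described above.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Illustrative model of Parallel Access Volumes: one logical volume is
 * reachable through a base device address plus several alias addresses,
 * each with its own subchannel, so multiple I/O requests can be in
 * flight against the same volume at once.  All names are ours.          */
#define MAX_ALIASES 7

struct pav_exposure {
    uint16_t unit_address;   /* base or alias device address            */
    bool     busy;           /* subchannel already driving an I/O?      */
};

struct pav_volume {
    struct pav_exposure base;
    struct pav_exposure alias[MAX_ALIASES];
    int num_aliases;
};

/* Pick any non-busy exposure (base first, then aliases).  Without PAV,
 * a busy base would force the request onto a queue; with PAV it can be
 * started immediately on a free alias.                                 */
static struct pav_exposure *pick_exposure(struct pav_volume *vol)
{
    if (!vol->base.busy)
        return &vol->base;
    for (int i = 0; i < vol->num_aliases; i++)
        if (!vol->alias[i].busy)
            return &vol->alias[i];
    return NULL;   /* all exposures busy: the request must queue */
}

int main(void)
{
    struct pav_volume vol = {
        .base  = { 0x1000, true },                      /* base already busy */
        .alias = { { 0x10FE, false }, { 0x10FF, true } },
        .num_aliases = 2,
    };
    struct pav_exposure *e = pick_exposure(&vol);
    if (e)
        printf("start I/O on exposure %04X\n", e->unit_address);
    else
        printf("all exposures busy, request must queue\n");
    return 0;
}
```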
IO_for System Z - The System Assist Processor (SAP) The mainframe uses an asynchronous processor called the System Assist Processor (SAP) to drive the mainframe's channel subsystem(s). This is an I/O Processor (IOP) that takes responsibility during the execution of an I/O operation. The SAP relieves the OS (and general CP involvement) during the setup of an I/O operation. It does the scheduling of an I/O, that is,
- it finds an available channel path to the device and
- guarantees that the I/O operation starts.
The SAP is not in charge of the data movement between main storage and the channel.
IO_for System Z - The System Assist Processor (SAP) A SAP processes a Start-Subchannel (SSCH) request by locating the subchannel, or logical device, in its work queue. The requests in this queue are processed based upon the I/O priority assigned by the Workload Manager or the hardware:
- the SAP tries to locate an available channel path that succeeds in connecting to a control unit, then
- it starts the I/O operation.
The SAP uses information in the subchannel to determine which channels and control units can be used to reach the target device. Each model mainframe comes with a default number of SAP engines, although more SAPs can be added. A sketch of this flow follows.
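The C sketch below mimics the SAP's part in starting one queued request: pick a candidate channel path to the device, start the operation on the first path whose control unit accepts the connection, otherwise leave the request queued. The types, the toy path_connects check, and the command-line output are illustrative assumptions, not real firmware interfaces.

```c
#include <stdio.h>
#include <stdbool.h>

#define MAX_PATHS 8                  /* up to 8 channel paths per device */

struct subchannel {
    int device;                      /* device number                      */
    int priority;                    /* WLM/hardware-assigned I/O priority */
    int num_paths;
    int chpid[MAX_PATHS];            /* candidate channel paths            */
};

/* Hypothetical stand-in: ask whether a channel path can currently
 * connect to the device's control unit.                                */
static bool path_connects(int chpid, const struct subchannel *sc)
{
    (void)sc;
    return chpid % 2 == 0;           /* pretend even-numbered paths are free */
}

/* One pass of the SAP's job for a single queued request: pick a usable
 * channel path and start the operation, or report that it must requeue. */
static bool sap_start_io(struct subchannel *sc)
{
    for (int i = 0; i < sc->num_paths; i++) {
        if (path_connects(sc->chpid[i], sc)) {
            printf("device %04X: started on CHPID %02X (prio %d)\n",
                   sc->device, sc->chpid[i], sc->priority);
            return true;             /* the channel now moves the data itself */
        }
    }
    return false;                    /* no path available; stay queued */
}

int main(void)
{
    struct subchannel sc = { 0x1000, 3, 2, { 0x41, 0x52 } };
    if (!sap_start_io(&sc))
        printf("device %04X: all paths busy, request requeued\n", sc.device);
    return 0;
}
```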
IO_for System Z -Dynamic Channel path Management
- Originally, LPAR-to-channel mapping was static and required a reconfiguration of the LPAR to adjust resources to a changing workload.
- This was unwieldy, since workloads change during the day and channel capacity at any point was either under- or over-utilized.
- This affected the entire operation of the machine, burdening hardware cost with equipment not used, or leaving applications experiencing decreased throughput.
- It also required skilled personnel to analyze reports and do resource balancing to achieve daily service levels.
- Month-end or year-end processing required further analysis and reconfiguration.
- In instances where workload spiked, additional channel cards were likely purchased to meet the peak demands. At other times the cards went unused.
IO_for System Z- Dynamic channel path management (DCM) Dynamic channel path management (DCM) allows the system to dynamically change channel path definitions to attached DASD control units in response to changing workloads. When combined with WLM moving channel resources to the "busy" control units as required, I/O throughput is increased. DCM moves the channels to control units that are being used by business-critical workloads:
- to help them meet their service level objectives,
- to enhance availability, and
- to maximize overall hardware usage.
This Intelligent Resource Director (IRD) feature also complements PAV by moving dynamic aliases to highly utilized devices as workload changes.
IO_for System Z - IO Queuing There are a number of places where an I/O request can be queued because the resources for the next phase are unavailable. These queuing points include (see the sketch after this list):
• The UCB queue, or device address queue. This is a local queue, because it is per device (all requests come from the same OS, assuming no PAV is implemented).
• Queues within the CSS (Channel Subsystem). The CSS queues are global, because they cover all the devices of the machine and all the I/O requests coming from the LP images of the machine.
• The control unit queue. The control unit queue is also global, because it applies to all I/O requests, from all the LPs, in all the connected machines (which includes remote systems).
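Purely to make the local-versus-global distinction concrete, here is a toy C model of those three queuing points; the names and scope labels are ours.

```c
#include <stdio.h>

/* Toy model of the queuing points listed above; names and scope labels
 * are ours, chosen only to make the local-vs-global distinction visible. */
enum queue_scope {
    SCOPE_LOCAL_DEVICE,    /* UCB queue: one OS image, one device        */
    SCOPE_MACHINE_GLOBAL,  /* CSS queues: all LPARs on this machine      */
    SCOPE_SHARED_GLOBAL    /* control-unit queue: all attached machines  */
};

struct queue_point {
    const char      *name;
    enum queue_scope scope;
};

int main(void)
{
    static const struct queue_point points[] = {
        { "UCB / device-address queue", SCOPE_LOCAL_DEVICE   },
        { "channel subsystem queues",   SCOPE_MACHINE_GLOBAL },
        { "control-unit queue",         SCOPE_SHARED_GLOBAL  },
    };
    static const char *scope_name[] = { "local", "machine-global", "shared-global" };

    for (int i = 0; i < 3; i++)
        printf("%-30s scope: %s\n", points[i].name, scope_name[points[i].scope]);
    return 0;
}
```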
IO_for System Z - Summary The features described are designed to solve problems where I/O delays can affect a business's ability to meet customer demand. See Fig IO.10 for an overview of an I/O flow.