Chapter 3: Scalability
Chapter objectives • Upon completion of this chapter the student should be able to understand: • What scalability means • Differences between scaling in and scaling out • Mainframe’s hardware relationship with scalability • Software scalability levels • Parallel Sysplex relationship with scalability • Workload management main concepts
Introduction to scalability • Some definitions: • Hardware capability of a system to increase performance under an increased load when resources are added (From Wikipedia Encyclopedia http://en.wikipedia.org/wiki/Scalability) • Software ability to grow with your needs. A scalable software package means that you only buy the parts you need, and that it has the ability to grow by adding on as you grow. (From The Concise Tech Encyclopedia: http://www.tech-encyclopedia.com/term/scalability)
Scalability concepts • Scalability approaches • Scale vertically or scale up: add resources to a single node in a system • Scale horizontally or scale out: add nodes to a system
Scalability concepts • Scalability influences • Vertical growth: upgrade the installed server processor capacity to a larger one within the same family • Horizontal growth through Parallel Sysplex: add processor capacity by adding more servers in a cluster.
Scalability influences: be realistic
[Chart: relative performance versus number of ways (1 to 31). IBM System z scaling is compared with ideal linear growth; the measured curve is sublinear, falling increasingly below the linear line as processors are added.]
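To see what "sublinear" means in practice, the small sketch below models n-way capacity with an assumed per-engine efficiency factor. The 0.97 factor and the 1000-unit single-engine figure are illustrative assumptions, not IBM measurements; the point is only that each added engine contributes slightly less than the previous one, so total capacity falls away from the linear ideal.

```python
# Minimal sketch (illustrative only): why n-way scaling is sublinear.
# The "MP effect" factor (0.97 per additional processor) and the
# single-engine capacity are assumed values, not published figures.

def relative_capacity(n_way: int, single_engine: float = 1000.0,
                      mp_factor: float = 0.97) -> float:
    """Estimate total capacity of an n-way SMP.

    Each additional engine contributes a little less than the previous
    one because of shared-cache and serialization overhead.
    """
    return single_engine * sum(mp_factor ** i for i in range(n_way))

for n in (1, 8, 16, 32):
    linear = n * 1000.0
    actual = relative_capacity(n)
    print(f"{n:2d}-way: linear {linear:8.0f}  modelled {actual:8.0f} "
          f"({actual / linear:.0%} of linear)")
```

Under this assumed factor a 32-way delivers only about 65% of the linear ideal, which is the kind of gap between the measured curve and the linear line that the chart above illustrates.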
Scalability concepts • Provisioning • Provisioning is the end-to-end capability to automatically deploy and dynamically optimize resources in response to business objectives in heterogeneous environments. • It helps to respond to changing business needs • It is a critical step toward being able to orchestrate the entire environment to respond to business needs on demand.
IBM System z implementation – Hardware scalability
Balanced system design: CPU capacity (n-way), memory, and system I/O bandwidth grow together across server generations:
• zSeries 900: up to 16-way, 64 GB memory, 24 GB/sec system I/O bandwidth, 1-way ITR 288.15
• zSeries 990: up to 32-way, 256 GB memory, 96 GB/sec system I/O bandwidth, 1-way ITR 450
• System z9 109*: up to 54-way, 512 GB memory, 172.8 GB/sec system I/O bandwidth, 1-way ITR ~600
(The chart also shows the earlier Generation 5 and Generation 6 servers for comparison.)
*z9-109 exploits a subset of its designed I/O capability
ITR = Internal Throughput Rate
IBM System z processors – z9 Model 38 configuration
• Four processor books (Book 0–3), interconnected by a ring structure
• Each book contains its own memory cards, a shared L2 cache, and multiple PUs (processor units)
• Each book carries 8 MBA fanout cards providing 16 STIs; each MBA fanout card has 2 STI ports, and STI connectivity is normally balanced across all installed books
• The MBA supports 2 GB/sec for ICB-3 and ICB-4 links and 2.7 GB/sec STIs for I/O channels (ICB-3 links actually run at 1 GB/sec)
• STI-MP and STI-A8 cards in the I/O cages drive the I/O cards at 1 GB/sec, 500 MB/sec, or 333 MB/sec, with the speed set based on the I/O type
• The I/O cages house the I/O cards and ports: ESCON, FICON Express2, OSA-Express2, and Crypto Express2
Scalability of IBM System z – Parallel Sysplex
[Diagram: several System z servers (z9 EC and z9 BC) connected to a Coupling Facility and synchronized by a Sysplex Timer, accessing shared data over ESCON/FICON channels.]
Parallel Sysplex • Serialization: to coordinate access to resources • Enqueuing: serialization for a large number of resources • Locking: extremely quick, but only for a small number of resources • Communication: the cross-system coupling facility (XCF) provides simplified multisystem management services within a base sysplex configuration
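As an illustration of the serialization idea (not z/OS GRS itself), the hedged sketch below models enqueue-style coordination: a unit of work names the resource it needs, holds it exclusively while working, and releases it afterwards. The qname/rname values and the data-set name are hypothetical.

```python
# Minimal sketch (illustrative only, not z/OS GRS): enqueue-style
# serialization -- a unit of work names the resource it needs and holds
# it exclusively until it is done. All names below are hypothetical.
import threading
from contextlib import contextmanager

_resource_locks = {}             # one lock per (qname, rname) pair
_table_guard = threading.Lock()  # protects the lock table itself

@contextmanager
def enq(qname: str, rname: str):
    """Serialize on a named resource; release it (the DEQ) when done."""
    with _table_guard:
        lock = _resource_locks.setdefault((qname, rname), threading.Lock())
    lock.acquire()               # comparable to ENQ with exclusive control
    try:
        yield
    finally:
        lock.release()           # comparable to DEQ

def update_dataset(name: str, payload: str) -> None:
    # Two units of work updating the same data set are serialized here.
    with enq("SYSDSN", name):
        print(f"updating {name}: {payload}")

update_dataset("PROD.PAYROLL.DATA", "new records")
```

The locking services mentioned above serve the same purpose for a small, fixed set of resources, traded for much lower overhead, which is why locking is so much faster than enqueuing.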
Parallel Sysplex (Cont...) • Data sharing and the Coupling Facility: the Coupling Facility allows multiple systems in the sysplex to read and update the same data with integrity
Parallel Sysplex (Cont...) • Workload distribution • Manually • Round robin • Dynamic workload distribution • Workload Management-driven application servers
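The sketch below contrasts two of the policies listed above: round robin and a simple dynamic (least-loaded) choice of the kind a WLM-driven distributor aims for. The system names and load figures are made up for illustration.

```python
# Minimal sketch (illustrative only): two workload distribution policies.
# Server names and load figures are hypothetical.
from itertools import cycle

servers = ["SYSA", "SYSB", "SYSC"]

# Round robin: requests are spread evenly regardless of each system's load.
round_robin = cycle(servers)
rr_targets = [next(round_robin) for _ in range(6)]
print("round robin :", rr_targets)

# Dynamic distribution: route each request to the least-loaded system,
# which is roughly what WLM-driven routing recommendations aim for.
current_load = {"SYSA": 0.72, "SYSB": 0.35, "SYSC": 0.90}

def pick_least_loaded() -> str:
    return min(current_load, key=current_load.get)

print("dynamic     :", pick_least_loaded())
```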
Provisioning • Dynamic Resource Distribution • Up to 60 logical partitions (LPARs) • Each LPAR is completely isolated and protected • Processors can be shared • Workload Manager (WLM) can distribute processing resources across LPAR clusters • I/O bandwidth can be shared among LPARs under WLM control • Each LPAR has its own physical memory, which can be altered dynamically
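A hedged sketch of the shared-processor idea: shared engines are apportioned among LPARs according to their relative weights. The LPAR names, weights, and engine count below are assumptions for illustration only.

```python
# Minimal sketch (assumed weights, illustrative only): how shared
# processor capacity is apportioned among LPARs by relative weight.
lpar_weights = {"PRODPLEX": 700, "TESTPLEX": 200, "DEVPLEX": 100}
physical_engines = 16  # shared physical processors (assumed)

total = sum(lpar_weights.values())
for lpar, weight in lpar_weights.items():
    share = weight / total
    print(f"{lpar:9s} weight {weight:4d} -> {share:5.1%} "
          f"of shared capacity (~{share * physical_engines:.1f} engines)")
```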
Capacity on Demand (CoD) • CoD encompasses the various capabilities for you to dynamically activate one or more resources in your server as your business peaks dictate. Different CoD options: • Capacity Upgrade on Demand (CUoD) • Customer Initiated Upgrade (CIU) • On/Off Capacity on Demand
Workload Manager (WLM) • The idea of Workload Manager is to make a contract between the installation (end user) and the operating system. The installation classifies the work running on the z/OS operating system into distinct service classes and defines goals for them that express the expectation of how the work should perform. WLM uses these goal definitions to manage the work across all systems of a Parallel Sysplex environment.
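The sketch below illustrates the contract idea in miniature: service classes with goals and importance levels, plus a toy classification rule. The class names, goals, and rules are hypothetical, not a real WLM service definition.

```python
# Minimal sketch (illustrative only): the WLM "contract" -- work is
# classified into service classes with goals and importance levels.
# Class names, goals, and rules below are hypothetical.
from dataclasses import dataclass

@dataclass
class ServiceClass:
    name: str
    goal: str          # e.g. a response-time or velocity goal
    importance: int    # 1 = most important ... 5 = least important

service_classes = {
    "ONLINE":  ServiceClass("ONLINE",  "90% of transactions < 0.5 s", 1),
    "BATCHHI": ServiceClass("BATCHHI", "execution velocity 50",       3),
    "BATCHLO": ServiceClass("BATCHLO", "discretionary",               5),
}

def classify(subsystem: str, userid: str) -> ServiceClass:
    """Toy classification rule: CICS work is ONLINE, the rest is batch."""
    if subsystem == "CICS":
        return service_classes["ONLINE"]
    return service_classes["BATCHHI" if userid.startswith("PRD") else "BATCHLO"]

print(classify("CICS", "TELLER01").goal)
print(classify("JES2", "DEVUSER1").name)
```

In a real service definition the classification rules key on attributes such as subsystem type, user ID, and transaction name, and WLM then samples the work and adjusts resource access to meet the goals in importance order.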
WLM (Cont...) • Work unit identification • Managing units of work on z/OS (e.g., a transaction)
WLM (Cont...) • Defining the service level • Importance of a goal • Adjustment routine • Workload Management controls
WLM (Cont...) • WLM extensions • Intelligent Resource Director (IRD) • LPAR CPU management • Dynamic channel path management • Channel subsystem I/O priority queuing
Summary • The New Mainframe: • is scalable at both the hardware and the software level • Parallel Sysplex provides horizontal (scale-out) growth • WLM manages work across the sysplex to meet defined goals
Key terms in this chapter • Access time • CF • CoD • Communication • Coupling facility • Enqueuing • IRD • ITR • Locking • LPAR • Parallel Sysplex • Provisioning • Scalability • Scale in • Scale out • Serialization • SLA • WLM • Workload