
Week Two




  1. Week Two

  2. Parallel Systems • Parallel systems are also called tightly coupled systems • Parallel systems are multi-processing systems which share the bus, the clock, memory and peripheral devices • Communication usually takes place through the shared memory • Provide increased throughput because of n processors • Save money due to sharing of peripherals, storage devices, files etc. • Increase reliability, as failure of one processor will not badly affect the system (graceful degradation, as in fault-tolerant and fail-soft systems) • Require hardware duplication for continued operation, using primary and backup processors to detect, diagnose and correct failures • Processes can share certain data structures to avoid idling or overloading of processors • Additional front-end/back-end processors (slaves) relieve the load on the main CPU

  3. Parallel Systems • Symmetric multiprocessing (SMP) • Each processor runs an identical copy of the operating system • Many processes can run at once without performance deterioration • SMP means that all processors are peers: no master-slave relationship exists between them • Asymmetric multiprocessing (AMP) • A master-slave relationship exists between processors • Each processor is assigned a specific task; the master processor schedules and allocates work to the slave processors • More common in extremely large systems

  4. Distributed Systems • A collection of components that execute on different computers • Distribute the computation among several physical processors using networks (LAN/MAN (Metropolitan Area Network)/WAN) • Termed loosely coupled systems, as processors do not share memory or a clock • Distributed processors may vary in size and function in accordance with their utilization at the various sites • Processors/sites/nodes/computers communicate through high-speed communication lines such as high-speed buses, telephone lines, microwave dishes, radio links etc. • Characteristics of distributed systems are: • Multiple independent components • Heterogeneous systems • Components are not shared by all users • Concurrent processing at different processors • Multiple points of control • Multiple points of failure (but more fault tolerant)

  5. Distributed Systems • Implementation Challenges • Heterogeneity • Resource sharing • Openness • Concurrency • Scalability • Fault tolerance • Transparency • Security • Coherency • Advantages of distributed systems • Increased reliability and availability • Local processing • Fast response • Lower communication costs • Data and load sharing • Modular growth • Protection and security • Resource sharing • Enhanced output

  6. Distributed Systems • Client-Server Systems (Architecture) • Services are provided by servers and used by clients • Clients know about servers, but servers need not know about clients • Clients and servers are logical processes • The mapping of processes to processors is not necessarily 1:1 • Centralized systems today act as server systems to satisfy requests generated by client systems using PCs • Servers can be categorized as: • Compute-Server Systems – provide an interface to which clients can send requests to perform an action; in response they perform the action and send the results back to the respective client • File-Server Systems – provide a file-system interface where clients can create, update, read and delete files

  7. Distributed Systems • Peer-to-peer systems • End users share resources via exchange between computers, so information is distributed among member nodes instead of concentrated at a single server (decentralized computing) • Examples: messengers in communication, audio/video in remote collaboration, distributed computing and file sharing • Advantages: increased extensibility, higher system availability, sharing of files and resources, exchange of messages, optimized loading of processors, concurrent processing for enhanced output, higher throughput and improved resilience • Structural characteristics: inherent scalability, self-organization, congestion minimization and fault tolerance

  8. Clustered Systems • Clustered computers share resources and are very closely linked via LAN networking • Clustering provides high reliability • In asymmetric clustering, one machine is in hot-standby mode while the other runs the applications (the active server); the standby takes over when the active server fails • In symmetric clustering, all N hosts (two or more) run applications and monitor each other; this is more efficient • Research directions include global clusters, an area of ongoing R&D • Storage Area Networks (SAN) allow easy attachment of multiple hosts to multiple storage units

  9. Basic System • Group of independent servers which: • Function as a single system • Appear to users as single system • Are managed as a single system

  10. Other Types of Clusters • Parallel Clusters – allow multiple hosts to access the same data on shared storage (e.g. Oracle Parallel Server) • Clustering over WAN • Most systems do not allow shared access to data on a disk • Distributed Lock Manager (DLM) – distributed file systems must provide access control and locking of files to ensure no conflicting operations occur; a DLM facility is therefore included • High-Availability Clusters • High-availability clusters exist to keep the overall services of the cluster available as much as possible, taking into account the fallibility of computer hardware and software • Load-Balancing Clusters • Load-balancing clusters provide a more practical system for business needs; as the name implies, these systems share the processing load as evenly as possible across a cluster of computers running the same set of applications • Load-balancing clusters distribute the network or compute load across multiple nodes • Such a system is perfectly suited to a large number of users • MOSIX • MOSIX uses a modified kernel to create a process-load-balanced cluster • Servers and networks can join or leave the cluster, growing or shrinking it dynamically
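As a minimal illustration of the load-balancing idea above, the round-robin sketch below hands each incoming request to the next node in turn; the node names are illustrative, and real load balancers typically also weigh node capacity and health.

```python
import itertools

# Round-robin load balancing across cluster nodes: each incoming request
# goes to the next node in turn, spreading load evenly.
nodes = ["node1", "node2", "node3"]   # illustrative node names
next_node = itertools.cycle(nodes)

def assign(request_id):
    """Return the node that should handle this request."""
    return next(next_node)

# Six requests are spread evenly: each node receives exactly two.
assignments = [assign(i) for i in range(6)]
```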

  11. Why are Clusters Important? • Clusters improve systems availability • Clusters enable application scaling • Clusters simplify system management • Clusters are a superior server solution

  12. Clusters Improve System Availability • When a network server fails, the service it provides is down • When a cluster server fails, the service it provides fails over to another node and downtime is avoided

  13. Clusters Enable Application Scaling • With networked SMP servers, application scaling is limited to one server • With clusters, applications scale across multiple SMP servers (typically up to 16 servers)

  14. Clusters simplify system management • Clusters present a Single System Image i.e. the cluster looks like a single server to management applications • Hence, reduce management costs

  15. Real-Time Systems • Used as control devices in dedicated applications • Sensors gather information, the computer analyzes it and adjusts the appropriate controls to modify the sensor output • Hard Real-Time Systems • Guarantee that critical tasks are completed in time • Used with controls and robotics requiring precision movements • Conflict with time-sharing systems and are not supported by general-purpose operating systems • Secondary storage is limited or absent; data is stored in short-term memory or read-only memory (ROM) • Soft Real-Time Systems • A critical task gets priority over other tasks and retains that priority until its execution is completed • Due to lack of deadline support, these are risky to use for industrial controls and robotics • Useful in applications (multimedia, virtual reality) requiring advanced operating-system features

  16. Real-Time Systems • Such systems are used in controlled scientific experiments, medical imaging systems and industrial control systems • Processing must be done within the defined constraints or the system will fail (quick response) • More RAM is required • Advanced OS features such as virtual memory are not needed

  17. Handheld Systems • PDAs such as Palm Pilots, or cellular phones with connectivity to networks such as the Internet • Small in size and light in weight • Issues: • Limited memory • Slow processors • Small display screens (web clipping is used to display content from web pages) • Some handheld devices also use wireless technology (WAP), allowing remote access to e-mail and web browsing • The major benefits of handheld systems are convenience and portability

  18. Computing Environment • Traditional Computing • PCs, terminals, laptops, etc. attached to networks • Portals provide web access to servers • Handheld devices are used to get necessary information • Firewalls are used in some applications for security purposes • Web-Based Computing • Workstations, handheld PDAs and cellular phones provide access to web-based computing • It has increased the emphasis on networking (wired or wireless access) • It provides faster network connectivity • Load balancers distribute network connections among a pool of similar servers • Embedded Computing • Computers run embedded real-time systems • These devices are found everywhere (car engines, robots, ovens, controllers, etc.) • They have little or no user interface • Can be used to computerize houses (central heating and lighting, alarm systems, etc.)

  19. Computer – System Structures Chapter 2 • Computer System Operations • I/O Structure • Storage Structure • Storage Hierarchy • Hardware Protection • General System Architecture

  20. Computer-System Operations • A modern computer consists of a CPU, memory, a system bus and a number of device controllers • I/O devices and the CPU can execute concurrently • Each device controller is in charge of a particular device type • Each device controller contains local buffer storage and a set of special-purpose registers • A bootstrap program is required to initialize the computer system • The CPU moves data from/to main memory to/from the local buffers • I/O is from the device to the local buffer of the controller • The device controller informs the CPU that it has finished its operation by causing an interrupt

  21. Computer-System Architecture

  22. Common Functions of Interrupts • The occurrence of an event is usually signaled by an interrupt from either hardware or software • Software triggers an interrupt by executing a system call (monitor call) • An interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines • The interrupt architecture must save the address of the interrupted instruction • Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt • A trap is a software-generated interrupt caused either by an error or a user request • An operating system is interrupt driven, and priority interrupts have been introduced in modern systems

  23. Interrupt Handling • When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a fixed location, executing the interrupt service routine found through a table of pointers stored in low memory addresses (LMA) • On completion of the service routine, the CPU resumes the interrupted computation • The LMA locations hold the addresses of the interrupt service routines (the interrupt vector, i.e. the memory addresses of the interrupt handlers) for the various devices • Separate segments of code determine what action should be taken for each type of interrupt • The operating system preserves the state of the CPU by storing the registers and the program counter • Determining which type of interrupt has occurred: • Polling • Vectored interrupt systems • Interrupts are an important part of a modern computer system and should be handled immediately • A system call is a method used by a process to request action by the operating system
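The vectored dispatch described above can be sketched as a table of pointers from interrupt numbers to service routines. The numbers and handler names below are illustrative, not real hardware vectors.

```python
# Toy interrupt vector: a table mapping interrupt numbers to service
# routines, consulted instead of polling every device.
def disk_service_routine():
    return "disk transfer complete"

def keyboard_service_routine():
    return "key available"

interrupt_vector = {
    14: disk_service_routine,      # illustrative interrupt numbers
    33: keyboard_service_routine,
}

def handle_interrupt(number, saved_pc):
    """Save the interrupted instruction's address, dispatch through the
    vector to the matching service routine, then resume at the saved
    program counter."""
    service_routine = interrupt_vector[number]  # vectored lookup
    result = service_routine()
    return result, saved_pc  # CPU resumes the interrupted computation
```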

  24. Interrupt Time Line for a single process doing output

  25. I/O Structure • The computer system has a number of device controllers connected through a common bus • A device controller contains local buffer storage and a set of special-purpose registers • The device controller is responsible for moving data between the peripheral device it controls and its local buffer storage • I/O interrupts are used by the device controllers to signal data transfers • I/O methods: synchronous and asynchronous • In the synchronous method, after I/O starts, control returns to the user program only upon I/O completion • Waiting for I/O may be accomplished by either a wait instruction or a wait loop • A wait instruction idles the CPU until the next interrupt • A wait loop continues until an interrupt occurs • At most one I/O request is outstanding at a time; there is no simultaneous I/O processing

  26. I/O Structure • In the asynchronous method, after I/O starts, control returns to the user program without waiting for I/O completion. This requires: • System call – a request to the operating system allowing the user to wait for I/O completion • Device-status table – contains an entry for each I/O device, indicating its type, address and state • The operating system indexes into the I/O device table to determine device status and modifies the table entry to record the interrupt • The operating system maintains a wait queue for each I/O device • An I/O device interrupts when it needs service; the OS determines which device interrupted and updates its table entry • An interrupt signals completion of an I/O request; control then returns from the I/O interrupt to another request or the user program • Interrupt schemes vary from system to system • This method increases system efficiency
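A minimal sketch of the device-status table and its per-device wait queue, assuming illustrative device names and request strings: an idle device starts a request immediately, a busy one queues it, and each completion interrupt starts the next waiting request.

```python
# Device-status table: one entry per device (type, status) plus a
# queue of waiting requests. Names here are illustrative.
device_table = {
    "disk0": {"type": "disk", "status": "idle", "queue": []},
}

def request_io(device, request):
    """Start I/O if the device is idle, otherwise queue the request."""
    entry = device_table[device]
    if entry["status"] == "busy":
        entry["queue"].append(request)   # wait behind outstanding requests
    else:
        entry["status"] = "busy"         # start the transfer immediately
        entry["current"] = request

def io_interrupt(device):
    """Device signalled completion: start the next queued request, if any."""
    entry = device_table[device]
    if entry["queue"]:
        entry["current"] = entry["queue"].pop(0)
    else:
        entry["status"] = "idle"
        entry.pop("current", None)
```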

  27. Two I/O Methods Synchronous Asynchronous

  28. Device Status Table

  29. Direct Memory Access Structure • Involvement of the CPU in data transfer is time consuming. If the CPU needs two microseconds to respond to each interrupt and interrupts arrive every four microseconds, little time is left for process execution • DMA is used for high-speed I/O devices able to transmit information at close to memory speeds • The device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention • Only one interrupt is generated per block, rather than one interrupt per byte
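The slide's figures make the cost concrete: servicing a 2-microsecond interrupt every 4 microseconds consumes half the CPU, while DMA, assuming an illustrative 512-byte block, cuts the interrupt rate by the block size.

```python
def interrupt_overhead(service_us, interval_us):
    """Fraction of CPU time consumed servicing interrupts."""
    return service_us / interval_us

# Per-byte interrupts, as in the slide: 2 us of service every 4 us
# leaves only half the CPU for process execution.
per_byte = interrupt_overhead(2, 4)          # 0.5

# With DMA and an assumed 512-byte block, one interrupt covers 512
# bytes, so the effective interval between interrupts grows 512x.
with_dma = interrupt_overhead(2, 4 * 512)    # ~0.001
```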

  30. Direct Memory Access Structure • The DMA controller has its own registers for source and destination addresses • A device driver sets the DMA controller registers with the appropriate source and destination addresses and the transfer length, and then instructs it to start the I/O operation • While the DMA controller performs the data transfer, the CPU is free to perform other tasks • Because the DMA controller steals memory cycles from the CPU, it slows down CPU execution during a DMA operation • The DMA controller interrupts the CPU when the transfer has been completed

  31. Storage Structure • Registers • Cache • Main Memory • Electronic Disk • Magnetic Disk • Optical Disk • Hard Disk • Magnetic Tape

  32. Registers • Registers are available in the CPU and are accessible within one cycle of the CPU clock • Faster operations are carried out on contents of CPU registers due to faster accesses • Processor does not stall while performing operations on registers • Size of the registers is very small • Registers are volatile

  33. Cache • The CPU must stall when RAM, which is slower than the CPU, cannot supply in time the data required to complete an instruction • Cache is a faster memory between the CPU and main memory and is a remedial measure to reduce CPU idle time • Cache is a memory buffer which stores information required by the CPU, managed with allocation and replacement algorithms • An instruction cache holds the next instructions to be executed, whereas a data cache keeps the data those instructions require; together they are known as hardware caches • Cache has limited size, so cache management is a problem for designers • Careful selection of the cache size and of a replacement algorithm can mean that 80-99% of all accesses are satisfied in the cache, maximizing system performance • Caches are volatile

  34. Main Memory • Main memory can be viewed as a fast cache for secondary storage • A program must be loaded into RAM for execution; main memory is the large storage medium that the CPU can access directly • Main memory is implemented in semiconductor technology (DRAM – Dynamic Random Access Memory, which stores each bit of data in a separate capacitor) • Load and store instructions specify memory addresses for interaction • A typical instruction is executed using the fetch-decode-execute cycle • Not all programs and data can be stored in RAM, due to its size and volatility • Special I/O instructions allow data transfers between the device-controller registers and main memory • In memory-mapped I/O, ranges of memory addresses are set aside and mapped to the device registers, providing more convenient access to I/O devices • In programmed I/O, the CPU uses polling to watch a bit in the control register to see whether the device is ready for a transfer of data between the device and main memory • In interrupt-driven I/O, the CPU receives an interrupt when the device is ready for the data transfer

  35. Magnetic Disks • Magnetic disks provide a large space for storing programs and data on a permanent basis • Disks are relatively simple, consisting of platters coated with magnetic recording material • Disk speed depends upon transfer rate and positioning time (seek time and rotational latency) • A head crash damages the magnetic surface, and the whole disk is replaced for the safety of data and programs • The storage size of a hard disk is in GBs • An FDD rotates more slowly than an HDD, which reduces wear on the disk surface; its storage capacity is very small compared to an HD or CD • Buses attached to a disk drive include EIDE (Enhanced Integrated Drive Electronics), ATA and SCSI • Data transfer through a bus is carried out between the host controller and the disk controller • Magnetic disks are non-volatile

  36. Moving-Head Disk Mechanism

  37. Magnetic Tapes • Magnetic tapes are used to back up data and programs in order to protect against loss due to HD failure • Magnetic tapes can hold large quantities of data/programs • Access time is slow compared to HD, CD, main memory etc. • Magnetic tapes are non-volatile • Storage/retrieval of information is very slow due to winding/rewinding of tapes • Random access is not available on tapes

  38. Storage Hierarchy • Storage systems can be organized in a hierarchy according to • Speed • Capacity • Cost • Volatility • Register, cache and memory are constructed using semiconductor memory and are volatile • Electronic disks can be volatile or non volatile • All secondary storage devices (magnetic disk, optical disk, floppy disk, magnetic drums) are non volatile

  39. Storage-Device Hierarchy

  40. Coherence and Consistency • The same data may appear in different levels of the storage system. For example, the value of variable X of file G may reside on magnetic disk, in main memory, in cache or in a CPU register • In a multi-tasking environment, each process wishing to use the value of variable X must obtain the most recently updated value • In a multiprocessor environment, a copy of variable X may exist simultaneously in several caches with different values. For cache coherency, the system hardware must make sure that an update to the value of X in one cache is immediately reflected in all other caches where X resides, so that concurrent execution on file G stays correct • For consistency in a distributed environment, the various replicas of file G may be accessed and updated, so the system must ensure that when a replica is updated on one computer, all other replicas are brought up to date quickly, through a client- or server-initiated approach
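A toy write-invalidate scheme illustrates the cache-coherency requirement above: a write to X in one cache invalidates the other caches' copies, so subsequent reads there fetch the updated value. This is a simplification of what real hardware protocols such as MESI do.

```python
class WriteInvalidateCaches:
    """Toy write-invalidate coherence over several per-processor caches.
    Writing a variable in one cache removes every other cache's copy,
    forcing the next read there to miss and fetch the fresh value."""

    def __init__(self, n_caches):
        self.caches = [{} for _ in range(n_caches)]
        self.memory = {}

    def read(self, cache_id, key):
        cache = self.caches[cache_id]
        if key not in cache:                 # miss: fill from memory
            cache[key] = self.memory[key]
        return cache[key]

    def write(self, cache_id, key, value):
        for i, cache in enumerate(self.caches):
            if i != cache_id:
                cache.pop(key, None)         # invalidate other copies
        self.caches[cache_id][key] = value
        self.memory[key] = value             # write-through to memory
```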

  41. Migration of X From Disk to Register

  42. Hardware Protection • Dual-Mode Operation • I/O Protection • Memory Protection • CPU Protection

  43. Dual Mode Operation • Sharing of system resources improved system utilization but increased problems: many jobs could be affected by a bug in one program • A good operating system must ensure that a faulty program cannot cause other programs to execute incorrectly • If a user program fails, the hardware traps to the OS; the OS dumps the memory of the program for debugging and terminates it • Hardware-supported dual-mode operation protects the OS, all other programs and their data from any malfunctioning program • User mode of operation (mode bit is 1) • Monitor/supervisor/system mode of operation (mode bit is 0) • Whenever an interrupt or trap occurs, the hardware switches from user mode to monitor mode. The OS runs in monitor mode

  44. Dual-mode Operation • The dual mode of operation provides the means for protecting the OS from errant users, and errant users from one another • The hardware allows privileged instructions to be executed only in monitor mode • When an interrupt or fault occurs, the hardware switches to monitor mode [diagram: user mode and monitor mode, with transitions on interrupt/fault and on set-user-mode]
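The mode-bit check can be sketched as follows; the set of privileged operations here is illustrative, since each real instruction set defines its own.

```python
MONITOR_MODE, USER_MODE = 0, 1   # mode-bit values from the slides

# Illustrative privileged operations; real ISAs define their own sets.
PRIVILEGED = {"io_write", "load_timer", "load_base_register"}

def execute(instruction, mode_bit):
    """Privileged instructions run only in monitor mode; attempting one
    in user mode traps to the operating system instead of executing."""
    if instruction in PRIVILEGED and mode_bit == USER_MODE:
        return "trap: privileged instruction in user mode"
    return "executed"
```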

  45. I/O Protection • All I/O instructions are defined as privileged instructions, so users cannot issue them from user mode • Must ensure that a user program can never gain control of the computer in monitor mode (e.g. a user program that, as part of its execution, stores a new address in the interrupt vector) • To do I/O, a user program executes a system call to request that the OS perform I/O on its behalf; the OS returns control to the user after completion of the I/O operation

  46. Memory Protection • Must provide memory protection for the interrupt vector, the interrupt service routines, and user programs from one another • To implement memory protection, two registers determine the range of legal addresses a program may access: • Base register – holds the smallest legal physical memory address • Limit register – contains the size of the range • Memory outside the range is protected • A trap is generated if any user program attempts to access an unauthorized memory area • When executing in monitor mode, the OS has unrestricted access to both monitor and users' memory • The load instructions for the base and limit registers are privileged instructions
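The base/limit scheme reduces to one comparison per access; the register values below are illustrative.

```python
def is_legal(address, base, limit):
    """An access is legal only when base <= address < base + limit."""
    return base <= address < base + limit

def access(address, base, limit):
    """Perform the hardware check; out-of-range accesses trap to the OS."""
    if not is_legal(address, base, limit):
        raise MemoryError("trap: addressing error at %d" % address)
    return "access granted"
```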

  47. CPU Protection • A program may be stuck: • In an infinite loop • Failing to call system services • Failing to return control to the OS • Timer – interrupts the computer after a specified period to ensure that the OS maintains control • The timer is decremented on every clock tick • When the timer reaches 0, an interrupt occurs and control is automatically transferred to the OS • The timer is also commonly used to implement the time-sharing mechanism • The timer can be used to compute the current time • Load-timer is a privileged instruction
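The timer mechanism can be sketched as a countdown decremented on each clock tick; when it reaches 0 the simulated interrupt returns control to the OS, so even a program that never yields cannot hold the CPU forever. The quantum value is illustrative.

```python
def run_with_timer(process_ticks, quantum):
    """Simulate the countdown timer: decremented on every clock tick,
    interrupting (returning control to the OS) when it reaches 0."""
    timer = quantum
    executed = 0
    while executed < process_ticks:
        if timer == 0:
            return ("timer interrupt", executed)  # OS regains control
        timer -= 1
        executed += 1
    return ("completed", executed)
```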

  48. Network Structures • LANs (Local Area Networks) • Were introduced in the early 1970s as an economical way to use a number of small computers and share resources • Cover a small geographical area and are generally used in an office environment • Communication links in LANs have high speed and low error rates • High-quality cables (twisted pair, fiber optic etc.) are used for establishing LANs • Common topologies are bus, ring and star • Communication speeds range from Mbps to Gbps • A typical LAN may consist of PCs/laptops/PDAs, shared peripheral devices and one or more gateways

  49. Local Area Network Structure

  50. Network Structures • WANs (Wide Area Networks) • Emerged in the late 1960s to provide efficient communication among sites • Are physically distributed over a large geographical area • Hardware and software resources are shared conveniently and economically by a wide community of users • ARPANET grew from four sites to the millions of sites of today's Internet • The communication links (telephone lines, leased lines, microwave links, satellite channels) are relatively slow and less reliable • Communication processors control the communication links for transferring information among the various sites • The Internet WAN provides the ability for hosts at geographically separated sites to communicate with one another • The host computers differ from one another in speed, type, operating system etc. • Connections between networks may use telephone-system services to provide communication • Routers control the path each message takes through the net. Dynamic routing enhances communication efficiency, whereas static routing reduces security risks • Modems convert digital data to analog signals and vice versa for communication • WANs are slower than LANs
