Introduction to Computer Systems “The class that bytes” Topics: • Theme • Five great realities of computer systems (continued)
Course Theme • Abstraction is good, but don’t forget reality! • Courses to date emphasize abstraction • Abstract data types • Asymptotic analysis • These abstractions have limits • Especially in the presence of bugs • Need to understand underlying implementations
Course Theme • Useful outcomes • Become more effective programmers • Able to find and eliminate bugs efficiently • Able to tune program performance • Prepare for later “systems” classes in CS & ECE • Compilers, Operating Systems, Networks, Computer Architecture, Embedded Systems
Great Reality #3 Memory Matters: Random Access Memory is an unphysical abstraction Memory (RAM) is all the processor knows • It is used to store instructions • It is used to store data Memory is not unbounded • It must be allocated and managed • Many applications are memory dominated Memory referencing bugs are especially pernicious • Effects are distant in both time and space Memory performance is not uniform • Cache and virtual memory effects can greatly affect program performance • Adapting a program to the characteristics of the memory system can lead to major speed improvements
Memory Referencing Bug Example

    main()
    {
        long int a[2];
        double d = 3.14;
        a[2] = 1073741824;          /* Out of bounds reference */
        printf("d = %.15g\n", d);
        exit(0);
    }

Value printed for d, by platform and compiler option:

                Alpha                    MIPS                Linux
    -g          5.30498947741318e-315    3.1399998664856     3.14
    -O          3.14                     3.14                3.14

(Linux version gives the correct result, but implementing the body as a separate function gives a segmentation fault.)
Memory Referencing Errors C and C++ do not provide any memory protection • Out of bounds array references • Invalid pointer values • Abuses of malloc/free Can lead to nasty bugs • Whether or not bug has any effect depends on system and compiler • Action at a distance • Corrupted object logically unrelated to one being accessed • Effect of bug may be first observed long after it is generated How can I deal with this? • Program in Java, Python, or C# (but not C++) • Understand what possible interactions may occur • Use or develop tools to detect referencing errors
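To make the "abuses of malloc/free" bullet concrete, here is a minimal sketch (not from the original slides) of two referencing errors that compile cleanly yet have undefined behavior; whether they visibly misbehave depends on the system and allocator:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p = malloc(16);
        strcpy(p, "hello");
        free(p);
        printf("%s\n", p);      /* use after free: the block no longer belongs to p */

        char *q = malloc(8);
        q[8] = 'x';             /* heap buffer overflow: valid indices are 0..7 */
        free(q);
        free(q);                /* double free */
        return 0;
    }

Running such a program under valgrind, or building it with gcc -fsanitize=address, is one way to use tools to detect referencing errors: both typically report the error together with where the offending block was allocated and freed.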
Great Reality #3 (cont) Memory Matters: Memory forms a hierarchy. Smaller memory is faster, larger memory is cheaper. Memory forms a hierarchy • L0: The register file is part of the processor; fastest memory • L1, L2: Caches are small fast memories, either on-chip or on a separate chip • L3: Main memory is currently RAM • L4: Local disks (HD, CD, etc.) • L5: Remote secondary storage (distributed file systems, web servers) Example • A typical disk drive is 100 times larger than RAM • It takes a typical disk drive 10,000,000 times longer to read a word than RAM
Memory Hierarchy
Smaller, faster, and costlier (per byte) storage devices sit at the top of the hierarchy; larger, slower, and cheaper (per byte) devices sit at the bottom:
• L0: CPU registers hold words retrieved from cache memory
• L1: On-chip L1 cache (SRAM) holds cache lines retrieved from the L2 cache
• L2: Off-chip L2 cache (SRAM) holds cache lines retrieved from main memory
• L3: Main memory (DRAM) holds disk blocks retrieved from local disks
• L4: Local secondary storage (local disks) holds files retrieved from disks on remote network servers
• L5: Remote secondary storage (distributed file systems, Web servers)
Example: cache memory Cache Matters: Small fast memory can make a big BIG difference in performance Processor speed is increasing faster than memory speed • Over the years this gap in speed has increased Cache • As we've seen, larger devices are slower and faster devices are more expensive Each level stores the most used data from the level below • This type of storage is called a cache for the level below • We can take advantage of this physical reality to make programs faster by an order of magnitude • Chapter 6 discusses this in detail.
Cache
• An L1 cache holds 10's of thousands of bytes and is nearly as fast as a register file
• An L2 cache holds 100's of thousands to millions of bytes and is connected to the CPU via a special bus
• Takes 5 times longer to access the L2 cache than the L1 cache
• Takes 5 to 10 times longer to access RAM than the L2 cache
[Figure: the CPU chip contains the register file, the ALU, the L1 cache (SRAM), and a bus interface; an off-chip L2 cache (SRAM) is reached over a dedicated cache bus; main memory (DRAM) is reached through the memory bridge via the system bus and memory bus.]
Memory System Performance Example
• Hierarchical memory organization
• Performance depends on access patterns
• Including how you step through a multi-dimensional array

    void copyij(int src[2048][2048], int dst[2048][2048])
    {
        int i, j;
        for (i = 0; i < 2048; i++)
            for (j = 0; j < 2048; j++)
                dst[i][j] = src[i][j];
    }
    /* 59,393,288 clock cycles */

    void copyji(int src[2048][2048], int dst[2048][2048])
    {
        int i, j;
        for (j = 0; j < 2048; j++)
            for (i = 0; i < 2048; i++)
                dst[i][j] = src[i][j];
    }
    /* 1,277,877,876 clock cycles (21.5 times slower!) */

(Measured on a 2 GHz Intel Pentium 4)
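A minimal timing sketch (not part of the original slides) for reproducing this comparison with wall-clock time instead of cycle counts; it assumes it is linked with the copyij and copyji definitions above, and CLOCK_MONOTONIC is a Linux-style choice:

    #include <stdio.h>
    #include <time.h>

    #define N 2048
    static int src[N][N], dst[N][N];

    void copyij(int src[N][N], int dst[N][N]);   /* defined on the slide above */
    void copyji(int src[N][N], int dst[N][N]);

    /* Return elapsed seconds for one call to the given copy routine. */
    static double time_copy(void (*copy)(int [N][N], int [N][N]))
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        copy(src, dst);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        printf("copyij: %.3f s\n", time_copy(copyij));
        printf("copyji: %.3f s\n", time_copy(copyji));
        return 0;
    }

On a machine with a similar cache hierarchy, copyji should again be many times slower, because it strides down columns and touches a new cache line on nearly every access.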
The Memory Mountain
[Figure: read throughput (MB/s, 0 to 1200) as a function of stride (words, s1 to s15) and working set size (bytes, 2k to 8m), measured on a 550 MHz Pentium III Xeon with a 16 KB on-chip L1 d-cache, a 16 KB on-chip L1 i-cache, and a 512 KB off-chip unified L2 cache.]
Memory Performance Example
Implementations of Matrix Multiplication
• Multiple ways to nest loops

    /* ijk */
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

    /* jik */
    for (j = 0; j < n; j++) {
        for (i = 0; i < n; i++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }
Matmult Performance (Alpha 21164)
[Figure: throughput vs. matrix size n for the jki, kij, and kji loop orderings, with annotations marking the sizes at which the matrices become too big for the L1 cache and too big for the L2 cache.]
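The chart refers to loop orderings beyond the ijk/jik pair shown above. As an illustrative sketch (not from the original slides), the kij ordering looks like this; it assumes c has been zeroed before the loop nest:

    /* kij: the inner loop walks row k of b and row i of c with stride 1,
       so both arrays are accessed with good spatial locality. */
    for (k = 0; k < n; k++) {
        for (i = 0; i < n; i++) {
            r = a[i][k];                 /* reused across the whole inner loop */
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }
    }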
Blocked matmult perf (Alpha 21164)
[Figure: performance (vertical axis 0 to 160) vs. matrix size n (50 to 500) for blocked (bijk, bikj) and unblocked (ijk, ikj) versions of matrix multiply.]
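The bijk and bikj curves refer to blocked (tiled) matrix multiply. Here is a minimal bijk-style sketch (not the original slide code); the flat array layout, the zero-initialized c, and the requirement that n be a multiple of bsize are simplifying assumptions:

    /* Blocked (bijk) matrix multiply: a bsize-wide slab of b is reused
       many times while it is resident in cache. */
    void bijk(double *a, double *b, double *c, int n, int bsize)
    {
        for (int kk = 0; kk < n; kk += bsize)
            for (int jj = 0; jj < n; jj += bsize)
                for (int i = 0; i < n; i++)
                    for (int j = jj; j < jj + bsize; j++) {
                        double sum = c[i*n + j];
                        for (int k = kk; k < kk + bsize; k++)
                            sum += a[i*n + k] * b[k*n + j];
                        c[i*n + j] = sum;
                    }
    }

Choosing bsize so that a bsize-by-bsize block of b plus a row slice of a and c fits in cache is the idea behind the flatter performance curves as n grows.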
Real Memory Performance: Pointer-Chase Results
[Figure: iteration time in ns, log scale from 1 to 1000 ns, from Tom Womack's memory latency benchmark.]
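A pointer-chase benchmark measures latency by following a chain of pointers, so each load must complete before the next can begin. This is a minimal sketch of the idea (not Womack's actual code; the chain size, stride, and iteration count are illustrative assumptions):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        const size_t n = 1 << 22;          /* 4M pointers: footprint far larger than cache */
        const long steps = 10000000;
        void **node = malloc(n * sizeof(void *));

        /* Link the nodes into one big cycle using a large odd stride (hence
           coprime to n) so consecutive loads land in different pages. */
        size_t idx = 0;
        for (size_t i = 1; i < n; i++) {
            size_t next = (idx + 123456791) % n;
            node[idx] = &node[next];
            idx = next;
        }
        node[idx] = &node[0];

        struct timespec t0, t1;
        void **p = &node[0];
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < steps; i++)
            p = (void **)*p;               /* the dependent load being timed */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / steps;
        printf("%.1f ns per iteration (final pointer %p)\n", ns, (void *)p);
        free(node);
        return 0;
    }

Shrinking n until the whole chain fits in L2 or L1 reproduces the large latency differences between cache and main memory that the figure illustrates.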
Great Reality #4 • There's more to performance than asymptotic complexity • Constant factors matter too! • You can easily see a 10:1 performance range depending on how the code is written • Must optimize at multiple levels: algorithm, data representations, procedures, and loops
Great Reality #4 (cont) • There's more to performance than asymptotic complexity • Must understand the system to optimize performance • How programs are compiled and executed • How to measure program performance and identify bottlenecks • How to improve performance without destroying code modularity and generality
Great Reality #5 The operating system manages the hardware No user program can access any hardware directly • Writing to and reading from the disk, keyboard, display, and RAM are all controlled by the OS
[Figure: layered view of a system; application programs and the operating system are software, while the processor, main memory, and I/O devices are hardware.]
OS The operating system manages the hardware Purpose of the OS • Protect the hardware from misuse (accidental and purposeful) • Provide simple and easy access to hardware • Ensure efficient use of hardware
OS The operating system manages the hardware Abstractions provided by the OS • Processes • Virtual memory • Files
[Figure: the abstractions (processes, virtual memory, files) layered over the hardware (processor, main memory, I/O devices).]
OS: Processes Abstraction of a running program provided by the OS • Multiple processes can run concurrently • Each process appears to have exclusive use of the hardware • Reality: process instructions are interleaved • OS performs this interleaving with a mechanism called context switching. • The OS keeps track of all state information (or context) that a process needs to run. • Includes value of PC and register values
OS: Concurrency Appearance of simultaneous execution: multiple processes appear to run at the same time, while in reality the OS interleaves their instructions via context switching, as described on the previous slide.
OS: Concurrency Example • Shell process and hello process execute concurrently • We consider concurrency in chapter 8 • Concurrency distorts the notion of time. Chapter 9 discusses various notions of time in a modern system.
[Figure: timeline of the shell process and the hello process; each process runs its application code for a while, then a context switch into OS code hands the processor to the other process, alternating over time.]
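As a hedged illustration of two processes running concurrently (not from the original slides), a parent can create a child process with fork(); the OS then context switches between them, so the two lines below may appear in either order:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();               /* after this call, two processes exist */
        if (pid == 0) {
            printf("hello from the child process\n");
        } else {
            printf("hello from the parent (shell-like) process\n");
            wait(NULL);                   /* reap the child */
        }
        return 0;
    }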
OS: Threads A process can contain multiple execution units, called threads • Each thread runs in the process's context • Each thread shares code and global data. • Easier and more efficient to share information between threads than processes • Discuss in chapter 13
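A minimal pthreads sketch (not from the original slides) of two threads running in one process and sharing a global counter; the worker function, the count of one million, and the mutex are illustrative choices. Compile with the -pthread flag:

    #include <stdio.h>
    #include <pthread.h>

    /* Both threads see this same variable: sharing needs no copying. */
    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* shared data still needs synchronization */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* 2000000: both threads updated one variable */
        return 0;
    }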
OS: Virtual Memory An abstraction that provides each process with the illusion that it has exclusive use of main memory (RAM) • This illusion is called virtual memory • Each process is provided a virtual address space • Example: see next slide for Linux virtual address space • VM requires OS and hardware to interact • Use disk to extend RAM • Requires translation of every logical address to a physical address • Discuss in chapter 10
OS: Virtual Memory
Linux virtual address space (highest addresses at the top; everything above 0xc0000000 is memory invisible to user code):
• 0xffffffff down to 0xc0000000: kernel virtual memory (invisible to user code)
• User stack (created at run time): space used dynamically for function calls (Chap 3)
• Memory-mapped region for shared libraries such as stdio and math, e.g. the printf() function (Chap 7), around 0x40000000
• Run-time heap (created at run time by malloc): dynamically allocated with malloc/free (Chap 10)
• Read/write data, then read-only code and data: program code and global variables loaded from the hello executable file (Chap 7), starting at 0x08048000
• Below 0x08048000 down to address 0: unused
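A small sketch (not from the original slides) that makes this layout visible by printing addresses from different regions of a running process; on a modern 64-bit Linux system with address space randomization the values differ from the 32-bit figure above, but the separation of the regions is still visible:

    #include <stdio.h>
    #include <stdlib.h>

    int global = 1;                        /* read/write data segment */

    int main(void)
    {
        int local = 2;                     /* lives on the user stack */
        int *heap = malloc(sizeof(int));   /* lives on the run-time heap */

        printf("code   (main)   : %p\n", (void *)main);       /* program code */
        printf("data   (global) : %p\n", (void *)&global);
        printf("heap   (malloc) : %p\n", (void *)heap);
        printf("stack  (local)  : %p\n", (void *)&local);
        printf("library (printf): %p\n", (void *)printf);     /* shared library region */

        free(heap);
        return 0;
    }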
OS: Files A sequence of bytes • Every I/O device is modeled as a file. • Includes disks, keyboards, displays, networks • All input/output performed by reading/writing files • Provides programs with a uniform view of all the varied I/O devices that might be part of a system • Discussed in chapter 11
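A hedged sketch of this uniform view (not from the original slides): the same read and write system calls work whether the descriptor names the keyboard, the display, or a disk file; the file name hello.txt is an illustrative assumption:

    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        char buf[64];

        /* Read from the keyboard: standard input is descriptor 0. */
        ssize_t n = read(0, buf, sizeof(buf));

        /* Write the same bytes to the display: standard output is descriptor 1. */
        if (n > 0)
            write(1, buf, (size_t)n);

        /* The identical call writes to a disk file: only the descriptor differs. */
        int fd = open("hello.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0 && n > 0) {
            write(fd, buf, (size_t)n);
            close(fd);
        }
        return 0;
    }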