Module-1: Multicore Architecture SSU Dr. A. Srinivas PES Institute of Technology Bangalore, India a.srinivas@pes.edu 9 – 20 July 2012
Schedule Day 1: Module 1: Multicore Architecture; Module 2: Parallel Programming Days 2-5: Parallel Programming with OpenMP Assignment: JPEG Compression & Decompression using parallel programming with OpenMP directives
Memory Hierarchy of early computers: 3 levels • CPU registers • DRAM Memory • Disk storage
CACHE MEMORY The principle of locality makes it possible to speed up main-memory access by introducing small, fast memories, known as CACHE MEMORIES, that hold blocks of the most recently referenced instructions and data items. A cache is a small, fast storage device that holds the operands and instructions most likely to be used by the CPU.
Due to the increasing gap between CPU and main-memory speeds, a small SRAM memory called the L1 cache is inserted between them. An L1 cache can be accessed almost as fast as the registers, typically in 1 or 2 clock cycles. As the gap widened further, an additional cache, the L2 cache, was inserted between the L1 cache and main memory; it takes more clock cycles to access than L1, but far fewer than main memory.
The L2 cache is attached either to the memory bus or to its own cache bus. • Some high-performance systems also include an additional L3 cache, which sits between the L2 cache and main memory. Its arrangement differs, but the principle is the same. • The cache is placed both physically closer and logically closer to the CPU than the main memory.
CACHE LINES / BLOCKS • Cache memory is subdivided into cache lines • Cache Lines / Blocks: The smallest unit of memory that can be transferred between the main memory and the cache.
Core vs. Processor - A processor package may contain more than one CPU core; - A quad-core processor of 3 GHz has four cores in the CPU, each running at 3 GHz and each with its own cache.
Amdahl’s Law Speedup = T(1) / T(n) In terms of the number of cores: Speedup = 1 / (S + (1 − S) / n) where S is the fraction of execution time spent in the serialized portion of the parallelized version, and n is the number of cores.
Multicore Philosophy - Two or more cores within a single die - Each core has its own set of instructions and architectural resources
Hyper-Threading: • Parts of a single processor are shared between threads • The execution engine is shared • OS task switching does not happen in hyper-threading • The processor is kept as busy as possible