This thesis evaluates and optimizes image processing algorithms on various multi-core architectures, aiming for faster real-time execution. It explores platforms like Intel NetBurst, Core Microarchitectures, Cell Broadband Engine, and NVIDIA GeForce 8 Series GPU, presenting comparisons and timings. The focus is on adapting Gaussian Smoothing, Canny Edge Detection, and KLT Tracking algorithms for improved performance on different architectures. Experimental results and conclusions provide insights for efficient image processing implementations.
Evaluation of Multi-core Architectures for Image Processing Algorithms Masters Thesis Presentation by Trupti Patil July 22, 2009
Overview • Motivation • Contribution & scope • Background • Platforms • Algorithms • Experimental Results • Conclusion
Motivation • Fast processing response is a major requirement in many image processing applications. • Image processing algorithms can be computationally expensive. • Data needs to be processed in parallel and optimized for real-time execution. • The recent introduction of massively parallel computer architectures promises significant acceleration. • Some of these architectures haven't been actively explored yet.
Overview • Motivation • Contribution & scope • Background • Platforms • Algorithms • Experimental Results • Conclusion
Contribution & scope of the thesis • This thesis adapts and optimizes three image processing and computer vision algorithms for four multi-core architectures. • Execution timings are measured. • The obtained timings are compared against available previous work on the same architecture (intra-class) and across architecture types (inter-class). • Appropriate deductions are made from the results.
Overview • Motivation • Contribution & scope • Background • Platforms • Algorithms • Implementation • Conclusion
Background • Need for Parallelization • SIMD Optimization • The need for faster execution time • Related work • Canny edge detection on CellBE [Gupta et al.] and on GPU [Luo et al.] • KLT tracking implementation on GPU [Sinha et al., Zach et al.]
Overview • Motivation • Contribution & scope • Background • Platforms • Algorithms • Implementation • Experimental Results • Conclusion
Intel NetBurst & Core Microarchitectures • NetBurst: • Executes legacy IA-32 and SIMD applications at a higher clock rate. • Hyper-Threading (HT) allows simultaneous multithreading, with two logical processors on each physical processor. • Supports up to SSE3. • Core: • Improved performance/watt. • SSSE3 support for effective utilization of the XMM registers. • Supports SSE4. • Scales up to quad-core.
Cell Broadband Engine (CBE) • Structural diagram of the Cell Broadband Engine: the PPE (a PPU with L1 instruction and data caches, backed by an L2 cache) and eight SPEs (each an SPU with a Local Store (LS) and a Memory Flow Controller (MFC)), all connected by the EIB to main memory, I/O devices and a graphics device.
Cell processor overview • One Power-based PPE, with VMX • 32/32kB I/D L1, and 512kB L2 • dual issue, in order PPU, 2 HW threads • Eight SPEs, with up to 16x SIMD • dual issue, in order SPU • 128 registers (128b wide) • 256 kB local store (LS) • 2x 16B/cycle DMA, 16 outstanding req. • Element Interconnect Bus (EIB) • 4 rings, 16B wide (at 1:2 clock) • 96B/cycle peak, 16B/cycle to memory • 2x 16B/cycle BIF and I/O • External communication • Dual XDR memory controller (MIC) • Two configurable bus interfaces (BIC) • Classical I/O interface • SMP coherent interface
Graphics Processing Unit (GPU) • Data flow in a GPU: Application → Vertex Processor → Assemble & Rasterize → Fragment Processor → Frame-buffer Operations → Frame Buffer, with textures read during processing.
Nvidia GeForce 8 Series GPU • Graphics pipeline in the NVIDIA GeForce 8 Series GPU
Compute Unified Device Architecture (CUDA) • The computing engine in Nvidia GPUs • Exposes the GPU as a highly multithreaded coprocessor (compute device) • Provides both a low-level and a higher-level API • Has several advantages over programming GPUs through graphics APIs (e.g. OpenGL)
Overview • Motivation • Contribution & scope • Background • Platforms • Algorithms • Experimental Results • Conclusion
Algorithm 1: Gaussian Smoothing • Gaussian smoothing is a filtering kernel • Removes small-scale texture and noise for a given spatial extent • 1-D Gaussian kernel: G(x) = (1/(√(2π)·σ)) · exp(−x²/(2σ²)) • 2-D Gaussian kernel: G(x,y) = (1/(2πσ²)) · exp(−(x²+y²)/(2σ²)), which is separable into two 1-D passes
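The separability noted above means a 2-D Gaussian smoothing can be performed as two 1-D convolutions, one along rows and one along columns. A minimal sketch of this (not the thesis code; it assumes numpy, and the function names are illustrative):

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Sampled 1-D Gaussian, normalized so the taps sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth_separable(image, sigma, radius=2):
    """Two 1-D convolutions (rows, then columns) instead of one 2-D pass."""
    k = gaussian_kernel_1d(sigma, radius)
    # Convolve every row, then every column, keeping the image size.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)
```

For a kernel of width w on an N×N image this costs O(N²·2w) multiplies instead of O(N²·w²) for the direct 2-D pass, which is why the separable form is the one worth parallelizing.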
Algorithm 2: Canny Edge Detection • Edge detection is a common operation in image processing • Edges are discontinuities in image gray levels with strong intensity contrast • Canny edge detection is an optimal edge-detector algorithm • Illustrated ahead with an example.
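The classic Canny stages after smoothing are gradient computation, non-maximum suppression along the gradient direction, and hysteresis thresholding. A simplified sketch of those stages (assuming numpy; this is an illustration, not the thesis implementation, and it expects a pre-smoothed grayscale image):

```python
import numpy as np

def canny_sketch(img, low, high):
    """Simplified Canny: Sobel gradients -> non-maximum suppression ->
    hysteresis thresholding. Borders are left as non-edges."""
    # Sobel gradients (interior pixels only).
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[1:-1, 1:-1] = (2 * (img[1:-1, 2:] - img[1:-1, :-2])
                      + img[:-2, 2:] - img[:-2, :-2]
                      + img[2:, 2:] - img[2:, :-2])
    gy[1:-1, 1:-1] = (2 * (img[2:, 1:-1] - img[:-2, 1:-1])
                      + img[2:, 2:] - img[:-2, 2:]
                      + img[2:, :-2] - img[:-2, :-2])
    mag = np.hypot(gx, gy)
    angle = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
    # Non-maximum suppression: keep a pixel only if it is a local maximum
    # along its quantized gradient direction.
    nms = np.zeros_like(mag)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[i, j-1], mag[i, j+1]
            elif a < 67.5:
                n1, n2 = mag[i-1, j+1], mag[i+1, j-1]
            elif a < 112.5:
                n1, n2 = mag[i-1, j], mag[i+1, j]
            else:
                n1, n2 = mag[i-1, j-1], mag[i+1, j+1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]
    # Hysteresis: keep strong edges, plus weak edges connected to strong ones.
    strong = nms >= high
    weak = (nms >= low) & ~strong
    edges = strong.copy()
    while True:
        nb = np.zeros_like(edges)
        nb[1:-1, 1:-1] = (edges[:-2, :-2] | edges[:-2, 1:-1] | edges[:-2, 2:]
                          | edges[1:-1, :-2] | edges[1:-1, 2:]
                          | edges[2:, :-2] | edges[2:, 1:-1] | edges[2:, 2:])
        grown = edges | (weak & nb)
        if np.array_equal(grown, edges):
            return grown
        edges = grown
```

The per-pixel independence of the gradient and suppression stages is what makes Canny a good fit for SIMD and GPU parallelization; hysteresis is the irregular, harder-to-parallelize step.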
Algorithm 3: KLT Tracking • First proposed by Lucas and Kanade; extended by Tomasi and Kanade, and by Shi and Tomasi. • First, determine which feature(s) to track through feature selection • Second, track the selected feature(s) across the image sequence • Rests on three assumptions: brightness constancy, temporal persistence and spatial coherence
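The brightness-constancy assumption gives, to first order, ix·dx + iy·dy = −it at each pixel, and least squares over a window yields the 2×2 Lucas-Kanade normal equations. A single-window, translation-only sketch (assuming numpy; illustrative, not the thesis code):

```python
import numpy as np

def lk_translation(img0, img1):
    """One Lucas-Kanade step: least-squares shift (dx, dy) between two frames,
    assuming brightness constancy and a small, purely translational motion."""
    # Central-difference spatial gradients of frame 0, temporal difference.
    ix = np.zeros_like(img0); iy = np.zeros_like(img0)
    ix[:, 1:-1] = (img0[:, 2:] - img0[:, :-2]) / 2.0
    iy[1:-1, :] = (img0[2:, :] - img0[:-2, :]) / 2.0
    it = img1 - img0
    # Normal equations of  [ix iy]·[dx dy]^T = -it  over the whole window.
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(a, b)  # (dx, dy)
```

Feature selection (Shi-Tomasi) picks windows where the 2×2 matrix above is well conditioned, i.e. both eigenvalues are large; the full tracker iterates this step per feature, which is what gets parallelized across features on the GPU.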
Overview • Motivation • Contribution & scope • Background • Platforms • Algorithms • Results • Conclusion
Gaussian Smoothing: Results Lenna Mandrill
Canny edge detection: Results Lenna Mandrill
Results: Canny Edge Detection Comparison with other implementations on Cell Comparison with other implementations on GPU
Results: KLT Tracking Comparison with other implementations on GPU • No known implementations yet.
Overview • Motivation • Contribution & scope • Background • Platforms • Algorithms • Results • Conclusion & Extension
Conclusion & Future work • GPU still ahead of the other architectures, and most suited for image processing applications. • Further optimizing the PS3 implementation could narrow the gap between its timings and the GPU's. We could provide: • Support for faster color Canny • Support for kernel widths larger than 5 • Better management of thread alignment on the GPU when dimensions are not a multiple of 16 • Include Intel Xeon & Larrabee as potential architectures.
CBE Architecture • Contains a traditional microprocessor, the PowerPC Processor Element (PPE), which controls tasks • 64-bit PPC: 32 KB L1 instruction cache, 32 KB L1 data cache, and 512 KB L2 cache • The PPE controls 8 synergistic processor elements (SPEs) operating as SIMD units • Each SPE has an SPU and a memory flow controller (MFC) for data-intensive tasks • SPU (RISC) with 128 128-bit SIMD registers and a 256 KB local store (LS) • PPE, SPEs, MIC and BIC are connected by the Element Interconnect Bus (EIB) for data movement: a ring bus of four 16-byte channels providing a sustained bandwidth of 204.8 GB/s • The MFC connection to Rambus XDR memory and the BIC interface to I/O devices (connected via RapidIO) each provide 25.6 GB/s of data bandwidth
CBE: What makes it fast? • Huge inter-SPE bandwidth • 204.8 GB/s sustained • Fast main memory • 25.6 GB/s bandwidth to Rambus XDR memory • Predictable DMA latency and throughput • DMA traffic has negligible impact on SPE local-store bandwidth • Easy to overlap data movement with computation • High-performance, low-power SPE cores
Nvidia GeForce (Continued) • The GPU has K multiprocessors (MP) • Each MP has L scalar processors (SP) • Each MP performs block processing in batches • A block is processed by only one MP • Each block is split into SIMD groups of threads (warps) • A warp is executed physically in parallel • A scheduler switches between warps • A warp contains threads of increasing, consecutive thread IDs • The warp size is currently 32 threads
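The grouping of consecutive thread IDs into warps can be modeled with a small sketch (Python is used only to illustrate the scheduling granularity; `warps_in_block` is a hypothetical helper, not a CUDA API):

```python
WARP_SIZE = 32  # warp size on the GeForce 8 series (and later NVIDIA GPUs)

def warps_in_block(threads_per_block):
    """Partition a block's consecutive thread IDs into warps of 32.
    A partially filled last warp still occupies a full scheduling slot."""
    tids = list(range(threads_per_block))
    return [tids[i:i + WARP_SIZE] for i in range(0, threads_per_block, WARP_SIZE)]
```

For example, a block of 100 threads occupies 4 warps, the last holding only 4 threads, so 28 of its 32 lanes sit idle. This is why block sizes that are multiples of the warp size use the hardware most efficiently.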
CUDA: Programming model • A grid consists of thread blocks (the diagram shows a 4×2 grid of blocks, each a 6×8 array of threads split into warps) • Each thread executes the kernel • Grid and block dimensions are specified by the application, bounded by GPU limits • 1-/2-/3-D grid layout • Thread and block IDs are unique
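The unique global coordinates of a thread follow from blockIdx * blockDim + threadIdx in each dimension. A small sketch mirroring a 4×2 grid of 6×8 blocks (`global_thread_index` is a hypothetical helper; Python is used only for illustration of the indexing rule):

```python
def global_thread_index(block_idx, block_dim, thread_idx):
    """Global (x, y) coordinates of a thread, CUDA-style:
    global = blockIdx * blockDim + threadIdx, per dimension."""
    bx, by = block_idx
    dim_x, dim_y = block_dim
    tx, ty = thread_idx
    return (bx * dim_x + tx, by * dim_y + ty)
```

Because each thread's (block ID, thread ID) pair is unique, the resulting global coordinates are unique across the whole grid, which is what lets each thread own one output pixel in an image kernel.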
CUDA: Memory model • Shared memory (R/W): for sharing data within a block • Texture memory: spatially cached • Constant memory: 64 KB, cached • Global memory: not cached; accesses should be coalesced • Explicit GPU memory allocation/de-allocation • Copying between CPU and GPU memory is slow