Parallelization of FFT in AFNI
Huang, Jingshan; Xi, Hong
Department of Computer Science and Engineering, University of South Carolina
Motivation • AFNI: a widely used software package for medical image processing • Drawback: it is not a real-time system • Our goal: build a parallelized version of AFNI • First step: parallelize the FFT part of AFNI
Outline • What is AFNI • FFT in AFNI • Introduction to MPI • Our method of parallelization • Experiment results and analysis • Conclusion
What is AFNI? • AFNI stands for Analysis of Functional NeuroImages. • It is a set of C programs (over 1,000 source code files) for processing, analyzing, and displaying functional MRI (FMRI) data; FMRI is a technique for mapping human brain activity. • AFNI is an interactive program for viewing the results of 3D functional neuroimaging.
How to run AFNI? • Log on to the cluster machine (daniel.cse.sc.edu) • Go to the directory /home/ramsey/newafnigo • Run “afni” • The AFNI interface should appear
AFNI Interfaces (three screenshot slides showing the Axial, Sagittal, and Coronal viewer windows; images not reproduced here)
FFT in AFNI • Fast Fourier Transform: an efficient algorithm for the finite, discrete Fourier transform, mapping data from the discrete time domain to the discrete frequency domain • Reduces the number of computations needed for N points from O(N²) to O(N log N) • Extensively used in AFNI • Parallelizing the FFT therefore has great significance for AFNI
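For reference, here is a minimal sketch of the classic radix-2 Cooley-Tukey recursion that achieves the O(N log N) bound (illustration only; this is not AFNI's csfft_cox code):

```c
/* Minimal radix-2 Cooley-Tukey FFT sketch (illustration only; not
 * AFNI's csfft_cox). Assumes n is a power of 2. */
#include <complex.h>

#define PI 3.14159265358979323846

void fft(double complex *x, int n)
{
    if (n <= 1) return;

    double complex even[n / 2], odd[n / 2];
    for (int k = 0; k < n / 2; k++) {   /* decimation in time */
        even[k] = x[2 * k];
        odd[k]  = x[2 * k + 1];
    }
    fft(even, n / 2);                   /* two half-size transforms ... */
    fft(odd,  n / 2);

    for (int k = 0; k < n / 2; k++) {   /* ... combined with twiddle factors */
        double complex t = cexp(-2.0 * I * PI * k / n) * odd[k];
        x[k]         = even[k] + t;
        x[k + n / 2] = even[k] - t;
    }
}
```

Each recursion level halves the problem and does O(N) combine work, which is where the O(N log N) total comes from.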
What is MPI? • MPI stands for Message-Passing Interface. • MPI is the most widely used approach to developing parallel systems. • MPI specifies a library of functions that can be called from C or Fortran programs. • The foundation of this library is a small group of functions that can be used to achieve parallelism by message passing.
What is Message Passing? • Explicitly transmits data from one process to another • A powerful and very general method of expressing parallelism • Drawback: it is the “assembly language of parallel computing” • A minimal example follows
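A minimal message-passing example in C (illustration only, not AFNI code): rank 0 explicitly names the receiver and sends a buffer; rank 1 explicitly receives it.

```c
/* Minimal MPI message-passing sketch. Compile with mpicc and run with
 * mpirun -np 2 (exactly two processes). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double buf[4] = { 1.0, 2.0, 3.0, 4.0 };

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* explicit transmission: sender names the destination and a tag */
        MPI_Send(buf, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %g %g %g %g\n",
               buf[0], buf[1], buf[2], buf[3]);
    }

    MPI_Finalize();
    return 0;
}
```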
What does MPI do for us? • Makes it possible to write libraries of parallel programs that are both portable and efficient • Using these libraries hides many of the details of parallel programming • It therefore makes parallel computing much more accessible to professionals in all branches of science and engineering
Our Objective • To parallelize the FFT part of AFNI • In AFNI, every FFT call in fact goes through the csfft_cox() function, whose structure appears on the next slide
Flow Chart of csfft_cox (flow chart not reproduced here) • csfft_cox dispatches on the transform length: small powers of two (fft2, fft4, fft8, fft16, fft32, fft64, fft128, fft256, fft512, fft1024) are handled by direct routines • fft2048, fft4096, fft8192, fft16384, and fft32768 are built on fft_4dec • lengths of the form 3n and 5n go through fft_3dec and fft_5dec • every path ends with SCLINV scaling and return
One-level parallelization • There are several options for parallelizing the csfft_cox() function. • At present, we adopt the one-level parallelization method: we parallelize the step in which fft4096() calls fft1024() and the step in which fft8192() calls fft2048(), as sketched below.
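A hedged sketch of this idea, not AFNI's actual code: the four quarter-length sub-transforms inside a length-4096 FFT are farmed out to 4 MPI ranks. It assumes rank 0 has already reordered the input into its four decimated subsequences, sub_fft() is a naive stand-in for AFNI's fft1024()/fft2048(), and the fft_4dec twiddle/combine pass is elided. Run with mpirun -np 4.

```c
/* Sketch: distribute the four sub-FFTs of a 4-decimation step over MPI. */
#include <mpi.h>
#include <complex.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* stand-in for fft1024()/fft2048(): naive O(m^2) DFT, illustration only */
static void sub_fft(double complex *x, int m)
{
    double complex y[m];
    for (int k = 0; k < m; k++) {
        y[k] = 0;
        for (int j = 0; j < m; j++)
            y[k] += x[j] * cexp(-2.0 * I * PI * k * j / m);
    }
    for (int k = 0; k < m; k++) x[k] = y[k];
}

int main(int argc, char **argv)
{
    enum { N = 4096 };               /* transform length, as in fft4096 */
    int rank, m = N / 4;
    static double complex x[N];      /* whole signal, significant on rank 0 */
    double complex sub[N / 4];       /* one decimated subsequence per rank */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)                   /* assumes x is already reordered into */
        for (int i = 0; i < N; i++)  /* its four decimated subsequences     */
            x[i] = i % 7;            /* arbitrary test data */

    MPI_Scatter(x, m, MPI_C_DOUBLE_COMPLEX,
                sub, m, MPI_C_DOUBLE_COMPLEX, 0, MPI_COMM_WORLD);

    sub_fft(sub, m);                 /* each rank: one quarter-size FFT */

    MPI_Gather(sub, m, MPI_C_DOUBLE_COMPLEX,
               x, m, MPI_C_DOUBLE_COMPLEX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sub-FFTs done; fft_4dec combine pass would run here\n");

    MPI_Finalize();
    return 0;
}
```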
Correctness of our parallel code • By running FFT and then inverse FFT back to back, we obtain a set of complex numbers that are almost identical to the ones in the original data file • The only differences come from floating-point rounding error (the original serial code exhibits the same phenomenon) • So, what is the speedup? (A sketch of this round-trip check follows.)
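One way to perform the round-trip check described above (a sketch, linked against the fft() routine from the earlier slide rather than AFNI's own code; the inverse transform is derived from fft() by conjugation and 1/N scaling):

```c
/* Round-trip check: max |IFFT(FFT(x)) - x| should be near machine epsilon. */
#include <complex.h>
#include <math.h>
#include <stdio.h>

void fft(double complex *x, int n);    /* radix-2 routine sketched earlier */

static void ifft(double complex *x, int n)
{
    for (int i = 0; i < n; i++) x[i] = conj(x[i]);
    fft(x, n);
    for (int i = 0; i < n; i++) x[i] = conj(x[i]) / n;
}

int main(void)
{
    enum { N = 4096 };
    static double complex x[N], x0[N];
    double err = 0.0;

    for (int i = 0; i < N; i++)        /* arbitrary test signal */
        x[i] = x0[i] = sin(0.01 * i) + 0.5 * I * cos(0.02 * i);

    fft(x, N);
    ifft(x, N);

    for (int i = 0; i < N; i++) {      /* worst-case round-trip error */
        double d = cabs(x[i] - x0[i]);
        if (d > err) err = d;
    }
    printf("max |IFFT(FFT(x)) - x| = %g\n", err);
    return 0;
}
```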
Two Kinds of Time • There are two kinds of time to consider when analyzing our experimental results: CPU time and wall-clock time (elapsed time). • CPU time is the time spent in the computational part of the code. • Wall-clock time is the total elapsed time from the user’s point of view.
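One way to measure both in an MPI C program (a sketch; compute() is a hypothetical stand-in for the FFT workload): clock() accumulates the calling process's CPU time, while MPI_Wtime() returns wall-clock time.

```c
/* Measuring CPU time vs. wall-clock time around a workload. */
#include <mpi.h>
#include <time.h>
#include <stdio.h>

static void compute(void)              /* hypothetical stand-in workload */
{
    volatile double s = 0.0;
    for (long i = 0; i < 100000000L; i++) s += (double)i * 1e-9;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    clock_t c0 = clock();              /* CPU time of this process only */
    double  w0 = MPI_Wtime();          /* wall-clock (elapsed) time */

    compute();

    double cpu  = (double)(clock() - c0) / CLOCKS_PER_SEC;
    double wall = MPI_Wtime() - w0;
    printf("CPU time: %.3f s, wall-clock time: %.3f s\n", cpu, wall);

    MPI_Finalize();
    return 0;
}
```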
Experiments • Time analysis of the original code (4096 * 200,000 * 1; timing table not reproduced here)
Experiments --- Cont. • Time analysis of the code parallelized on 2 processors (4096 * 200,000 * 1)
Experiments --- Cont. • Time analysis of the code parallelized on 4 processors (4096 * 200,000 * 1)
Analysis of speedup --- Cont. • Two main reasons we did not obtain the ideal speedup: 1. Contention among different users' jobs running on the same CPUs. 2. Communication cost and other overhead make the ideal speedup unattainable on real machines. • A simple model of the second effect follows.
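As a rough illustration of the second point (with made-up numbers, not values measured in these experiments), adding a communication term to the ideal speedup shows why the measured curve falls short:

```latex
% Speedup of p processors once communication overhead is included:
\[
  S(p) = \frac{T_1}{\,T_1/p + T_{\mathrm{comm}}(p)\,}
\]
% Illustrative numbers only: if T_1 = 100 s, p = 4, and T_comm = 10 s, then
% S(4) = 100 / (100/4 + 10) = 100/35 \approx 2.86, well below the ideal 4.
```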
Conclusion • We have parallelized the FFT part of the AFNI software package based on MPI. • The results show that, for the FFT algorithm itself, we obtain a speedup of around 30 percent. • Future work: increase the speedup of the FFT parallelization in the 3dDeconvolve program.