Research Update. Friday Presentation, Fall 2008 (October 31, 2008). Research Presentation, The University of Tennessee, Knoxville. By Muharrem Mercimek
Outline • Image registration • Gaussian Fields Framework vs. ICP • Experiments evaluating the efficiency of the new registration method • Deconvolution • STEM imaging
Image Registration
Registration goal: to transform sets of surface measurements into a common coordinate system.
Four main groups, according to the manner of image acquisition:
• Different viewpoints (multi-view analysis)
• Different times (multi-temporal analysis)
• Different sensors (multi-modal analysis)
• Scene-to-model registration
General registration pipeline components:
• Reference and floating datasets
• Correspondence
• Transformation model
• Similarity criterion
• Optimization method
ICP: Iterative Closest Point registration [1]. Correspondences: ICP assumes the closest points correspond to each other, so the initial estimate of the transformation must be sufficiently close to the correct registration.
[1] Besl, P.J. and McKay, N.D., 'A method for registration of 3-D shapes', IEEE Trans. Patt. Anal. Machine Intell., Vol. 14, No. 2, pp. 239-256, 1992.
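The closest-point assumption above can be made concrete with a minimal point-to-point ICP sketch in NumPy; this is an illustrative sketch (brute-force nearest neighbors, Kabsch SVD step), not Besl and McKay's exact implementation:

```python
import numpy as np

def icp(ref, mov, iters=50):
    """Minimal point-to-point ICP: returns R, t with mov @ R.T + t ~ ref."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = mov @ R.T + t
        # Step 1: closest-point correspondences (brute force for clarity).
        d = np.linalg.norm(moved[:, None, :] - ref[None, :, :], axis=2)
        matched = ref[d.argmin(axis=1)]
        # Step 2: best rigid transform for these pairs via SVD (Kabsch).
        mu_m, mu_r = moved.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((moved - mu_m).T @ (matched - mu_r))
        if np.linalg.det(Vt.T @ U.T) < 0:   # guard against reflections
            Vt[-1] *= -1
        Rk = Vt.T @ U.T
        # Step 3: compose with the running estimate.
        R, t = Rk @ R, Rk @ (t - mu_m) + mu_r
    return R, t
```

As the slide notes, the nearest-neighbor step is only valid when the initial pose is already close; with a large initial misalignment the correspondences are wrong and the iteration converges to a local minimum.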
Gaussian Fields Framework
• The criterion measures the proximity of floating and reference model points together with their shape attributes (saliency and moment-invariant values per point), weighted by the inverse of an attribute covariance matrix and a confidence parameter.
• The smooth behavior of the registration criterion lets a standard optimization scheme extend the range of convergence.
• There is no need for close initialization, overcoming the local-convergence problems of standard Iterative Closest Point (ICP) algorithms.
[Figure: real-world datasets used in the analysis, obtained using a Cyberware 3030 MS scanner [1]. For each dataset a color image is shown to the left, then the 3D views in unregistered position in the middle, followed by the registered views to the right.]
[1] G. Turk, M. Levoy, 'Zippered polygon meshes from range images', in: Proc. ACM SIGGRAPH, 1994, pp. 311-318.
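A bare-bones version of the criterion, dropping the attribute weighting for brevity, can be sketched as a sum of Gaussians over all point pairs (an illustrative sketch under that simplification, not the authors' full criterion):

```python
import numpy as np

def gaussian_field_energy(mov, ref, sigma):
    """Simplified Gaussian-fields registration criterion: a sum of Gaussians
    over all (moving, reference) point pairs. The sum is smooth and
    differentiable everywhere, so a standard optimizer can maximize it
    without requiring a close initial alignment."""
    d2 = ((mov[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma**2).sum()
```

The energy is largest when the two point sets overlap; sigma controls how far apart a pair can be while still contributing, which is exactly the trade-off studied in the "Effect of Sigma parameter" experiments below.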
Gaussian Fields Framework vs. ICP
Experiments:
• Effect of noise
• Overlap and outliers
• Effect of resolution
• Effect of the sigma parameter
• Basins of convergence
Effect of the Sigma Parameter [Figure: reference and moving models; registration results for sigma = 5%, 10%, 20%, 40%, 60%, 80%, and 100%.]
Basins of Convergence [Figure: convergence basins plotted over the parameter range -0.6 to 0.6.]
Outline • Image registration • Gaussian Fields Framework vs. ICP • Experiments evaluating the efficiency of the new registration method • Deconvolution • Linear and iterative deconvolution methods • STEM imaging
Deconvolution
In many imaging systems the signal of interest is degraded by noise, blur, and the presence of other extraneous data. Separating the data stream into its useful components, based on the shift-invariance and linearity assumptions, is defined as deconvolution.
g = h * f + n, where g is the degraded image, f the true image, h the point spread function (PSF), n additive noise, and * the convolution operation.
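The degradation model can be sketched in a few lines of NumPy (1-D case, 'same'-sized convolution; the PSF and noise level are placeholders, not values from the experiments):

```python
import numpy as np

def degrade(f, h, noise_sigma, rng):
    """Forward model of the degradation: g = h * f + n.
    f: true signal, h: point spread function, n: additive Gaussian noise."""
    g = np.convolve(f, h, mode="same")
    return g + rng.normal(0.0, noise_sigma, size=g.shape)
```

All the linear and iterative methods that follow try to invert this forward model, differing only in how they regularize the inversion.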
Linear Methods: Tikhonov Regularization
The convolution operation can be written as a matrix multiplication g = H f, where H is a Toeplitz matrix built from h (and different from the Fourier transform of h).
Tikhonov regularization minimizes ||g - H f||^2 + lambda^2 ||f||^2; the only difference from the least-squares method is the regularization parameter lambda.
Using the SVD of H, the regularized solution is
f = sum_{j=1..r} [ sigma_j / (sigma_j^2 + lambda^2) ] (u_j . g) v_j,
where u_j and v_j are the jth column vectors of U and V, sigma_j is the jth singular value of Sigma, and r is a number less than or equal to the rank of H.
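A small NumPy sketch of the Toeplitz construction and the SVD-based Tikhonov solution (the sizes and PSF are illustrative; lam = 0 reduces to the pseudo-inverse and requires nonzero singular values):

```python
import numpy as np

def convolution_matrix(h, n):
    """Banded Toeplitz matrix H with H @ f == np.convolve(f, h, mode='same')."""
    m = len(h)
    c = (m - 1) // 2            # offset used by numpy's 'same' mode
    H = np.zeros((n, n))
    for i in range(n):
        for k in range(m):
            j = i + c - k
            if 0 <= j < n:
                H[i, j] = h[k]
    return H

def tikhonov_svd(H, g, lam):
    """Tikhonov solution via the SVD of H:
    f = sum_j sigma_j / (sigma_j^2 + lam^2) * (u_j . g) * v_j."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ g))
```

The filter factor sigma_j^2 / (sigma_j^2 + lam^2) damps the small singular values that would otherwise amplify noise, which is the whole point of the regularization.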
Linear Methods: Least Squares
The basic form approximates the true function with the normal equations, f = (H^T H)^(-1) H^T g. The least-squares solution with the SVD is
f = sum_{j=1..r} (u_j . g / sigma_j) v_j.
Total Variation Method
Most regularization methods expect the data to be reconstructed to be smooth and continuous. Total variation is independent of this assumption and preserves the edge information in the reconstructed data:
min_f ||g - H f||^2 + lambda * sum_i sqrt( (D_h,i f)^2 + (D_v,i f)^2 ),
where lambda is the regularization parameter and D_h,i, D_v,i are linear first-order difference operators at pixel i along the horizontal and vertical directions.
J. Bioucas-Dias, M. Figueiredo, and J. Oliveira, 'Total variation image deconvolution: a majorization-minimization approach', in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP '06), Vol. 2.
Iterative Methods
• Van Cittert: f_{k+1} = f_k + beta (g - h * f_k); the correction term adjusts the kth estimate of f, and the relaxation parameter beta can be used to tune the iterations.
• Constrained iterative: the non-negativity constraint is added to the update.
• Relaxation-based iterative (Jansson's method): a relaxation function applies natural corrections during the iterative updates; the upper magnitude limit is taken as the upper bound c of the data, and the lower limit is taken as 0.
• Gold's ratio: a multiplicative (ratio-based) update.
• Pre-filtering before applying deconvolution: noise strongly deteriorates the quality of the approximation, so it is always advantageous to pre-filter the blurred image before the iterations start.
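The Van Cittert update, with the optional non-negativity constraint of the constrained variant, can be sketched in NumPy (an illustrative sketch of the textbook iteration, not the exact code behind the experiments; convergence requires the PSF's spectrum to lie in (0, 2/beta)):

```python
import numpy as np

def van_cittert(g, h, iters=2000, beta=1.0, nonneg=False):
    """Van Cittert iteration f_{k+1} = f_k + beta * (g - h * f_k).
    nonneg=True adds the non-negativity projection of the 'constrained
    iterative' variant; beta is the relaxation parameter."""
    f = g.copy()                    # standard starting estimate: f_0 = g
    for _ in range(iters):
        f = f + beta * (g - np.convolve(f, h, mode="same"))
        if nonneg:
            f = np.maximum(f, 0.0)  # project onto the feasible set f >= 0
    return f
```

With noise present the residual g - h * f_k contains noise that the iteration amplifies, which is why the slides recommend pre-filtering g before the iterations start.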
1D Deconvolution Experiments • The data and PSF functions are created synthetically.
1D Deconvolution Experiments: Tikhonov Regularization [Figure: reconstructions from noise-free and noisy data.]
1D Deconvolution Experiments: TV Regularization [Figure: reconstructions from noise-free and noisy data.]
1D Deconvolution Experiments: Van Cittert [Figure: reconstructions from noisy data, with and without pre-filtering.]
1D Deconvolution Experiments: Constrained Iterative Method (non-negativity constraint) [Figure: reconstructions from noisy data, with and without pre-filtering.]
1D Deconvolution Experiments: Jansson's Iterative Method [Figure: reconstructions from noisy data, with and without pre-filtering.]
2D Deconvolution Experiments [Figure: a) true image, b) PSF, c) observed image g', d) pre-filtered observed image; 5% random noise.]
2D Deconvolution Experiments [Figure: a) TV approximation with b) its MSE; a) Tikhonov approximation with b) its MSE.]
2D Deconvolution Experiments: Van Cittert [Figure: a) direct approximation, b) MSE of a), c) approximation with pre-filtering, d) MSE of c).]
2D Deconvolution Experiments: Constrained Iterative (truncation) [Figure: a) direct approximation, b) MSE of a), c) approximation with pre-filtering, d) MSE of c).]
2D Deconvolution Experiments: Jansson's Method [Figure: a) direct approximation, b) MSE of a), c) approximation with pre-filtering, d) MSE of c).]
Conclusions
• Several deconvolution algorithms were implemented.
• Using a Toeplitz matrix makes the 1-D problem easier to handle.
• The 2-D deconvolution process is computationally expensive for TV and Tikhonov when naive numerical algorithms are used, such as computing the SVD or inverting the matrices.
Outline • Image registration • Deconvolution • Stem Imaging • 3D Deconvolution of a unique dataset
STEM Imaging
• 3D characterization of both biological samples and non-biological materials provides nanometer-scale resolution in three dimensions, permitting the study of complex relationships between structure and function.
• A collection of 2D images of an object taken at different planes through depth sectioning (or optical sectioning) is the basis for reconstructing specific properties of the specimen in 3D space.
• Revealing atomic arrangements is indispensable for (1) first-principles calculations, (2) chemical-reactivity measurements, (3) electrical properties, (4) point defects, and (5) optical properties.
*Depth sectioning: http://www.loci.wisc.edu/optical/sectioning.html
STEM Imaging
Most common STEM signals:
• HAADF: high-angle annular dark field images (theta > 3°)
• ADF: annular dark field images (theta = 0.5°-3°)
• BF: bright field images
• During depth sectioning, high-angle electron scattering from each slice contributes to an HAADF image, reflecting the probability that an electron strikes a certain position on the screen or detector.
[Figure: sketch of the basic principle for depth sectioning a sample by acquiring a through-focal series. Z-contrast image of an Al-Co-Ni alloy: metal sites (large circles) are distinguished from Al sites (small circles) purely on the basis of intensity; Al=13, Co=27, Ni=28.]
The STEM Data
Several defocus sections of the Z-contrast image of the material.
• Lateral: 1024x1024 pixels at 4 A˚/pixel (1024 x 0.4 nm = 409.6 nm).
• Axial: 40 slices with 20 nm spacing (40 x 20 nm = 800 nm).
• Obtained with a VG HB603U microscope with Nion aberration corrector at Oak Ridge National Laboratory by Materials Science and Technology Division researchers.
Applied Method
In this study we used the Dougherty method*, an iterative non-negative least-squares solver. It is a linear deconvolution method in which the PSF is known and we try to recover the true image from the degraded image.
* Dougherty, R.P., 'Extensions of DAMAS and benefits and limitations of deconvolution in beamforming', AIAA Paper 2005-2961, May 2005.
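Dougherty's exact DAMAS extension is not reproduced here, but the core idea of an iterative non-negative least-squares solver can be illustrated with a generic projected-gradient sketch (the dense matrix H, step size, and iteration count are illustrative assumptions, not details of the actual solver):

```python
import numpy as np

def nnls_pg(H, g, iters=5000):
    """Projected-gradient sketch of iterative non-negative least squares:
    a gradient step on ||g - H f||^2 followed by projection onto f >= 0.
    Illustrative only -- not Dougherty's exact DAMAS extension."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1/L, L = Lipschitz constant
    f = np.zeros(H.shape[1])
    for _ in range(iters):
        f = f - step * (H.T @ (H @ f - g))   # gradient step
        f = np.maximum(f, 0.0)               # non-negativity projection
    return f
```

The non-negativity constraint is what makes such solvers attractive for intensity data like the STEM sections: a physical image cannot have negative values, and the constraint also suppresses the ringing that unconstrained inverses produce.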
3D STEM Visualization [Figures: raw models; aligned models.]
3D STEM Visualization [Figures: aligned and deconvolved models.]
3D STEM Visualization [Figures: raw data; aligned data; aligned and deconvolved data.]
Problems with Data
We have to discard many points to establish scale consistency between the data and the PSF (roughly 1 out of 80 points in x-y, 1 out of 200 points in z).
• Data 05_266k_OutputDF_-20nmsteps and 06_266k_OutputDF_-20nmsteps_Y-10: x-y 1024x1024, xyres = 0.4 nm, xydim = 409.6 nm; z: 40 frames, zres = 20 nm, zdim = 800 nm; relative frame positions along the z axis from -400 nm to +400 nm; file size 160 MB (32-bit data).
• PSF: x-y 256x256, xyres = 0.005 nm, xydim = 1.28 nm; z: 401 frames, zres = 0.1 nm, zdim = 40 nm; relative frame positions from -20 nm to +20 nm; file size 25 MB (32-bit data).
• Data 04_266k_OutputDF: x-y 1024x1024, xyres = 0.4 nm, xydim = 409.6 nm; z: 20 frames, zres = 20 nm, zdim = 400 nm; relative frame positions from -200 nm to +200 nm; file size 80 MB (32-bit data).
What we used in our experiments is a 17x17x5 PSF instead of 4x4x3, but this surely misleads us away from the realistic solution.
Problems with Data
For the registration step of the three datasets, the main difficulty is the big difference between the x-y and z resolutions. We can at least apply interpolation to equalize the resolutions.
• BEFORE interpolation (05_266k_OutputDF_-20nmsteps, 06_266k_OutputDF_-20nmsteps_Y-10): x-y 1024x1024, xyres = 0.4 nm, xydim = 409.6 nm; z: 40 frames, zres = 20 nm, zdim = 800 nm; file size 160 MB (32-bit data).
• AFTER interpolation: x-y 1024x1024, xyres = 0.4 nm, xydim = 409.6 nm; z: 40x50 frames, zres = 0.4 nm, zdim = 800 nm; file size 160 MB x 50 = 7.8125 GB (32-bit data).
TOTAL: approximately 19.53 GB for the three datasets.
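The z-interpolation described above can be sketched with plain NumPy linear interpolation (`upsample_z` is a hypothetical helper name; the factor of 50 matches the 20 nm to 0.4 nm spacing change, and linear interpolation stands in for whatever scheme was actually used):

```python
import numpy as np

def upsample_z(vol, factor):
    """Linearly interpolate a (z, y, x) volume along z so that the z-spacing
    matches the lateral spacing (e.g. factor=50 turns 20 nm into 0.4 nm)."""
    z = vol.shape[0]
    zs = np.arange(z)
    znew = np.linspace(0, z - 1, (z - 1) * factor + 1)
    # np.interp works on 1-D arrays, so apply it column-by-column along z.
    return np.apply_along_axis(lambda col: np.interp(znew, zs, col), 0, vol)
```

The 50x blow-up in frame count is exactly where the ~7.8 GB per-dataset figure comes from: the interpolation equalizes the resolutions but multiplies the storage accordingly.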
Conclusions
• In this research a unique dataset, obtained with a high-resolution electron microscope, was studied.
• An iterative non-negative least-squares solver proposed by Dougherty was used to approximate the true image function.
• Two main problems affecting the 3D visualization of the data were handled: misalignment of the depth sections, and the degradations seen on the image model.
• A significant visual enhancement was obtained. Since simplifications were assumed for the PSF, the real deconvolution process is only approximated.