LTSI
Semi-nonnegative INDSCAL analysis
Ahmad Karfoul (1), Julie Coloigner (2,3), Laurent Albera (2,3), Pierre Comon (4,5)
(1) Faculty of Mech. & Elec. Engineering, University AL-Baath, Syria
(2) Laboratory LTSI - INSERM U642, France
(3) University of Rennes 1, France
(4) Laboratory I3S - CNRS, France
(5) University of Nice Sophia-Antipolis, France
Outline
• Preliminaries and problem formulation
• Global line search
• Optimization methods
• A compact matrix form of derivatives
• Numerical results
• Conclusion
Preliminaries and problem formulation
Outer product
• Order 3: the outer product of three vectors, $(a \circ b \circ c)_{ijk} = a_i b_j c_k$, is a rank-one third-order tensor.
• Order q: the outer product of q vectors is a rank-one q-th order tensor.
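A minimal numpy illustration of this rank-one construction (array names and sizes are mine, not the slides'):

```python
import numpy as np

a, b, c = np.arange(1, 4.0), np.arange(1, 5.0), np.arange(1, 3.0)

# Outer product of three vectors: a rank-one third-order tensor T_ijk = a_i b_j c_k.
T = np.einsum('i,j,k->ijk', a, b, c)

# Order q: fold the outer product over a list of q vectors.
def rank_one(vectors):
    T = vectors[0]
    for v in vectors[1:]:
        T = np.multiply.outer(T, v)   # each vector adds one mode
    return T

assert np.allclose(T, rank_one([a, b, c]))
```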
Preliminaries and problem formulation
Notations
• $T_{(i)}$: tensor-to-rectangular-matrix transformation (unfolding according to the i-th mode).
• $\mathrm{vec}(\mathcal{T})$: tensor-to-vector transformation.
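A sketch of these two transformations for a third-order array; the unfolding convention (rows indexed by the chosen mode, remaining modes flattened) is an assumption, since several orderings coexist in the literature:

```python
import numpy as np

def unfold(T, mode):
    """Mode-i unfolding: move `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def vec(T):
    """Tensor-to-vector transformation."""
    return T.reshape(-1)

T = np.arange(24.0).reshape(2, 3, 4)
print(unfold(T, 1).shape)   # (3, 8)
print(vec(T).shape)         # (24,)
```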
Preliminaries and problem formulation
CANonical Decomposition (CAND) [Hitchcock 1927], [Carroll & Chang 1970], [Harshman 1970]
• CAND: a linear combination of a minimal number of rank-one terms, as written below.
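In standard notation, the CAND of a third-order tensor $\mathcal{T}$ with weights $\lambda_1, \dots, \lambda_P$ reads:

```latex
\mathcal{T} \;=\; \sum_{p=1}^{P} \lambda_p \; a^{(p)} \circ b^{(p)} \circ c^{(p)}
```

where $P$ is minimal, i.e., the rank of the tensor.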
Preliminaries and problem formulation
INDSCAL decomposition [Carroll & Chang 1970]
Preliminaries and problem formulation
CANonical Decomposition (CAND) vs. INDSCAL decomposition
• INDSCAL = CAND of a third-order tensor symmetric in two of its three modes.
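Concretely, the symmetry ties two loading vectors together. Writing $A = [a^{(1)}, \dots, a^{(P)}]$ and letting $D_k$ be the diagonal matrix carrying the k-th row of the third-mode loadings (weights $\lambda_p$ absorbed):

```latex
\mathcal{T} \;=\; \sum_{p=1}^{P} \lambda_p \; a^{(p)} \circ a^{(p)} \circ c^{(p)}
\qquad\Longleftrightarrow\qquad
T_k \;=\; A \, D_k \, A^{\mathsf{T}}
```

where $T_k$ denotes the k-th frontal slice of $\mathcal{T}$.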
Preliminaries and problem formulation
(Semi-)nonnegative INDSCAL decomposition for (semi-)nonnegative BSS
• Example: diagonalizing a set of covariance matrices.
• A: the (N×P) mixing matrix; s: zero-mean random vector of P statistically independent components.
• Covariance matrix of x = As: $R_x = A R_s A^{\mathsf{T}}$, where $R_s$ is diagonal by independence of the components.
• Case 1: nonnegative INDSCAL decomposition.
• Case 2: semi-nonnegative INDSCAL decomposition.
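A small numpy illustration of why such covariance matrices carry the INDSCAL structure (sizes and names are mine): each slice is $A D_k A^{\mathsf{T}}$ with $D_k$ diagonal, hence the stack is symmetric in two modes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, K = 4, 3, 5                    # sensors, sources, number of covariance matrices

A = rng.random((N, P))               # nonnegative mixing matrix
slices = []
for k in range(K):
    Dk = np.diag(rng.random(P))      # diagonal source covariance (independence)
    slices.append(A @ Dk @ A.T)      # R_k = A D_k A^T

T = np.stack(slices, axis=2)         # N x N x K array, symmetric in modes 1 and 2
assert np.allclose(T, T.transpose(1, 0, 2))
```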
Preliminaries and problem formulation
Problem at hand
• Problem 1: given $\mathcal{T}$, find its INDSCAL decomposition subject to a nonnegativity constraint: a constrained problem [Chu et al. 04].
• Problem 2: given $\mathcal{T}$, find its INDSCAL decomposition: an unconstrained problem.
• Parametrizing the nonnegativity constraint as $A = B \boxdot B$ ($\boxdot$: Hadamard product, i.e. element-wise product) turns the constrained problem into an unconstrained one in B.
Preliminaries and problem formulation
• Solution: minimize the cost function $\psi(B, C) = \lVert T_{(3)} - C\,(A \odot A)^{\mathsf{T}} \rVert_F^2$ with $A = B \boxdot B$ ($\odot$: Khatri-Rao product); see the numpy sketch below.
• Some iterative algorithms, requiring the first- and second-order derivatives of ψ:
  – Steepest Descent
  – Newton
  – Levenberg-Marquardt
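A minimal numpy version of this cost, assuming the symmetric modes are the first two and the weights $\lambda_p$ are absorbed into C; the mode-3 unfolding convention is also an assumption:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker (Khatri-Rao) product of (I,P) and (J,P) matrices."""
    I, P = U.shape
    J, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, P)

def cost(T, B, C):
    """psi(B, C) = ||T_(3) - C (A kr A)^T||_F^2 with A = B * B (Hadamard square)."""
    A = B * B                                   # element-wise square: A is nonnegative
    K = T.shape[2]
    T3 = T.transpose(2, 0, 1).reshape(K, -1)    # mode-3 unfolding, one row per slice
    return np.linalg.norm(T3 - C @ khatri_rao(A, A).T) ** 2
```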
Optimization methods
Global line search (1/2)
• Look for the global optimum in a given direction.
• Update rules: $A \leftarrow A + \mu_A G_A$ and $C \leftarrow C + \mu_C G_C$, where $\mu_A, \mu_C$ are the learning steps and $G_A, G_C$ are the directions given by the iterative algorithm with respect to A and C, respectively.
Optimization methods
Global line search (2/2)
• Minimization with respect to $\mu_A$ and $\mu_C$ for a third-order tensor symmetric in two modes (see the sketch below):
  – Global optimum in the considered direction for $\mu_C$: stationary point of a quadratic polynomial; a 24-th degree polynomial arises when the steps are optimized jointly.
  – Global optimum in the considered direction for $\mu_A$: stationary point of a 10-th degree polynomial.
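Since ψ restricted to a search line is a polynomial of known degree in the step, the global step can be found exactly. A generic sketch of this idea (the sampling points and tolerance are my choices):

```python
import numpy as np

def global_step(psi, degree):
    """Global line search: psi(mu) is a polynomial of `degree` in the step mu
    (e.g. 2 for the C-step, 10 for the A-step). Fit it exactly from degree+1
    samples, then return the real stationary point with the smallest cost."""
    mus = np.linspace(-2.0, 2.0, degree + 1)
    coeffs = np.polyfit(mus, [psi(m) for m in mus], degree)   # exact interpolation
    roots = np.roots(np.polyder(coeffs))                      # stationary points
    real = roots[np.abs(roots.imag) < 1e-9].real
    return min(real, key=lambda m: np.polyval(coeffs, m))
```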
Optimization methods
Steepest Descent (SD)
• Search for stationary points of ψ based on its first-order approximation (i.e., the gradient).
• Update rules: $A \leftarrow A - \mu_A \nabla_A\psi$ and $C \leftarrow C - \mu_C \nabla_C\psi$, where $\mu_A, \mu_C$ are the learning steps and $\nabla_A\psi, \nabla_C\psi$ the gradients of ψ with respect to A and C, respectively.
• In this work, the learning steps are optimal (optimal line search): global optimum in the considered direction.
• Gradients are given in a compact matrix form.
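A toy alternating-SD loop under these rules, reusing cost() from the sketch above. For brevity the gradient is a finite-difference stand-in for the compact closed forms, and the step is fixed rather than globally optimal:

```python
import numpy as np

def fd_grad(f, X, eps=1e-6):
    """Finite-difference gradient stand-in for the closed-form gradients."""
    G = np.zeros_like(X)
    for idx in np.ndindex(*X.shape):
        Xp, Xm = X.copy(), X.copy()
        Xp[idx] += eps
        Xm[idx] -= eps
        G[idx] = (f(Xp) - f(Xm)) / (2 * eps)
    return G

def steepest_descent(T, B, C, mu=1e-3, iters=200):
    """Alternate descent steps on B (so A = B*B stays nonnegative) and C."""
    for _ in range(iters):
        B = B - mu * fd_grad(lambda X: cost(T, X, C), B)
        C = C - mu * fd_grad(lambda X: cost(T, B, X), C)
    return B, C
```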
Optimization methods
Steepest Descent (SD): a compact matrix form of the gradients
• Computing the differential of ψ makes the gradients immediate: once dψ is arranged as $\mathrm{tr}(G_A^{\mathsf{T}}\,\mathrm{d}A) + \mathrm{tr}(G_C^{\mathsf{T}}\,\mathrm{d}C)$, the factors $G_A$ and $G_C$ are the sought gradients.
Compact matrix form of derivatives
Gradient computation of ψ(A, C)
• The compact expressions are built from: a commutation matrix of size (IP×IP), the N-dimensional vector of ones, and the identity matrix of size (N×N).
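The C-block of the gradient is simple enough to reproduce here: since ψ is quadratic in C, $\nabla_C\psi = -2\,(T_{(3)} - C Z^{\mathsf{T}})\,Z$ with $Z = A \odot A$. A sketch under the conventions of cost() above (the A-block, where the commutation matrix enters, is omitted):

```python
import numpy as np

def grad_C(T, B, C):
    """Closed-form gradient of psi with respect to C."""
    A = B * B
    K = T.shape[2]
    T3 = T.transpose(2, 0, 1).reshape(K, -1)   # mode-3 unfolding
    Z = khatri_rao(A, A)                       # from the cost() sketch above
    return -2.0 * (T3 - C @ Z.T) @ Z
```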
Optimization methods
Newton
• Include the second-order approximation to accelerate convergence.
• Update rules: $A \leftarrow A - \mu_A H_A^{-1}\nabla_A\psi$ and $C \leftarrow C - \mu_C H_C^{-1}\nabla_C\psi$, where $H_A, H_C$ are the Hessians of ψ with respect to A and C, respectively.
• In this work: learning steps are also computed optimally (global line search); Hessians are given in a compact matrix form.
Optimization methods
Newton: EVD-based regularization
• Convergence requirement: the Hessians must be positive definite.
• Problem: lack of positive definiteness causes lack of convergence and slowness.
• Solution: regularize via an Eigen-Value Decomposition (EVD)-based technique, $H = U\,\Sigma\,U^{\mathsf{T}}$, with U the matrix of eigenvectors and Σ = diag{λ1, …, λNP} the diagonal matrix of eigenvalues.
• Two variants: mNewton1 replaces all negative eigenvalues by one; mNewton2 additionally tests a ratio of the eigenvalues before correcting the spectrum.
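An mNewton1-style sketch of this regularization; the eigenvalue floor of 1 follows the rule above, and the symmetrization is my precaution:

```python
import numpy as np

def regularize_hessian(H, floor=1.0):
    """Make H positive definite: eigendecompose, replace negative eigenvalues."""
    w, U = np.linalg.eigh((H + H.T) / 2)   # EVD of the symmetrized Hessian
    w[w <= 0] = floor                      # replace non-positive eigenvalues by one
    return U @ np.diag(w) @ U.T
```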
Optimization methods
Levenberg-Marquardt (LM)
• Based on a linear approximation of the residual components in the neighborhood of A / C.
• Update rules: $A \leftarrow A - (J_A^{\mathsf{T}} J_A + \lambda I)^{-1} J_A^{\mathsf{T}}\, r$ (and similarly for C), where $J_A$ is the Jacobian of the residual with respect to A, and λ is the damping parameter influencing both the direction and the size of the step [Madsen et al. 2004].
• The Jacobians are computed from the differentials derived above.
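A generic damped Gauss-Newton step matching this rule; here J is the Jacobian of the vectorized residual r with respect to the vectorized block being updated:

```python
import numpy as np

def lm_step(J, r, lam):
    """Solve (J^T J + lam I) dx = -J^T r for the LM parameter update dx."""
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), -J.T @ r)
```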
Numerical results
Convergence speed vs. SNR
• Noise-free random third-order tensor $\mathcal{T}$.
• Noisy 3-way array: $\mathcal{T}_\sigma = \mathcal{T} + \sigma\,\mathcal{N}$, where $\mathcal{N}$ is zero-mean normally distributed noise and σ is a scalar controlling the noise level.
• Results averaged over 200 Monte Carlo realizations.
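A sketch of this setup, scaling σ to hit a target SNR; the 20·log10 norm-ratio definition of SNR is an assumption:

```python
import numpy as np

def add_noise(T, snr_db, seed=0):
    """Return T + sigma * N with sigma chosen for the requested SNR in dB."""
    Nz = np.random.default_rng(seed).standard_normal(T.shape)
    sigma = np.linalg.norm(T) / (np.linalg.norm(Nz) * 10 ** (snr_db / 20))
    return T + sigma * Nz
```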
Numerical results
Convergence speed vs. SNR: plots at SNR = 0 dB, 15 dB, and 30 dB.
Conclusion
• Solved an unconstrained reformulation of the semi-nonnegative INDSCAL problem.
• The differential concept: a powerful tool for deriving compact matrix forms.
• Global line search for the symmetric case: global optimum in the considered direction.
• Iterative algorithms with global line search: a suitable step towards the global optimum.
• Algebraic method + iterative method with global line search: the global optimum.