10. Local Projective Noise Reduction
• Theory
• An (m−1)-dimensional map x(n) = F1(x(n−m+1), …, x(n−1)) can be written implicitly as F2(x(n), x(n−m+1), …, x(n−1)) = 0. This can be linearized as:
• a · (x(n) − x_mean) = 0
• where x_mean is the centre of mass of the delay vectors in a small neighborhood of x(n).
• In the above equation, we can also focus the noise reduction on the most stable middle coordinates by rewriting it as:
• a · R (x(n) − x_mean) = 0
• where R is a diagonal matrix with large R11 and Rmm (relative to the middle elements).
• To reduce noise, we find the eigenvectors in the nullspace (i.e., the a's that satisfy the above equation) of the covariance matrix of the weighted samples z(n) = R (s(n) − s_mean). If the actual dimension is m0 and the embedding dimension is m, then the dimension of the nullspace (i.e., the number of eigenvectors in this space) is Q = m − m0.
• The noise-removed component is: z_clean(n) = z(n) − Σq aq (aq · z(n))
• where the sum runs over the Q eigenvectors of the nullspace (a code sketch of this projection step follows below).
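The projection step above is compact enough to show directly. Below is a minimal NumPy sketch of it, assuming the neighborhood has already been found; the function name project_out_noise and its parameter names are illustrative, not from the slides, and the full algorithm (e.g., as in Kantz & Schreiber's GHKSS scheme) adds refinements omitted here.

```python
# Minimal sketch of the local projection step (illustrative, not the
# definitive implementation).
import numpy as np

def project_out_noise(s_n, neighbors, R, Q):
    """Correct one delay vector by projecting out the Q nullspace directions.

    s_n       : (m,) delay vector to be corrected
    neighbors : (k, m) delay vectors in a small neighborhood of s_n
    R         : (m,) diagonal of the weighting matrix (large first/last entries)
    Q         : nullspace dimension, Q = m - m0
    """
    s_mean = neighbors.mean(axis=0)          # centre of mass of the neighborhood
    Z = R * (neighbors - s_mean)             # weighted samples z(n) = R (s(n) - s_mean)
    C = Z.T @ Z / len(neighbors)             # covariance matrix of the weighted samples
    eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues returned in increasing order
    z_n = R * (s_n - s_mean)
    noise = np.zeros_like(z_n)
    for q in range(Q):                       # Q eigenvectors spanning the nullspace
        a = eigvecs[:, q]
        noise += a * (a @ z_n)               # a (a . z(n))
    z_clean = z_n - noise                    # z_clean(n) = z(n) - sum over nullspace
    return s_mean + z_clean / R              # undo the weighting and centring
```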
10. Local Projective Noise Reduction
• Implementation
• Embed the time series (of dimension m0 < m) in an m-dimensional space; Q = m − m0.
• For each embedded vector s(n), find its neighbors.
• Calculate the centre of mass at each point as the sample mean of the neighbors.
• Set the diagonal weighting matrix R appropriately, e.g. R11 = Rmm = 1000 and Rii = 1 otherwise.
• Calculate the covariance matrix C and its eigenvectors and eigenvalues (in increasing order).
• Using the eigenvectors corresponding to the Q smallest eigenvalues, form the estimate of the noise vector and remove this component from the actual vector.
• Since each scalar observable of the time series appears in several embedded vectors, take the weighted average of the contributions from the predicted values as the clean estimate (see the end-to-end sketch after this list).
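The steps above fit into one short pipeline. The sketch below ties them together under stated simplifications: the name noise_reduce and all parameter defaults are assumptions, neighbors are taken as the n_neighbors closest delay vectors via a brute-force O(N²) search, and the final averaging is a simple (unweighted) mean rather than the weighted average the slides call for.

```python
# Hedged end-to-end sketch of one pass of local projective noise reduction.
# Simplifications: brute-force neighbor search, simple averaging of the
# per-vector corrections.
import numpy as np

def noise_reduce(x, m=7, Q=5, n_neighbors=20, weight=1000.0):
    """One pass over a scalar series x; m is the embedding dimension,
    Q = m - m0 the number of nullspace directions projected out."""
    x = np.asarray(x, dtype=float)
    N = len(x) - m + 1
    S = np.array([x[i:i + m] for i in range(N)])    # delay-vector embedding
    R = np.ones(m)
    R[0] = R[-1] = weight                           # R11 = Rmm large, Rii = 1 otherwise

    corrections = np.zeros(len(x))
    counts = np.zeros(len(x))
    for n in range(N):
        d = np.linalg.norm(S - S[n], axis=1)        # distances to all delay vectors
        idx = np.argsort(d)[:n_neighbors]           # nearest neighbors (incl. s(n))
        s_mean = S[idx].mean(axis=0)                # centre of mass
        Z = R * (S[idx] - s_mean)                   # weighted samples z
        C = Z.T @ Z / len(idx)                      # covariance matrix
        _, V = np.linalg.eigh(C)                    # eigenvalues in increasing order
        z_n = R * (S[n] - s_mean)
        noise = sum(V[:, q] * (V[:, q] @ z_n) for q in range(Q))
        delta = -noise / R                          # correction to s(n), unweighted
        corrections[n:n + m] += delta               # each x(i) appears in m vectors
        counts[n:n + m] += 1
    return x + corrections / np.maximum(counts, 1)  # average the contributions

# Usage: x_clean = noise_reduce(noisy_series); in practice the pass is
# repeated a few times, shrinking the corrections as the noise is removed.
```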