This presentation examines Laplacian regularization in graph learning and explains how it promotes both global and local consistency. It covers the motivation for Laplacian regularization, reviews previous work on exposing information in social networks, and details how the method works. The choice of regularization term S directly shapes the learning result. Experiments compare regularization with dimension reduction, regularization alone, and no regularization, showing how the regularization method affects learning performance.
Motivation • Previous work on exposing information in social networks • Learn how Laplacian regularization (LR) works
Objective of LR • Global consistency • Local consistency • [Figure: weighted graph over nodes f1–f8 with edge weights 2 and 5] (a standard formulation of this objective is sketched below)
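The slide does not state the objective explicitly; a common way to formalize the global and local consistency goals (following the learning-with-local-and-global-consistency framework; the symbols W for edge weights, D for the degree matrix, F for predicted labels, Y for given labels, and the trade-off weight μ are assumptions, not taken from the slide) is

Q(F) = \frac{1}{2}\sum_{i,j=1}^{n} W_{ij}\left\|\frac{F_i}{\sqrt{D_{ii}}} - \frac{F_j}{\sqrt{D_{jj}}}\right\|^{2} + \frac{\mu}{2}\sum_{i=1}^{n}\left\|F_i - Y_i\right\|^{2}.

The first (smoothness) term asks neighbouring nodes to receive similar labels (local consistency) and, minimized over the whole graph, makes nodes on the same structure agree (global consistency); the second term keeps predictions close to the initial labels.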
Regularization • Regularization Term S • Sample
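The slide only names a regularization term S and a sample. A minimal sketch, assuming S is the standard quadratic Laplacian penalty f^T L f with L = D - W (the normalized variant uses I - D^{-1/2} W D^{-1/2}); the function and the toy graph are illustrative, not taken from the slides:

import numpy as np

def laplacian_regularizer(W, f, normalized=False):
    # S(f) = f^T L f = (1/2) * sum_ij W_ij * (f_i - f_j)^2
    # W: symmetric, non-negative edge-weight matrix; f: label/score vector.
    d = W.sum(axis=1)  # node degrees (assumes no isolated nodes when normalized=True)
    if normalized:
        d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        L = np.eye(len(d)) - d_inv_sqrt @ W @ d_inv_sqrt
    else:
        L = np.diag(d) - W
    return f @ L @ f

# Sample: a 3-node chain with unit edge weights
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
f_smooth = np.array([1.0, 1.0, 1.0])   # constant over the graph
f_rough  = np.array([1.0, -1.0, 1.0])  # flips sign across every edge
print(laplacian_regularizer(W, f_smooth))  # 0.0 -> no penalty
print(laplacian_regularizer(W, f_rough))   # 8.0 -> heavily penalized

A labeling that is constant across connected nodes incurs zero penalty, while one that disagrees across edges is penalized, which is exactly the consistency behaviour the objective above rewards.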
Conclusion • The regularization method affects the learning result • Regularization with dimension reduction > regularization alone > no regularization