On Rival Penalization Controlled Competitive Learning for Clustering with Automatic Cluster Number Selection
Advisor: Dr. Hsu
Presenter: Ai-Chen Liao
Author: Yiu-ming Cheung
2005, TKDE, Page(s): 1583-1588
Outline • Motivation • Objective • Method • RPCL • RPCCL • Experimental Results • Conclusion • Personal Opinions
Motivation • The k-means algorithm has at least two major drawbacks: • It suffers from the dead-unit problem. • If the number of clusters is misspecified, i.e., k is not equal to the true cluster number k*, its performance deteriorates rapidly. • The performance of RPCL is sensitive to the value of the delearning rate.
Objective • We concentrate on the RPCL algorithm and propose a novel technique to circumvent the selection of the delearning rate. • We further investigate RPCL and present a mechanism to dynamically control the strength of rival penalization.
Method ─ RPCL • Advantage: RPCL can automatically select the correct cluster number by gradually driving redundant seed points far away from the input dense regions. • Drawback: RPCL is sensitive to the delearning rate. • Idea: an election campaign analogy: with candidates A at 40%, B at 35%, and C at 5%, the competition between A and the close rival B is far more intense than between A and the distant C.
Method ─ RPCL • For each input, the winner (the nearest seed point) moves closer to the input, the rival (the second-nearest) moves away, and all other cluster centers remain unchanged.
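The winner/rival update above can be sketched in pure Python as follows. This is a minimal sketch, not the paper's exact algorithm: the conscience-style winner selection and all function names are my assumptions, with αc and αr playing the roles of the learning and delearning rates from the slides.

```python
import math

def rpcl_step(x, centers, wins, alpha_c=0.05, alpha_r=0.005):
    """One RPCL update (sketch): the winner moves toward input x,
    the rival (second winner) is pushed away, others are unchanged."""
    n = sum(wins)
    # Conscience factor wins[j]/n handicaps frequently winning seeds.
    scores = [wins[j] / n * math.dist(x, centers[j]) ** 2
              for j in range(len(centers))]
    order = sorted(range(len(centers)), key=scores.__getitem__)
    c, r = order[0], order[1]  # winner and rival indices
    # Winner attracted to x; rival repelled from x.
    centers[c] = [m + alpha_c * (xi - m) for m, xi in zip(centers[c], x)]
    centers[r] = [m - alpha_r * (xi - m) for m, xi in zip(centers[r], x)]
    wins[c] += 1
    return c, r
```

Run over many epochs with αr much smaller than αc, redundant seed points are repeatedly chosen as rivals and drift away from the dense input regions.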
Method ─ RPCCL • Compared with RPCL, which uses a fixed delearning rate, RPCCL controls the strength of rival penalization dynamically: the closer the rival is to the winner's cluster region, the more strongly it is penalized.
Experimental Results
• RPCL: learning rate αc = 0.001, delearning rate αr = 0.0001
• Number of seed points: 30
• Audience image: 128×128 pixels
• Epochs: 50
[Figure: original Audience image alongside the RPCL and RPCCL clustering results]
Conclusion • RPCCL circumvents the difficult selection of the delearning rate.
Personal Opinions • Advantage • RPCCL can automatically select the correct cluster number. • The novel technique circumvents the selection of the delearning rate. • Drawback • Limitation: requires k ≥ k* (the initial number of seed points must be at least the true cluster number). • Application • clustering…
K-means example
1. Given: {2,4,10,12,3,20,30,11,25}, k=2
2. Randomly assign means: m1=3, m2=4
3. Repeat: assign each point to its nearest mean, then recompute the means:
k1={2,3}, k2={4,10,12,20,30,11,25}, m1=2.5, m2=16
k1={2,3,4}, k2={10,12,20,30,11,25}, m1=3, m2=18
k1={2,3,4,10}, k2={12,20,30,11,25}, m1=4.75, m2=19.6
k1={2,3,4,10,11,12}, k2={20,30,25}, m1=7, m2=25
4. The next assignment is unchanged, so the algorithm has converged.
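The trace above can be reproduced with a short batch k-means sketch on scalars (pure Python; the function name is mine):

```python
def kmeans_1d(points, means, iters=10):
    """Batch k-means on scalars: assign each point to its nearest mean,
    recompute each mean as its cluster's average, repeat until stable."""
    for _ in range(iters):
        clusters = [[] for _ in means]
        for p in points:
            # Index of the nearest mean.
            j = min(range(len(means)), key=lambda j: abs(p - means[j]))
            clusters[j].append(p)
        # Empty clusters keep their old mean.
        new = [sum(c) / len(c) if c else m for c, m in zip(clusters, means)]
        if new == means:  # no change: converged
            break
        means = new
    return means, clusters
```

Starting from m1=3, m2=4 on the data above, this reaches the same fixed point as the slide: m1=7 and m2=25.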
Heuristic: the Frequency Sensitive Competitive Learning (FSCL) algorithm, designed to alleviate the dead-unit problem.
Dead-unit problem example:
1. Given: {2,4,10,12,3,20,30,11,25}, k=3
2. Randomly assign means: m1=30, m2=25, m3=10
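FSCL's key idea, winner selection weighted by each unit's relative win frequency, can be sketched in one dimension as follows (a minimal sketch; the function name is mine):

```python
def fscl_winner(x, centers, wins):
    """FSCL winner selection (sketch): the distance to each center is
    weighted by that unit's relative win count, handicapping frequent
    winners so that otherwise-dead units eventually win and move."""
    n = sum(wins)
    return min(range(len(centers)),
               key=lambda j: wins[j] / n * abs(x - centers[j]))
```

With equal win counts the nearest center wins; once a unit has won many times, its handicap lets a farther, rarely winning unit take the input instead.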