Infinite Hidden Relational Models
Zhao Xu1, Volker Tresp2, Kai Yu2, Shipeng Yu and Hans-Peter Kriegel1
1 University of Munich, Germany
2 Siemens Corporate Technology, Munich, Germany
Motivation
• Relational learning is an object-oriented approach to representation and learning that clearly distinguishes between entities (e.g., objects), relationships, and their respective attributes; it is an area of growing interest in machine learning
• Learned dependencies encode probabilistic constraints in the relational domain
• Many relational learning approaches involve extensive structural learning, which makes them somewhat tricky to apply in practice
• The goal of this work is an easy-to-apply generic system that relaxes the need for extensive structural learning
• In the infinite hidden relational model (IHRM), we introduce for each entity an infinite-dimensional latent variable whose state is determined by a Dirichlet process (DP)
• The resulting representation is a network of interacting DPs
Work on DPs in Relational Learning
• C. Kemp, T. Griffiths, and J. B. Tenenbaum (2004). Discovering latent classes in relational data. Technical Report AI Memo 2004-019.
• C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda (2006). Learning systems of concepts with an infinite relational model. In Proc. AAAI 2006.
• Z. Xu, V. Tresp, K. Yu, S. Yu, and H.-P. Kriegel (2005). Dirichlet enhanced relational learning. In Proc. 22nd ICML, 1004-1011. ACM Press.
• Z. Xu, V. Tresp, K. Yu, and H.-P. Kriegel (2006). Infinite hidden relational models. In Proc. 22nd UAI.
• P. Carbonetto, J. Kisynski, N. de Freitas, and D. Poole (2005). Nonparametric Bayesian logic. In Proc. 21st UAI.
Ground Network With an Image Structure
• A: entity attributes
• R: relational attributes (e.g., exist, not exist)
• Limitations:
• Attributes only locally predict the probability of a relational attribute
• Given the parent attributes, all relational attributes are independent
• To obtain non-local dependencies, structural learning might be required
Ground Network With an Image Structure and Latent Variables: The HRM
• Z: latent variable
• Information can now flow through the network of latent variables (see the generative sketch below)
• In an IHRM, Z can be thought of as representing unknown attributes (such as a cluster attribute)
• Note that in image processing, Z would correspond to the true pixel value, A to a noisy measurement, and R would encode constraints between neighboring pixel values
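To make the flow of information through the latent network concrete, here is a minimal generative sketch of a finite HRM for a user-item setting. All names, sizes, and distributional choices (Dirichlet/multinomial latent states, a Bernoulli relation) are illustrative assumptions for this sketch, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

K_u, K_m = 5, 4                 # latent states per entity class (finite version)
n_users, n_movies = 50, 30

# Mixing weights over latent states, one distribution per entity class
pi_u = rng.dirichlet(np.ones(K_u))
pi_m = rng.dirichlet(np.ones(K_m))

# Latent variables: one discrete Z per entity
Z_u = rng.choice(K_u, size=n_users, p=pi_u)
Z_m = rng.choice(K_m, size=n_movies, p=pi_m)

# Entity attributes A depend only on the entity's own latent state
theta_age = rng.dirichlet(np.ones(3), size=K_u)   # e.g., 3 age bands per user cluster
A_u = np.array([rng.choice(3, p=theta_age[z]) for z in Z_u])

# A relational attribute R(i, j) depends on BOTH latent states,
# so every observed relation couples entities through the latent network
phi_like = rng.beta(1.0, 1.0, size=(K_u, K_m))    # P(like | Z_u, Z_m)
R = rng.binomial(1, phi_like[Z_u[:, None], Z_m[None, :]])
```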
A Recommendation System
Without latent variables (users-items diagram):
• A relational attribute (like) only depends on the attributes of the user and the item
• If both attributes are weak, we are stuck
With latent variables (users-items diagram):
• A relational attribute (like) only depends on the states of the latent variables of the user and the item
• If entity attributes are weak, other known relations are exploited: we exploit collaborative information
The Infinite Hidden Relational Model (IHRM)
• In the hidden relational model (HRM), the latent variables are multinomial with Dirichlet priors
• For a DP model, the number of latent states becomes infinite; the prior distribution for the component parameters is denoted by the base distribution G0
• Three base distributions: G0u for the user class, G0m for the item class, and G0b for the relational attribute (like); a sketch of the DP prior follows below
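One standard way to realize a DP prior over infinitely many latent states is the stick-breaking construction. The following is a minimal sketch (truncated for practicality), not the paper's inference code; the function name and truncation level are assumptions of this sketch.

```python
import numpy as np

def stick_breaking_weights(alpha0, truncation, rng):
    """Draw mixture weights pi_1, pi_2, ... from a (truncated) DP prior.

    v_k ~ Beta(1, alpha0);  pi_k = v_k * prod_{l<k} (1 - v_l).
    Larger alpha0 spreads mass over more latent states (clusters).
    """
    v = rng.beta(1.0, alpha0, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

rng = np.random.default_rng(0)
pi = stick_breaking_weights(alpha0=100.0, truncation=50, rng=rng)  # cf. alpha0 = 100 in the experiments
print(pi.sum())  # close to 1; the shortfall is the truncated remainder of the stick
```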
Inference in the IHRM
• A Gibbs sampler derived from the Chinese restaurant process representation (Kemp et al. 2004, 2006; Xu et al. 2006); a sketch follows below
• Gibbs samplers derived from finite approximations to the stick-breaking representation:
• Dirichlet multinomial allocation
• Truncated Dirichlet process
• Two mean-field approximations based on those procedures
• A memory-based empirical approximation (EA) (2, 3, 4 in Xu et al. 2006, submitted)
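For intuition, here is a minimal sketch of one CRP-style Gibbs update. The seating probabilities (occupied states in proportion to their counts, a new state in proportion to alpha0) are standard; the `loglik` hook, which in the IHRM would score the entity's attributes and relations under a candidate state, is an assumption of this sketch rather than the paper's exact sampler.

```python
import numpy as np

def crp_gibbs_step(z, i, alpha0, loglik, rng):
    """One Gibbs update of entity i's latent state, CRP-style.

    z      : np.ndarray of current latent-state labels for all entities
    loglik : loglik(s) -> log-likelihood of entity i's attributes and
             relations if assigned state s (an unused label stands for
             opening a brand-new state)
    """
    z_rest = np.delete(z, i)                      # "unseat" entity i
    states, counts = np.unique(z_rest, return_counts=True)
    K = len(states)
    new_label = states.max() + 1                  # a fresh, unoccupied state

    # P(existing state) ∝ n_k * likelihood; P(new state) ∝ alpha0 * likelihood
    logp = np.empty(K + 1)
    logp[:K] = np.log(counts) + np.array([loglik(s) for s in states])
    logp[K] = np.log(alpha0) + loglik(new_label)

    p = np.exp(logp - logp.max())                 # normalize in log space
    p /= p.sum()
    choice = rng.choice(K + 1, p=p)
    z[i] = states[choice] if choice < K else new_label
    return z
```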
Experimental Analysis on Movie Recommendation (1)
• Task description
• To predict whether a user likes a movie, given attributes of users and movies as well as known ratings of users
• Data set: MovieLens
• Model
• [model diagram: User Attributes and latent variable Zu (base distribution G0u); Movie Attributes and latent variable Zm (base distribution G0m); the Like relation R depends on Zu and Zm (base distribution G0b)]
Experimental Analysis on Movie Recommendation (2)
• Results (943 users, 1680 movies)
• Note: for GS-TDP and MF-TDP, α0 = 100
Experimental Analysis on Gene Function Prediction (1)
• Task description
• To predict the functions of genes, given information on the gene level and the protein level as well as interactions between genes
• Data set: KDD Cup 2001
• Model
Experimental Analysis on Gene Function Prediction (2)
• [model diagram: a Gene entity with Gene Attributes and latent variable Zg; relational attributes link genes to Function (Have: Rg,f, latent Zf), Phenotype (Observe: Rg,p, latent Zp), Motif (Contain: Rg,m, latent Zm), Complex (Form: Rg,cl, latent Zcl), and Structural Category (Belong: Rg,c, latent Zc); gene-gene interactions are modeled by Rg,g (Interact)]
Experimental Analysis on Gene Function Prediction (3) • An example gene
Experimental Analysis on Gene Function Prediction (4) • Results
Experimental Analysis on Gene Function Prediction (5)
• Results: the importance of a variety of relationships for gene function prediction
Experimental Analysis on Clinical Data (1)
• Task description
• To predict future procedures for patients, given attributes of patients and procedures as well as prescribed procedures and diagnoses of patients
• Model
Experimental Analysis on Clinical Data (2)
• [model diagram: Patient Attributes and latent variable Zpa (base distribution G0pa, concentration α0pa); the Take relation Rpa,pr (base distribution G0pa,pr) couples Zpa to procedures with Procedure Attributes and latent variable Zpr (base distribution G0pr, concentration α0pr, parameters θpr); the Make relation Rpa,dg (base distribution G0pa,dg) couples Zpa to diagnoses with Diagnosis Attributes and latent variable Zdg (base distribution G0dg, concentration α0dg, parameters θdg)]
Experimental Analysis on Clinical Data (3)
• Results
• ROC curves for predicting procedures, averaged over all patients
• ROC curves for predicting procedures, considering only patients whose prime complaint is a circulatory problem
• Compared models:
• E1: one-sided CF
• E2: two-sided CF
• E3: full model
• E4: no hidden variables
• E5: content-based BN
Conclusion
• The IHRM is a new nonparametric hierarchical Bayesian model for relational modeling
• Advantages:
• Reduces the need for extensive structural learning
• Gains expressive power via the coupling between heterogeneous relationships
• The model itself decides on the optimal number of states for the latent variables
• Scaling:
• # of entities times # of occupied states times # of known relations
• Note: default relations (example: by default there is no relation) can often be treated as unknown and drop out
• Conjugacy can be exploited
A memory-based empirical approximation
• First, we set the number of components equal to the number of entities in the corresponding entity class
• Then, in the training phase, each entity contributes to its own component only
• With this simplification, the parameters of the attributes and relations can be learned very efficiently; note that this approximation can be interpreted as relational memory-based learning
• To predict a relational attribute, we assume that only the states of the latent variables involved in the relation are unknown (see the sketch below)
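A minimal sketch of how such a prediction could look for the user-movie case, assuming smoothed Bernoulli attributes and Beta-smoothed relations. The soft assignment of the new user to per-entity components is an illustrative simplification of the scheme described above, not the paper's exact estimator; all names and hyperparameters are assumptions.

```python
import numpy as np

def predict_like(a_new, r_new, A_train, R_train, j, a_eps=1.0, r_a=1.0, r_b=1.0):
    """Empirical-approximation-style estimate of P(like(new user, movie j)).

    Each of the N training users is its own latent component. The new user is
    softly assigned to components by how well each explains its attributes and
    already-known ratings; the prediction then averages the components' own
    Beta-smoothed opinions about movie j.

    a_new   : (D,) binary attributes of the new user
    r_new   : dict {movie_index: 0/1}, the new user's known ratings
    A_train : (N, D) binary user attributes
    R_train : (N, M) ratings in {0, 1}, with -1 for unknown
    """
    def rating_prob(m):
        # Beta(r_a, r_b)-smoothed per-component probability of liking movie m;
        # unknown entries fall back to the prior mean (default treated as unknown)
        obs = R_train[:, m].astype(float)
        smoothed = (obs + r_a) / (1.0 + r_a + r_b)
        return np.where(R_train[:, m] == -1, r_a / (r_a + r_b), smoothed)

    # Log weight of component k: likelihood of the new user's attributes ...
    p_attr = (A_train + a_eps) / (1.0 + 2.0 * a_eps)   # Laplace-smoothed Bernoulli
    logw = (a_new * np.log(p_attr) + (1 - a_new) * np.log(1 - p_attr)).sum(axis=1)
    # ... times the likelihood of the known ratings (the collaborative part)
    for m, r in r_new.items():
        p = rating_prob(m)
        logw += r * np.log(p) + (1 - r) * np.log(1 - p)

    w = np.exp(logw - logw.max())
    w /= w.sum()
    return float(w @ rating_prob(j))
```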