
Presentation Transcript


  1. Latent Feature Models for Network Data over Time Jimmy Foulds Advisor: Padhraic Smyth (Thanks also to Arthur Asuncion and Chris Dubois)

  2. Overview • The task • Prior work – Miller, Van Gael, Indian Buffet Processes • The DRIFT model • Inference • Preliminary results • Future work

  3. The Task • Modeling Dynamic (time-varying) Social Networks • Interested in prediction • Model interpretation for sociological understanding • Continuous time relational events versus panel data?

  4. Applications • Predicting Email Communications

  5. Applications • Predicting Paper Co-authorship • NIPS data

  6. Prior Work • Erdős–Rényi models are “pseudo-dynamic” • Continuous-time Markov process models (Snijders, 2006): the network stochastically optimizes an ERGM likelihood function • Dynamic latent space model (Sarkar & Moore, 2005): each node (actor) is associated with a point in a low-dimensional space (Hoff, Raftery & Handcock, 2002); link probability is a function of the distance between points • Gaussian jumps in the latent space at each timestep
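As a rough illustration of the latent space idea, here is a minimal simulation assuming a logistic link on negative squared distance and i.i.d. Gaussian jumps; the parameterization (jump scale `sigma_jump`, the specific link) is an assumption for illustration, not Sarkar & Moore's exact model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_latent_space(n_actors=10, n_steps=5, dim=2,
                          sigma_jump=0.1, rng=None):
    """Toy dynamic latent space model: link probability decays with
    distance; positions take Gaussian jumps at each timestep."""
    rng = rng or np.random.default_rng(0)
    X = rng.normal(size=(n_actors, dim))            # initial latent positions
    graphs = []
    for _ in range(n_steps):
        # pairwise squared distances between latent positions
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        p = sigmoid(-d2)                            # closer => more likely to link
        Y = rng.random((n_actors, n_actors)) < p
        np.fill_diagonal(Y, False)
        graphs.append(Y)
        X = X + sigma_jump * rng.normal(size=X.shape)  # Gaussian jump
    return graphs
```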

  7. Prior Work • Nonparametric Latent Feature Relational Model (Miller et al. 2009) • Each actor is associated with an unbounded sparse vector of binary latent features, generated from an Indian Buffet Process prior • The probability of a link between two actors is a function of the latent features of those actors (and additional covariates)

  8. Prior Work • Nonparametric Latent Feature Relational Model (Miller et al. 2009) generative process: • Z ~ IBP(α) • W_kk′ ~ N(0, σ_w²) • P(Y_ij = 1) = σ(Z_i W Z_j^T + covariate terms) • A kind of blockmodel with overlapping classes
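A small forward simulation of this generative process, using the sequential (restaurant-style) construction of the IBP; the function and parameter names (`sample_ibp`, `sigma_w`, and so on) are illustrative assumptions, and the Bernoulli link follows the slide's σ(·):

```python
import numpy as np

def sample_ibp(n_actors, alpha, rng):
    """Sequential (restaurant) construction of the Indian Buffet Process."""
    Z = np.zeros((n_actors, 0), dtype=int)
    for i in range(n_actors):
        if Z.shape[1] > 0:
            # sample each existing feature k with probability m_k / (i + 1)
            p_old = Z[:i].sum(axis=0) / (i + 1)
            Z[i] = rng.random(Z.shape[1]) < p_old
        # Poisson(alpha / (i + 1)) brand-new features for actor i
        k_new = rng.poisson(alpha / (i + 1))
        if k_new > 0:
            new_cols = np.zeros((n_actors, k_new), dtype=int)
            new_cols[i] = 1
            Z = np.hstack([Z, new_cols])
    return Z

def sample_network(alpha=2.0, n_actors=20, sigma_w=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    Z = sample_ibp(n_actors, alpha, rng)           # Z ~ IBP(alpha)
    K = Z.shape[1]
    W = rng.normal(0.0, sigma_w, size=(K, K))      # W_kk' ~ N(0, sigma_w^2)
    logits = Z @ W @ Z.T                           # Z_i W Z_j^T
    Y = rng.random((n_actors, n_actors)) < 1 / (1 + np.exp(-logits))
    return Z, W, Y
```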

  9. How to Make this Model Dynamic For Longitudinal Data? • We would like the Zs to change over time, modeling changing interests, community memberships, … • Want to maintain sparsity property, but model persistence, generation of new features, ...

  10. Infinite Factorial Hidden Markov Models (Van Gael et al., 2010) • A variant of the IBP • A probability distribution over a potentially infinite number of binary Markov chains • Sparsity: At each timestep, introduce new features using the IBP distribution • Persistence: A coin flip determines whether each feature persists to the next timestep • Hidden Markov structure: the latent features are hidden but we observe something at each timestep.
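A toy sketch of these dynamics over a finite horizon: each active feature persists via a shared coin flip, and new features arrive through IBP-style Poisson draws. The decreasing rate α/(t+1) mirrors the IBP construction loosely; treat the parameter names and details as assumptions:

```python
import numpy as np

def simulate_ifhmm_chains(n_steps=10, alpha=1.5, persist=0.8, rng=None):
    """Toy iFHMM-style dynamics: each active feature survives to the
    next timestep with probability `persist`; new features arrive via
    IBP-style Poisson draws at every timestep."""
    rng = rng or np.random.default_rng(0)
    chains = []                      # chains[t] = set of active feature ids
    next_id, active = 0, set()
    for t in range(n_steps):
        # persistence: each feature flips a coin to survive
        active = {k for k in active if rng.random() < persist}
        # sparsity: Poisson(alpha / (t + 1)) brand-new features
        for _ in range(rng.poisson(alpha / (t + 1))):
            active.add(next_id)
            next_id += 1
        chains.append(set(active))
    return chains
```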

  11. DRIFT: the Dynamic Relational Infinite FeaTure Model • The iFHMM models the evolution of one actor's features over time • We use an iFHMM for each actor, but share the transition probabilities • Observed graphs are generated via Miller et al.'s (2009) latent feature model: P(Y_ij = 1) = σ(Z_i W Z_j^T + …)
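Putting the pieces together, a truncated sketch of how DRIFT could generate a graph sequence: per-actor binary feature chains with shared transition probabilities (the names `a` for turn-on and `b` for stay-on, and the fixed truncation `K`, are illustrative assumptions), with the Miller-style network likelihood applied at each step:

```python
import numpy as np

def simulate_drift(n_actors=15, n_steps=8, K=6,
                   a=0.2, b=0.8, sigma_w=1.0, rng=None):
    """Truncated DRIFT sketch: Z evolves as independent binary Markov
    chains per actor/feature with shared transition probabilities
    (P(1|0) = a, P(1|1) = b); each Y[t] follows the latent feature
    relational likelihood."""
    rng = rng or np.random.default_rng(0)
    W = rng.normal(0.0, sigma_w, size=(K, K))
    Z = (rng.random((n_actors, K)) < a).astype(int)   # initial features
    graphs = []
    for _ in range(n_steps):
        logits = Z @ W @ Z.T
        Y = rng.random((n_actors, n_actors)) < 1 / (1 + np.exp(-logits))
        np.fill_diagonal(Y, False)
        graphs.append(Y)
        # shared transition probabilities: turn on w.p. a, stay on w.p. b
        p_on = np.where(Z == 1, b, a)
        Z = (rng.random(Z.shape) < p_on).astype(int)
    return graphs
```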

  12. DRIFT: the Dynamic Relational Infinite FeaTure Model

  13. Inference • Markov chain Monte Carlo inference • Use the “slice sampling” trick with the stick-breaking construction of the IBP to effectively truncate the number of features while still performing exact inference • Blocked Gibbs sampling on the other variables • Forward–backward dynamic programming on each actor's feature chain • Metropolis–Hastings updates for the W's, since they are non-conjugate
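For the forward–backward step on a single binary feature chain, a minimal forward filtering / backward sampling sketch; the generic `log_lik[t, s]` emission table is a placeholder (in DRIFT the emission term comes from the network likelihood), and the 2×2 transition matrix is assumed given:

```python
import numpy as np

def ffbs_binary_chain(log_lik, trans, pi0, rng=None):
    """Forward filtering, backward sampling for one binary chain.
    log_lik[t, s]: log-likelihood of the data at time t given state s.
    trans[s, s']: transition probabilities; pi0: initial distribution."""
    rng = rng or np.random.default_rng(0)
    T = log_lik.shape[0]
    alpha = np.zeros((T, 2))
    # forward pass (filtering), normalized at each step for stability
    alpha[0] = pi0 * np.exp(log_lik[0] - log_lik[0].max())
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        pred = alpha[t - 1] @ trans
        alpha[t] = pred * np.exp(log_lik[t] - log_lik[t].max())
        alpha[t] /= alpha[t].sum()
    # backward pass: sample states from the joint posterior
    z = np.zeros(T, dtype=int)
    z[T - 1] = rng.random() < alpha[T - 1, 1]
    for t in range(T - 2, -1, -1):
        w = alpha[t] * trans[:, z[t + 1]]
        z[t] = rng.random() < w[1] / w.sum()
    return z
```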

  14. Group DRIFT • Clustering to reduce the number of chains • Each actor has a hidden class variable c ∈ {1, …, C}, with C < N • The chains of infinite binary feature vectors are associated with classes rather than actors • Allows us to scale up to large numbers of actors • The clustering may be interpretable

  15. Group DRIFT • [Graphical model figure: plates over classes (i = 1:C) and actors (n = 1:N), with class assignments c_n and β = 1/C] • Inference for a, b is exactly the same • Inference for the z's is similar: slightly different “emission” probability; run the forward–backward sampler on M·C chains rather than M·N chains • Inference for the c's (each actor's assignment to a specific chain) is easy too • Inference for W is similar (slightly different likelihood); note that we must now assume the diagonal of W can be non-zero

  16. Preliminary Experimental Results (Synthetic Data)

  17. Preliminary Experimental Results (Synthetic Data)

  18. Preliminary Experimental Results (Synthetic Data)

  19. Future work • Extension to continuous time • It's easy to use the IBP latent factor model as a covariate in the Relational Event Model (Butts, 2008) • How do we model the Zs changing over time in continuous-time data?

  20. Thanks for Listening!
