Distributed flexible nonlinear tensor factorization

  • Shandian Zhe
  • Kai Zhang
  • Pengyuan Wang
  • Kuang-Chih Lee
  • Zenglin Xu
  • Yuan Qi
  • Zoubin Ghahramani

Research output: Contribution to journal › Conference article › peer-review

59 Scopus citations

Abstract

Tensor factorization is a powerful tool for analysing multi-way data. Recently proposed nonlinear factorization methods, although capable of capturing complex relationships, are computationally quite expensive and may suffer from a severe learning bias under extreme data sparsity. We therefore propose a distributed, flexible nonlinear tensor factorization model, which avoids the expensive computations and structural restrictions of the Kronecker product in existing tensor-variate Gaussian process (TGP) formulations, allowing an arbitrary subset of tensor entries to be selected for training. Meanwhile, we derive a tractable and tight variational evidence lower bound (ELBO) that enables highly decoupled, parallel computations and high-quality inference. Based on the new bound, we develop a distributed, key-value-free inference algorithm in the MAPREDUCE framework, which can fully exploit the memory-cache mechanism of fast MAPREDUCE systems such as SPARK. Experiments demonstrate the advantages of our method over several state-of-the-art approaches, in terms of both predictive performance and computational efficiency.
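The core idea sketched in the abstract can be illustrated with a toy example: instead of factorizing over the full Kronecker-structured grid of entries, each observed entry (i, j, k) is mapped to the concatenation of its three latent factor vectors, and a nonlinear function of that input predicts the entry's value. The sketch below (not the paper's actual algorithm) uses fixed random factors and a random-Fourier-feature ridge regression as a stand-in for the paper's variational sparse-GP treatment; all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative latent factors for a small 3-mode tensor (fixed here;
# the actual model learns them jointly with the GP).
I, J, K, R = 8, 8, 8, 3
U = rng.normal(size=(I, R))
V = rng.normal(size=(J, R))
W = rng.normal(size=(K, R))

# An arbitrary subset of observed entries -- no grid/Kronecker structure required.
n_obs = 200
idx = np.stack([rng.integers(0, I, n_obs),
                rng.integers(0, J, n_obs),
                rng.integers(0, K, n_obs)], axis=1)

# Each entry's input is the concatenation of its three factor vectors: shape (n_obs, 3R).
X = np.hstack([U[idx[:, 0]], V[idx[:, 1]], W[idx[:, 2]]])

# Nonlinear GP-style mapping approximated with random Fourier features
# (a stand-in for the paper's variational GP inference).
D = 100
Wf = rng.normal(size=(X.shape[1], D))
b = rng.uniform(0.0, 2.0 * np.pi, D)
Phi = np.sqrt(2.0 / D) * np.cos(X @ Wf + b)

# Synthetic targets and a ridge fit on the observed subset only.
y = np.sin(X.sum(axis=1)) + 0.05 * rng.normal(size=n_obs)
alpha = np.linalg.solve(Phi.T @ Phi + 1e-2 * np.eye(D), Phi.T @ y)
pred = Phi @ alpha
print("training MSE:", np.mean((pred - y) ** 2))
```

Because each observed entry contributes an independent input row, the likelihood decouples across entries; this is the property the paper's ELBO exploits to parallelize inference over MapReduce/Spark workers.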

Original language: English
Pages (from-to): 928-936
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
State: Published - 2016
Externally published: Yes
Event: 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain
Duration: 5 Dec 2016 - 10 Dec 2016
