Fixed-point algorithms for learning determinantal point processes. (2015)

by Z. Mariet, S. Sra
Venue: ICML

Results 1 - 1 of 1

Kronecker Determinantal Point Processes

by Zelda Mariet, Suvrit Sra
"... Abstract Determinantal Point Processes (DPPs) are probabilistic models over all subsets a ground set of N items. They have recently gained prominence in several applications that rely on "diverse" subsets. However, their applicability to large problems is still limited due to O(N 3 ) comp ..."
Abstract - Add to MetaCart
Abstract: Determinantal Point Processes (DPPs) are probabilistic models over all subsets of a ground set of N items. They have recently gained prominence in several applications that rely on "diverse" subsets. However, their applicability to large problems is still limited due to the O(N^3) complexity of core tasks such as sampling and learning. We enable efficient sampling and learning for DPPs by introducing KRONDPP, a DPP model whose kernel matrix decomposes as a tensor product of multiple smaller kernel matrices. This decomposition immediately enables fast exact sampling. But contrary to what one may expect, leveraging the Kronecker product structure for speeding up DPP learning turns out to be more difficult. We overcome this challenge, and derive batch and stochastic optimization algorithms for efficiently learning the parameters of a KRONDPP.
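
The abstract's key point is that a Kronecker-factored kernel lets core DPP quantities be computed from the small factors. Below is a minimal sketch (not the authors' code), assuming a kernel of the form L = L1 ⊗ L2, showing that the DPP normalizer det(L + I) follows from the eigenvalues of the factors instead of an O(N^3) operation on the full N × N matrix:

```python
# A minimal sketch (not the authors' code) of the Kronecker-kernel idea from
# the abstract: a DPP kernel that factors as L = L1 (x) L2. The normalizer
# det(L + I) is recovered from the eigenvalues of the small factors.
import numpy as np

def random_psd(n, rng):
    """Random symmetric positive semidefinite matrix (illustration only)."""
    A = rng.standard_normal((n, n))
    return A @ A.T

rng = np.random.default_rng(0)
L1, L2 = random_psd(4, rng), random_psd(5, rng)   # factors of sizes N1, N2
N = L1.shape[0] * L2.shape[0]                     # ground-set size N = N1 * N2

# Naive normalizer: form the full N x N kernel and take det(L + I) -- O(N^3).
L = np.kron(L1, L2)
naive = np.linalg.det(L + np.eye(N))

# Kronecker shortcut: the eigenvalues of L1 (x) L2 are all products lam_i * mu_j,
# so det(L + I) = prod_{i,j} (1 + lam_i * mu_j), computed from the small factors.
lam, mu = np.linalg.eigvalsh(L1), np.linalg.eigvalsh(L2)
fast = np.prod(1.0 + np.outer(lam, mu))

assert np.isclose(naive, fast, rtol=1e-6)
print(naive, fast)
```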

Citation Context

...of problems such as document and video summarization [6, 21], sensor placement [14], recommender systems [31], and object retrieval [2]. More recently, they have been used to compress fully-connected layers in neural networks [26] and to provide optimal sampling procedures for the Nyström method [20]. The more general study of DPP properties has also garnered a significant amount of interest, see e.g., [1, 5, 7, 12, 16–18, 23]. However, despite their elegance and tractability, widespread adoption of DPPs is impeded by the O(N^3) cost of basic tasks such as (exact) sampling [12, 17] and learning [10, 12, 17, 25]. This cost has motivated a string of recent works on approximate sampling methods such as MCMC samplers [13, 20] or core-set based samplers [19]. The task of learning a DPP from data has received less attention; the methods of [10, 25] cost O(N^3) per iteration, which is clearly unacceptable for realistic settings. This burden is partially ameliorated in [9], who restrict to learning low-rank DPPs, though at the expense of being unable to sample subsets larger than the chosen rank. These considerations motivate us to introduce KRONDPP, a DPP model that uses Kronecker (tensor) product kernels. ...
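
To make the O(N^3) per-iteration learning cost mentioned in the excerpt concrete, here is a hedged sketch (illustrative only, not the method of any cited paper) of the DPP log-likelihood for observed subsets: every evaluation needs log det(L + I) on the full N × N kernel, which is the cubic bottleneck that a factored kernel like KRONDPP is meant to avoid.

```python
# Illustrative only; not the method of any cited paper. Evaluating the DPP
# log-likelihood requires log det(L + I), an O(N^3) operation on the full kernel.
import numpy as np

def dpp_log_likelihood(L, subsets):
    """Log-likelihood of observed subsets under an L-ensemble DPP with kernel L."""
    N = L.shape[0]
    # Normalizer log det(L + I): the O(N^3) term paid at every learning step.
    _, log_norm = np.linalg.slogdet(L + np.eye(N))
    ll = 0.0
    for Y in subsets:
        idx = np.asarray(Y)
        _, log_det_Y = np.linalg.slogdet(L[np.ix_(idx, idx)])
        ll += log_det_Y - log_norm
    return ll

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
L = A @ A.T                              # toy PSD kernel, N = 6
print(dpp_log_likelihood(L, [[0, 2, 3], [1, 4]]))
```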
