Vision and Around

Dedicated to my research and life

Dictionary Learning, Blind Deconvolution, Deep Learning

Learning dictionaries/atomic sets that induce structured representations of data. Applications are still emerging explosively, especially in deep learning, where one allows multi-level nonlinear cascades of representations. Hence, formulations of the problems are fairly diverse. We roughly organize the references according to the problems they try to solve, concentrating on recent literature of a theoretical nature. (Updated: Apr 13, 2017)
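
For concreteness, here is a minimal sketch (in NumPy; the sizes and sparsity level are arbitrary illustrative choices, not taken from any particular paper) of the basic generative model \(\mathbf{Y} = \mathbf{A} \mathbf{X}\) with sparse \(\mathbf{X}\) that most of the theory below starts from:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, theta = 50, 5000, 0.1  # signal dimension, number of samples, sparsity level

# A square, invertible (here orthogonal) dictionary, and Bernoulli-Gaussian sparse codes.
A = np.linalg.qr(rng.standard_normal((n, n)))[0]
X = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)

Y = A @ X  # observed data; the goal is to recover A and X from Y alone
```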

[S] indicates my contribution.

Theory

\(\mathbf{Y} = \mathbf{A} \mathbf{X}\), \(\mathbf{A}\) Square, Invertible, Global Recovery

This problem can be reduced to a sequence of subproblems, each taking the form of finding the sparsest vector in a linear subspace; a rough sketch of the reduction follows. See also Structured Element Pursuit.
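
Informally (a sketch distilled from refs. 1 and 6 below): when \(\mathbf{A}\) is square and invertible, \(\mathbf{Y}\) and \(\mathbf{X}\) share the same row space, and under suitable probabilistic models the sparsest vectors in that subspace are precisely the rows of \(\mathbf{X}\). One can therefore try to recover the rows one at a time via the \(\ell^1\) relaxation

\[ \min_{\mathbf{q}} \; \left\| \mathbf{q}^\top \mathbf{Y} \right\|_1 \quad \text{subject to} \quad \left\| \mathbf{q} \right\|_2 = 1, \]

i.e., seeking a vector \(\mathbf{q}^\top \mathbf{Y}\) in the row space of \(\mathbf{Y}\) that is as sparse as possible, with the sphere constraint ruling out the trivial solution \(\mathbf{q} = \mathbf{0}\).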

  1. Complete Dictionary Recovery over the Sphere ([S], 2015)
  2. Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method (2014)
  3. Sum-of-squares proofs and the quest toward optimal algorithms (2014)
  4. Rounding Sum-of-Squares Relaxations (2013)
  5. Scaling law for recovering the sparsest element in a subspace (2013)
  6. Exact Recovery of Sparsely-Used Dictionaries (2012)

\(\mathbf{Y} = \mathbf{A} \mathbf{X}\), \(\mathbf{A}\) Overcomplete, Incoherent, Global Recovery

  1. Polynomial-time Tensor Decompositions with Sum-of-Squares (2016)
  2. Simple, Efficient, and Neural Algorithms for Sparse Coding (2015)
  3. Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method (2014)
  4. Sum-of-squares proofs and the quest toward optimal algorithms (2014)
  5. Rounding Sum-of-Squares Relaxations (2013)
  6. More Algorithms for Provable Dictionary Learning (2014)
  7. Exact Recovery of Sparsely Used Overcomplete Dictionaries (2013)
  8. New Algorithms for Learning Incoherent and Overcomplete Dictionaries (2013)
  9. Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization (2013)

\(\mathbf{Y} = \mathbf{A} \mathbf{X}\), Local Correctness

  1. On the Local Correctness of \(\ell^1\) Minimization for Dictionary Learning (2011, \(\mathbf{A}\) general)
  2. Dictionary Identification - Sparse Matrix-Factorisation via \(\ell^1\)-Minimisation (2009, \(\mathbf{A}\) square)

Single-Kernel Convolutional (aka Blind Deconvolution): \(\mathbf{y} = \mathbf{a} \otimes \mathbf{x}\)
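
A quick illustration (a hedged sketch, not tied to any single paper below): circular convolution is bilinear in the pair \((\mathbf{a}, \mathbf{x})\), and in the Fourier domain it becomes the entrywise product \(\hat{\mathbf{y}} = \hat{\mathbf{a}} \odot \hat{\mathbf{x}}\). This is the structure many of the references exploit, whether by lifting to linear measurements of the rank-one matrix \(\mathbf{a} \mathbf{x}^\top\) or by working directly with entrywise products of signals:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
a, x = rng.standard_normal(n), rng.standard_normal(n)

# Circular convolution y = a (*) x, computed via the FFT ...
y = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(x)))

# ... is an entrywise product in the Fourier domain (convolution theorem),
# so recovering (a, x) from y alone is a bilinear inverse problem.
assert np.allclose(np.fft.fft(y), np.fft.fft(a) * np.fft.fft(x))
```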

  1. BranchHull: Convex bilinear inversion from the entrywise product of signals with known signs (2017)
  2. Self-Calibration via Linear Least Squares (2016)
  3. Through the Haze: A Non-Convex Approach to Blind Calibration for Linear Random Sensing Models (2016)
  4. Fast and guaranteed blind multichannel deconvolution under a bilinear system model (2016)
  5. Leveraging Diversity and Sparsity in Blind Deconvolution (2016)
  6. Rapid, Robust, and Reliable Blind Deconvolution via Nonconvex Optimization (2016)
  7. A Non-Convex Blind Calibration Method for Randomised Sensing Strategies (2016)
  8. Empirical Chaos Processes and Blind Deconvolution (2016)
  9. Optimal Injectivity Conditions for Bilinear Inverse Problems with Applications to Identifiability of Deconvolution Problems (2016)
  10. RIP-like Properties in Subsampled Blind Deconvolution (2015)
  11. Blind Recovery of Sparse Signals from Subsampled Convolution (2015)
  12. Identifiability and Stability in Blind Deconvolution under Minimal Assumptions (2015)
  13. Identifiability in Blind Deconvolution with Subspace or Sparsity Constraints (2015)
  14. A Unified Framework for Identifiability Analysis in Bilinear Inverse Problems with Applications to Subspace and Sparsity Models (2015)
  15. Fundamental Limits of Blind Deconvolution Part II: Sparsity-Ambiguity Trade-offs (2015)
  16. Self-Calibration and Biconvex Compressive Sensing (2015)
  17. Fundamental Limits of Blind Deconvolution Part I: Ambiguity Kernel (2014)
  18. Sparse blind deconvolution: What cannot be done (2014)
  19. Identifiability Scaling Laws in Bilinear Inverse Problems (2014)
  20. Near Optimal Compressed Sensing of Sparse Rank-One Matrices via Sparse Power Factorization (2013)
  21. Blind Deconvolution using Convex Programming (2012)

Multi-Kernel Convolutional: \(\mathbf{y} = \sum_i \mathbf{a}_i \otimes \mathbf{x}_i\)

  1. Blind Demixing and Deconvolution at Near-Optimal Rate (2017)
  2. Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing (2017)
  3. Blind Deconvolution Meets Blind Demixing: Algorithms and Performance Bounds (2015)

Wavelet/General Scattering Network

  1. Lipschitz Properties for Deep Convolutional Networks (2017)
  2. Discrete Deep Feature Extraction: A Theory and New Architectures (2016)
  3. A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction (2015)
  4. Deep Convolutional Neural Networks Based on Semi-Discrete Frames (2015)
  5. Unsupervised Learning by Deep Scattering Contractions (2014)
  6. Invariant Scattering Convolution Networks (2013)
  7. Group Invariant Scattering (2011)

Provable Learning of Deep Structure

  1. Sparse Matrix Factorization (2013)
  2. Provable Bounds for Learning Some Deep Representations (2013)

Algorithms and Applications

Dictionary Learning

  1. To get a taste of the applications of dictionary learning in signal and image processing (compression in these areas demands good bases/dictionaries), see the book by Michael Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing; a toy sketch of the basic alternating scheme follows below.
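
For a flavor of the basic computational scheme, here is a minimal, unoptimized sketch of alternating minimization for \(\mathbf{Y} \approx \mathbf{A} \mathbf{X}\) (in the spirit of the alternating-minimization references above; the thresholding level and iteration count are arbitrary illustrative choices):

```python
import numpy as np

def dictionary_learning(Y, n_atoms, lam=0.1, n_iter=50, seed=0):
    """Alternate a sparse coding step (one ISTA step on X) with a
    least-squares dictionary update (with column renormalization)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[0], n_atoms))
    A /= np.linalg.norm(A, axis=0)
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iter):
        # Sparse coding: gradient step on 0.5 * ||Y - A X||_F^2, then soft-threshold.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        X = X - step * (A.T @ (A @ X - Y))
        X = np.sign(X) * np.maximum(np.abs(X) - step * lam, 0.0)
        # Dictionary update: least squares in A, then renormalize the columns.
        A = Y @ np.linalg.pinv(X)
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, X
```

In practice one would replace the coding step with a proper sparse solver (e.g., OMP or full ISTA/FISTA) and the update step with K-SVD-style per-atom updates, as described in Elad's book.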

Convolutional Dictionary Learning

  1. Convolutional Dictionary Learning through Tensor Factorization (2015)
  2. Fast Convolutional Sparse Coding (2013)
  3. Deconvolutional Networks (2010)

Deep Learning

  1. Scattering Page maintained by Stéphane Mallat’s group

Disclaimer - This page is meant to serve as a hub for references on this problem and does not in any way represent personal endorsement of the papers listed here; I hold no responsibility for the quality or technical correctness of any paper listed. The reader is advised to use this resource with discretion.

If you’d like your paper to be listed here - Just drop me a few lines via email (which can be found on the “Welcome” page). If you’d rather not spend a word, simply deposit your paper on arXiv: I get email alerts about new animals there every morning, and will be happy to hunt one down for this zoo if it seems to fit.
