Dictionary Learning, Blind Deconvolution, Deep Learning
Learning dictionaries/atomic sets that induce structured representations of data. Applications are still emerging explosively, especially in deep learning, where one allows multi-level nonlinear cascading of representations. Hence the problem formulations are fairly diverse. We roughly organize the references according to the problems they try to solve, concentrating on recent literature of a theoretical nature. (Updated: Feb 29, 2020)
[S] indicates my contribution.
Theory
$\mathbf{Y} = \mathbf{A} \mathbf{X}$, $\mathbf{A}$ Square, Invertible, Global Recovery
This problem can be reduced to a sequence of subproblems, each taking the form of finding the sparsest vector in a linear subspace. See also Structured Element Pursuit.
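To make the reduction concrete: when $\mathbf{A}$ is square and invertible, $\mathrm{row}(\mathbf{Y}) = \mathrm{row}(\mathbf{X})$, so the rows of the sparse $\mathbf{X}$ are, up to sign and scale, the sparsest vectors in the row space of $\mathbf{Y}$. Below is a minimal numerical sketch in the spirit of the sphere-constrained formulations listed here; the log-cosh surrogate, step size, and sparsity level are illustrative choices of mine, not the exact recipe of any listed paper.

```python
import numpy as np

# Toy sketch: recover one row of X from Y = A X (A square, invertible).
# Since row(Y) = row(X), we look for a sparse vector q^T Y in the row
# space of Y by minimizing a smooth l1 surrogate (log-cosh) over the
# sphere ||q|| = 1 via Riemannian gradient descent.

rng = np.random.default_rng(0)
n, p, theta = 20, 2000, 0.1                 # dimension, samples, sparsity
A = rng.standard_normal((n, n))             # square, invertible w.h.p.
X = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)
Y = A @ X

mu, step = 1e-2, 1e-1                       # smoothing and step size
q = rng.standard_normal(n)
q /= np.linalg.norm(q)
for _ in range(500):
    z = q @ Y                               # candidate row-space vector
    g = Y @ np.tanh(z / mu) / p             # grad of (mu/p) sum log cosh(z/mu)
    g -= (g @ q) * q                        # project onto tangent space
    q -= step * g                           # gradient step ...
    q /= np.linalg.norm(q)                  # ... then retract to the sphere

# q^T Y should be close to one row of X up to sign and scale; deflation
# recovers the remaining rows, after which A follows by least squares.
row = q @ Y
match = max(abs(np.corrcoef(row, X[i])[0, 1]) for i in range(n))
print(f"best correlation with a row of X: {match:.3f}")
```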
- Complete Dictionary Learning via $\ell_p$-norm Maximization (2020)
- A Nonconvex Approach for Exact and Efficient Multichannel Sparse Blind Deconvolution (2019)
- Subgradient Descent Learns Orthogonal Dictionaries ([S], 2018)
- Efficient Dictionary Learning with Gradient Descent (2018)
- Fast and robust tensor decomposition with applications to dictionary learning (2017)
- An improved analysis of the ER-SpUD dictionary learning algorithm (2016)
- A note on the sample complexity of the Er-SpUD algorithm by Spielman, Wang and Wright for exact recovery of sparsely used dictionaries (2016)
- Dictionary Learning with Few Samples and Matrix Concentration (2015)
- Complete Dictionary Recovery over the Sphere ([S], 2015)
- Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method (2014)
- Sum-of-squares proofs and the quest toward optimal algorithms (2014)
- Rounding Sum-of-Squares Relaxations (2013)
- Scaling law for recovering the sparsest element in a subspace (2013)
- Exact Recovery of Sparsely-Used Dictionaries (2012)
$\mathbf{Y} = \mathbf{A} \mathbf{X}$, $\mathbf{A}$ Overcomplete, Incoherent, Global Recovery
- Analysis of the Optimization Landscapes for Overcomplete Representation Learning (2019)
- Towards Learning Sparsely Used Dictionaries with Arbitrary Supports (2018)
- A Provable Approach for Double-Sparse Coding (2017)
- Alternating minimization for dictionary learning with random initialization (2017)
- Polynomial-time Tensor Decompositions with Sum-of-Squares (2016)
- Simple, Efficient, and Neural Algorithms for Sparse Coding (2015)
- More Algorithms for Provable Dictionary Learning (2014)
- Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method (2014)
- Sum-of-squares proofs and the quest toward optimal algorithms (2014)
- Rounding Sum-of-Squares Relaxations (2013)
- Exact Recovery of Sparsely Used Overcomplete Dictionaries (2013)
- New Algorithms for Learning Incoherent and Overcomplete Dictionaries (2013)
- Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization (2013)
$\mathbf{Y} = \mathbf{A} \mathbf{X}$, Local Correctness
- Complete Dictionary Learning via $\ell^4$-Norm Maximization over the Orthogonal Group (2019)
- Unique Sharp Local Minimum in $\ell_1$-minimization Complete Dictionary Learning (2019)
- On the Local Correctness of $\ell^1$ Minimization for Dictionary Learning (2011, $\mathbf{A}$ general)
- Dictionary Identification - Sparse Matrix-Factorisation via $\ell^1$-Minimisation (2009, $\mathbf{A}$ square)
Single-Kernel Convolutional (aka Blind Deconvolution): $\mathbf{y} = \mathbf{a} \otimes \mathbf{x}$
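Before the references, a quick toy demonstration (my own setup, using circular convolution via FFT; sizes and sparsity are illustrative) of the intrinsic symmetry here: the pair $(\alpha\, s_\tau[\mathbf{a}],\ \alpha^{-1} s_{-\tau}[\mathbf{x}])$, a scaled and shifted version of $(\mathbf{a}, \mathbf{x})$, generates exactly the same observation. This is why the identifiability papers below can only recover $(\mathbf{a}, \mathbf{x})$ up to a signed shift-and-scale ambiguity.

```python
import numpy as np

# Toy demonstration of the shift-and-scale ambiguity in y = a (*) x
# (circular convolution, implemented via FFT).

rng = np.random.default_rng(1)
n, k = 128, 16
a = np.zeros(n)
a[:k] = rng.standard_normal(k)                       # short kernel
x = rng.standard_normal(n) * (rng.random(n) < 0.05)  # sparse activations

conv = lambda u, v: np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))
y = conv(a, x)

alpha, tau = -2.5, 7                                 # arbitrary scale, shift
y_alt = conv(alpha * np.roll(a, tau), np.roll(x, -tau) / alpha)
print(np.allclose(y, y_alt))                         # True: same observation
```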
- Manifold Gradient Descent Solves Multi-Channel Sparse Blind Deconvolution Provably and Efficiently (2019)
- Geometry and Symmetry in Short-and-Sparse Deconvolution (2019)
- Structured Local Optima in Sparse Blind Deconvolution (2018)
- On the Global Geometry of Sphere-Constrained Sparse Blind Deconvolution (2017)
- Blind Deconvolution by a Steepest Descent Algorithm on a Quotient Manifold (2017)
- From Blind Deconvolution to Blind Super-Resolution through Convex Programming (2017)
- BranchHull: Convex bilinear inversion from the entrywise product of signals with known signs (2017)
- Self-Calibration via Linear Least Squares (2016)
- Through the Haze: A Non-Convex Approach to Blind Calibration for Linear Random Sensing Models (2016)
- Fast and guaranteed blind multichannel deconvolution under a bilinear system model (2016)
- Leveraging Diversity and Sparsity in Blind Deconvolution (2016)
- Rapid, Robust, and Reliable Blind Deconvolution via Nonconvex Optimization (2016)
- A Non-Convex Blind Calibration Method for Randomised Sensing Strategies (2016)
- Empirical Chaos Processes and Blind Deconvolution (2016)
- Optimal Injectivity Conditions for Bilinear Inverse Problems with Applications to Identifiability of Deconvolution Problems (2016)
- RIP-like Properties in Subsampled Blind Deconvolution (2015)
- Blind Recovery of Sparse Signals from Subsampled Convolution (2015)
- Identifiability and Stability in Blind Deconvolution under Minimal Assumptions (2015)
- Identifiability in Blind Deconvolution with Subspace or Sparsity Constraints (2015)
- A Unified Framework for Identifiability Analysis in Bilinear Inverse Problems with Applications to Subspace and Sparsity Models (2015)
- Fundamental Limits of Blind Deconvolution Part II: Sparsity-Ambiguity Trade-offs (2015)
- Self-Calibration and Biconvex Compressive Sensing (2015)
- Fundamental Limits of Blind Deconvolution Part I: Ambiguity Kernel (2014)
- Sparse blind deconvolution: What cannot be done (2014)
- Identifiability Scaling Laws in Bilinear Inverse Problems (2014)
- Near Optimal Compressed Sensing of Sparse Rank-One Matrices via Sparse Power Factorization (2013)
- Blind Deconvolution using Convex Programming (2012)
Multi-Kernel Convolutional: $\mathbf{Y} = \sum_i \mathbf{a}_i \otimes \mathbf{x}_i$
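A short sketch of how data from this model is generated (again a toy construction of mine, with circular convolution): each short kernel $\mathbf{a}_i$ is convolved with its own sparse activation map $\mathbf{x}_i$, and the observation superimposes the results. Recovery of all $(\mathbf{a}_i, \mathbf{x}_i)$ pairs is sought up to the per-pair shift-and-scale ambiguities and a permutation of the pairs.

```python
import numpy as np

# Toy generator for the multi-kernel model: K short kernels, each with
# its own sparse activation map, superimposed into one observation.
# Sizes and sparsity levels are illustrative choices of mine.

rng = np.random.default_rng(2)
n, K, k = 256, 3, 12
conv = lambda u, v: np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

kernels, maps = [], []
y = np.zeros(n)
for _ in range(K):
    a_i = np.zeros(n)
    a_i[:k] = rng.standard_normal(k)                      # short kernel
    x_i = rng.standard_normal(n) * (rng.random(n) < 0.03)  # sparse map
    kernels.append(a_i); maps.append(x_i)
    y += conv(a_i, x_i)                                   # superposition
```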
- On the Reconstruction Risk of Convolutional Sparse Dictionary Learning (2017)
- Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding (2017)
- Blind Demixing and Deconvolution at Near-Optimal Rate (2017)
- Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing (2017)
- Blind Deconvolution Meets Blind Demixing: Algorithms and Performance Bounds (2015)
Generalized Dictionary Learning
- Learning Two Layer Rectified Neural Networks in Polynomial Time (2018)
- Learning Semidefinite Regularizers (2017)
Wavelet/General Scattering Network
- Lipschitz Properties for Deep Convolutional Networks (2017)
- Discrete Deep Feature Extraction: A Theory and New Architectures (2016)
- A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction (2015)
- Deep Convolutional Neural Networks Based on Semi-Discrete Frames (2015)
- Unsupervised Learning by Deep Scattering Contractions (2014)
- Invariant Scattering Convolution Networks (2013)
- Group Invariant Scattering (2011)
Provable Learning of Deep Structure
Algorithms and Applications
Dictionary Learning
- To get a taste of the applications of dictionary learning in signal and image processing (compression in these areas demands good bases/dictionaries), see the book by Michael Elad: Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing.
Transformation Learning
Convolutional Dictionary Learning
- Short-and-Sparse Deconvolution – A Geometric Approach (2019)
- Working Locally Thinking Globally - Part II: Stability and Algorithms for Convolutional Sparse Coding (2016)
- Working Locally Thinking Globally - Part I: Theoretical Guarantees for Convolutional Sparse Coding (2016)
- Convolutional Dictionary Learning through Tensor Factorization (2015)
- Fast Convolutional Sparse Coding (2013)
- Deconvolutional Networks (2010)
Deep Learning
- Scattering Page maintained by Stephane Mallat’s group
Disclaimer - This page is meant to serve as a hub for references on this problem, and does not in any way represent personal endorsement of the papers listed here. I do not hold any responsibility for the quality or technical correctness of each paper listed; the reader is advised to use this resource with discretion.
If you’d like your paper to be listed here - Just drop me a few lines via email (which can be found on the “Welcome” page). If you’d rather not spend a word, simply deposit your paper on arXiv; I get email alerts about new animals there every morning, and will be happy to hunt one for this zoo if it seems a fit.