Where prior learning can and can't work in unsupervised inverse problems - A&O (Apprentissage et Optimisation)
Preprint / Working Paper. Year: 2024


Abstract

Linear inverse problems consist of recovering a signal from a noisy and incomplete (or compressed) observation in a lower-dimensional space. Many popular resolution methods rely on data-driven algorithms that learn a prior from pairs of signals and observations to overcome the loss of information. However, these approaches are difficult, if not impossible, to adapt to unsupervised contexts, where no ground-truth data are available, because they need to learn from clean signals. This paper studies necessary and sufficient conditions under which learning a prior in unsupervised inverse problems is or is not possible. First, we focus on dictionary learning and point out that recovering the dictionary is infeasible without constraints when the signal is observed through only one measurement operator. It can, however, be learned with multiple operators, provided they are diverse enough to span the whole signal space. Then, we study methods where weak priors are made available either through optimization constraints or deep learning architectures. We show empirically that they outperform handcrafted priors only when they are adapted to the inverse problem.
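The single-operator limitation described in the abstract can be illustrated with a generic numerical sketch (this is not code from the paper; the dimensions and random Gaussian operators are assumptions chosen for illustration). A compressed observation y = A x + noise loses every component of x in the null space of A, so one operator only reveals a low-dimensional subspace; stacking several diverse operators can jointly cover the whole signal space.

```python
# Illustrative sketch (not the paper's code): measure how much of the
# signal space a set of compressed measurement operators can observe,
# via the rank of the stacked operators.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3  # signal dimension, measurement dimension (m < n)

# Three random Gaussian measurement operators mapping R^n -> R^m.
A1 = rng.standard_normal((m, n))
A2 = rng.standard_normal((m, n))
A3 = rng.standard_normal((m, n))

def observed_dim(*ops):
    """Dimension of the signal subspace jointly observed by the
    operators: the rank of their vertical stack."""
    return int(np.linalg.matrix_rank(np.vstack(ops)))

# A single operator sees (almost surely) only an m-dimensional
# subspace of R^n; the remaining n - m dimensions are invisible.
print(observed_dim(A1))          # -> 3

# Diverse operators together can span all of R^n, which is the
# regime where dictionary recovery becomes possible.
print(observed_dim(A1, A2, A3))  # -> 8
```

The key point is that no learning procedure can recover structure living in the common null space of all available operators, since it never influences any observation.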
Main file: IEEE___Prior_learning_in_inverse_problems-7.pdf (2.29 MB). Origin: files produced by the author(s).

Dates and versions

hal-04782335, version 1 (14-11-2024)

Identifiers

  • HAL Id: hal-04782335, version 1

Cite

Benoît Malézieux, Florent Michel, Thomas Moreau, Matthieu Kowalski. Where prior learning can and can't work in unsupervised inverse problems. 2024. ⟨hal-04782335⟩
63 views
34 downloads
