Learning Multi-Layer Transform Models
Saiprasad Ravishankar and Brendt Wohlberg
Signal models have been used in many applications, including inverse problems, where they often serve to construct regularizers. In particular, learning signal models from training data, or even from corrupted measurements, has shown promise in various settings. Among sparsity-based models, the synthesis dictionary model [1] is perhaps the best known. Various methods have been proposed to learn synthesis dictionaries from signals or image patches [2-8], or in a convolutional framework [9, 10]. However, the sparse coding problem (i.e., representing a signal as a sparse combination of appropriate dictionary atoms or filters) in the synthesis model (or during learning) typically lacks a closed-form solution and is NP-hard in general.
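Because exact synthesis sparse coding is intractable in general, greedy heuristics are commonly used in practice. The sketch below, a minimal NumPy implementation of orthogonal matching pursuit (one standard such heuristic, not a method from this paper), illustrates the synthesis model: it approximates a signal `y` as a combination of at most `s` atoms of a dictionary `D`. The dictionary, signal, and sparsity level here are synthetic assumptions for illustration.

```python
import numpy as np

def omp(D, y, s):
    """Greedy orthogonal matching pursuit: approximate y by at most
    s columns (atoms) of the dictionary D via D @ x with x sparse."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(s):
        # Select the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Least-squares fit of y on the selected atoms only.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

# Synthetic example: a signal built from 2 known atoms of a
# random unit-norm dictionary (20-dimensional signals, 50 atoms).
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, s=2)  # sparse code with at most 2 nonzeros
```

Note that greedy selection carries no general optimality guarantee; it recovers the true support only under conditions on the dictionary (e.g., low mutual coherence), which is precisely the difficulty the paragraph above refers to.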
October 18, 2018