

A Bridging Framework for Model Optimization and Deep Propagation

Neural Information Processing Systems

Optimizing task-related mathematical models is one of the most fundamental methodologies in the statistics and learning areas. However, generically designed schematic iterations may be unable to investigate complex data distributions in real-world applications. Recently, training deep propagations (i.e., networks) has achieved promising performance in some particular tasks. Unfortunately, existing networks are often built in heuristic manners and thus lack principled interpretations and solid theoretical support. In this work, we provide a new paradigm, named Propagation and Optimization based Deep Model (PODM), to bridge the gap between these different mechanisms (i.e., model optimization and deep propagation). On the one hand, we utilize PODM as a deeply trained solver for model optimization. Different from existing network-based iterations, which often lack theoretical investigation, we provide a strict convergence analysis for PODM in the challenging nonconvex and nonsmooth scenarios. On the other hand, by relaxing the model constraints and performing end-to-end training, we also develop a PODM-based strategy to integrate domain knowledge (formulated as models) and real data distributions (learned by networks), resulting in a generic ensemble framework for challenging real-world applications. Extensive experiments verify our theoretical results and demonstrate the superiority of PODM over state-of-the-art approaches.


Reviews: A Bridging Framework for Model Optimization and Deep Propagation

Neural Information Processing Systems

Paper summary: The paper proposes a learning-based hybrid proximal gradient method for composite minimization problems. Each iteration is divided into two modules: the learning module performs data-fidelity minimization with certain network-based priors; the optimization module then generates strictly convergent propagations by applying proximal gradient feedback to the output of the learning module. The generated iterates are shown to form a Cauchy sequence converging to the critical points of the original objective. The method is applied to image restoration tasks and its performance is evaluated.

Comments: The core idea is to develop a learning-based module that incorporates domain knowledge into the conventional proximal gradient descent procedure.
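The two-module iteration the reviewer describes can be sketched on a simple composite objective, min_x ½‖Ax − b‖² + λ‖x‖₁. In this sketch, `learned_module` is a hypothetical stand-in for the paper's trained network (here just a gradient step), and the acceptance test is an assumed safeguard, not the paper's exact criterion:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def learned_module(x, grad, alpha):
    # Hypothetical stand-in for the trained network: a plain
    # gradient step producing a candidate point.
    return x - alpha * grad

def podm_sketch(A, b, lam=0.1, iters=200):
    """Hybrid proximal-gradient iteration in the spirit of PODM:
    a learned candidate is corrected by a proximal-gradient step and
    accepted only if it decreases the composite objective; otherwise
    the iteration falls back to a plain proximal-gradient update."""
    n = A.shape[1]
    x = np.zeros(n)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L (L = Lipschitz constant)

    def F(z):                                  # composite objective f + g
        r = A @ z - b
        return 0.5 * r @ r + lam * np.abs(z).sum()

    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        u = learned_module(x, grad, alpha)                  # learning module
        gu = A.T @ (A @ u - b)
        cand = soft_threshold(u - alpha * gu, alpha * lam)  # optimization module
        if F(cand) <= F(x):                                 # simple acceptance check
            x = cand
        else:                                               # fallback: proximal gradient
            x = soft_threshold(x - alpha * grad, alpha * lam)
    return x
```

With the fallback in place, the objective is monotonically non-increasing regardless of the quality of the learned candidate, which mirrors the convergence safeguard the review attributes to the optimization module.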


A Bridging Framework for Model Optimization and Deep Propagation

Liu, Risheng; Cheng, Shichao; Liu, Xiaokun; Ma, Long; Fan, Xin; Luo, Zhongxuan

Neural Information Processing Systems
