A Proof of Theorem 1

Neural Information Processing Systems

We consider four types of benchmarks in our experiments, including random COPs, scale-free networks, and weighted graph coloring problems (WGCPs). To be fair and practical, we do not consider existing DNN-based BP variants. We compare our DABP with the following state-of-the-art COP solvers: (1) DBP with a damping factor of 0.9 and its splitting-constraint factor graph version (DBP-SCFG) with a splitting ratio of 0.95. However, it can be extremely tedious and time-consuming to tune the damping factor. In these benchmarks the domain size is small (i.e., 5) and the constraint functions are highly structured, which allows effective pruning. It can also be concluded that our DABP converges much faster than DBP and DBP-SCFG. To demonstrate the necessity of the heterogeneous hyperparameters of Eq. (6), we conduct extensive ablation studies. Figures 1-12 present the results on solution quality, Figure 9 shows convergence rates under different iteration limits (|X| = 100), and Table 1 presents the GPU memory footprint of our DABP.
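Damping mixes each message from the previous iteration with the freshly computed one before it is sent. Below is a minimal sketch of the static damped update that baselines like DBP use; the function name `damped_update` and the use of NumPy are illustrative, not from the paper:

```python
import numpy as np

def damped_update(m_old, m_new, damping=0.9):
    """Mix the previous message with the freshly computed one.

    A static damping factor (e.g., 0.9 for DBP) keeps 90% of the old
    message, which stabilizes convergence but must be hand-tuned.
    """
    return damping * m_old + (1.0 - damping) * m_new

# Example: damping a variable-to-factor message over a domain of size 5.
m_old = np.zeros(5)             # message from the previous iteration
m_computed = np.random.rand(5)  # message computed at this iteration
m_sent = damped_update(m_old, m_computed, damping=0.9)
```

With a static factor such as DBP's 0.9, every edge reuses the same mixing weight at every iteration; this single hyperparameter is exactly what the excerpt above describes as tedious and time-consuming to tune.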


Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems

Deng, Yanchen, Kong, Shufeng, Liu, Caihua, An, Bo

arXiv.org Artificial Intelligence

Belief Propagation (BP) is an important message-passing algorithm for various reasoning tasks over graphical models, including solving Constraint Optimization Problems (COPs). It has been shown that BP can achieve state-of-the-art performance on various benchmarks by mixing old and new messages before sending the new one, i.e., damping. However, existing methods tune a static damping factor for BP, which is not only laborious but also harms performance. Moreover, existing BP algorithms treat each variable node's neighbors equally when composing a new message, which also limits their exploration ability. To address these issues, we seamlessly integrate BP, Gated Recurrent Units (GRUs), and Graph Attention Networks (GATs) within the message-passing framework to reason about dynamic weights and damping factors for composing new BP messages. Our model, Deep Attentive Belief Propagation (DABP), takes the factor graph and the BP messages in each iteration as the input and infers the optimal weights and damping factors through GRUs and GATs, followed by a multi-head attention layer. Furthermore, unlike existing neural-based BP variants, we propose a novel self-supervised learning algorithm for DABP with a smoothed solution cost, which does not require expensive training labels and also avoids the common out-of-distribution issue through efficient online learning. Extensive experiments show that our model significantly outperforms state-of-the-art baselines.
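To make the architecture described in the abstract concrete, here is a minimal, hypothetical PyTorch sketch of how per-message damping factors could be inferred from BP messages with a GRU and an attention layer. This is not the authors' implementation: the class name `DynamicDampingNet`, the dimensions, and the use of `nn.MultiheadAttention` as a stand-in for the paper's GAT over the factor graph are all assumptions, and the factor-graph neighborhood masking is omitted.

```python
import torch
import torch.nn as nn

class DynamicDampingNet(nn.Module):
    """Illustrative sketch (not the authors' code): infer a per-message
    damping factor from BP message embeddings, in the spirit of DABP.
    A GRU cell tracks each message's history across BP iterations, and a
    multi-head attention layer (standing in for the GAT of the paper)
    mixes information across messages before a small head predicts a
    damping factor in (0, 1).
    """

    def __init__(self, msg_dim: int, hidden_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(msg_dim, hidden_dim)    # embed raw BP messages
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)  # per-message recurrent state
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.damp_head = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, messages: torch.Tensor, state: torch.Tensor):
        # messages: (num_edges, msg_dim)    current-iteration BP messages
        # state:    (num_edges, hidden_dim) recurrent state from the last iteration
        h = self.gru(self.embed(messages), state)
        # Treat all edges as one attention "sequence"; a real implementation
        # would mask attention to each node's actual factor-graph neighborhood.
        ctx, _ = self.attn(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))
        damping = self.damp_head(ctx.squeeze(0))       # (num_edges, 1), values in (0, 1)
        return damping, h

# Usage: damp each message with its own inferred factor.
net = DynamicDampingNet(msg_dim=5)
msgs_old = torch.zeros(8, 5)   # 8 edges, domain size 5
msgs_new = torch.rand(8, 5)
state = torch.zeros(8, 64)
damping, state = net(msgs_new, state)
msgs_sent = damping * msgs_old + (1 - damping) * msgs_new
```

Because the paper's self-supervised loss replaces training labels with a smoothed version of the solution cost, a model of this shape could in principle be trained online on the very instance being solved, which is how the abstract's out-of-distribution issue is avoided.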