A Proof of Theorem 1

Neural Information Processing Systems 

We consider four types of benchmarks in our experiments, i.e., random COPs, scale-free networks,

To be fair and practical, we do not consider existing DNN-based BP variants. We compare our DABP with the following state-of-the-art COP solvers: (1) DBP with a damping factor of 0.9 and its splitting constraint factor graph version (DBP-SCFG) with a splitting ratio of 0.95 [ However, it can be extremely tedious and time-consuming to tune the damping factor.

Figure 9: Convergence rates under different iteration limits (|X| = 100)

size (i.e., 5) and the constraint functions are highly structured, which allows effective pruning and

It can also be concluded that our DABP converges much faster than DBP and DBP-SCFG. To demonstrate the necessity of heterogeneous hyperparameters of Eq. (6), we conduct extensive

Figures 1-12 present the results on solution quality. Table 1 presents the GPU memory footprint of our DABP.
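To make the role of the damping factor concrete, the sketch below shows the standard damped message update used by DBP-style solvers: each outgoing message is a convex combination of the previous message and the freshly computed one. This is a minimal illustration, not the authors' implementation; the function name `damped_update` and the toy message values are our own.

```python
import numpy as np

def damped_update(m_old, m_new, damping=0.9):
    """Damped belief-propagation message update.

    With damping factor lambda, the message at iteration t is
        m_t = lambda * m_{t-1} + (1 - lambda) * m_new,
    so a larger lambda slows the update, which can stabilize BP runs
    that would otherwise oscillate on loopy factor graphs.
    """
    return damping * m_old + (1.0 - damping) * m_new

# Toy illustration: a raw message that flips sign every iteration
# (a caricature of oscillating BP) is tamed by heavy damping.
m = np.zeros(3)
for t in range(50):
    raw = np.array([1.0, -1.0, 0.5]) * (-1.0) ** t  # oscillating raw message
    m = damped_update(m, raw, damping=0.9)
# After many iterations the damped message stays close to zero
# instead of oscillating with the full amplitude of `raw`.
```

The sketch also hints at why tuning is tedious: a damping factor that is too small fails to suppress oscillation, while one that is too large slows convergence, and the best value is problem-dependent.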