Supplement: Hybrid Models for Learning to Branch

Neural Information Processing Systems 

In this section, we argue that the GNN architecture loses its advantages when solving multiple MILPs at the same time. A natural way to do so is to pack several GNNs onto a single GPU, such that each GNN is dedicated to solving one MILP; we ran the GNNs on a Tesla V100 32 GB GPU. Figure 1 shows the resulting inefficient utilization of the GPU when multiple GNNs are packed on it. We use the features that were used by Gasse et al.

Figure 1: Packing several GNNs together on a GPU keeps it underutilized.

This work was done during an internship at Mila and CERC.
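The packing setup described above can be mimicked with a minimal sketch. This is illustrative only: the names (`gnn_forward`, `N_MODELS`, `HIDDEN`) and the use of CPU threads with NumPy standing in for per-model GPU work are our assumptions, not the paper's implementation; the point is simply that each packed model issues its own small, independent workload.

```python
# Illustrative sketch (not the paper's code): several independent "GNN"
# inference jobs packed onto one shared device-like executor, each
# dedicated to one MILP instance. CPU threads stand in for GPU streams.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

N_MODELS = 8   # number of GNNs packed together (hypothetical value)
HIDDEN = 64    # small per-model workload -> poor device saturation

def gnn_forward(seed: int) -> np.ndarray:
    """Stand-in for one GNN's forward pass on one MILP instance."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((HIDDEN, HIDDEN))
    w = rng.standard_normal((HIDDEN, HIDDEN))
    return x @ w  # one small dense op per model; too little work to saturate a GPU

# Each worker mimics one GNN dedicated to one MILP.
with ThreadPoolExecutor(max_workers=N_MODELS) as pool:
    outputs = list(pool.map(gnn_forward, range(N_MODELS)))

print(len(outputs), outputs[0].shape)  # 8 (64, 64)
```

Because each model's kernels are small and launched independently, the device spends much of its time idle between them, which is the underutilization Figure 1 reports.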
