Learning and Optimization of Blackbox Combinatorial Solvers in Neural Networks
The use of blackbox solvers inside neural networks is a relatively new area that aims to improve network performance by incorporating proven, efficient solvers for complex combinatorial problems. Existing work has developed methods for training networks that contain such solvers as components while treating them as blackboxes. This work attempts to improve upon existing techniques by optimizing not only the primary loss function but also the performance of the solver itself, using Time-cost Regularization. Additionally, we propose a method to learn blackbox parameters, such as which blackbox solver to use or the heuristic function for a particular solver. We do this by introducing the idea of a hyper-blackbox: a blackbox wrapped around one or more internal blackboxes.

In computer science, neural networks are increasingly widely used. They can solve many problems in a highly general way, so that the main requirements are an appropriate architecture and sufficient data. On the other hand, classical algorithmic techniques, such as graph algorithms and SAT solvers, are highly optimized and well studied; however, rather than being general, they are usually specific to their exact problem and feature space.
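To make the idea of treating a solver as a differentiable blackbox concrete, the following is a minimal sketch of one known technique for this setting: computing an informative gradient by calling the solver a second time on perturbed costs and taking a finite difference. The toy `solver` (a one-hot argmin), the helper names, and the hyperparameter `lam` are illustrative assumptions, not the exact method of this work.

```python
import numpy as np

def solver(w):
    """A toy combinatorial 'blackbox': return the minimum-cost
    item of the cost vector w as a one-hot indicator vector.
    Any exact solver (shortest path, matching, ...) could stand here."""
    y = np.zeros_like(w)
    y[np.argmin(w)] = 1.0
    return y

def blackbox_backward(w, grad_y, lam=1.0):
    """Surrogate gradient for the blackbox call y = solver(w).

    The solver's true Jacobian is zero almost everywhere, so instead
    we re-solve on costs perturbed in the direction of the incoming
    gradient dL/dy and return a finite-difference estimate:
        grad_w = -(solver(w) - solver(w + lam * grad_y)) / lam
    lam trades off gradient informativeness against faithfulness."""
    y = solver(w)
    y_prime = solver(w + lam * grad_y)
    return -(y - y_prime) / lam
```

For example, with costs `w = [2, 1, 3]`, the solver picks item 1; if the loss prefers item 0 (so `grad_y = y - target = [-1, 1, 0]`), the surrogate gradient points toward lowering `w[0]` and raising `w[1]`, steering a gradient step on the upstream network in the right direction.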
Jun-6-2020