Unsupervised Learning for Solving the Travelling Salesman Problem
We propose UTSP, an Unsupervised Learning (UL) framework for solving the Travelling Salesman Problem (TSP). We train a Graph Neural Network (GNN) using a surrogate loss. The GNN outputs a heat map representing the probability for each edge to be part of the optimal path. We then apply local search to generate our final prediction based on the heat map. Our loss function consists of two parts: one pushes the model to find the shortest path and the other serves as a surrogate for the constraint that the route should form a Hamiltonian Cycle. Experimental results show that UTSP outperforms the existing data-driven TSP heuristics. Our approach is parameter efficient as well as data efficient: the model takes 10% of the number of parameters and 0.2% of training samples compared with Reinforcement Learning or Supervised Learning methods.
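The two-part loss described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the soft assignment via row-wise softmax, the cyclic-shift construction of the heat map, the penalty weight `lam`, and the function name `utsp_style_loss` are all assumptions made for illustration. The idea is that a soft position-assignment matrix `T` induces an edge heat map `H`, the expected tour length pushes toward short routes, and a column-sum penalty acts as a surrogate for the Hamiltonian-cycle constraint.

```python
import numpy as np

def utsp_style_loss(scores, dist, lam=10.0):
    """Sketch of a two-term unsupervised TSP loss (hypothetical names).

    scores: (n, n) raw model outputs; dist: (n, n) pairwise distances.
    """
    # Row-wise softmax: T[i, t] ~ probability that city i occupies
    # position t in the tour.
    T = np.exp(scores - scores.max(axis=1, keepdims=True))
    T /= T.sum(axis=1, keepdims=True)
    # Heat map: H[i, j] ~ probability that city j directly follows
    # city i, with positions wrapping around cyclically.
    H = T @ np.roll(T, -1, axis=1).T
    # Term 1: expected tour length (pushes the model toward short paths).
    length = (H * dist).sum()
    # Term 2: surrogate for the Hamiltonian-cycle constraint --
    # each tour position should be occupied exactly once.
    col_sums = T.sum(axis=0)
    penalty = ((col_sums - 1.0) ** 2).sum()
    return length + lam * penalty
```

With uniform scores the column constraint is satisfied exactly, so only the expected-length term contributes; a trained model would shape `scores` so that `H` concentrates mass on edges of a short Hamiltonian cycle, after which local search decodes the final tour from `H`.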
Supplementary Material for "Partial Optimal Transport with Applications on Positive-Unlabeled Learning"
The proof involves three steps. First, we justify the definition of p and q in the extended problem formulation, and show that T [...]. It is straightforward to see that, by doing so, we ensure that Γ remains an admissible coupling (see Figure 1 for an illustration).

Figure 1: Repartition of the mass for matrices Γ and T. Each of them has a total mass of q.

With a constant A > 2ξ, the GW formulation involves pairs of points. This yields the following cases.

Case 1: a > 0. In that case, φ(γ) is a convex function, whose minimum on [0, 1] is reached for γ [...]. We have φ(0) = c > 0 and φ(1) = a + b + c. The minimum is then obtained at 0 if a + b > 0, and at 1 otherwise. This gives the desired result.
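The endpoint comparison in Case 1 can be made explicit. Assuming φ is the quadratic $\varphi(\gamma) = a\gamma^2 + b\gamma + c$ (an assumption: the excerpt truncates before φ is defined, but this form is consistent with the stated values of φ(0) and φ(1)):

```latex
\varphi(0) = c, \qquad
\varphi(1) = a + b + c, \qquad
\varphi(1) - \varphi(0) = a + b .
```

Hence, when the minimum is attained at an endpoint of $[0,1]$, it is attained at $\gamma = 0$ precisely when $a + b > 0$, and at $\gamma = 1$ otherwise, matching the conclusion of Case 1.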