TableRAG: Million-Token Table Understanding with Language Models
Si-An Chen

Neural Information Processing Systems

TableRAG combines schema retrieval with cell retrieval, so the language model receives only the table fragments relevant to a query. This enables more efficient data encoding and precise retrieval, significantly reducing prompt lengths and mitigating information loss. We have developed two new million-token benchmarks from the Arcade and BIRD-SQL datasets to thoroughly evaluate TableRAG's effectiveness at scale.
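A minimal sketch of the cell-retrieval idea, assuming a toy keyword-overlap scorer; the table, column names, and scoring function here are illustrative, and the paper's actual encoders and retrievers are not reproduced:

```python
# Hedged sketch: flatten a table into (column, value) cells, then rank
# cells by word overlap with the query so the prompt only needs the
# top-k relevant cells rather than the full table. All names are
# illustrative, not the paper's implementation.

def build_cell_corpus(table):
    """Flatten a dict-of-columns table into (column, value) cells."""
    cells = []
    for col, values in table.items():
        for v in values:
            cells.append((col, str(v)))
    return cells

def retrieve_cells(cells, query, k=3):
    """Rank cells by word overlap with the query; return the top-k."""
    q = set(query.lower().split())
    def score(cell):
        col, val = cell
        words = set(col.lower().split()) | set(val.lower().split())
        return len(q & words)
    return sorted(cells, key=score, reverse=True)[:k]

table = {
    "city": ["Paris", "Tokyo", "Lima"],
    "population_millions": ["2.1", "14.0", "9.7"],
}
top = retrieve_cells(build_cell_corpus(table), "population of Tokyo", k=2)
print(top)
```

Only the matching cells reach the prompt, which is the mechanism behind the reduced prompt lengths described above.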


Unsupervised Learning for Solving the Travelling Salesman Problem

Neural Information Processing Systems

We propose UTSP, an Unsupervised Learning (UL) framework for solving the Travelling Salesman Problem (TSP). We train a Graph Neural Network (GNN) using a surrogate loss. The GNN outputs a heat map representing the probability of each edge being part of the optimal path. We then apply local search to generate the final prediction based on the heat map. Our loss function consists of two parts: one pushes the model toward the shortest path, and the other serves as a surrogate for the constraint that the route should form a Hamiltonian cycle. Experimental results show that UTSP outperforms existing data-driven TSP heuristics. Our approach is both parameter- and data-efficient: the model uses about 10% of the parameters and 0.2% of the training samples required by Reinforcement Learning or Supervised Learning methods.
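The two-part loss can be sketched as follows. This is a hedged illustration assuming a symmetric distance matrix `D` and a soft heat map `H`; the degree penalty below stands in for the paper's exact Hamiltonian-cycle surrogate, which is not reproduced here:

```python
import numpy as np

# Hedged sketch of a UTSP-style surrogate loss: H[i, j] is the model's
# probability that edge (i, j) is on the tour. The penalty form and all
# names are illustrative, not the paper's implementation.

def surrogate_loss(H, D, lam=1.0):
    """Length term plus a soft cycle constraint.

    - expected tour length: sum_ij H[i, j] * D[i, j]
    - constraint surrogate: in a Hamiltonian cycle every node has two
      incident edges, so each row of H should sum to 2.
    """
    length = float((H * D).sum())
    degree_penalty = float(((H.sum(axis=1) - 2.0) ** 2).sum())
    return length + lam * degree_penalty

rng = np.random.default_rng(0)
pts = rng.random((5, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
H = np.full((5, 5), 0.5)       # uniform soft heat map
np.fill_diagonal(H, 0.0)       # no self-loops
print(surrogate_loss(H, D))
```

Minimizing the first term alone would collapse the heat map to zero; the penalty keeps enough edge mass at every node for local search to decode a tour.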






Supplementary Material for "Partial Optimal Transport with Applications on Positive-Unlabeled Learning"

Neural Information Processing Systems

The proof involves three steps. First, we justify the definition of p and q in the extended problem formulation and show how the plan T is extended; it is straightforward to see that, by doing so, Γ remains an admissible coupling (see Figure 1 for an illustration of the repartition of the mass for the matrices Γ and T, each of which has the same total mass). With a constant A > 2ξ, the GW formulation involves pairs of points, which yields the following cases. Case 1: a > 0. In that case, φ(γ) is a convex function, whose minimum on [0, 1] is reached either at its critical point or at an endpoint. We have φ(0) = c > 0 and φ(1) = a + b + c, so the minimum over the endpoints is obtained at 0 if a + b > 0, and at 1 otherwise. This gives the desired result.
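The endpoint comparison can be checked numerically, assuming the quadratic form φ(γ) = aγ² + bγ + c, which is consistent with φ(0) = c and φ(1) = a + b + c as stated above; the function names are illustrative:

```python
# Hedged numerical check of the endpoint argument, assuming
# phi(gamma) = a*gamma**2 + b*gamma + c, so that phi(0) = c and
# phi(1) = a + b + c match the text.

def phi(gamma, a, b, c):
    return a * gamma**2 + b * gamma + c

def endpoint_argmin(a, b, c):
    """phi(1) - phi(0) = a + b, so the minimum over {0, 1} is at
    gamma = 0 when a + b > 0, and at gamma = 1 otherwise."""
    return 0 if a + b > 0 else 1

print(endpoint_argmin(2.0, 1.0, 0.5))   # a + b = 3 > 0
print(endpoint_argmin(1.0, -3.0, 0.5))  # a + b = -2 < 0
```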