

14da15db887a4b50efe5c1bc66537089-AuthorFeedback.pdf

Neural Information Processing Systems

We are grateful for all the reviewers' valuable suggestions and questions. The results are displayed in Figure 1. " stands for equality up to zero-valued paddings. ICLR2019), but with the top layer set to zero. We will clarify this in the revised version.



Optimal Decision Tree with Noisy Outcomes

Neural Information Processing Systems

A fundamental task in active learning involves performing a sequence of tests to identify an unknown hypothesis that is drawn from a known distribution. This problem, known as optimal decision tree induction, has been widely studied for decades, and an asymptotically best-possible approximation algorithm has been devised for it. We study a generalization where certain test outcomes are noisy, even in the more general case when the noise is persistent, i.e., repeating the test on the same scenario gives the same noisy output, disallowing simple repetition as a way to gain confidence. We design new approximation algorithms for both the non-adaptive setting, where the test sequence must be fixed a priori, and the adaptive setting, where the test sequence depends on the outcomes of prior tests. Previous work in the area assumed at most a constant number of noisy outcomes per test and per scenario, and provided approximation ratios that were problem dependent (such as the minimum probability of a hypothesis). Our new approximation algorithms provide guarantees that are nearly best-possible and work in the general case of a large number of noisy outcomes per test or per hypothesis, where the performance degrades smoothly with this number. Our results adapt and generalize methods used for submodular ranking and stochastic set cover. We evaluate the performance of our algorithms on two natural applications with noise: toxic chemical identification and active learning of linear classifiers. Despite our logarithmic theoretical approximation guarantees, our methods give solutions with cost very close to the information-theoretic minimum, demonstrating their effectiveness.
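For intuition about the noise-free baseline this work generalizes, the following is a minimal sketch of greedy adaptive hypothesis identification (the classic "generalized binary search" heuristic, not the paper's noise-tolerant algorithm): at each step, pick the test whose worst-case outcome eliminates the most remaining hypotheses. The function names and interfaces here are illustrative assumptions.

```python
def identify(hypotheses, tests, outcome_of, observe):
    """Greedy adaptive identification (noise-free sketch).

    hypotheses: list of candidate hypothesis ids.
    tests: list of test ids.
    outcome_of(h, t): predicted outcome of test t under hypothesis h.
    observe(t): run test t and return the actual outcome.
    """
    alive = list(hypotheses)
    remaining = list(tests)
    while len(alive) > 1 and remaining:
        # Score a test by the size of the largest surviving set
        # over its possible outcomes (smaller is better).
        def worst_case(t):
            buckets = {}
            for h in alive:
                buckets.setdefault(outcome_of(h, t), []).append(h)
            return max(len(b) for b in buckets.values())

        t = min(remaining, key=worst_case)
        remaining.remove(t)
        o = observe(t)
        # Keep only hypotheses consistent with the observed outcome.
        alive = [h for h in alive if outcome_of(h, t) == o]
    return alive
```

With persistent noise, as studied in the paper, this consistency filter would wrongly eliminate the true hypothesis whenever a test lies, which is exactly why more robust algorithms are needed.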


5f7695debd8cde8db5abcb9f161b49ea-AuthorFeedback.pdf

Neural Information Processing Systems

In Theorem 2, the projection is weakened only on the left side. In Lemma 2, we prove that the optimal dual variable is bounded under the Slater condition. *Dimension-free*: they are dependent. R#2: Thank you for your positive comments and constructive suggestions. R#3: Thank you for recognizing the contributions/strengths of our paper and for providing valuable comments.



would like to address all concerns raised

Neural Information Processing Systems

We would like to thank all of the reviewers for their valuable time and constructive comments. We will incorporate the proposed minor corrections in the final version of the paper. On whether the support set changes during iterations: we observe in experiments (subsection 4.1) that IHT does change support. We thank the reviewer for the supportive and constructive review. Regarding the comment in lines 198-202, we apologize for any confusion. Regarding variance in the experiments, we have observed that high variance is not enough for the algorithm to get "lucky".


fcdf698a5d673435e0a5a6f9ffea05ca-AuthorFeedback.pdf

Neural Information Processing Systems

We thank all the reviewers for the valuable insights and feedback. Please see our responses to the questions below. Brief description of SAEM: thank you for the suggestion. Causal direction flipping is not an assumption. It is hard to handle with traditional methods.



A Convex and Global Solution for the P$n$P Problem in 2D Forward-Looking Sonar

Su, Jiayi, Qian, Jingyu, Yang, Liuqing, Yuan, Yufan, Fu, Yanbing, Wu, Jie, Wei, Yan, Qu, Fengzhong

arXiv.org Artificial Intelligence

The perspective-$n$-point (P$n$P) problem is important for robotic pose estimation. It is well studied for optical cameras, but research is lacking for 2D forward-looking sonar (FLS) in underwater scenarios due to the vastly different imaging principles. In this paper, we demonstrate that, despite the nonlinearity inherent in sonar image formation, the P$n$P problem for 2D FLS can still be effectively addressed within a point-to-line (PtL) 3D registration paradigm through orthographic approximation. The registration is then resolved by a duality-based optimal solver, ensuring global optimality. For coplanar cases, a null-space analysis is conducted to retrieve the solutions from the dual formulation, enabling the methods to be applied to more general cases. Extensive simulations have been conducted to systematically evaluate the performance under different settings. Compared to non-reprojection-optimized state-of-the-art (SOTA) methods, the proposed approach achieves significantly higher precision. When both methods are optimized, ours demonstrates comparable or slightly superior precision.
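To illustrate the point-to-line idea: a 2D FLS measures range and azimuth but loses elevation, so each measurement constrains the 3D point to a curve, which the orthographic approximation flattens into a line. The sketch below shows this measurement-to-line conversion and a translation-only least-squares alignment of points to lines. It is a toy under assumed conventions (elevation along z, zero-elevation imaging plane), not the paper's duality-based rigid solver.

```python
import numpy as np

def sonar_to_line(r, theta):
    """Map an FLS measurement (range r, azimuth theta) to a 3D line
    under the orthographic approximation: a point in the assumed
    zero-elevation plane plus the (assumed) elevation direction z."""
    p = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    d = np.array([0.0, 0.0, 1.0])  # unit direction of the ambiguity line
    return p, d

def fit_translation(points, lines):
    """Translation-only point-to-line least squares (toy version).

    Minimizes sum_i ||(I - d_i d_i^T)(x_i + t - p_i)||^2 over t,
    i.e., the squared distances from translated points to their lines.
    Solves the normal equations (sum_i P_i) t = sum_i P_i (p_i - x_i),
    where P_i = I - d_i d_i^T projects orthogonally to line i.
    """
    A_sum = np.zeros((3, 3))
    b_sum = np.zeros(3)
    for x, (p, d) in zip(points, lines):
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the line
        A_sum += P
        b_sum += P @ (p - x)
    return np.linalg.solve(A_sum, b_sum)
```

A full P$n$P solver must additionally estimate rotation, which makes the problem non-convex and motivates the duality-based formulation in the paper.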


PartialLoading: User Scheduling and Bandwidth Allocation for Parameter-sharing Edge Inference

Qu, Guanqiao, Chen, Qian, Chen, Xianhao, Huang, Kaibin, Fang, Yuguang

arXiv.org Artificial Intelligence

By provisioning inference offloading services, edge inference drives the rapid growth of AI applications at the network edge. However, achieving high task throughput under stringent latency requirements remains a significant challenge. To address this issue, we develop a parameter-sharing AI model loading (PartialLoading) framework for multi-user edge inference, which exploits two key insights: 1) the majority of latency arises from loading AI models into server GPU memory, and 2) different AI models can share a significant number of parameters, for which redundant loading should be avoided. Towards this end, we formulate a joint multi-user scheduling and spectrum bandwidth allocation problem to maximize task throughput by exploiting shared parameter blocks across models. The intuition is to judiciously schedule user requests so as to reuse the shared parameter blocks between consecutively loaded models, thereby substantially reducing model loading time. To facilitate solution finding, we decouple the problem into two sub-problems, i.e., user scheduling and bandwidth allocation, and show that solving them sequentially is equivalent to solving the original problem. Due to the NP-hardness of the problem, we first study an important special case called the "bottom-layer-sharing" case, where AI models share some bottom layers within clusters, and design a dynamic programming-based algorithm to obtain the optimal solution in polynomial time. For the general case, where shared parameter blocks appear at arbitrary positions within AI models, we propose a greedy heuristic to obtain a sub-optimal solution efficiently. Simulation results demonstrate that the proposed framework significantly improves task throughput under deadline constraints compared with user scheduling that does not exploit parameter sharing.
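The reuse intuition can be sketched in a few lines: if each request's model is a set of parameter blocks and only blocks absent from GPU memory must be loaded, then ordering requests so consecutive models overlap heavily reduces total loading. The greedy below is an illustrative nearest-neighbor ordering under these simplified assumptions (single loaded model, uniform block sizes, no deadlines or bandwidth allocation), not the paper's DP-optimal or proposed heuristic.

```python
def greedy_schedule(requests, blocks_of):
    """Order requests to maximize parameter-block reuse (toy sketch).

    requests: list of request ids (iteration order breaks ties).
    blocks_of: dict mapping request id -> set of parameter-block ids.
    Returns (order, total_blocks_loaded).
    """
    remaining = list(requests)
    order = []
    loaded = set()        # blocks of the currently loaded model
    total_loaded = 0      # blocks fetched into GPU memory overall
    while remaining:
        # Pick the request whose model reuses the most loaded blocks.
        nxt = max(remaining, key=lambda r: len(blocks_of[r] & loaded))
        remaining.remove(nxt)
        order.append(nxt)
        total_loaded += len(blocks_of[nxt] - loaded)  # newly loaded blocks
        loaded = set(blocks_of[nxt])  # previous model is evicted
    return order, total_loaded
```

For example, with models {1,2,3}, {1,2,4}, {5,6}, scheduling the two overlapping models back to back loads 6 blocks in total instead of 8. The bottom-layer-sharing case in the paper adds structure (shared blocks form cluster-wise prefixes) that makes the optimal ordering computable by dynamic programming.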