
Collaborating Authors



Cluster and then Embed: A Modular Approach for Visualization

Coda, Elizabeth, Arias-Castro, Ery, Mishne, Gal

arXiv.org Machine Learning

Dimensionality reduction methods such as t-SNE and UMAP are popular methods for visualizing data with a potential (latent) clustered structure. They are known to group data points at the same time as they embed them, resulting in visualizations with well-separated clusters that preserve local information well. However, t-SNE and UMAP also tend to distort the global geometry of the underlying data. We propose a more transparent, modular approach consisting of first clustering the data, then embedding each cluster, and finally aligning the clusters to obtain a global embedding. We demonstrate this approach on several synthetic and real-world datasets and show that it is competitive with existing methods, while being much more transparent.
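The cluster-then-embed pipeline the abstract describes can be sketched as follows. This is a hedged illustration in plain NumPy: k-means stands in for the clustering step, per-cluster PCA stands in for the per-cluster embedding (the paper may use t-SNE or UMAP here), and a simple centroid-anchoring step stands in for the paper's alignment procedure.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means (stand-in for the paper's clustering step)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def pca2(X):
    """Project rows of X onto their top-2 principal components."""
    if len(X) < 2:
        return np.zeros((len(X), 2))
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

def cluster_then_embed(X, k=3):
    # 1) cluster, 2) embed each cluster locally, 3) align clusters by
    # anchoring each local embedding at its centroid's global 2-D position
    labels, centers = kmeans(X, k)
    anchors = pca2(centers)
    Y = np.zeros((len(X), 2))
    for j in range(k):
        mask = labels == j
        Y[mask] = pca2(X[mask]) + anchors[j]
    return Y, labels
```

The modularity is the point: each stage can be swapped independently, and the alignment step is where a real implementation would do the careful work (e.g., choosing rotations and scales so neighboring clusters do not overlap).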



Improving robot understanding using conversational AI: demonstration and feasibility study

Kumar, Shikhar, Edan, Yael

arXiv.org Artificial Intelligence

Explanations constitute an important aspect of successful human-robot interactions and can enhance robot understanding. To improve the understanding of the robot, we have developed four levels of explanation (LOE) based on two questions: what needs to be explained, and why the robot has made a particular decision. The understandable robot requires a communicative action when there is a disparity between the human's mental model of the robot and the robot's state of mind. This communicative action was generated by utilizing a conversational AI platform to generate explanations. An adaptive dialog was implemented for transitioning from one LOE to another. Here, we demonstrate the adaptive dialog in a collaborative task with errors and provide results of a feasibility study with users.
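The adaptive transition between LOEs can be illustrated with a minimal state machine. This is a hypothetical sketch, not the paper's implementation: the level names and the escalate-on-confusion policy are assumptions introduced for illustration.

```python
# Hypothetical LOE ladder; the paper defines four levels based on
# "what" and "why", but the names below are illustrative only.
LOE_LEVELS = ["what", "what+why", "what+why+context", "full dialog"]

class AdaptiveDialog:
    def __init__(self):
        self.level = 0  # start with the most concise explanation

    def explain(self, decision):
        """Render an explanation of a robot decision at the current LOE."""
        return f"[LOE {self.level}] {decision}: {LOE_LEVELS[self.level]}"

    def on_feedback(self, understood):
        # Adaptive transition: escalate the LOE when the user signals a
        # mismatch with the robot's state, relax it when they understand.
        if not understood and self.level < len(LOE_LEVELS) - 1:
            self.level += 1
        elif understood and self.level > 0:
            self.level -= 1
```

In a conversational-AI setting, `on_feedback` would be driven by the user's clarification requests rather than an explicit boolean.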


Levels of explanation -- implementation and evaluation of what and when for different time-sensitive tasks

Kumar, Shikhar, Keidar, Omer, Edan, Yael

arXiv.org Artificial Intelligence

In this work, we focused on constructing and evaluating levels of explanation (LOE) that address two basic aspects of HRI: 1. What information should be communicated to the user by the robot? 2. When should the robot communicate this information? For constructing the LOE, we defined two terms, verbosity and explanation patterns, each with two levels (verbosity -- high and low; explanation patterns -- dynamic and static). Based on these parameters, three different LOE (high, medium, and low) were constructed and evaluated in a user study with a telepresence robot. The user study was conducted for a simulated telerobotic healthcare task with two different conditions related to time sensitivity, as evaluated by two different user groups -- one that performed the task within a time limit and the other with no time limit. We found that the high LOE was preferred in terms of adequacy of explanation, number of collisions, number of incorrect movements, and number of clarifications when users performed the experiment without a time limit. We also found that high and medium LOE did not differ significantly in completion time, fluency of HRI, or trust in the robot. When users performed the experiment with a time limit, high and medium LOE yielded better task performance and were preferred over the low LOE in terms of completion time, fluency, adequacy of explanation, trust, number of collisions, number of incorrect movements, and number of clarifications. Future directions for advancing LOE are discussed.


Landmark Ordinal Embedding

Ghosh, Nikhil, Chen, Yuxin, Yue, Yisong

Neural Information Processing Systems

In this paper, we aim to learn a low-dimensional Euclidean representation from a set of constraints of the form "item j is closer to item i than item k". Existing approaches for this "ordinal embedding" problem require expensive optimization procedures, which cannot scale to handle increasingly larger datasets. To address this issue, we propose a landmark-based strategy, which we call Landmark Ordinal Embedding (LOE). We derive bounds establishing the statistical consistency of LOE under the popular Bradley-Terry-Luce noise model. Through a rigorous analysis of the computational complexity, we show that LOE is significantly more efficient than conventional ordinal embedding approaches as the number of items grows.


Landmark Ordinal Embedding

Ghosh, Nikhil, Chen, Yuxin, Yue, Yisong

arXiv.org Machine Learning

In this paper, we aim to learn a low-dimensional Euclidean representation from a set of constraints of the form "item j is closer to item i than item k". Existing approaches for this "ordinal embedding" problem require expensive optimization procedures, which cannot scale to handle increasingly larger datasets. To address this issue, we propose a landmark-based strategy, which we call Landmark Ordinal Embedding (LOE). Our approach trades off statistical efficiency for computational efficiency by exploiting the low-dimensionality of the latent embedding. We derive bounds establishing the statistical consistency of LOE under the popular Bradley-Terry-Luce noise model. Through a rigorous analysis of the computational complexity, we show that LOE is significantly more efficient than conventional ordinal embedding approaches as the number of items grows. We validate these characterizations empirically on both synthetic and real datasets. We also present a practical approach that achieves the "best of both worlds", by using LOE to warm-start existing methods that are more statistically efficient but computationally expensive.
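The problem setting can be made concrete with a small sketch. This is a hedged illustration of the generic triplet-based ordinal embedding objective that LOE addresses -- gradient descent on a logistic triplet loss, which matches the Bradley-Terry-Luce noise model -- not the landmark algorithm itself.

```python
import numpy as np

def ordinal_embed(n_items, triplets, dim=2, lr=0.05, epochs=200, seed=0):
    """Fit an embedding by gradient descent on a logistic triplet loss:
    for each (i, j, k) meaning "item j is closer to item i than item k",
    minimize softplus(||x_i - x_j||^2 - ||x_i - x_k||^2)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(n_items, dim))
    for _ in range(epochs):
        for i, j, k in triplets:
            dij, dik = X[i] - X[j], X[i] - X[k]
            margin = dij @ dij - dik @ dik       # > 0 means constraint violated
            w = 1.0 / (1.0 + np.exp(-margin))    # d softplus(margin) / d margin
            # gradients of margin w.r.t. x_i, x_j, x_k
            X[i] -= lr * w * 2.0 * (dij - dik)
            X[j] += lr * w * 2.0 * dij
            X[k] -= lr * w * 2.0 * dik
    return X
```

The cost per pass is linear in the number of triplets but the optimization must be run to convergence over all items, which is the scaling bottleneck that LOE's landmark strategy is designed to avoid.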


Succinct Set-Encoding for State-Space Search

Schmidt, Tim (Palo Alto Research Center, Inc. and Technische Universität München) | Zhou, Rong (Palo Alto Research Center, Inc.)

AAAI Conferences

We introduce the level-ordered edge sequence (LOES), a succinct encoding for state-sets based on prefix-trees. For use in state-space search, we give algorithms for member testing and element hashing with runtime dependent only on state size, as well as space- and memory-efficient construction of and iteration over such sets. Finally, we compare LOES to binary decision diagrams (BDDs) and explicitly packed set-representation over a range of IPC planning problems. Our results show LOES produces succinct set-encodings for a wider range of planning problems than both BDDs and explicit state representation, increasing the number of problems that can be solved cost-optimally.
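The core idea can be sketched for fixed-width binary states: lay out the prefix-tree's nodes in breadth-first (level) order, emit two bits per node marking the presence of its 0- and 1-child, and navigate during member tests via rank (the count of set bits). This is a simplified, hedged illustration -- the real LOES uses bit-packed storage and constant-time rank structures, whereas this sketch uses a plain Python list and linear-time rank.

```python
def build_loes(states, nbits):
    """Level-ordered edge sequence for a binary prefix tree over
    fixed-width (nbits) states: two bits per node, breadth-first order."""
    bits = []
    level = [()]                       # prefixes present at current depth
    for d in range(nbits):
        present = {tuple((s >> (nbits - 1 - i)) & 1 for i in range(d + 1))
                   for s in states}
        nxt = []
        for p in level:                # level order = BFS order of nodes
            for b in (0, 1):
                child = p + (b,)
                bits.append(child in present)
                if child in present:
                    nxt.append(child)
        level = nxt
    return bits

def member(bits, nbits, state):
    """Membership test: node j's two edge bits sit at positions 2j, 2j+1;
    a set edge bit's (1-based) rank is the index of the child it leads to."""
    node = 0
    for i in range(nbits):
        pos = 2 * node + ((state >> (nbits - 1 - i)) & 1)
        if not bits[pos]:
            return False
        node = sum(bits[:pos + 1])     # O(n) rank here; real LOES uses O(1) rank
    return True
```

Runtime of `member` depends only on the state width (plus the cost of rank), which is the property the abstract highlights; shared prefixes among states are stored once, giving the succinctness.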