AAAI Conferences

There is interest in artificial intelligence in principled techniques for analyzing inconsistent information. This stems from the recognition that the classical dichotomy between consistent and inconsistent sets of formulae is not sufficient for describing inconsistent information. We review existing proposals, and make new ones, for measures of inconsistency and measures of information, and then prove that they are all pairwise incompatible. This shows that inconsistency is a multi-dimensional concept, where different measures provide different insights. We then explore relationships between measures of inconsistency and measures of information in terms of the trade-offs they identify when used to guide the resolution of inconsistency.
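As a toy illustration of how two inconsistency measures can behave differently, the sketch below brute-forces the drastic measure (0 or 1) and the count of minimal inconsistent subsets over a small propositional knowledge base. This is an illustrative sketch only, not one of the paper's specific measures; encoding formulae as Python predicates over truth assignments is an assumption for the demo.

```python
from itertools import combinations, product

def consistent(formulas, atoms):
    """A set of formulas is consistent iff some truth assignment satisfies all."""
    return any(all(f(dict(zip(atoms, vals))) for f in formulas)
               for vals in product([True, False], repeat=len(atoms)))

def mi_count(formulas, atoms):
    """Count the minimal inconsistent subsets of a knowledge base."""
    minimal = []
    for r in range(1, len(formulas) + 1):
        for idx in combinations(range(len(formulas)), r):
            # skip supersets of an already-found minimal inconsistent subset
            if any(set(m) <= set(idx) for m in minimal):
                continue
            if not consistent([formulas[i] for i in idx], atoms):
                minimal.append(idx)
    return len(minimal)

# knowledge base {a, not a, b, (not b) or (not a)} over atoms a, b
atoms = ["a", "b"]
kb = [lambda v: v["a"],
      lambda v: not v["a"],
      lambda v: v["b"],
      lambda v: (not v["b"]) or (not v["a"])]

drastic = 0 if consistent(kb, atoms) else 1  # drastic measure: 0 or 1
print(drastic, mi_count(kb, atoms))  # -> 1 2
```

The drastic measure only detects that the base is inconsistent, while the minimal-inconsistent-subset count distinguishes this base (two independent conflicts) from one with a single conflict, illustrating why different measures give different insights.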


AAAI Conferences

Effective visualization resizing is important for many visualization tasks, since users may have display devices with different sizes and aspect ratios. Our recently designed framework adapts a visualization to different displays by transforming the resizing problem into a nonlinear optimization problem. However, it does not scale to large amounts of dense information: cluttered results are produced when dense information is presented on the target display. We present an extension to our resizing framework that seamlessly integrates a sampling-based data abstraction mechanism, so that it scales not only to different display sizes but also to different amounts of information.
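A minimal sketch of the sampling idea, under the assumption that the abstraction budget is set by the ratio of display areas (an illustrative heuristic, not the paper's actual mechanism, which integrates sampling into the nonlinear resizing optimization):

```python
import random

def abstract_for_display(points, src_size, dst_size):
    """Sampling-based data abstraction sketch: keep a fraction of the
    data proportional to the ratio of display areas, so that shrinking
    the display does not produce clutter. Illustrative heuristic only."""
    ratio = (dst_size[0] * dst_size[1]) / (src_size[0] * src_size[1])
    budget = max(1, int(len(points) * min(1.0, ratio)))
    if budget >= len(points):
        return list(points)
    return random.sample(points, budget)

points = [(random.random(), random.random()) for _ in range(10000)]
small = abstract_for_display(points, (1920, 1080), (320, 240))
print(len(small))  # -> 370 points kept for the 320x240 target
```

Keeping the point count proportional to the available area holds the visual density roughly constant across displays, which is the property a resized dense visualization needs.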

Intelligent Information Laboratory

AITopics Original Links

The InfoLab's mission is to make information access relevant to the specific moment and task at hand. We construct systems that connect users with information, services, people, and community on the basis of the context of their in-the-moment activity.


AAAI Conferences

We study the problem of ranking a set of items from non-actively chosen pairwise preferences, where each item has associated feature information. We propose and characterize a very broad class of preference matrices giving rise to the Feature Low Rank (FLR) model, which subsumes several models ranging from the classic Bradley–Terry–Luce (Bradley and Terry 1952) and Thurstone (Thurstone 1927) models to the recently proposed blade-chest (Chen and Joachims 2016) and generic low-rank preference (Rajkumar and Agarwal 2016) models. We use matrix completion in the presence of side information to develop the Inductive Pairwise Ranking (IPR) algorithm, which provably learns a good ranking under the FLR model in a sample-efficient manner. Through systematic synthetic simulations, we confirm our theoretical findings on the improvements in sample complexity due to the use of feature information. Moreover, on popular real-world preference-learning datasets, our method recovers a good ranking with as little as 10% of the pairwise comparisons sampled.
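To make the feature-based setup concrete, here is a sketch of the simplest FLR instance: a Bradley–Terry model with linear features, where P(i beats j) = sigmoid(w · (x_i − x_j)), fit by logistic regression on the observed pairs. This is an illustrative stand-in, not the paper's IPR matrix-completion algorithm; the demo uses one-hot features, under which the model reduces to classic BTL.

```python
import numpy as np

def learn_ranking(X, comparisons, lr=0.1, epochs=500):
    """Feature-based Bradley-Terry sketch (a simple FLR instance):
    model P(i beats j) = sigmoid(w . (x_i - x_j)), estimate w by
    gradient descent on the logistic loss over observed pairs, then
    rank items by their scores w . x_i."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for i, j, i_won in comparisons:
            z = X[i] - X[j]
            p = 1.0 / (1.0 + np.exp(-(w @ z)))
            grad += (p - float(i_won)) * z
        w -= lr * grad / len(comparisons)
    return np.argsort(-(X @ w))  # item indices, best first

# Toy demo: one-hot features, noiseless comparisons (lower index wins).
X = np.eye(4)
comparisons = [(i, j, True) for i in range(4) for j in range(i + 1, 4)]
print(learn_ranking(X, comparisons))  # recovers the order 0 > 1 > 2 > 3
```

With informative (non-one-hot) features, the same fit also scores items that never appeared in any comparison, which is the inductive property that feature information buys.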