
Graph Learning Assisted Multi-objective Integer Programming (Appendix), A.1 Search region update

Neural Information Processing Systems

On the other hand, the reference set could also be a (good) approximated Pareto front for assessment. In this paper, we use the exact Pareto front for MOKP(3-100) to compute IGD, since these instances are easy to solve optimally.
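The IGD metric mentioned above averages, over the reference (exact) Pareto front, the distance from each reference point to its nearest point in the approximated front. A minimal sketch, assuming Euclidean distance in objective space (the toy fronts below are made up for illustration):

```python
import math

def igd(reference_front, approx_front):
    """Inverted Generational Distance: mean distance from each exact
    Pareto point to its nearest point in the approximated front."""
    total = sum(min(math.dist(r, a) for a in approx_front)
                for r in reference_front)
    return total / len(reference_front)

# Toy bi-objective (minimization) example:
exact = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
approx = [(0.1, 1.0), (0.9, 0.1)]
print(igd(exact, approx))
```

A lower IGD means the approximated front both covers and closely tracks the exact front, which is why an exact reference set makes the assessment reliable.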


Dynamic Logic of Trust-Based Beliefs

Jiang, Junli, Naumov, Pavel, Zhang, Wenxuan

arXiv.org Artificial Intelligence

Traditionally, an agent's beliefs would come from what the agent can see, hear, or sense. In the modern world, beliefs are often based on the data available to the agents. In this work, we investigate a dynamic logic of such beliefs that incorporates public announcements of data. The main technical contribution is a sound and complete axiomatisation of the interplay between data-informed beliefs and data announcement modalities. We also describe a non-trivial polynomial model checking algorithm for this logical system.
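The announcement modalities discussed in the abstract update a model by restricting it to the states where the announced statement holds. A minimal sketch of that update on a Kripke-style model, in the spirit of the paper's data-announcement operators; the tuple encoding, the function names, and the 'believes' semantics here are illustrative assumptions, not the paper's actual formalism:

```python
# Minimal model checker with a public-announcement-style modality.
def holds(model, world, formula):
    """model = (worlds, relations, valuation); formulas are nested tuples."""
    worlds, rel, val = model
    op = formula[0]
    if op == 'atom':
        return formula[1] in val[world]
    if op == 'not':
        return not holds(model, world, formula[1])
    if op == 'and':
        return holds(model, world, formula[1]) and holds(model, world, formula[2])
    if op == 'believes':  # agent believes phi: phi holds in all accessible worlds
        agent, phi = formula[1], formula[2]
        return all(holds(model, v, phi) for v in rel[agent].get(world, ()))
    if op == 'announce':  # [!phi]psi: psi holds after truthfully announcing phi
        phi, psi = formula[1], formula[2]
        if not holds(model, world, phi):
            return True  # announcement is vacuous where phi is false
        keep = {w for w in worlds if holds(model, w, phi)}
        new_rel = {a: {w: [v for v in vs if v in keep]
                       for w, vs in r.items() if w in keep}
                   for a, r in rel.items()}
        return holds((keep, new_rel, val), world, psi)
    raise ValueError(f"unknown operator: {op}")

# Two worlds the agent cannot tell apart; p holds only in world 1.
m = ({1, 2},
     {'a': {1: [1, 2], 2: [1, 2]}},
     {1: {'p'}, 2: set()})
print(holds(m, 1, ('believes', 'a', ('atom', 'p'))))    # False
print(holds(m, 1, ('announce', ('atom', 'p'),
                   ('believes', 'a', ('atom', 'p')))))  # True
```

Note that this naive recursion runs in time polynomial in the product of model and formula size, echoing the kind of polynomial model checking the abstract claims for the full logic.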





f1404c2624fa7f2507ba04fd9dfc5fb1-Supplemental.pdf

Neural Information Processing Systems

The single-step formulation does not account for changes in the student's internal state over time. In the multi-step formulation, effort put towards studying accumulates in the form of knowledge. We demonstrate this by revisiting the classroom example. The student's grade is then the summation of all scores across time. B.1 Agent's best-response effort sequence: a rational agent solves an optimization to determine his best-response effort policy. E.1 The set of incentivizable effort policies is convex.


OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens

Liu, Jiacheng, Blanton, Taylor, Elazar, Yanai, Min, Sewon, Chen, YenSung, Chheda-Kothary, Arnavi, Tran, Huy, Bischoff, Byron, Marsh, Eric, Schmitz, Michael, Trier, Cassidy, Sarnat, Aaron, James, Jenna, Borchardt, Jon, Kuehl, Bailey, Cheng, Evie, Farley, Karen, Sreeram, Sruthi, Anderson, Taira, Albright, David, Schoenick, Carissa, Soldaini, Luca, Groeneveld, Dirk, Pang, Rock Yuren, Koh, Pang Wei, Smith, Noah A., Lebrecht, Sophie, Choi, Yejin, Hajishirzi, Hannaneh, Farhadi, Ali, Dodge, Jesse

arXiv.org Artificial Intelligence

We present OLMoTrace, the first system that traces the outputs of language models back to their full, multi-trillion-token training data in real time. OLMoTrace finds and shows verbatim matches between segments of language model output and documents in the training text corpora. Powered by an extended version of infini-gram (Liu et al., 2024), our system returns tracing results within a few seconds. OLMoTrace can help users understand the behavior of language models through the lens of their training data. We showcase how it can be used to explore fact checking, hallucination, and the creativity of language models. OLMoTrace is publicly available and fully open-source.
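The verbatim matching at the heart of OLMoTrace can be conveyed with a brute-force sketch: scan a tokenized model output for maximal spans that occur verbatim in a training corpus. The real system does this at multi-trillion-token scale with infini-gram indexes; the function names and toy data below are illustrative assumptions only:

```python
def contains(corpus, seq):
    """Brute-force check that token sequence `seq` occurs in `corpus`."""
    n = len(seq)
    return any(corpus[k:k + n] == seq for k in range(len(corpus) - n + 1))

def verbatim_spans(output, corpus, min_len=3):
    """Return (start, end) token spans of `output` found verbatim in
    `corpus`, keeping only maximal spans of at least `min_len` tokens."""
    spans, i = [], 0
    while i <= len(output) - min_len:
        best, j = 0, i + min_len
        while j <= len(output) and contains(corpus, output[i:j]):
            best = j - i  # grow the match greedily from position i
            j += 1
        if best:
            spans.append((i, i + best))
            i += best
        else:
            i += 1
    return spans

corpus = "the quick brown fox jumps over the lazy dog".split()
output = "a quick brown fox ran over the lazy dog".split()
print(verbatim_spans(output, corpus))  # [(1, 4), (5, 9)]
```

A production system replaces the linear `contains` scan with a suffix-array-style index so each span lookup is sublinear in corpus size, which is what makes the few-second latency over trillions of tokens plausible.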


An Explainable and Interpretable Composite Indicator Based on Decision Rules

Corrente, Salvatore, Greco, Salvatore, Słowiński, Roman, Zappalà, Silvano

arXiv.org Artificial Intelligence

Composite indicators are widely used to score or classify units evaluated on multiple criteria. Their construction involves aggregating criteria evaluations, a common practice in Multiple Criteria Decision Aiding (MCDA). In MCDA, various methods have been proposed to address key aspects of multiple criteria evaluations, such as the measurement scales of the criteria, the degree of acceptable compensation between them, and their potential interactions. However, beyond producing a final score or classification, it is essential to ensure the explainability and interpretability of results as well as the procedure's transparency. This paper proposes a method for constructing explainable and interpretable composite indicators using "if ..., then ..." decision rules. We consider the explainability and interpretability of composite indicators in four scenarios: (i) decision rules explain numerical scores obtained from an aggregation of numerical codes corresponding to ordinal qualifiers; (ii) an obscure numerical composite indicator classifies units into quantiles; (iii) given preference information provided by a Decision Maker in the form of classifications of some reference units, a composite indicator is constructed using decision rules; (iv) the classification of a set of units results from the application of an MCDA method and is explained by decision rules. To induce the rules from scored or classified units, we apply the Dominance-based Rough Set Approach. The resulting decision rules relate the class assignment or unit's score to threshold conditions on values of selected indicators in an intelligible way, clarifying the underlying rationale. Moreover, they serve to recommend composite indicator assessment for new units of interest.
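The "if ..., then ..." rules the abstract describes set threshold conditions on selected indicators and conclude a class assignment. A minimal sketch of how such rules are applied to new units, where each matched rule gives an "at least"-style lower bound on the class and the strongest bound wins; the rule induction itself (the Dominance-based Rough Set Approach) is not shown, and the indicators, thresholds, and class names below are entirely made up:

```python
# Hypothetical "at least" decision rules: conditions are minimum
# thresholds on indicator values, conclusions are class lower bounds.
RULES = [
    ({'health': 7, 'education': 6}, 'good'),
    ({'health': 4},                 'medium'),
    ({},                            'bad'),   # default: always matches
]
ORDER = {'bad': 0, 'medium': 1, 'good': 2}    # ordinal class scale

def classify(unit):
    """Assign the strongest class concluded by any matching rule."""
    best = 'bad'
    for conditions, cls in RULES:
        matched = all(unit.get(ind, float('-inf')) >= threshold
                      for ind, threshold in conditions.items())
        if matched and ORDER[cls] > ORDER[best]:
            best = cls
    return best

print(classify({'health': 8, 'education': 7}))  # good
print(classify({'health': 5, 'education': 2}))  # medium
print(classify({'health': 1, 'education': 9}))  # bad
```

The appeal of this rule form is exactly the transparency the abstract emphasizes: each assignment can be justified by pointing at the specific threshold conditions the unit satisfied.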