DOLCE: Decomposing Off-Policy Evaluation/Learning into Lagged and Current Effects

Tamano, Shu, Nojima, Masanori

arXiv.org Machine Learning

Off-policy evaluation (OPE) and off-policy learning (OPL) for contextual bandit policies leverage historical data to evaluate and optimize a target policy. Most existing OPE/OPL methods, based on importance weighting or imputation, assume common support between the target and logging policies. When this assumption is violated, these methods typically resort to unstable extrapolation, truncation, or conservative strategies for individuals outside the common support. However, such approaches can be inadequate in settings that require explicit evaluation or optimization for those individuals. To address this issue, we propose DOLCE: Decomposing Off-policy evaluation/learning into Lagged and Current Effects, a novel estimator that leverages contextual information from multiple time points to decompose rewards into lagged and current effects. By incorporating both past and present contexts, DOLCE effectively handles individuals who fall outside the common support. We show that the proposed estimator is unbiased under two assumptions: local correctness and conditional independence. Our experiments demonstrate that DOLCE achieves substantial improvements in OPE and OPL, particularly as the proportion of individuals outside the common support increases.
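The decomposition idea in the abstract can be illustrated with a toy off-policy evaluation sketch. Everything below — the effect functions, the logging and target policies, and the synthetic data — is an invented example, not the estimator or the experiments from the paper; it only shows why a lagged (action-independent) reward component needs no importance weighting, while the current component does.

```python
import random

random.seed(0)

# Hypothetical reward model: observed reward = lagged effect of the
# previous context + current effect of (present context, action) + noise.
def lagged_effect(x_prev):
    return 0.5 * x_prev                      # carries over regardless of action

def current_effect(x_now, action):
    return (1.0 if action == 1 else 0.3) + 0.2 * x_now * action

def logging_prob_a1(x_now):                  # logging policy: P(a=1 | x_now)
    return 0.8 if x_now > 0 else 0.2

N = 50_000
estimate = 0.0
for _ in range(N):
    x_prev = random.gauss(0.0, 1.0)
    x_now = random.gauss(0.0, 1.0)
    p1 = logging_prob_a1(x_now)
    a = 1 if random.random() < p1 else 0
    r = lagged_effect(x_prev) + current_effect(x_now, a) + random.gauss(0.0, 0.1)

    # Deterministic target policy: always play a=1. Only the current
    # effect is importance-weighted; the lagged part is imputed directly,
    # since it does not depend on the logged action.
    w = (1.0 / p1) if a == 1 else 0.0
    estimate += lagged_effect(x_prev) + w * (r - lagged_effect(x_prev))
estimate /= N

# True value of the always-a=1 policy in this toy model:
# E[lagged_effect] = 0 and E[current_effect(x, 1)] = 1.0.
print(estimate)
```

With the importance weight applied only to the residual current effect, the sketch recovers the target policy's value (close to 1.0 here) even though the lagged component is never reweighted.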


DOLCE: A Descriptive Ontology for Linguistic and Cognitive Engineering

Borgo, Stefano, Ferrario, Roberta, Gangemi, Aldo, Guarino, Nicola, Masolo, Claudio, Porello, Daniele, Sanfilippo, Emilio M., Vieu, Laure

arXiv.org Artificial Intelligence

DOLCE, the first top-level (foundational) ontology to be axiomatized, has remained stable for twenty years and today is broadly used in a variety of domains. DOLCE is inspired by cognitive and linguistic considerations and aims to model a commonsense view of reality, like the one human beings exploit in everyday life in areas as diverse as socio-technical systems, manufacturing, financial transactions and cultural heritage. DOLCE clearly lists the ontological choices it is based upon, relies on philosophical principles, is richly formalized, and is built according to well-established ontological methodologies, e.g. OntoClean. Because of these features, it has inspired most of the existing top-level ontologies and has been used to develop or improve standards and public domain resources (e.g. CIDOC CRM, DBpedia and WordNet). Being a foundational ontology, DOLCE is not directly concerned with domain knowledge. Its purpose is to provide the general categories and relations needed to give a coherent view of reality, to integrate domain knowledge, and to mediate across domains. In these 20 years DOLCE has shown that applied ontologies can be stable and that interoperability across reference and domain ontologies is a reality. This paper briefly introduces the ontology and shows how to use it on a few modeling cases.


A Unified Gradient-Descent/Clustering Architecture for Finite State Machine Induction

Das, Sreerupa, Mozer, Michael C.

Neural Information Processing Systems

Although recurrent neural nets have been moderately successful in learning to emulate finite-state machines (FSMs), the continuous internal state dynamics of a neural net are not well matched to the discrete behavior of an FSM. We describe an architecture, called DOLCE, that allows discrete states to evolve in a net as learning progresses. DOLCE consists of a standard recurrent neural net trained by gradient descent and an adaptive clustering technique that quantizes the state space. DOLCE is based on the assumption that a finite set of discrete internal states is required for the task, and that the actual network state belongs to this set but has been corrupted by noise due to inaccuracy in the weights. DOLCE learns to recover the discrete state with maximum a posteriori probability from the noisy state.
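The MAP state-recovery step described in the abstract can be sketched in a few lines. The cluster centres and noise level below are made-up illustrative values, not DOLCE's learned quantities: assuming the true internal state is one of a small set of discrete states observed through isotropic Gaussian noise with equal priors, the maximum a posteriori state reduces to the nearest cluster centre.

```python
import random

random.seed(1)

# Hypothetical discrete internal states (cluster centres) in a 2-D
# hidden-state space; in DOLCE these would emerge during training.
centres = [(0.0, 0.0), (1.0, 1.0), (-1.0, 1.0)]

def map_state(h):
    """With equal priors and isotropic Gaussian noise, the MAP discrete
    state is simply the nearest cluster centre in Euclidean distance."""
    return min(centres, key=lambda c: (h[0] - c[0]) ** 2 + (h[1] - c[1]) ** 2)

# A noisy hidden state near (1, 1) snaps back to that discrete state.
sigma = 0.15                                 # assumed noise level
noisy = (1.0 + random.gauss(0.0, sigma), 1.0 + random.gauss(0.0, sigma))
recovered = map_state(noisy)
print(recovered)
```

Nearest-centre assignment is exactly the MAP rule only under the equal-prior, isotropic-noise assumption stated above; with unequal priors or anisotropic noise, the posterior would weight distances accordingly.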


Sweetening WORDNET with DOLCE

Gangemi, Aldo, Guarino, Nicola, Masolo, Claudio, Oltramari, Alessandro

AI Magazine

Despite its original intended use, which was very different, WORDNET is used more and more today as an ontology, where the hyponym relation between word senses is interpreted as a subsumption relation between concepts. In this article, we discuss the general problems related to the semantic interpretation of WORDNET taxonomy in light of rigorous ontological principles inspired by the philosophical tradition. Then we introduce the DOLCE upper-level ontology, which is inspired by such principles but with a clear orientation toward language and cognition. We report the results of an experimental effort to align WORDNET's upper level with DOLCE. We suggest that such alignment could lead to an "ontologically sweetened" WORDNET, meant to be conceptually more rigorous, cognitively transparent, and efficiently exploitable in several applications.


A Unified Gradient-Descent/Clustering Architecture for Finite State Machine Induction

Das, Sreerupa, Mozer, Michael C.

Neural Information Processing Systems

Researchers often try to understand, post hoc, the representations that emerge in the hidden layers of a neural net following training. Interpretation is difficult because these representations are typically highly distributed and continuous. By "continuous," we mean that if one constructed a scatterplot over the hidden unit activity space of patterns obtained in response to various inputs, examination at any scale would reveal the patterns to be broadly distributed over the space.

