

Domain-Independent Automatic Generation of Descriptive Texts for Time-Series Data

Dohi, Kota, Ito, Aoi, Purohit, Harsh, Nishida, Tomoya, Endo, Takashi, Kawaguchi, Yohei

arXiv.org Artificial Intelligence

Due to the scarcity of time-series data annotated with descriptive texts, training a model to generate descriptive texts for time-series data is challenging. In this study, we propose a method to systematically generate domain-independent descriptive texts from time-series data. We identify two distinct approaches for creating pairs of time-series data and descriptive texts: the forward approach and the backward approach. By implementing the novel backward approach, we create the Temporal Automated Captions for Observations (TACO) dataset. Experimental results demonstrate that a contrastive-learning-based model trained on the TACO dataset is capable of generating descriptive texts for time-series data in novel domains.


A Practical Introduction to Sequential Feature Selection

#artificialintelligence

Sequential feature selection is a supervised approach to feature selection. It makes use of a supervised model and can be used either to remove useless features from a large dataset or to select useful features by adding them sequentially. Starting from a single feature and adding more is the forward approach; there's a backward approach as well, which starts from all the features and removes the least relevant ones according to the same maximization criterion. Because at each step we evaluate the model on the same dataset with each remaining feature added (one by one), it's a greedy approach. The algorithm stops when the desired number of features is reached or when the performance gain falls below a certain threshold.
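The greedy forward procedure described above can be sketched with scikit-learn's `SequentialFeatureSelector`; the dataset, model, and parameter choices here are illustrative, not from the article:

```python
# Minimal sketch of greedy forward feature selection with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)  # 4 candidate features

model = LogisticRegression(max_iter=1000)

# direction="forward": start from zero features and add one at a time;
# direction="backward" would start from all features and remove them.
selector = SequentialFeatureSelector(
    model, n_features_to_select=2, direction="forward", cv=5
)
selector.fit(X, y)

print(selector.get_support())  # boolean mask of the selected features
```

At each step the selector refits the model once per remaining candidate feature and keeps the one that maximizes cross-validated score, which is exactly the greedy behavior the text describes.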


Empirical Policy Evaluation with Supergraphs

Vial, Daniel, Subramanian, Vijay

arXiv.org Machine Learning

We devise and analyze algorithms for the empirical policy evaluation problem in reinforcement learning. Our algorithms explore backward from high-cost states to find high-value ones, in contrast to forward approaches, which explore from all states. While several papers have demonstrated the utility of backward exploration empirically, we conduct rigorous analyses which show that our algorithms can reduce average-case sample complexity from $O(S \log S)$ to as low as $O(\log S)$.
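The contrast between the two exploration directions can be illustrated with a toy sketch (not the paper's algorithm; the deterministic chain MDP and all names here are invented for illustration). Forward evaluation walks a trajectory from every state, while backward exploration starts only from the high-cost states and follows reversed edges, skipping states that never reach any cost:

```python
# Toy contrast: forward vs. backward evaluation on a deterministic chain.
from collections import defaultdict, deque

# transitions[s] = next state under a fixed policy; cost[s] = one-step cost.
transitions = {0: 1, 1: 2, 2: 3, 3: 3, 4: 2}  # state 3 is absorbing
cost = {0: 0.0, 1: 0.0, 2: 1.0, 3: 0.0, 4: 0.0}

def forward_values(transitions, cost, horizon=10):
    """Forward approach: roll out a trajectory from every state."""
    values = {}
    for s in transitions:
        v, cur = 0.0, s
        for _ in range(horizon):
            v += cost[cur]
            cur = transitions[cur]
        values[s] = v
    return values

def backward_values(transitions, cost):
    """Backward approach: BFS over reversed edges from high-cost states,
    crediting each reachable predecessor with that state's cost."""
    reverse = defaultdict(list)
    for s, t in transitions.items():
        if t != s:  # ignore the absorbing self-loop
            reverse[t].append(s)
    values = defaultdict(float)
    for hc in (s for s, c in cost.items() if c > 0):
        queue, seen = deque([hc]), {hc}
        while queue:
            s = queue.popleft()
            values[s] += cost[hc]
            for pred in reverse[s]:
                if pred not in seen:
                    seen.add(pred)
                    queue.append(pred)
    return dict(values)
```

On this chain both directions agree on every state that eventually incurs a cost, but the backward pass never touches the zero-value absorbing state, which is the intuition behind the average-case savings the abstract claims.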