Murena, Pierre-Alexandre
Cooperative Bayesian Optimization for Imperfect Agents
Khoshvishkaie, Ali, Mikkola, Petrus, Murena, Pierre-Alexandre, Kaski, Samuel
We introduce a cooperative Bayesian optimization problem for optimizing black-box functions of two variables, in which two agents jointly choose the points at which to query the function but each controls only one of the two variables. This setting is inspired by human-AI teamwork, where an AI assistant helps its human user solve a problem, here in its simplest form: collaborative optimization. We formulate the solution as sequential decision-making, where the agent we control models the user as a computationally rational agent with prior knowledge about the function. We show that strategic planning of the queries enables better identification of the global maximum of the function, as long as the user avoids excessive exploration. This planning is made possible by using Bayes Adaptive Monte Carlo planning and by endowing the agent with a user model that accounts for conservative belief updates and exploratory sampling of the points to query.
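The sketch below is only an illustration of this setting, not the paper's implementation: the toy objective, the RBF Gaussian-process surrogate, the UCB score, and the softmax user model are all assumptions made for the example. The AI agent controls x1, anticipates a simple model of how the user picks x2, and the two jointly accumulate queries of the black-box function.

# Minimal sketch of cooperative optimization with split control over (x1, x2).
# All modelling choices below are illustrative, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def f(x1, x2):                      # unknown black-box function (toy example)
    return -(x1 - 0.3) ** 2 - (x2 - 0.7) ** 2

def rbf(A, B, ls=0.2):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

def user_pick(x1, X, y, grid):
    # Stand-in user model: a mildly exploratory softmax over the GP mean
    # along the slice f(x1, .), playing the role of the paper's user model.
    cand = np.stack([np.full_like(grid, x1), grid], axis=1)
    mu, _ = gp_posterior(X, y, cand)
    p = np.exp(2.0 * (mu - mu.max()))
    p /= p.sum()
    return rng.choice(grid, p=p)

grid = np.linspace(0, 1, 21)
X = np.array([[0.5, 0.5]])
y = f(X[:, 0], X[:, 1])
for _ in range(15):
    # AI agent: choose x1 by a UCB score averaged over possible user responses.
    scores = []
    for x1 in grid:
        cand = np.stack([np.full_like(grid, x1), grid], axis=1)
        mu, var = gp_posterior(X, y, cand)
        scores.append((mu + 2.0 * np.sqrt(var)).mean())
    x1 = grid[int(np.argmax(scores))]
    x2 = user_pick(x1, X, y, grid)           # user chooses the second coordinate
    X = np.vstack([X, [x1, x2]])
    y = np.append(y, f(x1, x2))
print("best query found:", X[np.argmax(y)], "value:", y.max())

In a full treatment, the hand-written user model and one-step UCB score would be replaced by the paper's computationally rational user model and Bayes Adaptive Monte Carlo planning.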
A Neural Approach for Detecting Morphological Analogies
Alsaidi, Safa, Decker, Amandine, Lay, Puthineath, Marquer, Esteban, Murena, Pierre-Alexandre, Couceiro, Miguel
Analogical proportions are statements of the form "A is to B as C is to D" that are used for several reasoning and classification tasks in artificial intelligence and natural language processing (NLP). For instance, there are analogy-based approaches to semantics as well as to morphology. In fact, symbolic approaches were developed to solve or to detect analogies between character strings, e.g., the axiomatic approach as well as the one based on Kolmogorov complexity. In this paper, we propose a deep learning approach to detect morphological analogies, for instance involving reinflection or conjugation. We present empirical results showing that our framework is competitive with the above-mentioned state-of-the-art symbolic approaches. We also empirically explore its transferability across languages, which highlights interesting similarities between them.
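For illustration only, a minimal analogy-detection classifier over a quadruple (A, B, C, D) of character strings could look as follows; the bag-of-characters embedding, the MLP, and all sizes are invented for this sketch and are not the authors' architecture.

# Tiny binary classifier scoring whether (A, B, C, D) forms a morphological
# analogy. Illustrative sketch only; not the architecture from the paper.
import torch
import torch.nn as nn

class AnalogyClassifier(nn.Module):
    def __init__(self, n_chars=128, dim=32):
        super().__init__()
        self.emb = nn.Embedding(n_chars, dim)        # character embeddings
        self.mlp = nn.Sequential(
            nn.Linear(4 * dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def embed_word(self, word):
        ids = torch.tensor([min(ord(c), 127) for c in word])
        return self.emb(ids).mean(dim=0)             # bag-of-characters embedding

    def forward(self, a, b, c, d):
        x = torch.cat([self.embed_word(w) for w in (a, b, c, d)])
        return torch.sigmoid(self.mlp(x))            # P(quadruple is an analogy)

model = AnalogyClassifier()
score = model("walk", "walked", "talk", "talked")    # a reinflection example
print(float(score))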
On the Transferability of Neural Models of Morphological Analogies
Alsaidi, Safa, Decker, Amandine, Lay, Puthineath, Marquer, Esteban, Murena, Pierre-Alexandre, Couceiro, Miguel
Analogical proportions are statements expressed in the form "A is to B as C is to D" and are used for several reasoning and classification tasks in artificial intelligence and natural language processing (NLP). In this paper, we focus on morphological tasks and propose a deep learning approach to detect morphological analogies. We present an empirical study of how our framework transfers across languages, which highlights interesting similarities and differences between these languages. In view of these results, we also discuss the possibility of building a multilingual morphological model.
Improving Artificial Teachers by Considering How People Learn and Forget
Nioche, Aurélien, Murena, Pierre-Alexandre, de la Torre-Ortiz, Carlos, Oulasvirta, Antti
Applications for self-regulated teaching are very popular (e.g., Duolingo had an estimated 100M downloads from Google Play at the time of writing). One of the central challenges for research on intelligent user interfaces is to identify algorithmic principles that can pick the best interventions for reliably improving human learning toward stated objectives, in light of realistically obtainable data on the user. The computational problem we study is how, given some learning materials, we can organize them into lessons and reviews such that, over time, human learning is maximized with respect to a set learning objective. Predicting the effects of teaching interventions on human learning is challenging, however. Firstly, the state of user memory is both latent (that is, not directly observable) and non-stationary (that is, evolving over time, on account of such effects as loss of activation and interference), and an intervention that is ideal for one user may be a poor choice for another: there are large individual-to-individual differences in forgetting and recall.
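As a toy illustration of the underlying scheduling problem (not the paper's model), one can combine an exponential forgetting curve with a greedy rule that reviews the item whose review most improves predicted recall at a fixed test time; the recall function, the per-item "strength", and all numbers below are assumptions of the sketch.

# Toy review scheduler: exponential forgetting plus a greedy teacher.
# Illustrative assumptions only; not the memory model or planner of the paper.
import math

def p_recall(elapsed, strength):
    # Recall probability decays with time since the last review, more slowly
    # for items with higher memory "strength" (i.e., reviewed more often).
    return math.exp(-elapsed / (1.0 + strength))

def greedy_review(last_seen, strength, now, test_time):
    # Pick the item whose review most improves predicted recall at test time.
    def gain(i):
        before = p_recall(test_time - last_seen[i], strength[i])
        after = p_recall(test_time - now, strength[i] + 1)
        return after - before
    return max(range(len(last_seen)), key=gain)

last_seen = [0.0, 2.0, 5.0]      # time each item was last reviewed
strength  = [3, 1, 0]            # crude per-item memory strength
item = greedy_review(last_seen, strength, now=6.0, test_time=10.0)
print("review item", item)

Individual differences in forgetting and recall would enter such a model as per-user parameters of the recall function, which then have to be inferred from interaction data.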
Teaching to Learn: Sequential Teaching of Agents with Inner States
Celikok, Mustafa Mert, Murena, Pierre-Alexandre, Kaski, Samuel
In sequential machine teaching, a teacher's objective is to provide the optimal sequence of inputs to sequential learners in order to guide them towards the best model. In this paper, we extend this setting from the current static, single-dataset analyses to learners that change their learning algorithm or latent state in order to improve during learning and to generalize to new datasets. We introduce a multi-agent formulation in which the learner's inner state may change through the teaching interaction, which affects its learning performance on future tasks. In order to teach such learners, we propose an optimal control approach that takes into account the learner's future performance after teaching. This provides tools for modelling learners with inner states and for machine teaching of meta-learning algorithms. Furthermore, we distinguish manipulative teaching, which can be carried out by effectively hiding data and can be used for indoctrination, from more general education, which aims to help the learner become better at generalizing and learning from new datasets in the absence of a teacher.
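The sketch below uses invented learner dynamics (gradient steps whose learning rate, standing in for the inner state, changes with the lessons received) purely to illustrate choosing teaching inputs by their effect on the learner's future performance rather than its current loss; it is not the paper's formulation.

# Teaching a learner with an inner state by one-step lookahead on future
# performance. All dynamics and numbers are invented for this sketch.
import numpy as np

def learner_step(w, lr, x, y):
    # Gradient step on squared error; the inner state (learning rate) is
    # assumed to grow slightly with harder examples.
    grad = 2 * (w @ x - y) * x
    return w - lr * grad, min(lr * (1 + 0.1 * abs(y)), 1.0)

def future_loss(w, lr, eval_set, horizon=3):
    # Roll the learner forward on held-out tasks to estimate future performance.
    for x, y in eval_set[:horizon]:
        w, lr = learner_step(w, lr, x, y)
    return float(np.mean([(w @ x - y) ** 2 for x, y in eval_set]))

rng = np.random.default_rng(1)
pool = [(rng.normal(size=3), rng.normal()) for _ in range(20)]     # candidate lessons
eval_set = [(rng.normal(size=3), rng.normal()) for _ in range(5)]  # future tasks
w, lr = np.zeros(3), 0.05
for _ in range(10):
    # Teacher: choose the lesson whose effect minimizes the *future* loss.
    x, y = min(pool, key=lambda d: future_loss(*learner_step(w, lr, *d), eval_set))
    w, lr = learner_step(w, lr, x, y)
print("final future loss:", future_loss(w, lr, eval_set))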