

Machine Teaching


Understanding the Role of Adaptivity in Machine Teaching: The Case of Version Space Learners

Yuxin Chen, Adish Singla, Oisin Mac Aodha, Pietro Perona, Yisong Yue

Neural Information Processing Systems

In real-world applications of education, an effective teacher adaptively chooses the next example to teach based on the learner's current state. However, most existing work in algorithmic machine teaching focuses on the batch setting, where adaptivity plays no role. In this paper, we study the case of teaching consistent, version space learners in an interactive setting. At any time step, the teacher provides an example, the learner performs an update, and the teacher observes the learner's new state.
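The adaptive loop described above can be sketched for the simplest case, 1-D threshold classifiers, where a teacher who observes the learner's version space can pick the midpoint example and halve the space each round. All names here are illustrative; this is a minimal sketch, not the paper's algorithm:

```python
# Hedged sketch: adaptive teaching of a consistent version-space learner
# for 1-D threshold classifiers h_t(x) = 1 iff x >= t, with t in {0..N}.

def label(x, target_t):
    """Ground-truth label under the target threshold."""
    return 1 if x >= target_t else 0

def adaptive_teach(target_t, N):
    """Teacher observes the learner's version space after each example
    and queries the midpoint, halving the space (binary search)."""
    version_space = set(range(N + 1))       # all thresholds a priori
    examples = []
    while len(version_space) > 1:
        lo, hi = min(version_space), max(version_space)
        x = (lo + hi) // 2                  # most informative example
        y = label(x, target_t)
        examples.append((x, y))
        # consistent learner: keep only thresholds agreeing with (x, y)
        version_space = {t for t in version_space
                         if (1 if x >= t else 0) == y}
    return examples, version_space.pop()

examples, learned_t = adaptive_teach(target_t=37, N=100)
print(learned_t, len(examples))   # converges after ~log2(N) examples
```

The adaptivity is visible in the loop: each query depends on the learner's current version space, whereas a batch teacher would have to commit to all examples up front.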


Machine Teaching of Active Sequential Learners

Tomi Peltola, Mustafa Mert Çelikok, Pedram Daee, Samuel Kaski

Neural Information Processing Systems

On the other hand, for goal-oriented tasks, humans create mental models of the environment for planning their actions to achieve their goals [1, 2]. In AI systems, recent research has shown that users form mental models of the AI's state and behaviour [3, 4].


Teaching Inverse Reinforcement Learners via Features and Demonstrations

Luis Haug, Sebastian Tschiatschek, Adish Singla

Neural Information Processing Systems

We introduce a natural quantity, the teaching risk, which measures the potential suboptimality of policies that look optimal to the learner in this setting. We show that bounds on the teaching risk guarantee that the learner is able to find a near-optimal policy using standard algorithms based on inverse reinforcement learning. Based on these findings, we suggest a teaching scheme in which the expert can decrease the teaching risk by updating the learner's worldview, and thus ultimately enable her to find a near-optimal policy.
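As a hedged toy illustration of this idea (a linear reward plus a feature-masking "worldview", not the paper's formal definition), one can measure the worst true suboptimality among actions that look optimal to a learner who cannot see all reward features:

```python
# Hedged toy: with a linear reward r(a) = w . phi(a), a learner whose
# worldview drops a feature may rank a suboptimal action as optimal.
# The feature matrix, weights, and mask below are made up for illustration.
import numpy as np

phi = np.array([[1.0, 0.0],    # action features (rows = actions)
                [0.8, 1.0],
                [0.2, 0.4]])
w = np.array([0.5, 1.0])       # true reward weights

true_values = phi @ w                       # true value of each action
mask = np.array([1.0, 0.0])                 # learner ignores feature 2
learner_values = (phi * mask) @ w           # values under the worldview

best_true = true_values.max()
looks_optimal = np.isclose(learner_values, learner_values.max())
# teaching-risk-style gap: worst true suboptimality among actions
# that look optimal to the learner
risk = best_true - true_values[looks_optimal].min()
print(round(risk, 3))
```

Updating the worldview (e.g. widening the mask) shrinks the set of actions that spuriously look optimal, which is the mechanism the teaching scheme above exploits.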


Iterative Teacher-Aware Learning

Neural Information Processing Systems

In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency. The teacher adjusts her teaching method for different students, and the student, after getting familiar with the teacher's instruction mechanism, can infer the teacher's intention to learn faster.



Think Big, Teach Small: Do Language Models Distil Occam's Razor?

Neural Information Processing Systems

Large language models have recently shown a remarkable ability for few-shot learning, including patterns of algorithmic nature. However, it is still an open question to determine what kind of patterns these models can capture and how many examples they need in their prompts.


Locality Sensitive Teaching

Neural Information Processing Systems

The emergence of the Internet-of-Things (IoT) opens the door to applying machine teaching (MT) algorithms for online personalized education on home devices, a direction that became even more promising during the COVID-19 pandemic, when in-person education was infeasible. However, iterative machine teaching (IMT), one of the most influential and practical MT paradigms, is prohibitively expensive on IoT devices because its algorithms are inefficient and do not scale. IMT is a paradigm in which a teacher feeds examples iteratively and intelligently based on the learner's status; in each iteration, current IMT algorithms greedily traverse the whole training set to find an example for the learner, which is computationally expensive in practice. We propose a novel teaching framework, Locality Sensitive Teaching (LST), based on locality sensitive sampling, to overcome these challenges. LST has provable near-constant time complexity, which is exponentially better than the existing baseline.
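The per-iteration bottleneck and the locality-sensitive shortcut can be sketched as follows. The teaching score, the random-hyperplane hashing, and all names are illustrative assumptions, not the paper's actual objective or code:

```python
# Hedged sketch: greedy IMT scans the whole pool each iteration; an
# LSH-style index retrieves a small bucket of candidates instead.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 16))          # training pool
w_star = rng.normal(size=16)              # target model
w = np.zeros(16)                          # learner's current model

def teaching_score(x, w, w_star):
    """Illustrative surrogate: how well example x points the learner
    toward w_star (projection of the residual direction onto x)."""
    return float(x @ (w_star - w))

# Greedy IMT: O(n) scan of the whole pool every iteration.
best_greedy = max(range(len(X)),
                  key=lambda i: teaching_score(X[i], w, w_star))

# Locality-sensitive shortcut: hash examples by random hyperplanes,
# then scan only the bucket matching the query direction w_star - w.
planes = rng.normal(size=(12, 16))        # 12 signed-projection bits

def lsh_key(v):
    return tuple((planes @ v) > 0)

buckets = {}
for i, x in enumerate(X):
    buckets.setdefault(lsh_key(x), []).append(i)

query = w_star - w
candidates = buckets.get(lsh_key(query), range(len(X)))  # full-scan fallback
best_lsh = max(candidates, key=lambda i: teaching_score(X[i], w, w_star))
print("bucket size:", len(list(candidates)))
```

The bucket is typically far smaller than the pool, which is the source of the speedup; the price is that the bucket's best example may score slightly below the global greedy choice.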