
Teaching Inverse Reinforcement Learners via Features and Demonstrations

Luis Haug, Sebastian Tschiatschek, Adish Singla

Neural Information Processing Systems

We introduce a natural quantity, the teaching risk, which measures the potential suboptimality of policies that look optimal to the learner in this setting. We show that bounds on the teaching risk guarantee that the learner is able to find a near-optimal policy using standard algorithms based on inverse reinforcement learning. Based on these findings, we suggest a teaching scheme in which the expert can decrease the teaching risk by updating the learner's worldview, and thus ultimately enable her to find a near-optimal policy.





Robust Inverse Reinforcement Learning under Transition Dynamics Mismatch

Neural Information Processing Systems

Leveraging insights from the Robust RL literature, we propose a robust MCE IRL algorithm, which is a principled approach to help with this mismatch. Finally, we empirically demonstrate the stable performance of our algorithm compared to the standard MCE IRL algorithm under transition dynamics mismatches in both finite and continuous MDP problems.



Learning Transferable Features for Point Cloud Detection via 3D Contrastive Co-training

Neural Information Processing Systems

Most existing point cloud detection models require large-scale, densely annotated datasets. They typically underperform in domain adaptation settings, due to geometry shifts caused by different physical environments or LiDAR sensor configurations. Therefore, it is challenging but valuable to learn transferable features between a labeled source domain and a novel target domain, without any access to target labels. To tackle this problem, we introduce the framework of 3D Contrastive Co-training (3D-CoCo) with two technical contributions. First, 3D-CoCo is inspired by our observation that the bird-eye-view (BEV) features are more transferable than low-level geometry features.