Jun, Miru
Estuary: A Framework For Building Multimodal Low-Latency Real-Time Socially Interactive Agents
Lin, Spencer, Rizk, Basem, Jun, Miru, Artze, Andy, Sullivan, Caitlin, Mozgai, Sharon, Fisher, Scott
The rise in capability and ubiquity of generative artificial intelligence (AI) technologies has enabled their application to the field of Socially Interactive Agents (SIAs). Despite rising interest in modern AI-powered components used for real-time SIA research, substantial friction remains due to the absence of a standardized and universal SIA framework. To address this absence, we developed Estuary: a multimodal (text, audio, and soon video) framework that facilitates the development of low-latency, real-time SIAs. Estuary seeks to reduce repeated work between studies and to provide a flexible platform that can be run entirely off-cloud to maximize configurability, controllability, reproducibility of studies, and speed of agent response times. We achieve this by constructing a robust multimodal framework that incorporates current and future components seamlessly into a modular and interoperable architecture.
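The modular, interoperable architecture described above can be illustrated as a chain of interchangeable components behind a common interface. The sketch below is purely illustrative and is not Estuary's actual API: the component names (`EchoASR`, `TemplateDialogue`, `ConsoleTTS`) and the message-dict protocol are hypothetical stand-ins for real ASR, dialogue, and TTS modules.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """One stage in the agent pipeline (e.g. ASR, dialogue, TTS)."""
    @abstractmethod
    def process(self, message: dict) -> dict: ...

class EchoASR(Component):
    # Stand-in for a real speech recognizer: passes a transcript through.
    def process(self, message):
        return {**message, "text": message["audio_transcript"]}

class TemplateDialogue(Component):
    # Stand-in for an LLM-backed dialogue manager.
    def process(self, message):
        return {**message, "reply": f"You said: {message['text']}"}

class ConsoleTTS(Component):
    # Stand-in for a speech synthesizer: records what would be spoken.
    def process(self, message):
        return {**message, "spoken": message["reply"]}

class Pipeline:
    """Runs interchangeable components in sequence. Swapping one out
    (e.g. a cloud ASR for a local one) leaves the rest untouched,
    which is the property a modular SIA framework relies on."""
    def __init__(self, components):
        self.components = components

    def run(self, message):
        for component in self.components:
            message = component.process(message)
        return message

agent = Pipeline([EchoASR(), TemplateDialogue(), ConsoleTTS()])
out = agent.run({"audio_transcript": "hello"})
print(out["spoken"])  # → You said: hello
```

Because every stage shares one interface, a study could replace any single component with a different implementation without touching the others, which is one way the "reduce repeated work between studies" goal can be realized.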
Trajectory Improvement and Reward Learning from Comparative Language Feedback
Yang, Zhaojing, Jun, Miru, Tien, Jeremy, Russell, Stuart J., Dragan, Anca, Bıyık, Erdem
Learning from human feedback has gained traction in fields like robotics and natural language processing in recent years. While prior works mostly rely on human feedback in the form of comparisons, language is a preferable modality that provides more informative insights into user preferences. In this work, we aim to incorporate comparative language feedback to iteratively improve robot trajectories and to learn reward functions that encode human preferences. To achieve this goal, we learn a shared latent space that integrates trajectory data and language feedback, and subsequently leverage the learned latent space to improve trajectories and learn human preferences. To the best of our knowledge, we are the first to incorporate comparative language feedback into reward learning. Our simulation experiments demonstrate the effectiveness of the learned latent space and the success of our learning algorithms. We also conduct human subject studies that show our reward learning algorithm achieves a 23.9% higher subjective score on average and is 11.3% more time-efficient compared to preference-based reward learning, underscoring the superior performance of our method. Our website is at https://liralab.usc.edu/comparative-language-feedback/
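The two ideas in this abstract, improving a trajectory along a language-feedback direction in a shared latent space and learning a reward from comparisons, can be sketched in a toy form. This is not the paper's method or code: the latent space here is just R^d with a hand-picked feedback direction, whereas the paper learns the embedding, and the reward fit below is a generic Bradley-Terry-style logistic update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared latent space: trajectories and language feedback both live
# in R^d (in the paper this embedding is learned from data).
d = 4
true_w = rng.normal(size=d)                 # hidden user preference

# A comparative utterance ("move more to the left") is assumed to embed
# to a direction of improvement; here we cheat and use the true direction.
feedback_dir = true_w / np.linalg.norm(true_w)

# 1) Trajectory improvement: nudge a trajectory latent along the feedback.
z = rng.normal(size=d)
z_improved = z + 0.5 * feedback_dir
assert true_w @ z_improved > true_w @ z     # strictly higher true reward

# 2) Reward learning from comparisons: fit w_hat so that
# sigmoid(w_hat @ (a - b)) predicts which latent the user preferred.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_hat = np.zeros(d)
for _ in range(2000):
    a, b = rng.normal(size=d), rng.normal(size=d)
    label = float(true_w @ a > true_w @ b)  # simulated comparison
    p = sigmoid(w_hat @ (a - b))
    w_hat += 0.1 * (label - p) * (a - b)    # logistic-loss gradient step

# Recovered direction should align with the hidden preference.
cos = (w_hat @ true_w) / (np.linalg.norm(w_hat) * np.linalg.norm(true_w))
```

The point of the sketch is the interplay: once trajectories and language share one vector space, feedback becomes a direction you can step along, and the same space supports fitting a reward from pairwise comparisons.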
Do Bayesian Neural Networks Improve Weapon System Predictive Maintenance?
Potter, Michael, Jun, Miru
We analyze and benchmark our approach, the Weibull-Cox Bayesian Neural Network (LaplaceNN), on synthetic and real weapon system datasets with interval-censored data and time-varying covariates, using standard classification metrics such as Receiver Operating Characteristic (ROC) Area Under Curve (AUC) and Precision-Recall (PR) AUC, as well as reliability curve visualizations.
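A Weibull-Cox survival model, the backbone the abstract names, scales a Weibull baseline hazard by a covariate term, giving S(t | x) = exp(-(t / λ)^k · exp(xᵀβ)). The sketch below is an illustrative plain Weibull-Cox simulation with a rank-based ROC AUC, not the paper's Bayesian LaplaceNN; the parameter values and two-feature setup are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Weibull-Cox model: Weibull baseline hazard scaled by covariates,
#   S(t | x) = exp( -(t / lam)**k * exp(x @ beta) ).
k, lam = 1.5, 10.0                 # arbitrary shape / scale
beta = np.array([0.8, -0.5])       # arbitrary covariate effects

def survival(t, X):
    return np.exp(-((t / lam) ** k) * np.exp(X @ beta))

# Simulate failure times by inverse-CDF sampling from the same model.
n = 2000
X = rng.normal(size=(n, 2))
u = rng.uniform(size=n)
t_fail = lam * (-np.log(u) / np.exp(X @ beta)) ** (1.0 / k)

# Treat "fails before the horizon" as a binary label and score it with
# the model's failure probability, as in the ROC/PR AUC evaluation.
horizon = 8.0
y = (t_fail < horizon).astype(float)
score = 1.0 - survival(horizon, X)

# ROC AUC via the Mann-Whitney rank statistic (no sklearn needed).
order = np.argsort(score)
ranks = np.empty(n)
ranks[order] = np.arange(1, n + 1)
n_pos = y.sum()
n_neg = n - n_pos
auc = (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Since the scores come from the same model that generated the failures, the AUC lands well above chance; a Bayesian treatment, as in the paper, would additionally place a posterior over β and propagate that uncertainty into the reliability curves.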