Martin, Daniel
Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals
Chemaya, Nir, Martin, Daniel
The emergent abilities of Large Language Models (LLMs), which power tools like ChatGPT and Bard, have produced both excitement and worry about how AI will impact academic writing. In response to rising concerns about AI use, authors of academic publications may decide to voluntarily disclose any AI tools they use to revise their manuscripts, and journals and conferences could begin mandating disclosure and/or turning to detection services, as many teachers have done with student writing in classroom settings. Given these looming possibilities, we investigate whether academics view it as necessary to report AI use in manuscript preparation and how detectors react to the use of AI in academic writing.
AI Oversight and Human Mistakes: Evidence from Centre Court
Almog, David, Gauriot, Romain, Page, Lionel, Martin, Daniel
Powered by the increasing predictive capabilities of machine learning algorithms, artificial intelligence (AI) systems have begun to be used to overrule human mistakes in many settings. We provide the first field evidence that this AI oversight carries psychological costs that can impact human decision-making. We investigate one of the highest visibility settings in which AI oversight has occurred: the Hawk-Eye review of umpires in top tennis tournaments. We find that umpires lowered their overall mistake rate after the introduction of Hawk-Eye review, in line with rational inattention given psychological costs of being overruled by AI. We also find that umpires increased the rate at which they called balls in, which produced a shift from making Type II errors (calling a ball out when in) to Type I errors (calling a ball in when out). We structurally estimate the psychological costs of being overruled by AI using a model of rationally inattentive umpires, and our results suggest that because of these costs, umpires cared twice as much about Type II errors under AI oversight.
Fast Lifelong Adaptive Inverse Reinforcement Learning from Demonstrations
Chen, Letian, Jayanthi, Sravan, Paleja, Rohan, Martin, Daniel, Zakharov, Viacheslav, Gombolay, Matthew
Learning from Demonstration (LfD) approaches empower end-users to teach robots novel tasks via demonstrations of the desired behaviors, democratizing access to robotics. However, current LfD frameworks are not capable of fast adaptation to heterogeneous human demonstrations nor of large-scale deployment in ubiquitous robotics applications. In this paper, we propose a novel LfD framework, Fast Lifelong Adaptive Inverse Reinforcement Learning (FLAIR). Our approach (1) leverages learned strategies to construct policy mixtures for fast adaptation to new demonstrations, allowing for quick end-user personalization; (2) distills common knowledge across demonstrations, achieving accurate task inference; and (3) expands its model only when needed in lifelong deployments, maintaining a concise set of prototypical strategies that can approximate all behaviors via policy mixtures. We empirically validate that FLAIR achieves adaptability (i.e., the robot adapts to heterogeneous, user-specific task preferences), efficiency (i.e., the robot achieves sample-efficient adaptation), and scalability (i.e., the model grows sublinearly with the number of demonstrations while maintaining high performance). FLAIR surpasses benchmarks across three control tasks, with an average 57% improvement in policy returns and an average 78% fewer episodes required for demonstration modeling using policy mixtures. Finally, we demonstrate the success of FLAIR in a table tennis task and find that users rate FLAIR as having higher task (p<.05) and personalization (p<.05) performance.
Athletic Mobile Manipulator System for Robotic Wheelchair Tennis
Zaidi, Zulfiqar, Martin, Daniel, Belles, Nathaniel, Zakharov, Viacheslav, Krishna, Arjun, Lee, Kin Man, Wagstaff, Peter, Naik, Sumedh, Sklar, Matthew, Choi, Sugju, Kakehi, Yoshiki, Patil, Ruturaj, Mallemadugula, Divya, Pesce, Florian, Wilson, Peter, Hom, Wendell, Diamond, Matan, Zhao, Bryan, Moorman, Nina, Paleja, Rohan, Chen, Letian, Seraj, Esmaeil, Gombolay, Matthew
Athletics are a quintessential and universal expression of humanity. From the French monks who in the 12th century invented jeu de paume, the precursor to modern lawn tennis, back to the K'iche' people who played the Maya Ballgame as a form of religious expression over three thousand years ago, humans have sought to train their minds and bodies to excel in sporting contests. Advances in robotics are opening up the possibility of robots competing in sports. Yet key challenges remain, as most prior work in robotics for sports is limited to pristine sensing environments, does not require significant force generation, or operates on miniaturized scales unsuited for joint human-robot play. In this paper, we propose the first open-source, autonomous robot for playing regulation wheelchair tennis. We demonstrate the performance of our full-stack system in executing ground strokes and evaluate each of the system's hardware and software components. The goal of this paper is to (1) inspire more research in human-scale robot athletics and (2) establish the first baseline for a reproducible wheelchair tennis robot for regulation singles play. Our paper contributes to the science of systems design and poses a set of key challenges for the robotics community to address in striving towards robots that can match human capabilities in sports.
An Image Analysis Environment for Species Identification of Food Contaminating Beetles
Martin, Daniel (Arizona State University) | Ding, Hongjian (US Food and Drug Administration) | Wu, Leihong (US Food and Drug Administration) | Semey, Howard (US Food and Drug Administration) | Barnes, Amy (US Food and Drug Administration) | Langley, Darryl (US Food and Drug Administration) | Park, Su Inn (Samsung Austin Semiconductor LLC) | Liu, Zhichao (US Food and Drug Administration) | Tong, Weida (US Food and Drug Administration) | Xu, Joshua (US Food and Drug Administration)
Food safety is vital to the well-being of society; therefore, it is important to inspect food products to ensure minimal health risks are present. The presence of certain species of insects, especially storage beetles, is a reliable indicator of possible contamination during storage and food processing. However, the current approach of identifying species by visual examination of insect fragments is rather subjective and time-consuming. To aid this inspection process, we have developed, in collaboration with FDA food analysts, image analysis-based machine intelligence that achieves species identification with up to 90% accuracy. The current project is a continuation of this development effort. Here we present an image analysis environment that allows practical deployment of the machine intelligence on computers with limited processing power and memory. Using this environment, users can prepare input sets by selecting images for analysis and inspect these images through the integrated panning and zooming capabilities. After species analysis, the results panel allows the user to compare the analyzed images with reference images of the proposed species. Further additions to this environment should include a log of previously analyzed images and eventually extend to interaction with a central cloud repository of images through a web-based interface.