3D printer transforms food waste into coffee mugs and coasters

Popular Science

A new type of 3D printer could help households do their part to reduce food waste while also producing some nifty household accessories. In 2019 alone, the US generated 66 million tons of food waste. The majority of that waste (60 percent) ended up in landfills. According to one EPA report, the carbon dioxide generated from food waste is equivalent to the emissions of 42 coal-fired power plants.


FEAST: A Flexible Mealtime-Assistance System Towards In-the-Wild Personalization

Jenamani, Rajat Kumar, Silver, Tom, Dodson, Ben, Tong, Shiqin, Song, Anthony, Yang, Yuting, Liu, Ziang, Howe, Benjamin, Whitneck, Aimee, Bhattacharjee, Tapomayukh

arXiv.org Artificial Intelligence

Physical caregiving robots hold promise for improving the quality of life of millions worldwide who require assistance with feeding. However, in-home meal assistance remains challenging due to the diversity of activities (e.g., eating, drinking, mouth wiping), contexts (e.g., socializing, watching TV), food items, and user preferences that arise during deployment. In this work, we propose FEAST, a flexible mealtime-assistance system that can be personalized in-the-wild to meet the unique needs of individual care recipients. Developed in collaboration with two community researchers and informed by a formative study with a diverse group of care recipients, our system is guided by three key tenets for in-the-wild personalization: adaptability, transparency, and safety. FEAST embodies these principles through: (i) modular hardware that enables switching between assisted feeding, drinking, and mouth-wiping, (ii) diverse interaction methods, including a web interface, head gestures, and physical buttons, to accommodate diverse functional abilities and preferences, and (iii) parameterized behavior trees that can be safely and transparently adapted using a large language model. We evaluate our system based on the personalization requirements identified in our formative study, demonstrating that FEAST offers a wide range of transparent and safe adaptations and outperforms a state-of-the-art baseline limited to fixed customizations. To demonstrate real-world applicability, we conduct an in-home user study with two care recipients (who are community researchers), feeding them three meals each across three diverse scenarios. We further assess FEAST's ecological validity by evaluating it with an Occupational Therapist previously unfamiliar with the system. In all cases, users successfully personalize FEAST to meet their individual needs and preferences. Website: https://emprise.cs.cornell.edu/feast
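The abstract's third principle, parameterized behavior trees adapted safely at runtime, can be illustrated with a minimal sketch. This is not FEAST's actual code: the node names, parameter keys, and safety bound below are all hypothetical, chosen only to show how a shared parameter store lets a tree be personalized while clamping each update to a safe range.

```python
# Illustrative sketch (not FEAST's implementation): a tiny behavior tree
# whose nodes read from a shared, editable parameter store. A separate
# adaptation step (in FEAST, driven by an LLM) writes new parameter
# values, clamped to safety bounds, without restructuring the tree.

class Node:
    def tick(self):
        raise NotImplementedError

class Sequence(Node):
    """Ticks children in order; succeeds only if all succeed."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        return all(child.tick() for child in self.children)

class MoveToMouth(Node):
    """Hypothetical leaf: approach the user's mouth at a set speed."""
    SAFE_MAX_SPEED = 0.2  # assumed safety bound (m/s)
    def __init__(self, params):
        self.params = params  # shared, editable parameter store
    def tick(self):
        speed = self.params["approach_speed"]
        # ...command the arm at `speed`; here we only check the bound.
        return 0.0 < speed <= self.SAFE_MAX_SPEED

params = {"approach_speed": 0.1}
tree = Sequence([MoveToMouth(params)])
assert tree.tick()  # runs with the default parameter

# "Personalization": the user asks for faster feeding; the update is
# applied through a clamp so the tree stays within its safe envelope.
requested = 0.35
params["approach_speed"] = min(requested, MoveToMouth.SAFE_MAX_SPEED)
assert tree.tick()
```

Keeping parameters in a store the tree reads at tick time, rather than baked into node code, is what makes the adaptation both transparent (each change is a named value) and bounded (each write can be clamped).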


Collab-Overcooked: Benchmarking and Evaluating Large Language Models as Collaborative Agents

Sun, Haochen, Zhang, Shuwen, Ren, Lei, Xu, Hao, Fu, Hao, Yuan, Caixia, Wang, Xiaojie

arXiv.org Artificial Intelligence

Large language models (LLMs) based agent systems have made great strides in real-world applications beyond traditional NLP tasks. This paper proposes a new LLM-powered Multi-Agent System (LLM-MAS) benchmark, Collab-Overcooked, built on the popular Overcooked-AI game with more applicable and challenging tasks in interactive environments. Collab-Overcooked extends existing benchmarks from two novel perspectives. First, it provides a multi-agent framework supporting diverse tasks and objectives and encourages collaboration through natural language communication. Second, it introduces a spectrum of process-oriented evaluation metrics to assess the fine-grained collaboration capabilities of different LLM agents, a dimension often overlooked in prior work. We conduct extensive experiments over 10 popular LLMs and show that, while LLMs show strong goal-interpretation ability, there are significant shortfalls in the active collaboration and continuous adaptation that are critical for efficiently fulfilling complicated tasks. Notably, we highlight the strengths and weaknesses in LLM-MAS and provide insights for improving and evaluating LLM-MAS on a unified and open-sourced benchmark. Environments, 30 open-ended tasks, and an integrated evaluation package are now publicly available at https://github.com/YusaeMeow/Collab-Overcooked.


Kiri-Spoon: A Kirigami Utensil for Robot-Assisted Feeding

Keely, Maya, Franco, Brandon, Grothoff, Casey, Jenamani, Rajat Kumar, Bhattacharjee, Tapomayukh, Losey, Dylan P., Nemlekar, Heramb

arXiv.org Artificial Intelligence

For millions of adults with mobility limitations, eating meals is a daily challenge. A variety of robotic systems have been developed to address this societal need. Unfortunately, end-user adoption of robot-assisted feeding is limited, in part because existing devices are unable to seamlessly grasp, manipulate, and feed diverse foods. Recent works seek to address this issue by creating new algorithms for food acquisition and bite transfer. In parallel to these algorithmic developments, however, we hypothesize that mechanical intelligence will make it fundamentally easier for robot arms to feed humans. We therefore propose Kiri-Spoon, a soft utensil specifically designed for robot-assisted feeding. Kiri-Spoon consists of a spoon-shaped kirigami structure: when actuated, the kirigami sheet deforms into a bowl of increasing curvature. Robot arms equipped with Kiri-Spoon can leverage the kirigami structure to wrap-around morsels during acquisition, contain those items as the robot moves, and then compliantly release the food into the user's mouth. Overall, Kiri-Spoon combines the familiar and comfortable shape of a standard spoon with the increased capabilities of soft robotic grippers. In what follows, we first apply a stakeholder-driven design process to ensure that Kiri-Spoon meets the needs of caregivers and users with physical disabilities. We next characterize the dynamics of Kiri-Spoon, and derive a mechanics model to relate actuation force to the spoon's shape. The paper concludes with three separate experiments that evaluate (a) the mechanical advantage provided by Kiri-Spoon, (b) the ways users with disabilities perceive our system, and (c) how the mechanical intelligence of Kiri-Spoon complements state-of-the-art algorithms. Our results suggest that Kiri-Spoon advances robot-assisted feeding across diverse foods, multiple robotic platforms, and different manipulation algorithms.


Process-aware Human Activity Recognition

Zheng, Jiawei, Papapanagiotou, Petros, Fleuriot, Jacques D., Hillston, Jane

arXiv.org Artificial Intelligence

Humans naturally follow distinct patterns when conducting their daily activities, which are driven by established practices and processes, such as production workflows, social norms and daily routines. Human activity recognition (HAR) algorithms usually use neural networks or machine learning techniques to analyse inherent relationships within the data. However, these approaches often overlook the contextual information in which the data are generated, potentially limiting their effectiveness. We propose a novel approach that incorporates process information from context to enhance the HAR performance. Specifically, we align probabilistic events generated by machine learning models with process models derived from contextual information. This alignment adaptively weighs these two sources of information to optimise HAR accuracy. Our experiments demonstrate that our approach achieves better accuracy and Macro F1-score compared to baseline models.
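The core idea in this abstract, adaptively weighing machine-learning event probabilities against a process model derived from context, can be sketched simply. The sketch below is hypothetical (the activity names, prior values, and fixed weight `alpha` are invented for illustration; the paper's alignment is more sophisticated), but it shows how process context can override a raw classifier's top prediction.

```python
# Hypothetical sketch of process-aware fusion: combine per-activity
# probabilities from an ML classifier with a prior from a process model
# (which activities the workflow expects next), then renormalize.

def fuse(ml_probs, process_prior, alpha=0.6):
    """Weighted combination; alpha is the trust placed in the ML model."""
    activities = set(ml_probs) | set(process_prior)
    fused = {a: alpha * ml_probs.get(a, 0.0)
                + (1 - alpha) * process_prior.get(a, 0.0)
             for a in activities}
    total = sum(fused.values())
    return {a: p / total for a, p in fused.items()}

# Classifier alone narrowly prefers "washing"...
ml_probs = {"washing": 0.45, "cooking": 0.40, "eating": 0.15}
# ...but the process model expects "cooking" to follow the previous step.
process_prior = {"cooking": 0.8, "washing": 0.1, "eating": 0.1}

fused = fuse(ml_probs, process_prior)
print(max(fused, key=fused.get))  # prints "cooking"
```

Here the process context flips the decision: fused "cooking" scores 0.6·0.40 + 0.4·0.8 = 0.56 versus 0.31 for "washing". In the paper, the weighting itself is adapted rather than fixed.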


SODA: a Soft Origami Dynamic utensil for Assisted feeding

Song, Yuxin Ray, Wang, Shufan

arXiv.org Artificial Intelligence

SODA aims to revolutionize assistive feeding systems by designing a multi-purpose utensil using origami-inspired artificial muscles. Traditional utensils, such as forks and spoons, are hard and stiff, causing discomfort and fear among users, especially when operated by autonomous robotic arms. Additionally, these systems require frequent utensil changes to handle different food types. Our innovative utensil design addresses these issues by offering a versatile, adaptive solution that can seamlessly transition between gripping and scooping various foods without the need for manual intervention. Utilizing the flexibility and strength of origami-inspired artificial muscles, the utensil ensures safe and comfortable interactions, enhancing user experience and efficiency. This approach not only simplifies the feeding process but also promotes greater independence for individuals with limited mobility, contributing to the advancement of soft robotics in healthcare applications.


Handling abort commands for household kitchen robots

Has, Darius, Groza, Adrian, Pomarlan, Mihai

arXiv.org Artificial Intelligence

The task of aborting commands is essentially a problem of planning, or rather replanning, whose core value emerges when a robotic system is able to autonomously infer a fallback plan without a human in the loop. A robot enhanced with the capability of handling abort commands will be able to reconfigure and replan its actions so that it leaves its environment in a clean state; this represents a step towards a more robust solution, given that in robotics, malfunctions and unresponsiveness are risks that can be mitigated by a fallback mechanism. The key challenges of handling cancel commands have been acknowledged by Haarland et al. [1]. These include the complex relationship between a goal and its subgoals, which calls for a recursive approach to handling them, while also taking into account the plans in progress, i.e. the actual state of the world.


You are what you eat? Feeding foundation models a regionally diverse food dataset of World Wide Dishes

Magomere, Jabez, Ishida, Shu, Afonja, Tejumade, Salama, Aya, Kochin, Daniel, Yuehgoh, Foutse, Hamzaoui, Imane, Sefala, Raesetje, Alaagib, Aisha, Semenova, Elizaveta, Crais, Lauren, Hall, Siobhan Mackenzie

arXiv.org Artificial Intelligence

Foundation models are increasingly ubiquitous in our daily lives, used in everyday tasks such as text-image searches, interactions with chatbots, and content generation. As use increases, so does concern over the disparities in performance and fairness of these models for different people in different parts of the world. To assess these growing regional disparities, we present World Wide Dishes, a mixed text and image dataset consisting of 765 dishes, with dish names collected in 131 local languages. World Wide Dishes has been collected purely through human contribution and decentralised means, by creating a website widely distributed through social networks. Using the dataset, we demonstrate a novel means of operationalising capability and representational biases in foundation models such as language models and text-to-image generative models. We enrich these studies with a pilot community review to understand, from a first-person perspective, how these models generate images for people in five African countries and the United States. We find that these models generally do not produce quality text and image outputs of dishes specific to different regions. This is true even for the US, which is typically considered better resourced in training data, though the generation of US dishes does outperform that of the investigated African countries. The models demonstrate a propensity to produce outputs that are inaccurate as well as culturally misrepresentative, flattening, and insensitive. These failures in capability and representational bias have the potential to further reinforce stereotypes and disproportionately contribute to erasure based on region. The dataset and code are available at https://github.com/oxai/world-wide-dishes/.


How Much You Ate? Food Portion Estimation on Spoons

Sharma, Aaryam, Czarnecki, Chris, Chen, Yuhao, Xi, Pengcheng, Xu, Linlin, Wong, Alexander

arXiv.org Artificial Intelligence

Monitoring dietary intake is a crucial aspect of promoting healthy living. In recent years, advances in computer vision technology have facilitated dietary intake monitoring through the use of images and depth cameras. However, the current state-of-the-art image-based food portion estimation algorithms assume that users take images of their meals one or two times, which can be inconvenient and fail to capture food items that are not visible from a top-down perspective, such as ingredients submerged in a stew. To address these limitations, we introduce an innovative solution that utilizes stationary user-facing cameras to track food items on utensils, not requiring any change of camera perspective after installation. The shallow depth of utensils provides a more favorable angle for capturing food items, and tracking them on the utensil's surface offers a significantly more accurate estimation of dietary intake without the need for post-meal image capture. The system is reliable for estimation of nutritional content of liquid-solid heterogeneous mixtures such as soups and stews. Through a series of experiments, we demonstrate the exceptional potential of our method as a non-invasive, user-friendly, and highly accurate dietary intake monitoring tool.


CalliRewrite: Recovering Handwriting Behaviors from Calligraphy Images without Supervision

Luo, Yuxuan, Wu, Zekun, Lian, Zhouhui

arXiv.org Artificial Intelligence

Human-like planning skills and dexterous manipulation have long posed challenges in the fields of robotics and artificial intelligence (AI). The task of reinterpreting calligraphy presents a formidable challenge, as it involves the decomposition of strokes and dexterous utensil control. Previous efforts have primarily focused on supervised learning of a single instrument, limiting the performance of robots in the realm of cross-domain text replication. To address these challenges, we propose CalliRewrite: a coarse-to-fine approach for robot arms to discover and recover plausible writing orders from diverse calligraphy images without requiring labeled demonstrations. Our model achieves fine-grained control of various writing utensils. Specifically, an unsupervised image-to-sequence model decomposes a given calligraphy glyph to obtain a coarse stroke sequence. Using an RL algorithm, a simulated brush is fine-tuned to generate stylized trajectories for robotic arm control. Evaluation in simulation and physical robot scenarios reveals that our method successfully replicates unseen fonts and styles while achieving integrity in unknown characters.