Ocker, Felix
From Idea to CAD: A Language Model-Driven Multi-Agent System for Collaborative Design
Ocker, Felix, Menzel, Stefan, Sadik, Ahmed, Rios, Thiago
In modern product development, Computer-Aided Design and Engineering (CAD/E) plays a key role in turning innovative ideas and visions into tangible, manufacturable designs. Digital 2D and 3D geometry representations of objects at different levels of granularity are required in various intermediate development steps, for example aesthetic discussions, design quality evaluations based on simulation tools, and design feasibility checks. For these steps, development teams include various roles, such as requirement engineers, style designers, Computer-Aided Design (CAD) experts, simulation domain experts, and quality assurance teams, who create a product cooperatively. Stakeholders in these roles use software tools to implement digital representations of products, also referred to as digital twins. This process receives increasing support from Artificial Intelligence (AI) methods. For example, data science methods provide efficient ways to improve problem understanding, e.g., by calculating design sensitivities with respect to a certain performance aspect [Gräning and Sendhoff, 2014], or by displaying the distribution of design variations in the solution space using clustering [Lanfermann et al., 2020].
To Help or Not to Help: LLM-based Attentive Support for Human-Robot Group Interactions
Tanneberg, Daniel, Ocker, Felix, Hasler, Stephan, Deigmoeller, Joerg, Belardinelli, Anna, Wang, Chao, Wersing, Heiko, Sendhoff, Bernhard, Gienger, Michael
How can a robot provide unobtrusive physical support within a group of humans? We present Attentive Support, a novel interaction concept for robots to support a group of humans. It combines scene perception, dialogue acquisition, situation understanding, and behavior generation with the common-sense reasoning capabilities of Large Language Models (LLMs). In addition to following user instructions, Attentive Support is capable of deciding when and how to support the humans, and when to remain silent so as not to disturb the group. With a diverse set of scenarios, we show and evaluate the robot's attentive behavior, which supports the humans when required while not disturbing them when no help is needed.
Large Language Models for Multi-Modal Human-Robot Interaction
Wang, Chao, Hasler, Stephan, Tanneberg, Daniel, Ocker, Felix, Joublin, Frank, Ceravola, Antonello, Deigmoeller, Joerg, Gienger, Michael
This paper presents an innovative large language model (LLM)-based robotic system for enhancing multi-modal human-robot interaction (HRI). Traditional HRI systems relied on complex designs for intent estimation, reasoning, and behavior generation, which were resource-intensive. In contrast, our system empowers researchers and practitioners to regulate robot behavior through three key aspects: providing high-level linguistic guidance, creating "atomics" for actions and expressions the robot can use, and offering a set of examples. Implemented on a physical robot, it demonstrates proficiency in adapting to multi-modal inputs and determining the appropriate manner of action to assist humans with its arms, following researchers' defined guidelines. Simultaneously, it coordinates the robot's lid, neck, and ear movements with speech output to produce dynamic, multi-modal expressions. This showcases the system's potential to revolutionize HRI by shifting from conventional, manual state-and-flow design methods to an intuitive, guidance-based, and example-driven approach.
Exploring Large Language Models as a Source of Common-Sense Knowledge for Robots
Ocker, Felix, Deigmöller, Jörg, Eggert, Julian
Service robots need common-sense knowledge to help humans in everyday situations, as it enables them to understand the context of their actions. However, approaches that use ontologies face a challenge because common-sense knowledge is often implicit, i.e., it is obvious to humans but not explicitly stated. This paper investigates whether Large Language Models (LLMs) can fill this gap. Our experiments reveal limited effectiveness in the selective extraction of contextual action knowledge, suggesting that LLMs may not be sufficient on their own. However, the large-scale extraction of general, actionable knowledge shows potential, indicating that LLMs can be a suitable tool for efficiently creating ontologies for robots. This paper shows that the technique used for knowledge extraction can be applied to populate a minimalist ontology, showcasing the potential of LLMs in synergy with formal knowledge representation.
CoPAL: Corrective Planning of Robot Actions with Large Language Models
Joublin, Frank, Ceravola, Antonello, Smirnov, Pavel, Ocker, Felix, Deigmoeller, Joerg, Belardinelli, Anna, Wang, Chao, Hasler, Stephan, Tanneberg, Daniel, Gienger, Michael
In the pursuit of fully autonomous robotic systems capable of taking over tasks traditionally performed by humans, the complexity of open-world environments poses a considerable challenge. Addressing this challenge, this study contributes to the field of Large Language Models (LLMs) applied to task and motion planning for robots. We propose a system architecture that orchestrates a seamless interplay between multiple cognitive levels, encompassing reasoning, planning, and motion generation. At its core lies a novel replanning strategy that handles physically grounded, logical, and semantic errors in the generated plans. We demonstrate the efficacy of the proposed feedback architecture, particularly its impact on executability, correctness, and time complexity, via empirical evaluation in the context of a simulation and two intricate real-world scenarios: blocks world, barman, and pizza preparation.
Ontology-Based Feedback to Improve Runtime Control for Multi-Agent Manufacturing Systems
Lim, Jonghan, Pfeiffer, Leander, Ocker, Felix, Vogel-Heuser, Birgit, Kovalenko, Ilya
Improving the overall equipment effectiveness (OEE) of machines on the shop floor is crucial to ensure the productivity and efficiency of manufacturing systems. To achieve the goal of increased OEE, there is a need to develop flexible runtime control strategies for the system. Decentralized strategies, such as multi-agent systems, have proven effective in improving system flexibility. However, runtime multi-agent control of complex manufacturing systems can be challenging, as the agents require extensive communication and computational effort to coordinate their activities. One way to improve communication speed and cooperation between system agents is to provide a common language for representing knowledge about system behavior. The integration of ontologies into multi-agent systems in manufacturing provides agents with the capability to continuously update and refine their knowledge in a global context. This paper contributes to the design of an ontology for multi-agent systems in manufacturing, introducing an extendable knowledge base and a methodology by which agents continuously update production data during runtime. To demonstrate the effectiveness of the proposed framework, a case study is conducted in a simulated environment, which shows improvements in OEE during runtime.